Systems and methods are provided to ensure lossless and ordered delivery of digital information destined for, and generated from, a plurality of live compute assets during the relocation of those assets from one network enabled computer to a plurality of network enabled computers. After temporary storage of the digital information is initiated through the controlled devices, the live compute assets are relocated to the new computer(s). Simultaneously with, or following, the relocation of the compute assets, the digital information temporarily stored within the network may be moved and stored elsewhere within the network to optimize reliable delivery through software control of the physical and virtual network/network enabled devices. Upon completion of the relocation of the live compute assets, software is utilized to complete network traversal of new and temporarily stored digital information to/from the relocated compute assets in an ordered, lossless, and reliable manner.
What is claimed is:
1. A method, comprising:
receiving, using a network controller device, a notification from an orchestrator device indicating that a compute asset is sought to be relocated within a computer network, wherein the network controller device does not include the orchestrator device;
determining, using the network controller device, a destination location of a device for the compute asset;
determining, using the network controller device, a network device to use to temporarily store data scheduled to be sent to the compute asset;
instructing, using the network controller device, the network device to temporarily store the data scheduled to be sent to the compute asset;
instructing, using a software/virtualization controller, computer software to relocate the compute asset, wherein the network controller device does not include the software/virtualization controller;
determining, using the network controller device, that the compute asset has finished relocating; and
instructing, using the network controller device, the network device to transmit the stored data to the relocated compute asset at the destination location.
2. The method of claim 1, further comprising:
instructing the network device to insert rules to store the data scheduled to be sent to the compute asset.
3. The method of claim 2, further comprising:
instructing the network device to remove the rules to store the data scheduled to be sent to the compute asset after determining that the compute asset has finished relocating.
4. The method of claim 1, further comprising:
instructing the network device to add forwarding entries to push the stored data to the relocated compute asset at the destination location.
5. The method of claim 4, further comprising:
instructing the network device to remove the forwarding entries after determining that the compute asset has finished relocating.
6. The method of claim 1, further comprising:
determining that the compute asset has finished relocating in response to receiving a second notification from the orchestrator device.
7. The method of claim 1, further comprising:
selecting the network device from a plurality of network devices based on determining that the network device would transmit the stored data to the relocated compute asset faster than another network device in the plurality of network devices.
8. The method of claim 1, further comprising:
selecting the network device from a plurality of network devices based on determining that the network device has available memory to temporarily store the data scheduled to be sent to the compute asset.
9. The method of claim 1, further comprising:
moving the data from the network device to a second network device before transmitting the stored data to the relocated compute asset at the destination location.
10. The method of claim 9, further comprising:
selecting the second network device based on a determination that selecting the second network device would optimize speed of delivery of the stored data to the relocated compute asset at the destination location.
11. A network controller device, comprising:
a memory; and
a processor, coupled to the memory, configured to:
receive a first notification from an orchestrator device indicating that a compute asset is sought to be relocated within a computer network and a destination location of a device for the compute asset, wherein the network controller device does not include the orchestrator device,
determine a network device to use to temporarily store data scheduled to be sent to the compute asset,
instruct the network device to temporarily store the data scheduled to be sent to the compute asset,
receive a second notification from a software/virtualization controller indicating that the compute asset has been relocated,
determine that the compute asset has finished relocating in response to receiving the second notification, and
instruct the network device to transmit the stored data to the relocated compute asset at the destination location.
12. The network controller device of claim 11, wherein the processor is further configured to:
instruct the network device to add forwarding entries to transmit the stored data to the relocated compute asset at the destination location.
13. A system, comprising:
a processor device;
an orchestrator device, configured to:
send, using the processor device, a first notification indicating that a compute asset is sought to be relocated within a computer network and a destination location of a device for the compute asset, and
send, using the processor device, a second notification indicating that the compute asset has finished relocating;
a network controller device, wherein the network controller device does not include the orchestrator device, and wherein the network controller device is configured to:
receive, using the processor device, the first notification,
in response to receiving the first notification:
determine, using the processor device, a network device to use to temporarily store data scheduled to be sent to the compute asset, and
instruct, using the processor device, the network device to temporarily store the data scheduled to be sent to the compute asset,
receive, using the processor device, the second notification, and
instruct, using the processor device, the network device to transmit the stored data to the relocated compute asset at the destination location in response to receiving the second notification; and
a software/virtualization controller configured to instruct, using the processor device, computer software to relocate the compute asset.
14. The system of claim 13, wherein the network controller device is further configured to:
instruct, using the processor device, the network device to insert rules to store the data scheduled to be sent to the compute asset in response to receiving the first notification.
15. The system of claim 13, wherein the network controller device is further configured to:
instruct, using the processor device, the network device to add forwarding entries to push the stored data to the relocated compute asset at the destination location in response to receiving the second notification.
16. The system of
17. The system of
18. The system of
19. The system of claim 13, wherein the software/virtualization controller is further configured to:
send, using the processor device, a third notification to the orchestrator device, wherein the third notification indicates that the compute asset has been relocated, and wherein the orchestrator device is further configured to send, using the processor device, the second notification in response to receiving the third notification.
20. The system of claim 19, wherein the network controller device is further configured to:
in response to receiving the third notification, program, using the processor device, a plurality of match/action mechanisms in a plurality of network devices for sending information to and from the relocated compute asset.
This application claims the benefit of U.S. Provisional Patent Application No. 62/330,434, filed on May 2, 2016, which is incorporated by reference herein in its entirety.
This disclosure relates to computer networks, including distributed temporary storage of data in a network.
Compute assets are computing constructs, such as virtual machines or containers, that store and process information on a computer network. Currently, when a plurality of live compute assets (e.g., virtual machines, containers, etc.) are relocated within a network, digital information transmitted to and from a live compute asset can fail to reach its intended destination as a result of the relocation process. For example, when a compute asset is relocated, packets may still be transmitted to the previous location while the compute asset is being moved. As a result, some packets can be dropped, and information can be lost. No current solution controls the temporary storage capabilities of a plurality of network/network enabled devices to hold traffic during a relocation process in a way that ensures the digital information arrives reliably when the relocation process concludes.
The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate embodiments of the disclosure and, together with the general description given above and the detailed descriptions of embodiments given below, serve to explain the principles of the present disclosure. In the drawings:
Features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
In the following description, numerous specific details are set forth to provide a thorough understanding of the disclosure. However, it will be apparent to those skilled in the art that the disclosure, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the disclosure.
References in the specification to “one embodiment,” “an embodiment,” “an exemplary embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
For purposes of this discussion, the term “module” shall be understood to include one of software, or firmware, or hardware (such as circuits, microchips, processors, or devices, or any combination thereof), or any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
Embodiments of the present disclosure provide systems and methods to ensure lossless and ordered delivery of digital information destined for and generated from a plurality of live compute assets following a plurality of compute asset relocations through a plurality of network/network enabled devices (not necessarily in the same facility or physical proximity) in a network. Exemplary methods utilize software to control temporary storage of digital information within physical and virtual network/network enabled devices. In an embodiment, the temporary storage of digital information being transmitted to/from the compute asset to be relocated is initiated by software control of one or more controlled devices before the compute asset is signaled to begin relocation. In an embodiment, after the storage of digital information is initiated through the controlled devices, a plurality of live compute assets are relocated to a plurality of computers.
In an embodiment, substantially simultaneously with the relocation of the compute assets, the digital information temporarily stored within the network may be moved and subsequently stored elsewhere within the network to optimize reliable delivery through software control of the network/network enabled devices. Upon completion of the relocation of the live compute assets, embodiments of the present disclosure complete the traversal of new and stored digital information through the network to/from the relocated compute assets in an ordered, lossless, and reliable manner. In an embodiment, the number of compute assets that can be relocated simultaneously is limited primarily by factors including, without limitation, network link capacity, latency of the network connectivity between compute locations, time required to prepare a compute asset for movement, time required to restore a compute asset after movement, and network/network enabled device memory available to temporarily store digital information.
In an embodiment, a network controller is notified (or otherwise becomes aware) of when (and, in an embodiment, to where) a compute asset is sought to be relocated. The network controller can either initiate the relocation of the compute asset or otherwise become aware that the relocation is going to be initiated. In an embodiment, the network controller can notify one or more other network components that the relocation of the compute asset is going to take place.
In an embodiment, the network controller can insert rules or mechanisms in a network device (e.g., a switch) to start storing data that was scheduled to be sent to the compute asset. In an embodiment, this temporary storage of data scheduled to be sent to the compute asset preserves information that would otherwise be lost while the compute asset is being relocated. For example, in an embodiment, the network controller could analyze header information in the data to determine which traffic is destined for the compute asset and should therefore be stored.
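For illustration, the sketch below (plain Python with hypothetical class and field names; the disclosure does not prescribe a particular implementation) models a network device that buffers packets matching an installed store rule instead of forwarding them:

```python
# Minimal sketch, assuming a simple in-memory device model (hypothetical names).
from dataclasses import dataclass, field


@dataclass
class StoreRule:
    """Matches packets destined for the relocating compute asset."""
    match_dst: str  # e.g., the asset's address, taken from packet headers


@dataclass
class NetworkDevice:
    name: str
    rules: list = field(default_factory=list)
    buffer: list = field(default_factory=list)  # temporary storage in device memory

    def insert_store_rule(self, rule: StoreRule) -> None:
        self.rules.append(rule)

    def remove_store_rule(self, match_dst: str) -> None:
        self.rules = [r for r in self.rules if r.match_dst != match_dst]

    def handle_packet(self, packet: dict) -> None:
        # Buffer traffic matching a store rule; forward everything else as usual.
        if any(packet["dst"] == r.match_dst for r in self.rules):
            self.buffer.append(packet)
        else:
            self.forward(packet)

    def forward(self, packet: dict) -> None:
        print(f"{self.name}: forwarding {packet}")


switch = NetworkDevice("switch-1")
switch.insert_store_rule(StoreRule(match_dst="asset-12"))
switch.handle_packet({"dst": "asset-12", "seq": 1})  # buffered during relocation
switch.handle_packet({"dst": "other", "seq": 1})     # forwarded normally
```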
In an embodiment, the network controller can select one or more network/network enabled devices (“network devices”) to temporarily store the data during the relocation based on a network optimization determination. For example, in an embodiment, the network controller can determine (or otherwise become aware of) how long it will take to start the relocation of the compute asset and can optimize network traffic based on this time determination. In an embodiment, the network controller can select one or more network devices in which to insert rules for temporary data storage based on determining the path that the data would travel through the network and identifying which network device would result in the fastest data transmission time (e.g., transmission time from the old compute asset location to the temporary storage location, and then to the new compute asset location). In an embodiment, the network controller can also select one or more network devices for temporary storage based on other considerations, such as the volume of the data, how many network elements are in the network, how many network pathways exist, etc.
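A minimal sketch of one such selection heuristic follows; the cost model, field names, and the assumption that per-device transit-time estimates are available are all illustrative, not part of the disclosed method:

```python
# Hedged sketch: among candidate devices on the data path, prefer one with
# enough free memory whose old-location -> storage -> new-location transit
# time is smallest.

def select_storage_device(candidates, data_volume_bytes):
    """candidates: list of dicts with estimated per-device metrics (assumed)."""
    eligible = [d for d in candidates if d["free_mem_bytes"] >= data_volume_bytes]
    if not eligible:
        raise RuntimeError("no device can hold the in-flight data")
    # Total delivery time: time into storage plus time out to the new location.
    return min(eligible, key=lambda d: d["t_old_to_dev_s"] + d["t_dev_to_new_s"])


devices = [
    {"name": "switch-1", "free_mem_bytes": 2**30, "t_old_to_dev_s": 0.002, "t_dev_to_new_s": 0.004},
    {"name": "switch-2", "free_mem_bytes": 2**20, "t_old_to_dev_s": 0.001, "t_dev_to_new_s": 0.001},
]
# switch-2 is faster but lacks memory for 500 MiB, so switch-1 is chosen.
print(select_storage_device(devices, data_volume_bytes=500 * 2**20)["name"])
```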
After the compute asset relocation has been completed, the network controller can determine (or otherwise become aware, e.g., via a system notification) that the relocation has been completed and can determine (or otherwise become aware of) a new forwarding location for the compute asset. Subsequently, in an embodiment, the network controller can send an instruction (or other notification) that instructs the one or more network devices used to temporarily store the data during the relocation to add forwarding entries to push data to the new location of the relocated compute asset. The network controller can also instruct the one or more network devices to begin transmitting the stored data to the new location of the compute asset. Once the stored data has been transmitted, the one or more network devices used to temporarily store the data during the relocation can remove the inserted rules or mechanisms for temporarily storing the data (e.g., in response to an instruction from the network controller or a determination by the one or more network devices that all stored data has been transmitted). In an embodiment, once the stored data has been transmitted, the one or more network devices can also remove the inserted forwarding entries.
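The post-relocation sequence might look like the following sketch (hypothetical names; a real controller would program hardware forwarding tables rather than Python objects):

```python
# Sketch of the drain-and-cleanup sequence after relocation completes.
class StorageDevice:
    def __init__(self):
        self.buffer = []        # packets stored during relocation
        self.forwarding = {}    # destination -> new location
        self.store_rules = set()

    def add_forwarding_entry(self, dst, location):
        self.forwarding[dst] = location

    def flush_stored(self, dst):
        # Transmit buffered packets in arrival order to preserve ordering.
        remaining = []
        for pkt in self.buffer:
            if pkt["dst"] == dst:
                print(f"push {pkt} -> {self.forwarding[dst]}")
            else:
                remaining.append(pkt)
        self.buffer = remaining

    def remove_store_rule(self, dst):
        self.store_rules.discard(dst)


dev = StorageDevice()
dev.store_rules.add("asset-12")
dev.buffer += [{"dst": "asset-12", "seq": 1}, {"dst": "asset-12", "seq": 2}]

# Relocation finished: install the new forwarding entry, drain, then clean up.
dev.add_forwarding_entry("asset-12", "computer-30")
dev.flush_stored("asset-12")
dev.remove_store_rule("asset-12")
```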
In an embodiment, the one or more network devices can also temporarily store any data that was scheduled to be sent to the relocated compute asset that arrives between the time the one or more network devices are notified to begin forwarding the stored data and the time that the one or more network devices have finished transmitting this stored data. In an embodiment, the one or more network devices can remove the rules for temporarily storing data after this data has also been forwarded to the new location of the compute asset. In another embodiment, the one or more network devices can immediately begin forwarding all data scheduled to be sent to the new location of the compute asset once the one or more network devices have received a notification that the relocation has finished. In an embodiment, the one or more network devices can keep track of the time of arrival of data scheduled to be sent to the new location of the compute asset by using timestamps on the stored data and any other data scheduled to be sent to the new location of the compute asset.
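As a sketch of the timestamp-based ordering, packets buffered during the move and packets that arrive while the buffer is draining can be merged by arrival time before delivery; the tuple layout here is an assumption for illustration:

```python
# Merge stored packets and late arrivals by arrival timestamp so the
# relocated asset sees them in their original order.
import heapq

stored = [(1.00, "pkt-A"), (1.05, "pkt-B")]   # buffered during relocation
late_arrivals = [(1.07, "pkt-C")]             # arrived while draining

for ts, pkt in heapq.merge(sorted(stored), sorted(late_arrivals)):
    print(f"deliver {pkt} (t={ts})")          # A, B, C in arrival order
```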
Exemplary embodiments of the present disclosure will now be discussed in more detail with reference to the accompanying figures.
In accordance with embodiments of the present disclosure, the network controller 24 can effectively program one or more of the network/network enabled devices 17 to implement a store action for the digital information 19 traversing the network to/from the relocating compute asset 12. In an embodiment, the selected network/network enabled device(s) 17 implement the store action 26 for digital information 19 arriving from the existing compute asset 12 in computer 10, as well as the store action 27 for digital information 19 traversing a plurality of network/network enabled devices 17 destined for the compute asset 12 in the existing computer 10. The storage of the digital information 19 occurs inside the network/network enabled devices 17 in memory 25.
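As an illustration of the paired store actions 26 and 27, the following sketch installs one rule matching traffic sourced from the asset and one matching traffic destined for it; the rule format is hypothetical:

```python
# One rule per direction: store action 26 (from the asset) and 27 (to it).
def make_store_rules(asset_addr):
    return [
        {"match": {"src": asset_addr}, "action": "store"},  # store action 26
        {"match": {"dst": asset_addr}, "action": "store"},  # store action 27
    ]


def matches(rule, packet):
    return all(packet.get(k) == v for k, v in rule["match"].items())


rules = make_store_rules("asset-12")
packet = {"src": "asset-12", "dst": "peer-7"}
print(any(matches(r, packet) for r in rules))  # True: buffered, not forwarded
```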
Orchestrator 23 can be implemented using hardware, software, and/or a combination of hardware and software. In an embodiment, orchestrator 23 includes one or more memories and one or more processors (e.g., coupled to or in communication with the one or more memories) configured to perform operations. In an embodiment, orchestrator 23 can be implemented using computer software, digital logic, circuitry, and/or any other combination of hardware and/or software in accordance with embodiments of the present disclosure. In an embodiment, orchestrator 23 is implemented using software running on a network device (e.g., a computer) coupled to the computer network.
The advantages of embodiments of the present disclosure include, without limitation, 1) the ability to ensure 99.999% reliable delivery of digital information to compute assets that are being relocated on the same or different computer platforms; 2) the ability to utilize a plurality of network/network enabled devices as a storage medium for digital information that would otherwise not be delivered during a compute asset relocation; 3) the ability to optimize the temporary storage location of digital information being temporarily stored in relation to the final compute asset destination computer; 4) the ability to use all controlled network/network enabled devices as a temporary storage medium; 5) the ability to change the network/network enabled device match/actions in conjunction with the relocation of a compute asset to ensure traffic is delivered to the final compute asset destination computer; 6) the ability to ensure that delivery of digital information stored and in transit through the network arrives to/from the compute asset in an ordered, lossless, and reliable manner per algorithms designed for different compute asset applications and requirements; and 7) the ability to relocate compute assets through a network without any loss of digital information, which significantly increases agility and decreases the cost of high-reliability computer and network based applications.
Embodiments of the present disclosure provide the ability to selectively and temporarily store digital information within a plurality of virtual and physical network/network enabled devices using hardware, software, and/or a combination of hardware and software. For example, in an embodiment, a centralized software algorithm (e.g., implemented using network controller 24) can be used to selectively and temporarily store the digital information within the plurality of virtual and physical network/network enabled devices. In an embodiment, digital logic or other circuitry (e.g., implemented using network controller 24) can be used to accomplish the same selective, temporary storage.
In step 802, the network controller (e.g., network controller 24) can determine one or more network devices (e.g., one or more of network/network enabled devices 17) to use to temporarily store data scheduled to be sent to the compute asset (e.g., compute asset 12). In step 804, the network controller (e.g., network controller 24) can instruct the one or more network devices to temporarily store the data scheduled to be sent to the compute asset. For example, in an embodiment, network controller 24 can insert rules or mechanisms in one or more of the network/network enabled devices 17 to start storing data that was scheduled to be sent to compute asset 12.
In step 806, the network controller (e.g., network controller 24) can determine that the compute asset (e.g., compute asset 12) has finished relocating. In step 808, the network controller (e.g., network controller 24) can instruct the network device (e.g., one or more of network/network enabled devices 17) to transmit the stored data to the relocated compute asset.
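Putting steps 802 through 808 together, a controller-side routine might be structured as in the sketch below, which assumes the hypothetical helper objects from the earlier sketches rather than any specific controller API:

```python
# End-to-end sketch of the flow in steps 802-808 (illustrative names only).
def relocate_with_lossless_delivery(controller, orchestrator, asset, dest):
    # Step 802: pick the device(s) to hold in-flight traffic during the move.
    device = controller.select_storage_device(asset)
    # Step 804: install temporary store rules for traffic to/from the asset.
    device.insert_store_rule(asset.address)
    # The software/virtualization layer performs the actual relocation.
    orchestrator.relocate(asset, dest)
    # Step 806: block until notified that the relocation has completed.
    orchestrator.wait_until_relocated(asset)
    # Step 808: point forwarding at the new location, drain the buffer,
    # then remove the temporary store rules.
    device.add_forwarding_entry(asset.address, dest)
    device.flush_stored(asset.address)
    device.remove_store_rule(asset.address)
```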
It is to be appreciated that the Detailed Description, and not the Abstract, is intended to be used to interpret the claims. The Abstract may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, is not intended to limit the present disclosure and the appended claims in any way.
The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
Any representative signal processing functions described herein can be implemented using computer processors, computer logic, application specific integrated circuits (ASIC), digital signal processors, etc., as will be understood by those skilled in the art based on the discussion given herein. Accordingly, any processor that performs the signal processing functions described herein is within the scope and spirit of the present disclosure.
The above systems and methods may be implemented as a computer program executing on a machine, as a computer program product, or as a tangible and/or non-transitory computer-readable medium having stored instructions. For example, the functions described herein could be embodied by computer program instructions that are executed by a computer processor or any one of the hardware devices listed above. The computer program instructions cause the processor to perform the signal processing functions described herein. The computer program instructions (e.g., software) can be stored in a tangible non-transitory computer usable medium, computer program medium, or any storage medium that can be accessed by a computer or processor. Such media include a memory device such as a RAM or ROM, or other type of computer storage medium such as a computer disk or CD ROM. Accordingly, any tangible non-transitory computer storage medium having computer program code that cause a processor to perform the signal processing functions described herein are within the scope and spirit of the present disclosure.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments.