A method for routing is disclosed. The method comprises provisioning an endpoint in a network with a reactive path selection policy; monitoring, by the endpoint, current conditions relating to various paths available to said endpoint for the transmission of traffic; and selectively applying, by the endpoint, at least a portion of the reactive path selection policy based on the current conditions of the available paths.
1. A method, comprising:
identifying traffic received at an endpoint in a network having a reactive path selection policy defining actions to apply in transmitting a plurality of different types of received traffic from the endpoint;
determining one or more actions of the actions that are applicable in managing transmission of the traffic from the endpoint;
reactively selecting, by the endpoint, a path of a plurality of paths from the endpoint for transmitting the traffic from the endpoint based on the one or more actions, comprising:
dropping the traffic in response to a lack of a valid available path to transmit the traffic;
applying, when a valid path is available, a path loss action to the traffic in response to the presence of a path loss action in the reactive path selection policy; and
forwarding, in the absence of further eligibility criteria limiting forwarding of the traffic, any traffic remaining to be sent after the applying.
19. A non-transitory computer-readable storage medium comprising instructions stored therein, which when executed by one or more processors, cause the one or more processors to:
identify traffic received at an endpoint in a network having a reactive path selection policy defining actions to apply in transmitting a plurality of different types of received traffic from the endpoint;
determine one or more actions of the actions that are applicable in managing transmission of the traffic from the endpoint;
reactively select, by the endpoint, a path of a plurality of paths from the endpoint for transmitting the traffic from the endpoint based on the one or more actions, comprising:
drop the traffic in response to a lack of a valid available path to transmit the traffic;
apply, when a valid path is available, a path loss action to the traffic in response to the presence of a path loss action in the reactive path selection policy; and
forward, in the absence of further eligibility criteria limiting forwarding of the traffic, any traffic remaining to be sent after the applying.
10. A system comprising:
one or more processors; and
a computer-readable medium comprising instructions stored therein, which when executed by the one or more processors, cause the one or more processors to:
identify traffic received at an endpoint in a network having a reactive path selection policy defining actions to apply in transmitting a plurality of different types of received traffic from the endpoint;
determine one or more actions of the actions that are applicable in managing transmission of the traffic from the endpoint; and
reactively select, by the endpoint, a path of a plurality of paths from the endpoint for transmitting the traffic from the endpoint based on the one or more actions, comprising:
drop the traffic in response to a lack of a valid available path to transmit the traffic;
apply, when a valid path is available, a path loss action to the traffic in response to the presence of a path loss action in the reactive path selection policy; and
forward, in the absence of further eligibility criteria limiting forwarding of the traffic, any traffic remaining to be sent after the applying.
2. The method of
3. The method of
4. The method of
5. The method of
determining whether at least one of the one or more valid paths is available for transmitting the traffic from the endpoint; and
dropping the traffic from the endpoint if all of the one or more valid paths are unavailable.
6. The method of
7. The method of
8. The method of
9. The method of
11. The system of
12. The system of
13. The system of
14. The system of
determine whether at least one of the one or more valid paths is available for transmitting the traffic from the endpoint; and
drop the traffic from the endpoint if all of the one or more valid paths are unavailable.
15. The system of
16. The system of
17. The system of
18. The system of
20. The non-transitory computer-readable storage medium of
This application is a continuation of U.S. Non-Provisional patent application Ser. No. 16/590,064, filed Oct. 1, 2019, which is a continuation of U.S. Non-Provisional patent application Ser. No. 15/468,015, filed Mar. 23, 2017, the full disclosures of which are incorporated herein by reference in their entireties.
Embodiments of the present disclosure relate to systems and/or methods of reactive path selection.
Computer networking has largely relied on dynamic routing protocols to find the optimal path between two endpoints, with consideration given only to the availability of endpoints and to the various forms of cost that influence which path is selected as the preferred one out of a given set. In their default operational mode, routing protocols give no consideration to the capability of a path to deliver traffic from a quality perspective. The quality issue has been dealt with using other mechanisms, including: (a) the use of probing techniques designed to determine the packet loss, latency, and jitter of the available paths, with the collected data then used as input to a secondary path selection process; and (b) the use of forward error correction to enable the receiving end to reconstitute messages subject to partial loss along the path, enabling the delivery of a complete message to the destination.
Forward error correction is typically applied only to ensure that a given path is capable of delivering traffic despite challenging conditions, primarily related to loss of traffic. An acceptable implementation of forward error correction is typically capable of recovering and reconstituting the original packets sent despite a loss rate along the path as high as 10%.
The downside of forward error correction is that it requires additional traffic to be sent: the information necessary to reconstitute the original packets is transmitted in addition to the original traffic, thus consuming additional bandwidth. For this reason, forward error correction has significant drawbacks.
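By way of a non-limiting illustration (not part of the original disclosure), the following Python sketch shows a simple XOR-parity scheme: for every block of N data packets one extra parity packet is sent, so the bandwidth overhead is 1/N, and any single packet lost within the block can be reconstituted at the receiver. Actual FEC algorithms differ; the block size, helper names, and single-loss recovery limit here are assumptions chosen only to make the bandwidth trade-off concrete.

```python
# Illustrative XOR-parity FEC sketch; real FEC schemes are more sophisticated.
from functools import reduce

def make_parity(block: list[bytes]) -> bytes:
    """XOR all equal-length packets in a block into a single parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), block)

def recover(received: dict[int, bytes], parity: bytes, block_size: int) -> dict[int, bytes]:
    """Reconstitute at most one missing packet in the block from the parity packet."""
    missing = [i for i in range(block_size) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = make_parity(list(received.values()) + [parity])
    return received

# Example: 4 data packets plus 1 parity packet, i.e. 25% extra bandwidth.
packets = [bytes([i] * 8) for i in range(4)]
parity = make_parity(packets)
delivered = {0: packets[0], 1: packets[1], 3: packets[3]}   # packet 2 lost along the path
restored = recover(delivered, parity, block_size=4)
assert restored[2] == packets[2]
```

With four data packets per parity packet, the overhead in this sketch is 25%, which illustrates why redundancy-based recovery consumes noticeably more bandwidth than the original traffic alone.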
Furthermore, a common reactive measure for adapting to changing conditions in terms of available links, bandwidth, or variation in Service Level Agreements (SLAs) is to limit the overall bandwidth available to all applications, producing an all-encompassing but equally impactful impairment across all applications and services transiting a device.
According to a first aspect of the present disclosure, there is provided a technology to enable an endpoint such as an edge network device in a network to perform a reactive path selection based on predetermined criteria set by policy in advance of the reactive path selection.
According to a second aspect of the present disclosure, probes may be built into the endpoints of a path employing a method of quality measurement for each path available to the endpoint.
According to a third aspect of the present disclosure, a path selection mechanism may be built into the endpoints capable of choosing paths based on multiple criteria (such as routing protocol metrics as well as path quality).
According to a fourth aspect of the present disclosure, an endpoint may define one or more local SLA-classes and actively find suitable paths for traffic assigned to each of the defined SLA-classes, as illustrated in the sketch following this summary.
According to a fifth aspect of the present disclosure, an endpoint, via configuration, may establish a pre-determined behavior that is activated by the changing availability of useful transmission paths. The behavior is affected by both qualification and disqualification of transmission path resources.
According to a sixth aspect of the present disclosure, a change in routing path selection may be achieved based on the variation of path resources including, but not limited to, policing of certain application traffic at different rates, shaping of certain application traffic at different rates and dropping of application traffic. The rates may vary depending on the experienced impact on available and useful transmission path resources.
According to a seventh aspect of the present disclosure, an endpoint may be configured to only enable forward error correction as a last resort when no path is found that satisfies a given SLA requirement.
According to an eighth aspect of the present disclosure, an endpoint may accept configuration information dictating how path selection and forward error correction enablement interact.
According to a ninth aspect of the present disclosure, an endpoint may engage forward error correction interactively with no regard for the actual forward error correction algorithm employed (meaning that the present disclosure is not dependent on a specific forward error correction algorithm).
Other aspects of the present disclosure will be apparent from the detailed description below.
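As a rough, non-limiting illustration of the fourth and fifth aspects above (the class names, thresholds, and probe values below are hypothetical and do not come from the disclosure), a locally defined SLA-class might be compared against probe measurements of loss, latency, and jitter to qualify or disqualify each available path:

```python
from dataclasses import dataclass

@dataclass
class SlaClass:
    """A locally defined SLA-class; thresholds are illustrative."""
    name: str
    max_loss_pct: float
    max_latency_ms: float
    max_jitter_ms: float

@dataclass
class PathMetrics:
    """Most recent probe measurements for one transmission path."""
    path_index: int
    loss_pct: float
    latency_ms: float
    jitter_ms: float

def qualified_paths(sla: SlaClass, probes: list[PathMetrics]) -> list[int]:
    """Return the indices of paths whose probe results currently satisfy the SLA-class."""
    return [
        p.path_index
        for p in probes
        if p.loss_pct <= sla.max_loss_pct
        and p.latency_ms <= sla.max_latency_ms
        and p.jitter_ms <= sla.max_jitter_ms
    ]

# Example: only paths 1 and 3 currently meet a 1% loss / 100 ms latency / 30 ms jitter class.
voice_sla = SlaClass("voice", max_loss_pct=1.0, max_latency_ms=100.0, max_jitter_ms=30.0)
probes = [
    PathMetrics(1, loss_pct=0.2, latency_ms=40.0, jitter_ms=5.0),
    PathMetrics(2, loss_pct=3.5, latency_ms=60.0, jitter_ms=8.0),
    PathMetrics(3, loss_pct=0.5, latency_ms=80.0, jitter_ms=10.0),
]
print(qualified_paths(voice_sla, probes))   # -> [1, 3]
```

A transmission path would then be disqualified as soon as its most recent probe results fall outside the SLA-class thresholds, triggering the pre-determined behavior described in the fifth aspect.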
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure can be practiced without these specific details. In other instances, structures and devices are shown in block or flow diagram form only in order to avoid obscuring the present disclosure.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to the details are within the scope of the present disclosure. Similarly, although many of the features of the present disclosure are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, the present disclosure is set forth without any loss of generality to, and without imposing limitations upon, the present disclosure.
In one embodiment, the control plane is established by a control device 102, which is configured to maintain control plane connections with various edge network devices of the network 100. In the example shown in
Establishment of the control plane and the data plane may be in accordance with the techniques described in co-pending U.S. application Ser. Nos. 14/133,558 and 14/146,683, which are incorporated herein by reference in their entireties.
Each edge network device of the network 100 may have at its disposal a plurality of paths or links defining communication paths for the transmission of data packets to a remote edge network device. For the example of
Broadly, embodiments of the present disclosure provide a mechanism for each edge network device of the network 100 to perform a reactive path selection procedure to select one of the paths available to the edge network device for the transmission of data. Advantageously, the reactive path selection procedure may be based on changing bandwidth availability conditions and/or changing bandwidth quality conditions, as will be explained in greater detail below. In one embodiment, the reactive path selection procedure may be used to police bandwidth allocation on a per-application basis. For example, such an approach may be applied in cases where there are no quality issues relating to the available paths, but merely capacity issues. In other embodiments, the reactive path selection procedure may implement the path selection based on a measure of quality, for example using forward error correction. For example, the path selection may be based on a measure of quality in cases where all paths are functional but show differences in the quality of the available bandwidth.
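The fragment below is a minimal sketch of this dispatch between the two reactive modes, written in Python purely for illustration; the 1% loss threshold, the 90% utilization threshold, and every name in it are assumptions rather than elements of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Path:
    index: int
    loss_pct: float      # measured packet loss on this path
    utilization: float   # fraction of the path's capacity currently in use

def react(eligible_indices, paths, police_rate_mbps, fec_allowed):
    """Rough sketch of the capacity-driven versus quality-driven reactions."""
    eligible = [p for p in paths if p.index in eligible_indices]
    if not eligible:
        return ("drop", None)

    healthy = [p for p in eligible if p.loss_pct <= 1.0]   # quality acceptable
    if healthy:
        best = min(healthy, key=lambda p: p.utilization)
        if best.utilization > 0.9:
            # Capacity problem only: police this application's rate instead of
            # impairing every application and service equally.
            return ("police", (best.index, police_rate_mbps))
        return ("forward", best.index)

    # Quality problem on every eligible path: pick the least lossy path and,
    # where policy allows, enable forward error correction as a remedy.
    best = min(eligible, key=lambda p: p.loss_pct)
    return ("forward_with_fec" if fec_allowed else "forward", best.index)

paths = [Path(1, loss_pct=0.2, utilization=0.95), Path(3, loss_pct=2.5, utilization=0.4)]
print(react({1, 3}, paths, police_rate_mbps=1, fec_allowed=True))   # -> ('police', (1, 1))
```

In the example, path 1 is healthy but congested, so the per-application policing reaction is chosen rather than an across-the-board bandwidth cut.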
In one embodiment, in order to enable reactive path selection, each edge network device maintains information on the capacity of each path, and information on the condition of each path.
For illustrative purposes, the table below provides example information on the various paths and their attributes for the paths illustrated in
TABLE 1
Paths and Path Attributes

Path Index | Type                                          | Capacity | Path Management
-----------|-----------------------------------------------|----------|----------------------------------------------------
1          | Private Multi-Protocol Label Switching (MPLS) | 10 Mbps  | Path Evaluation; Forward Error Correction (FEC); Reactive Policing; Reactive Shaping; Reactive Dropping
2          | Private MPLS                                  | 5 Mbps   | Path Evaluation; FEC; Reactive Policing; Reactive Shaping; Reactive Dropping
3          | Broadband Internet                            | 20 Mbps  | Path Evaluation; FEC; Reactive Policing; Reactive Shaping; Reactive Dropping
4          | Long-Term Evolution (LTE)                     | 6 Mbps   | Path Evaluation; FEC; Reactive Policing; Reactive Shaping; Reactive Dropping
As noted above, each given edge network device of the network 100 has access to the transmission paths with capabilities as per the table above.
Additionally, the table shows the different techniques for reactive path and traffic management that are available for each path, in accordance with one embodiment of the present disclosure.
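For illustration only, the per-path state summarized in Table 1 could be represented at an edge network device as a small table keyed by path index, as in the Python sketch below; the record layout and field names are assumptions, not a data structure prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PathRecord:
    """Illustrative per-path state mirroring Table 1; field names are hypothetical."""
    path_index: int
    link_type: str
    capacity_mbps: int
    # Reactive path and traffic management techniques available on this path.
    management: tuple = ("path evaluation", "FEC", "reactive policing",
                         "reactive shaping", "reactive dropping")

# A per-edge-device view of the four example paths from Table 1.
path_table = {
    1: PathRecord(1, "Private MPLS", 10),
    2: PathRecord(2, "Private MPLS", 5),
    3: PathRecord(3, "Broadband Internet", 20),
    4: PathRecord(4, "LTE", 6),
}

print(path_table[3].capacity_mbps)   # -> 20
```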
In one embodiment, in order to perform reactive path and traffic management, each edge network device is provisioned with a policy that determines, upfront, the behavior of the edge network device in case certain conditions arise. This policy may be described at a high level, together with the associated actions that are to be taken in response to the various conditions that arise. The policy may be configured locally on an edge network device, configured centrally and then distributed using the control device 102, or a combination of both.
For illustrative purposes, applications are labeled with simple index numbers (e.g. App1, App2, etc.). In this regard, it is to be understood that each application may have a variety of performance requirements as defined for actual application traffic carried across a live network. Moreover, in an actual deployment, applications may generally be grouped together based on similarities in Service-Level Agreements (SLAs). However, for the sake of simplicity, the examples described herein deal only with single applications. In one embodiment, an application may constitute a flow that is defined by an established pattern or signature involving packet information at any layer between Layer 3 and Layer 7 (e.g. IP-address pairs, Layer 4 (UDP/TCP) port pairs, specific Layer 7 application signatures, or combinations of any of the prior).
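A minimal sketch of such signature-based classification is shown below; the match fields, signature names, and application mapping are invented for illustration and are not defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str                        # e.g. "udp" or "tcp"
    dst_port: int
    l7_signature: Optional[str] = None   # e.g. a recognized Layer 7 application

# Illustrative signatures combining Layer 3, Layer 4, and Layer 7 information.
APP_SIGNATURES: dict[str, Callable[[Packet], bool]] = {
    "App1": lambda p: p.protocol == "udp" and p.dst_port == 5060,         # Layer 4 match
    "App2": lambda p: p.l7_signature == "video-conferencing",             # Layer 7 match
    "App3": lambda p: (p.src_ip, p.dst_ip) == ("10.0.0.5", "10.0.1.9"),   # Layer 3 match
}

def classify(packet: Packet) -> Optional[str]:
    """Return the first application whose signature the packet matches, if any."""
    for app, matches in APP_SIGNATURES.items():
        if matches(packet):
            return app
    return None

print(classify(Packet("10.0.0.7", "10.0.2.2", "udp", 5060)))   # -> App1
```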
The table below illustrates a sample policy construct, in accordance with one embodiment of the present disclosure.
TABLE 2
Sample Policy Construct

App1
    Forward Error Correction (FEC) dynamic
    Path-eligibility 1,2,3
    Equal-Cost Multi-Path (ECMP)
    Path-loss 3
    Police 1 Mbps
    SLA
        Loss 1%
        Latency 100 ms
App2
    Path-eligibility 1,3
    Path-loss 3
    Shape 1 Mbps
App3
    FEC last-resort
    Path-eligibility 1,2,3,4
    Path-of-last-resort 4
    SLA
        Loss 1%
The various keywords in the policy description above may include the following, in accordance with one embodiment of the present disclosure:
In one embodiment, the above policy may enable the following behavior on a given edge network device:
For App1:
Because there is a wide range of permutations allowing extreme flexibility in how links are used within the defined functionality, the above is simply one example of how the functionality documented herein could be used, and the text should not be viewed as limiting the breadth of applicability and functionality covered.
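As one possible, non-limiting reading of the App1 entry in Table 2 (an interpretation, not pseudocode from the disclosure; the dictionary keys, thresholds, and helper names below are assumptions), the selection for App1 might restrict traffic to eligible paths 1-3, load-share via ECMP across paths meeting the 1% loss / 100 ms latency SLA, fall back to path 3 when the preferred paths are lost or out of SLA, police the flow to 1 Mbps, and engage FEC dynamically only when no path meets the SLA:

```python
from dataclasses import dataclass

@dataclass
class PathState:
    index: int
    up: bool
    loss_pct: float
    latency_ms: float

APP1_POLICY = {
    "eligible": [1, 2, 3],
    "ecmp": True,
    "path_loss_fallback": 3,
    "police_mbps": 1,
    "sla": {"loss_pct": 1.0, "latency_ms": 100.0},
    "fec": "dynamic",
}

def select_for_app1(paths: list[PathState], policy=APP1_POLICY) -> dict:
    """One possible interpretation of the Table 2 policy for App1 (illustrative only)."""
    eligible = [p for p in paths if p.index in policy["eligible"] and p.up]
    if not eligible:
        return {"action": "drop"}

    sla = policy["sla"]
    meeting_sla = [p for p in eligible
                   if p.loss_pct <= sla["loss_pct"] and p.latency_ms <= sla["latency_ms"]]
    if meeting_sla:
        chosen = [p.index for p in meeting_sla] if policy["ecmp"] else [meeting_sla[0].index]
        return {"action": "forward", "paths": chosen,
                "police_mbps": policy["police_mbps"], "fec": False}

    # No eligible path meets the SLA: use the path-loss fallback (if it is still up)
    # and engage FEC dynamically to compensate for the degraded quality.
    fallback = policy["path_loss_fallback"]
    target = fallback if any(p.index == fallback for p in eligible) else eligible[0].index
    return {"action": "forward", "paths": [target],
            "police_mbps": policy["police_mbps"], "fec": policy["fec"] == "dynamic"}

paths = [PathState(1, True, 2.0, 50.0), PathState(2, False, 0.0, 0.0),
         PathState(3, True, 0.3, 80.0)]
print(select_for_app1(paths))
# -> {'action': 'forward', 'paths': [3], 'police_mbps': 1, 'fec': False}
```

In the example run, path 2 is down and path 1 violates the loss target, so traffic is forwarded on path 3 within its 1 Mbps policer.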
In one embodiment, the control device 102 may be used to distribute the reactive path selection policy to the various edge network devices of the network 100. By way of example,
To enable the reactive path selection techniques disclosed herein, the control device 102 may be configured to perform operations shown in the flowchart of
Referring now to
The hardware also typically receives a number of inputs and outputs for communicating information externally. For interfacing with a user or operator, the hardware may include one or more user input/output devices 606 (e.g., a keyboard, mouse, etc.) and a display 608. For additional storage, the hardware 600 may also include one or more mass storage devices 610, e.g., a Universal Serial Bus (USB) or other removable disk drive, a hard disk drive, a Direct Access Storage Device (DASD), an optical drive (e.g. a Compact Disk (CD) drive, a Digital Versatile Disk (DVD) drive, etc.) and/or a USB drive, among others. Furthermore, the hardware may include an interface with one or more networks 612 (e.g., a local area network (LAN), a wide area network (WAN), a wireless network, and/or the Internet, among others) to permit the communication of information with other computers coupled to the networks. The hardware may include suitable analog and/or digital interfaces between the processor 602 and each of the components.
The hardware 600 operates under the control of an operating system 614, and executes application software 616 which includes various computer software applications, components, programs, objects, modules, etc. to perform the techniques described above.
In general, the routines executed to implement the embodiments of the present disclosure may be implemented as part of an operating system or as a specific application, component, program, object, module, or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects of the present disclosure. Moreover, while the present disclosure has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the present disclosure are capable of being distributed as a program product in a variety of forms, and that the present disclosure applies equally regardless of the particular type of machine- or computer-readable media used to actually effect the distribution. Examples of computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, USB and other removable media, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), and flash drives, among others.
Although the present disclosure has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.