In general, techniques are described for automatically discovering services in computer networks. A service node comprising a control unit and an interface may implement the techniques. The control unit determines services provided by the service node and generates a routing protocol message that includes service discovery information related to the services. The interface transmits the routing protocol message to enable network devices of the network to discover the services provided by the service node based on the service discovery information. The interface then receives traffic via a path established based on the service discovery information included in the routing protocol message and configured so that the service node applies at least one of the services to the traffic received via the path. The control unit then applies the one or more services to the traffic received via the path.
|
20. A method comprising:
receiving, with a network device of a network, a routing protocol message that includes service discovery information describing one or more services provided by a service plane of a router positioned in the network, the service plane separate from a forwarding plane of the router configured to forward network traffic; and
configuring, with the network device, a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service plane such that the service plane applies the at least one of the services to the network traffic received via the path prior to outputting the network traffic to the forwarding plane of the router to be forwarded in accordance with forwarding information.
31. A network device of a network comprising:
at least one interface configured to receive a routing protocol message that includes service discovery information describing one or more services provided by a service plane of a router positioned in the network, the service plane separate from a forwarding plane of the router configured to forward network traffic; and
a control unit configured to configure a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service plane such that the service plane applies the at least one of the services to the network traffic received via the path prior to outputting the network traffic to the forwarding plane of the router to be forwarded in accordance with forwarding information.
42. A non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors of a network device included within a network to:
receive a routing protocol message that includes service discovery information describing one or more services provided by a service plane of a router positioned in the network, the service plane separate from a forwarding plane of the router configured to forward network traffic; and
configure a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service plane such that the service plane applies the at least one of the services to the network traffic received via the path prior to outputting the network traffic to the forwarding plane of the router to be forwarded in accordance with forwarding information.
19. A non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors of a router in a network to:
determine one or more services provided by a service plane of the router;
generate a routing protocol message that includes service discovery information related to the one or more services provided by the service plane;
transmit the routing protocol message to enable network devices of the network to discover the one or more services provided by the service plane based on the service discovery information included in the routing protocol message;
receive traffic via a path established based on the service discovery information included in the routing protocol message;
redirect the traffic received via the path, with a forwarding plane of the router, to the service plane;
apply, by the service plane, the one or more services to the traffic received via the path;
output the traffic, with the service plane and after applying the one or more services, to the forwarding plane; and
perform a route lookup with the forwarding plane to forward the traffic output by the service plane.
10. A router of a network comprising:
a control unit having, at least in part, a service plane, a forwarding plane and a control plane, the control plane configured to determine one or more services provided by the service plane and generate a routing protocol message that includes service discovery information related to the one or more services provided by the service plane; and
at least one interface configured to transmit the routing protocol message to enable network devices of the network to discover the one or more services provided by the service plane based on the service discovery information included in the routing protocol message and receive traffic via a path established based on the service discovery information included in the routing protocol message,
wherein the forwarding plane is configured to redirect the traffic received via the path to the service plane;
wherein the service plane is configured to apply the one or more services to the traffic received via the path and output the traffic to the forwarding plane, and
wherein the forwarding plane is configured to perform a route lookup to forward the traffic output by the service plane.
1. A method comprising:
determining, with a control plane of a router positioned in a network, one or more services provided by a service plane of the router;
generating, with the control plane, a routing protocol message that includes service discovery information related to the one or more services provided by the service plane;
transmitting, with a forwarding plane of the router, the routing protocol message to enable network devices of the network to discover the one or more services provided by the service plane based on the service discovery information included in the routing protocol message;
receiving traffic with the forwarding plane via a path established based on the service discovery information included in the routing protocol message;
redirecting, with the forwarding plane, the traffic received via the path to the service plane;
applying, with the service plane, the one or more services to the traffic received via the path;
outputting, with the service plane and after applying the one or more services, the traffic to the forwarding plane; and
performing a route lookup with the forwarding plane to forward the traffic output by the service plane.
43. A network system comprising:
a router comprising:
a control unit having, at least in part, a service plane, a forwarding plane and a control plane, the control plane configured to determine one or more services provided by the service plane and generate a routing protocol message that includes service discovery information related to the one or more services provided by the service plane; and
at least one interface configured to transmit the routing protocol message;
an orchestrating network device; and
a network device operated by a user that interfaces with the orchestrating network device to request at least one of the services provided by the service plane,
wherein the orchestrating network device comprises:
at least one interface configured to receive the routing protocol message that includes the service discovery information; and
a control unit configured to automatically, based on the service discovery information, configure a path through the network from the network device operated by the user requesting the at least one of the services described by the service discovery information and provided by the service plane to the router,
wherein the at least one interface of the router receives traffic via the path, and
wherein the service plane is configured to apply the one or more services to the traffic received via the path.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
wherein transmitting the routing protocol message comprises transmitting the routing protocol message to an orchestrating network device so as to enable the orchestrating network device to discover the one or more services provided by the service plane based on the service discovery information included in the routing protocol message, and
wherein the method further comprises receiving forwarding information from the orchestrating network device configuring the path within the forwarding plane and associating the path with the at least one of the services to be applied to the traffic received via the path.
8. The method of
generating a label mapping message that maps a multi-protocol label switching (MPLS) label to each of the one or more services; and
transmitting the label mapping message to enable network devices of the network to associate traffic sent via paths to the router with the one or more services,
wherein the traffic received via the path includes packets that each comprise a label stack having at least one of the MPLS labels associated with the one or more services, and
wherein applying the one or more services comprises applying the one or more services to the traffic received via the path based on the MPLS labels specified in the label stack of the packets.
9. The method of
11. The router of
12. The router of
13. The router of
14. The router of
15. The router of
16. The router of
17. The router of
wherein the control plane further includes a multi-protocol label switching (MPLS) protocol module that generates a label mapping message that maps an MPLS label to each of the one or more services,
wherein the at least one interface transmits the label mapping message to enable network devices of the network to associate traffic sent via paths to the router with the one or more services,
wherein the traffic received via the path includes packets that each comprise a label stack having at least one of the MPLS labels associated with the one or more services, and wherein the control unit includes one or more service modules that apply the one or more services to the traffic received via the path based on the MPLS labels specified in the label stack of the packets.
18. The router of
21. The method of
22. The method of
23. The method of
24. The method of
25. The method of
26. The method of
wherein the network device comprises an orchestrating network device,
wherein configuring a path through the network comprises:
generating forwarding information for the ingress network device associating the network traffic with the path;
installing the forwarding information in the ingress network device so that the network traffic is forwarded via the path to the router;
generating forwarding information for the router associating the path with the at least one of the services to be applied to the network traffic received via the path; and
installing the forwarding information in the forwarding plane of the router so that the forwarding plane redirects the network traffic to enable the service plane to apply the at least one service to the network traffic received via the path.
27. The method of
wherein configuring the path comprises configuring the path automatically in response to receiving the request from the customer.
29. The method of
wherein configuring the path comprises configuring the ingress network device of the path to push at least one of the MPLS labels onto packets of the network traffic sent via the path so as to associate the packets with the at least one of the services.
30. The method of
32. The network device of
33. The network device of
34. The network device of
35. The network device of
36. The network device of
37. The network device of
wherein the network device comprises an orchestrating network device,
wherein the control unit, in order to configure the path through the network, generates forwarding information for the ingress network device associating the network traffic with the path, installs the forwarding information in the ingress network device so that the network traffic is forwarded via the path to the router, generates forwarding information for the router associating the path with the at least one of the services to be applied to the network traffic received via the path and installs the forwarding information in the forwarding plane of the router so that the forwarding plane redirects the network traffic to enable the service plane to apply the at least one service to the network traffic received via the path.
38. The network device of
wherein the control unit configures the path automatically in response to receiving the request from the customer.
39. The network device of
40. The network device of
wherein the control unit, in order to configure the path, configures the ingress network device of the path to push at least one of the MPLS labels onto packets of the network traffic sent via the path so as to associate the packets with the at least one of the services.
41. The network device of
|
The invention relates to computer networks and, more particularly, to service discovery within computer networks.
Recently, network devices that form computer networks have been adapted to enable a form of networking referred to as “software-defined networking.” In software-defined networking, the forwarding plane of a network switch, router or other network device is made available via a communication protocol such that this forwarding plane may be configured via the communication protocol rather than a routing protocol. In many implementations, the network devices may execute software to enable communications with other network devices in the network via this communication protocol so as to configure paths through the network. One example of a communication protocol that facilitates software-defined networking is the so-called “OpenFlow” communication protocol. OpenFlow is an open standard that allows users (e.g., researchers) to install software on the routers to run experimental or proprietary protocols to control packet routing or switching within a network.
Software-controlled path setup may then orchestrate the configuration and deployment of paths on an as-needed basis to suit a particular service. To illustrate, orchestrating software that controls path setup may be manually configured with information identifying a network address translation (NAT) service operated by a first network device (service node). A second network device that does not provide a NAT service may communicate with the orchestrating software, learn of this NAT service and subscribe to this service, whereupon the orchestrating software may configure a path through the network from the second network device to the first network device. The second network device may then push traffic requiring the NAT service through the path, where the first network device may apply the NAT service to the traffic. When establishing the path, the orchestrating software may configure one or more filters on the second network device controlling admission of network traffic into the path. These paths, which are engineered for a particular service (i.e., the NAT service in this example) and have filters controlling admission to the path, may be referred to as “service engineered paths” (or “SEPs”). In this respect, the orchestrating software may define these SEPs to suit a particular service in terms of defining filters tailored for that service.
While SEPs may provide for service sharing and enable other network devices to forward traffic via a path that meets the needs of the service application (e.g., in terms of quality of service (QoS) provided, bandwidth, etc.), network administrators must often statically configure or otherwise specify the location of the service and various parameters associated with the service. Thus, when deploying new services within a network, the network administrator must update the database of service-related information used by the orchestrating software so that the orchestrating software may configure SEPs through the network directing traffic to these services. Manual entry of this data is both time consuming and prone to human error. Moreover, network administrators may not update this data as various service parameters change over time, potentially resulting in inefficient utilization of the service.
In general, the disclosure describes techniques for automated discovery of services in computer networks. Rather than require network administrators to manually specify the services and service-related parameters, the techniques described in this disclosure may enable a service node (which may refer to a device that provides one or more services, such as a network address translation (NAT) service, a firewall service, a secure socket layer (SSL) service, or any other service) to advertise, within a computer network, services provided by the service node as well as service-related parameters associated with those services. The service node may advertise these services in accordance with the techniques described in this disclosure using, as one example, a type-length-value (TLV) field of a routing protocol message to store what may be referred to as “service discovery information.”
The orchestrating software of a software-defined network in which the service node operates may receive these routing protocol messages and automatically update a service directory (which may refer to a database associated with the orchestrating software that stores information concerning services) to include the service discovery information. The orchestrating software, after “learning” of this service, may then configure one or more service engineered paths (SEPs) through the software-defined network that utilize the advertised service. Moreover, routers and other network devices may receive these routing protocol messages that include the service discovery information and independently configure paths through the network that utilize the advertised service. In this way, the techniques may avoid manual entry of service related information, and thereby potentially reduce administrative burdens and the occurrence of incorrectly entered service data.
In one embodiment, a method comprises determining, with a service node of a network, one or more services provided by the service node, generating, with the service node, a routing protocol message that includes service discovery information related to the one or more services provided by the service node and transmitting, with the service node, the routing protocol message to enable network devices of the network to discover the one or more services to be provided by the service node based on the service discovery information included in the routing protocol message. The method further comprises receiving traffic with the service node via a path established based on the service discovery information included in the routing protocol message and configured so that the service node applies at least one of the services to the traffic received via the path and applying, with the service node, the one or more services to the traffic received via the path.
In another embodiment, a service node of a network comprises a control unit configured to determine one or more services provided by the service node and generate a routing protocol message that includes service discovery information related to the one or more services provided by the service node. The service node also comprises at least one interface configured to transmit the routing protocol message to enable network devices of the network to discover the one or more services provided by the service node based on the service discovery information included in the routing protocol message and receive traffic with the service node via a path established based on the service discovery information included in the routing protocol message and configured so that the service node applies at least one of the services to the traffic received via the path. The control unit is further configured to apply the one or more services to the traffic received via the path.
In another embodiment, a non-transitory computer-readable medium comprises instructions that, when executed, cause one or more processors of a service node in a network to determine one or more services provided by the service node, generate a routing protocol message that includes service discovery information related to the one or more services provided by the service node, transmit the routing protocol message to enable network devices of the network to discover the one or more services provided by the service node based on the service discovery information included in the routing protocol message, receive traffic via a path established based on the service discovery information included in the routing protocol message and configured so that the service node applies at least one of the services to the traffic received via the path and apply the one or more services to the traffic received via the path.
In another embodiment, a method comprises receiving, with a network device of a network, a routing protocol message that includes service discovery information describing one or more services provided by a service node in the network and configuring, with the network device, a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service node such that the service node applies the at least one of the services to network traffic received via the path.
In another embodiment, a network device of a network comprises at least one interface configured to receive a routing protocol message that includes service discovery information describing one or more services provided by a service node in the network and a control unit configured to configure a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service node such that the service node applies the at least one of the services to network traffic received via the path.
In another embodiment, a non-transitory computer-readable medium comprises instructions that, when executed, cause one or more processors of a network device included within a network to receive a routing protocol message that includes service discovery information describing one or more services provided by a service node in the network and configure a path through the network from an ingress network device requesting at least one of the services described by the service discovery information to the service node such that the service node applies the at least one of the services to network traffic received via the path.
In another embodiment, a network system comprises a service node that includes a control unit configured to determine one or more services provided by the service node and generate a routing protocol message that includes service discovery information related to the one or more services provided by the service node and at least one interface configured to transmit the routing protocol message. The network system also includes an orchestrating network device and a network device operated by a user that interfaces with the orchestrating network device to request at least one of the services provided by the service node. The orchestrating network device comprises at least one interface configured to receive the routing protocol message that includes the service discovery information and a control unit configured to automatically, based on the service discovery information, configure a path through the network from the network device operated by the user requesting the at least one of the services described by the service discovery information and provided by the service node to the service node. The at least one interface of the service node receives traffic via the path, and the control unit of the service node is further configured to apply the one or more services to the traffic received via the path.
The details of one or more embodiments of the techniques are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques will be apparent from the description and drawings, and from the claims.
Enterprise network 10 may represent a network owned and operated by an enterprise or business. Service node 13 may represent a service card or so-called “service blade” installed within or otherwise coupled to a network device (such as network switch 6B), a virtual network device executing within a network device, or a separate physical network device. In any event, service node 13 may apply one or more services to network traffic, such as Network Address Translation (NAT) or other firewall services, intrusion detection and prevention (IDP) services, anti-virus services, anti-malware services, parental control services, or any other type of service. While described with respect to enterprise network 10 establishing a path to service node 13, the techniques may generally be applied with respect to any two network entities, such as networks, service nodes, devices, etc., and should not be limited to the examples described in this disclosure.
In any event, a base network layer of network 3 (or “base network”) includes network switches 6A, 6B (collectively, “network switches 6”) arranged in a physical topology. Network switches 6 receive and forward packet data units (PDUs) for network flows according to forwarding information programmed into the switches by an administrator or external entity (e.g., overlay controller 14 or multi-topology path computation element 8) and/or according to forwarding information learned by the switches, whether by operation of one or more protocols (e.g., interior gateway protocols (IGPs)) or by recording information learned during PDU forwarding. Each of network switches 6 may represent a router, a layer three (“L3”) switch, a layer two (“L2”) switch, an L2/L3 switch, or another network device that switches traffic according to forwarding information. Accordingly, PDUs forwarded by network switches 6 may include, for example, L3 network packets (e.g., Internet Protocol (IP) packets) and/or L2 packets (e.g., Ethernet datagrams or Asynchronous Transfer Mode (ATM) cells). PDUs may be unicast, multicast, anycast, and/or broadcast.
An overlay network layer of network 3 includes overlay switches 12A, 12B (collectively, “overlay switches 12”) arranged in a virtual topology “over” a physical topology defined by network switches 6. For example, each of network switches 6 may include a data plane and typical routing and/or switching protocols for controlling packet forwarding, but may also expose the data plane to other software installed on the device to control packet forwarding within this “overlay network” using proprietary or third-party protocols. Individual links of the virtual topology of the overlay network (or “overlay links”) may be established paths through the base network and/or physical links connecting overlay switches 12. The overlay network may represent a virtual private network (VPN), an OpenFlow network consisting of one or more OpenFlow switches, or an application-layer network with selection functionality built-in to endpoint devices, for example. Accordingly, each of overlay switches 12 may represent a router or routing instance (e.g., a virtual routing and forwarding (VRF) instance); a Virtual Private Local Area Network (LAN) Service (VPLS) instance; a dedicated L2, L3, or L2/L3 switch; or a virtual or “soft” switch (e.g., an OpenFlow switch) implemented by a router or by a dedicated switch, for example. Overlay switch 12A, for instance, represents a dedicated overlay switch, i.e., a physical switch that performs switching and/or routing in response to control plane software configured to control the data plane of the switch. Overlay switch 12B, in contrast, is implemented by network switch 6A and may represent, for instance, a soft switch that utilizes the underlying data plane of network switch 6A. Network 3 may include multiple overlay network layers of different or similar types (e.g., multiple VPNs and/or OpenFlow networks).
Topology server 4 receives topology information from network switches 6 for the base network of multi-topology network 3. For example, topology server 4 may execute one or more IGPs or Exterior Gateway Protocols (e.g., the Border Gateway Protocol (BGP)) to listen to routing protocol advertisements sent by network switches 6. Topology server 4 collects and stores the base network topology information, then provides the base network topology information to multi-topology path computation element (PCE) 8 in base topology update messages 22. Topology information may include traffic engineering information for the network links, such as the links' administrative attributes and bandwidth at various priority levels available for use by label-switched paths (LSPs). In some examples, network switches 6 may send topology update messages to topology server 4 that specify L2 link information for L2 links connecting the network switches. In some examples, topology server 4 is a component of PCE 8.
Overlay controller 14 receives topology information for the overlay network of multi-topology network 3 in topology update messages sent by overlay switches 12 in respective communication sessions 30. Topology update messages sent by overlay switches 12 may include virtual and physical switch port information, PDUs and associated metadata specifying respective ports and/or interfaces on which PDUs are received. In some examples, overlay controller 14 is a routing protocol listener that executes one or more routing protocols to receive routing protocol advertisements sent by overlay switches 12. Such routing protocol advertisements may be associated with one or more VRFs, for instance. Overlay controller 14 collects and stores the overlay topology information, then provides the overlay topology information to PCE 8 in overlay topology update messages 26. In some examples, overlay controller 14 is a component of PCE 8.
Network switches 6 may be configured to or otherwise be directed to establish paths through the base network of multi-topology network 3. Such paths may include, for instance, IP tunnels such as Generic Route Encapsulation (GRE) tunnels, General Packet Radio Service (GPRS) Tunneling Protocol (GTP) tunnels, LSPs, or a simple route through the base network or a VPN (identified by a static route with a route target, for instance). Network switches 6 provide path status information for paths established through the base network of multi-topology network 3 to PCE 8 in communication sessions 28. Path status (alternatively, “path state” or “LSP state”) information may include descriptors for existing, operational paths as well as indications that an established path or path setup operation has failed. For example, network switch 6A may attempt to establish an LSP using a reservation protocol such as Resource reSerVation Protocol (RSVP) but fail due to insufficient network resources along a path specified by an Explicit Route Object (ERO). As a result, network switch 6A may provide an indication that the path setup operation failed to PCE 8 in a communication session 28. PCE 8 receives path status information and adds established paths through the base network of network 3 as links in the overlay network topology.
PCE 8 presents an interface by which clients 18A-18N (collectively, “clients 18”) may request a dedicated path, often for a specific time, between any combination of network entities, such as enterprise network 10 and service node 13. Generally, clients 18 may request paths that conform to bandwidth, host, and time path parameters and quality of service (QoS) path request parameters, such as latency and jitter, and may further specify additional associated classifiers to identify a flow between the specified endpoints. Example flow classifiers (or “parameters”) are provided below. Moreover, PCE 8 may present an interface by which clients 18 may request services to be applied to network traffic originating from networks owned and operated by respective ones of clients 18.
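By way of illustration only, the following Python sketch shows one possible shape for such a path request carrying endpoint, bandwidth, time, QoS, and flow-classifier parameters; the class and field names (PathRequest, FlowClassifier, bandwidth_mbps, and so on) are assumptions for this example and do not reflect an actual interface exposed by PCE 8.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowClassifier:
    # Any subset of these fields may be used to identify the flow.
    src_prefix: Optional[str] = None
    dst_prefix: Optional[str] = None
    vlan: Optional[int] = None
    protocol: Optional[str] = None   # e.g., "tcp" or "udp"
    dst_port: Optional[int] = None

@dataclass
class PathRequest:
    src_endpoint: str                 # e.g., attachment point of enterprise network 10
    dst_endpoint: str                 # e.g., attachment point of service node 13
    bandwidth_mbps: float             # requested dedicated bandwidth
    start_time: str                   # ISO 8601; paths may be scheduled for a window
    end_time: str
    max_latency_ms: Optional[float] = None   # QoS path request parameters
    max_jitter_ms: Optional[float] = None
    classifiers: list[FlowClassifier] = field(default_factory=list)

# A client such as client 18A might request a scheduled path for web traffic only:
request = PathRequest(
    src_endpoint="overlay-switch-12B",
    dst_endpoint="service-node-13",
    bandwidth_mbps=100.0,
    start_time="2024-01-01T00:00:00Z",
    end_time="2024-01-02T00:00:00Z",
    max_latency_ms=50.0,
    classifiers=[FlowClassifier(src_prefix="10.1.0.0/16", protocol="tcp", dst_port=80)],
)
print(request)
```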
PCE 8 uses base network topology information for network 3 received from topology server 4, overlay network topology information for network 3 received from overlay controller 14, and path status information received from network switches 6 to compute and schedule paths between network entities through network 3 that satisfy the parameters for the paths requested by clients 18. PCE 8 may receive multiple path requests from clients 18 that overlap in time. PCE 8 reconciles these requests by scheduling corresponding paths for the path requests that traverse different parts of network 3 and increase capacity utilization, for example, or by denying some of the path requests.
At the scheduled time for a scheduled path, PCE 8 installs forwarding information to network 3 nodes (e.g., overlay switches 12 and network switches 6) to cause the nodes to forward traffic in a manner that satisfies the requested path parameters. A requested path may traverse either or both domains of network 3. That is, a requested path may traverse either or both of the base network and overlay network of multi-topology network 3. For example, a requested path for traffic may traverse only the base network domain as a simple network route, for instance, from network switch 6A to network switch 6B. However, some paths may traverse multiple domains. For example, any requested path for traffic between a network entity coupled to overlay switch 12B, such as enterprise network 10, and a network entity coupled to network switch 6B, such as service node 13, first traverses the overlay network domain and then traverses the base network domain.
PCE 8 installs forwarding information to overlay switches 12 using overlay controller 14. Overlay controller 14 presents a programming interface by which PCE 8 may add, delete, and modify forwarding information in overlay switches 12. Forwarding information of overlay switches 12 may include a flow table having one or more entries that specify field values for matching PDU properties and a set of forwarding actions to apply to matching PDUs. A set of one or more PDUs that match a particular flow entry represents a flow. Flows may be broadly classified using any parameter of a PDU, such as source and destination MAC and IP addresses, a Virtual Local Area Network (VLAN) tag, transport layer information, a Multiprotocol Label Switching (MPLS) or Generalized MPLS (GMPLS) label, and an ingress port of a network device receiving the flow. For example, a flow may be all PDUs transmitted in a Transmission Control Protocol (TCP) connection, all PDUs sourced by a particular MAC address or IP address, all PDUs having the same VLAN tag, or all PDUs received at the same switch port.
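The following is a minimal Python sketch of a flow-table entry of the kind described above, with wildcardable match fields and an ordered action list; the structure, field names, and action strings are illustrative assumptions rather than the actual forwarding state format of overlay switches 12.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Match:
    # Any field left as None is a wildcard; a flow may be classified by any subset of fields.
    in_port: Optional[int] = None
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    vlan_tag: Optional[int] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    mpls_label: Optional[int] = None

@dataclass
class FlowEntry:
    priority: int
    match: Match
    actions: list[str] = field(default_factory=list)   # applied in order to matching PDUs

# Example: PDUs arriving on port 3 from enterprise network 10 form one flow; the entry
# pushes an MPLS label and forwards out port 7 toward the next hop of an established path.
entry = FlowEntry(
    priority=100,
    match=Match(in_port=3, src_ip="10.1.0.0/16"),
    actions=["push_mpls:299776", "output:7"],
)
print(entry)
```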
PCE 8 invokes the programming interface of overlay controller 14 by sending overlay network path setup messages 24 directing overlay controller 14 to establish paths in the overlay network of network 3 and/or steer flows from hosts onto established paths. Overlay controller 14 responds to overlay network path setup messages 24 by installing, to overlay switches 12 using communication sessions 30, forwarding information that implements the paths and/or directs flows received from hosts onto established paths.
PCE 8 installs forwarding information to network switches 6 using communication sessions 28. Each of network switches 6 may present a programming interface in the form of a management interface, configuration interface, and/or a path computation client (PCC). PCE 8 may invoke the programming interface of network switches 6 to configure a tunnel (e.g., an LSP), install static routes, configure a VPLS instance, configure an Integrated Routing and Bridging (IRB) interface, and to otherwise configure network switches 6 to forward packet flows in a specified manner. In some instances, PCE 8 directs one or more of network switches 6 to signal a traffic engineered LSP (TE LSP) through the base network of network 3 to establish a path. In this way, PCE 8 may program a scheduled path through network 3 by invoking a programming interface of only the head network device for the path.
PCE 8 may, in some instances, remove a path, invoking the programming interfaces of network switches 6 and overlay switches 12 to remove forwarding information implementing the requested paths. In this way, PCE 8 frees resources for future paths.
Because PCE 8 has an enhanced view of the current state of network 3 at both the overlay network layer and the base network layer, PCE 8 may identify paths that are not visible to any one of network switches 6 or overlay switches 12 having a more limited view. PCE 8 may additionally, by virtue of having access to this enhanced view, steer traffic to underutilized portions of network 3 to increase capacity utilization of network 3, including to those service nodes that are underutilized. In addition, centralizing the path computation and establishment with PCE 8 may allow network operators to reconcile multiple, possibly conflicting application path requests and may reduce first-in-time, first-in-right access to network resources in favor of explicit, centralized prioritization of application requests for dedicated paths. More information regarding path computation and establishment, PCEs, OpenFlow and other aspects related to the formation of paths within a network by external orchestrating or coordinating devices may be found in U.S. patent application Ser. No. 13/339,983, entitled “MULTI-TOPOLOGY RESOURCE SCHEDULING WITHIN A COMPUTER NETWORK,” filed Dec. 29, 2011, the entire contents of which are hereby incorporated by reference.
To illustrate an example path configured for enterprise network 10, client 18A, which may represent a network administrator of enterprise network 10 or of a service provider network to which enterprise network 10 subscribes to gain access to a public network (e.g., the Internet), may establish a path 11 to service node 13 in response to enterprise network 10 purchasing URL filtering services for enterprise network 10. Service node 13 may, in this example, represent a service node that performs a URL filtering service to filter uniform resource locators (URLs) based on the content associated with requested URLs. PCE 8 may orchestrate path 11 by first computing path 11 from overlay switch 12B through network switch 6B to service node 13 and then configuring overlay switch 12B and network switch 6B to support delivery of traffic from enterprise network 10 to service node 13 via path 11. PCE 8 may install filters that have admit and/or deny criteria for traffic entering path 11 via overlay switch 12B. In this example, PCE 8 may install filters associated with path 11 that admit and/or deny only traffic from enterprise network 10. The association of filters to control admission of traffic to path 11 effectively limits admission to traffic requiring application of the URL filtering service performed by service node 13. As a result, these paths may be referred to as “service engineered paths” (commonly abbreviated as “SEPs”) in the sense that these paths are engineered to steer traffic in an efficient manner so that one or more services may be applied to this traffic. In this sense, path 11, in this example, is engineered to provide the URL filtering service to traffic originating from enterprise network 10.
In this sense, PCE 8 acts as an orchestrating device and/or software driving network 3, which may be characterized as a “software-defined network” in the sense that PCE 8 performs path computation and directs configuration of the computed paths or SEPs rather than switches 6 and 12 themselves. That is, network 3 is configured externally via software executing on a device removed from network 3, i.e., PCE 8 in this example. In this respect, the orchestrating software may define these SEPs to suit a particular service in terms of defining filters tailored for that service. In the previous example, PCE 8 configured SEP 11 to promote utilization of service node 13. Typically, in order to provide a service, switches or other network devices included a service node directly coupled to the switch or other network device that also coupled to the network entity requiring the service application. Thus, in the example of
While SEPs may provide for service sharing and enable other network devices to forward traffic via a path that meets the needs of the service application (e.g., in terms of quality of service (QoS) provided, bandwidth, etc.), network administrators must often statically configure or otherwise specify the location of the service and various parameters associated with the service. Thus, when deploying new services within a network, the network administrator must update the database of service-related information used by the orchestrating software so that the orchestrating software may configure SEPs through the network directing traffic to these services. Manual entry of this data is both time consuming and prone to human error. Moreover, network administrators may not update this data as various service parameters change over time, potentially resulting in inefficient utilization of the service.
In accordance with the techniques described in this disclosure, rather than require network administrators to manually specify the services and service-related parameters, the techniques may enable service node 13 to advertise, within network 3, services provided by service node 13 as well as service-related parameters associated with those services. Service node 13 may advertise these services in accordance with the techniques described in this disclosure using, as one example, a type-length-value (TLV) field of a routing protocol message 15 to store what may be referred to as “service discovery information.” Service node 13 may in some examples flood this routing protocol message 15 throughout network 3, where topology server 4 may receive this message 15 and parse the service discovery information from message 15.
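A hypothetical encoding of such a TLV is sketched below in Python; the type code, the value layout (a service name followed by NUL-separated key=value parameters), and the helper names are assumptions for illustration, since the disclosure does not fix a wire format for the service discovery information.

```python
import struct

# Hypothetical TLV type code for "service discovery information"; the disclosure does not
# fix a code point, so 0x8001 here is purely illustrative.
SERVICE_DISCOVERY_TLV_TYPE = 0x8001

def encode_service_discovery_tlv(service_name: str, parameters: dict[str, str]) -> bytes:
    """Encode one service and its service-related parameters as a type-length-value field."""
    # Value: UTF-8 service name, then "key=value" parameter strings, NUL separated.
    parts = [service_name] + [f"{k}={v}" for k, v in sorted(parameters.items())]
    value = "\x00".join(parts).encode("utf-8")
    # Type (2 bytes) and length (2 bytes) precede the value, in network byte order.
    return struct.pack("!HH", SERVICE_DISCOVERY_TLV_TYPE, len(value)) + value

def decode_service_discovery_tlv(data: bytes) -> tuple[str, dict[str, str]]:
    """Parse the service name and parameters back out of an encoded TLV."""
    tlv_type, length = struct.unpack("!HH", data[:4])
    assert tlv_type == SERVICE_DISCOVERY_TLV_TYPE
    parts = data[4:4 + length].decode("utf-8").split("\x00")
    return parts[0], dict(p.split("=", 1) for p in parts[1:])

# Service node 13 might advertise a URL filtering service and its parameters:
tlv = encode_service_discovery_tlv("url-filtering", {"capacity-mbps": "1000", "node": "13"})
print(decode_service_discovery_tlv(tlv))
```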
Service node 13 may also, in some instances, generate a label mapping message 17 mapping each service (or instance of the same service) to a different label and forward this label mapping message 17 to network 3. This label mapping message 17 may enable PCE 8 to establish path 11 such that overlay switch 12B pushes this label onto the label stack before pushing the label associated with path 11. This “inner label” associating the traffic with the service may enable service node 13 to determine with which service the traffic is associated.
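The label mapping could, for example, be modeled as below; the label allocation range and the message structure are illustrative assumptions, not the actual contents of label mapping message 17.

```python
from dataclasses import dataclass
import itertools

# Labels 0-15 are reserved by MPLS, so hand out service labels from a private range.
_label_allocator = itertools.count(1000)

@dataclass
class ServiceLabelMapping:
    service: str    # service name or instance identifier
    label: int      # MPLS label that ingress devices push as the inner label

def build_label_mapping_message(services: list[str]) -> list[ServiceLabelMapping]:
    """Map each advertised service (or service instance) to its own label."""
    return [ServiceLabelMapping(service=s, label=next(_label_allocator)) for s in services]

mappings = build_label_mapping_message(["url-filtering", "nat", "idp"])
for m in mappings:
    print(f"service {m.service!r} -> inner label {m.label}")
```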
The orchestrating software, e.g., PCE 8, of software-defined network 3 may receive this service discovery information via topology update messages 22 sent by topology server 4 and automatically update a service directory (which may refer to a database associated with the orchestrating software that stores information concerning services) to include the service discovery information (that is, the information describing the services and service parameters of those services provided by service node 13). PCE 8 may also receive this label mapping message via topology server 4 and update the appropriate entries in the service directory to associate the various services with the labels. PCE 8, after “learning” of the services, may then receive requests from clients 18 requesting these services and configure one or more service engineered paths (SEPs), such as path 11, through software-defined network 3 that utilize the advertised service provided by service node 13.
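A minimal sketch of such a service directory update is shown below, assuming a simple in-memory mapping keyed by service name; the schema, node names, and function names are hypothetical rather than the actual structures maintained by PCE 8.

```python
# Service directory: service name -> the nodes advertising it, their parameters and labels.
service_directory: dict[str, dict] = {}

def on_service_advertisement(node: str, service: str, parameters: dict) -> None:
    """Record (or refresh) a service learned from a routing protocol message."""
    entry = service_directory.setdefault(service, {"instances": {}})
    entry["instances"].setdefault(node, {"parameters": {}, "label": None})
    entry["instances"][node]["parameters"] = parameters

def on_label_mapping(node: str, service: str, label: int) -> None:
    """Associate the advertised service instance with its inner MPLS label."""
    entry = service_directory.setdefault(service, {"instances": {}})
    entry["instances"].setdefault(node, {"parameters": {}, "label": None})["label"] = label

def find_service(service: str) -> list[str]:
    """Return the nodes known to offer the requested service."""
    return list(service_directory.get(service, {}).get("instances", {}))

on_service_advertisement("service-node-13", "url-filtering", {"capacity-mbps": "1000"})
on_label_mapping("service-node-13", "url-filtering", 1000)
print(find_service("url-filtering"))   # ['service-node-13']
```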
For example, in the above illustration where client 18A requests a URL filtering service to be applied to all traffic originating from enterprise network 10, PCE 8 may access the service directory and identify those service nodes in network 3 that provide a URL filtering service. Assuming the service discovery information sent by service node 13 via routing protocol message 15 identifies a URL filtering service, PCE 8 may select service node 13 as the service node that services traffic from enterprise network 10. PCE 8 may then interface with overlay switch 12B, network switch 6B and service node 13 to configure path 11 through network 3. PCE 8 may also install one or more filters in overlay switch 12B to restrict access to path 11 such that only traffic originating from enterprise network 10 may enter path 11. These filters may also specify actions, such as pushing labels onto the label stack appended to each packet admitted to path 11. These filters may specify that the inner service-identifying label is pushed onto the label stack first and the outer label associating the traffic with path 11 is pushed onto the label stack second. In this way, service node 13 may automatically (meaning without requiring any user or administrator intervention) advertise service discovery information for consumption by an orchestrating device that configures paths through network 3.
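The sketch below illustrates the ordering just described, with the service label pushed first (so it becomes the inner label) and the path label pushed second (so it becomes the outer label); the filter representation, label values, and next-hop name are illustrative assumptions.

```python
def build_ingress_filter(admit_prefix: str, service_label: int, path_label: int) -> dict:
    """Build a filter that admits only matching traffic and appends the label stack.

    The service label is pushed first so that it ends up as the inner label; the path
    label is pushed second and becomes the outer (topmost) label used to forward the
    packet along the path.
    """
    return {
        "match": {"src_prefix": admit_prefix},   # admit only traffic from enterprise network 10
        "actions": [
            {"push_label": service_label},       # inner label, identifies the requested service
            {"push_label": path_label},          # outer label, associates the traffic with path 11
            {"output": "path-11-next-hop"},      # hypothetical next-hop identifier
        ],
    }

# Admitted packets leave with the stack [path_label | service_label | payload].
print(build_ingress_filter("10.1.0.0/16", service_label=1000, path_label=299776))
```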
In addition, while not shown in the example of
In operation, service node 13 determines one or more services provided by service node 13. Often, service node 13 performs this determination after being powered-up or otherwise activated or enabled. Alternatively, service node 13 may monitor service modules that perform these services to identify which of the services are activated or otherwise enabled and continually perform this determination, generating the routing protocol message not just upon start-up or power-up, but in response to any changes in the operational status of the one or more services. In any event, service node 13 may generate routing protocol message 15 that includes service discovery information related to the one or more services provided by service node 13 and transmit routing protocol message 15 to enable network devices of network 3 to discover the one or more services to be provided by service node 13 based on the service discovery information included in routing protocol message 15. Service node 13 may also generate and transmit the above noted label mapping message 17.
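One way a service node might implement this continual determination is sketched below as a simple polling loop that re-advertises whenever the set of enabled service modules changes; poll_service_modules and advertise are hypothetical placeholders standing in for the service node's internal interfaces, not functions defined by the disclosure.

```python
import time

def poll_service_modules() -> set[str]:
    """Placeholder: return the set of services whose modules are currently enabled."""
    raise NotImplementedError

def advertise(services: set[str]) -> None:
    """Placeholder: generate and flood a routing protocol message (e.g., message 15)
    carrying service discovery information for the given services."""
    raise NotImplementedError

def service_advertisement_loop(poll_interval_s: float = 5.0) -> None:
    """Advertise at start-up and again whenever the operational status of services changes."""
    last_advertised = None
    while True:
        active = poll_service_modules()
        if active != last_advertised:
            advertise(active)
            last_advertised = active
        time.sleep(poll_interval_s)
```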
In the example of
Service node 13 then receives traffic via path 11 established based on the service discovery information included in routing protocol message 15 and label mapping message 17. Service node 13 then applies the at least one of the services to the traffic received via path 11 based on the inner label (which may be referred to as a service label). In this manner, service node 13 may automatically advertise services provided by service node 13 using service discovery information so as to enable another network device, e.g., PCE 8, to establish paths through network 3 to utilize these services without requiring an administrator of network 3 to manually update a service directory in PCE 8 to include services provided by service node 13.
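A minimal sketch of this label-based dispatch follows, assuming the outer label has already associated the traffic with path 11 and the service table has been populated from label mapping message 17; the data structures and label values are illustrative only.

```python
# Inner service label -> service module callable; populated from label mapping message 17.
service_table = {
    1000: lambda packet: packet,   # stand-in for the URL filtering service module
}

def handle_packet(label_stack: list[int], packet: bytes) -> bytes:
    """Pop the outer path label, then dispatch on the inner service label."""
    outer, inner = label_stack[0], label_stack[1]
    _ = outer                                  # outer label identified path 11; no longer needed
    service = service_table.get(inner)
    if service is None:
        raise KeyError(f"no service bound to label {inner}")
    return service(packet)                     # apply the service, then hand back for forwarding

print(handle_packet([299776, 1000], b"example payload"))
```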
While described above as being performed by an orchestrating device and/or software, the techniques may also be implemented with respect to a more distributed network in which the network devices of the network independently form paths through the network to service node 13. In this respect, the ingress network device to path 11, i.e., overlay switch 12B in this example, may receive routing protocol message 15 (especially when routing protocol message 15 is a link state routing protocol message that is flooded throughout network 3). Overlay switch 12B may then establish path 11 itself using RSVP-TE or another multi-protocol label switching (MPLS) protocol and install filters restricting access to path 11 to traffic originating from enterprise network 10. In signaling path 11, overlay switch 12B may signal that traffic sent via path 11 is to require one or more of the services specified as being provided by service node 13, such that service node 13 configures the switching or forwarding elements to direct this traffic to the appropriate one or more of the services. Consequently, the techniques should not be limited strictly to the examples provided above but may be implemented with respect to other non-software-defined networks.
PCE 8 includes a control unit 40 and a network interface (not shown) to exchange packets with other network devices. Control unit 40 may include one or more processors (not shown in
Clients, such as clients 18 shown in the example of
To compute and schedule paths through a network intelligently, PCE 8 receives topology information describing available resources at multiple layers of the network. Topology server interface 56 (illustrated as “topology server IF 56”) executed by control unit 40 of PCE 8 communicates with a topology server to receive topology information for a base network layer of the network, while overlay controller interface 58 communicates with an overlay controller to receive topology information for an overlay network layer of the network. Topology server interface 56 may include a routing protocol daemon that executes a routing protocol to receive routing protocol advertisements, such as Open Shortest Path First (OSPF) or Intermediate System-to-Intermediate System (IS-IS) link state advertisements (LSAs) or BGP UPDATE messages. Topology server interface 56 may in some instances be a passive listener that neither forwards nor originates routing protocol advertisements.
In this example, topology server interface 56 receives topology information that includes traffic engineering (TE) information. Topology server interface 56 may, for example, execute Intermediate System-to-Intermediate System with TE extensions (IS-IS-TE) or Open Shortest Path First with TE extensions (OSPF-TE) to receive TE information for advertised links. Such TE information includes one or more of the link state, administrative attributes, and metrics such as bandwidth available for use at various LSP priority levels of links connecting routers of the domain. In some instances, topology server interface 56 executes Border Gateway Protocol to receive advertised TE information for inter-AS and other out-of-network links. Additional details regarding executing BGP to receive TE info are found in U.S. patent application Ser. No. 13/110,987, filed May 19, 2011 and entitled “DYNAMICALLY GENERATING APPLICATION-LAYER TRAFFIC OPTIMIZATION PROTOCOL MAPS,” which is incorporated herein by reference in its entirety.
Topology server interface 56 may in some instances receive a digest of topology information collected by a topology server, rather than executing a routing protocol to receive routing protocol advertisements directly. Topology server interface 56 stores base network topology information with TE information in multi-topology traffic engineering database 54 (illustrated as “multi-topology TED 54,” hereinafter “MT TED 54”), which is stored by a computer-readable storage medium of control unit 40 for use in path computation. MT TED 54 is described in further detail below.
Overlay controller interface 58 (illustrated as “overlay controller IF 58”) represents a module that may implement a standardized interface, such as OpenFlow, to receive topology information from an overlay controller, such as an OpenFlow controller, that describes overlay network links connecting overlay switches. In general, overlay network links are not advertised by network switches (e.g., routers) of the base network for the overlay network and so will not be described by topology information received by topology server interface 56. An overlay controller augments the base network topology with overlay network topology links by providing overlay network topology information to overlay controller interface 58, which stores the overlay network topology information to MT TED 54. Overlay controller interface 58 may receive topology information for multiple different overlay networks, including VPNs and/or OpenFlow networks. Different overlay networks may require different instances of overlay controller interface 58 that communicate with network switches of the overlay network or with a topology server, for example, to receive overlay network topology information for respective overlay networks.
Multi-topology traffic engineering database 54 stores topology information for a base network layer and one or more overlay network layers of a network that constitutes a path computation domain for PCE 8. MT TED 54 may organize topology information for respective network layers hierarchically, with the base network topology information supporting the topology information for one or more overlay networks. Paths in a lower-layer topology may appear as links in a higher-layer topology. For example, tunnels (e.g., TE LSPs) created in the base network layer can appear as links in an overlay network TE topology. PCE 8 may then correlate overlay network links with paths established in the base network layer to efficiently compute paths that cross multiple overlay topologies. MT TED 54 may include one or more link-state databases (LSDBs), where link and node data is received in routing protocol advertisements, received from a topology server, and/or discovered by link-layer entities such as an overlay controller and then provided to PCE 8 via overlay controller interface 58. In some instances, an operator may configure traffic engineering or other topology information within MT TED 54 via operator interface 50.
Topology server interface 56 may also receive, from a topology server or by execution of routing protocols to receive routing protocol advertisements that include reachability information, endpoint information that describes endpoints reachable by specified nodes in any of the network topologies. Topology server interface 56 may receive endpoint information for a base layer of the network as well as for one or more services (e.g., VPNs) provided by the network that may correspond to overlay networks of the network. Endpoint information may associate network address prefixes with nodes of the multi-topology network layers, where network address prefixes may be, e.g., IPv4 or IPv6. For example, topology server interface 56 may receive a BGP UPDATE message advertising a particular subnet as reachable from a particular node of the base network. As another example, topology server interface 56 may receive an Application-Layer Traffic Optimization (ALTO) map that includes PIDs associating respective nodes of a multi-topology network layer with network address prefixes reachable from the nodes. Endpoints that have network addresses that are members of the subnet are therefore reachable from the node, and PCE 8 may calculate paths for those endpoints to terminate (i.e., begin or end) at the node. Topology server interface 56 stores endpoint information received for a layer to a corresponding one of endpoint databases 70A-70K (illustrated as “endpoint DB 70A-70K” and collectively referred to as “endpoint databases 70”), where K refers to a number of layers of the multi-topology network that constitutes a path computation domain for PCE 8. Some of endpoint databases 70 may therefore be associated with respective service instances, e.g., respective VPNs that constitute overlay network layers of a multi-topology network. PCE 8 may therefore use endpoint databases 70 to locate and validate endpoints specified in path requests received from clients.
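A minimal sketch of one of endpoint databases 70 is shown below, assuming a simple prefix-to-node mapping with longest-prefix resolution; the structure and the node names are illustrative, not the actual database layout used by PCE 8.

```python
import ipaddress
from typing import Optional

# One endpoint database per network layer: advertised prefix -> node that can reach it.
endpoint_db = {
    ipaddress.ip_network("10.1.0.0/16"): "overlay-switch-12B",
    ipaddress.ip_network("192.0.2.0/24"): "network-switch-6B",
}

def resolve_endpoint(address: str) -> Optional[str]:
    """Return the node at which a path for this endpoint may terminate, preferring the
    most specific (longest) matching prefix, or None if the endpoint is unknown."""
    addr = ipaddress.ip_address(address)
    matches = [(net, node) for net, node in endpoint_db.items() if addr in net]
    if not matches:
        return None
    return max(matches, key=lambda item: item[0].prefixlen)[1]

print(resolve_endpoint("10.1.2.3"))     # overlay-switch-12B
print(resolve_endpoint("203.0.113.9"))  # None -> the path request would be rejected
```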
Each of service path engines 52A-52K (collectively, “SPEs 52”) computes requested paths through a layer of the multi-topology network with which it is associated and for which it is responsible. Control unit 40 may execute multiple SPEs 52 concurrently, e.g., as separate processes. Each of SPEs 52 is associated with a corresponding one of generated path databases 46A-46K (illustrated as “generated path DB 46A-46K” and collectively referred to as “generated path databases 46”). Path manager 64 dequeues path requests from path request queue 72 and assigns path requests to SPEs 52 based on the layer of the multi-topology network in which the endpoints reside, as determined by path manager 64 from endpoint databases 70. That is, endpoints reachable by layers of a multi-topology network that is a path computation domain for PCE 8 are stored by at least one of endpoint databases 70, and path manager 64 determines the one or more endpoint databases 70 that include endpoints specified for a dequeued path request.
Paths are unidirectional. If a client requests a bidirectional path, path manager 64 triggers two path requests for the requested path—one for each direction. In some cases, a path may cross multiple layers of the network, e.g., at a gateway to the base layer that is implemented by one of the overlay network nodes or at a network node that participates in multiple overlay networks. In such cases, multiple SPEs 52 may cooperate to compute segments of the multi-layer path that path manager 64 stitches together at the gateway. Upon computing paths, SPEs 52 schedule the paths by storing the paths to respective generated path databases 46. A scheduled path stored in one of generated path databases 46 includes path information used by path manager 64 to establish the path in the network and may include scheduling information used by scheduler 68 to trigger path manager to establish the path. As described in further detail below, path scheduling may require locking generated path databases 46 to perform path validation prior to committing the path.
When servicing a path request received from path manager 64, an SPE 52 may initially validate the request by determining from endpoint databases 70 that the endpoints for the requested path, whether expressed as logical interfaces or network addresses, are known to PCE 8, i.e., exist within the path computation domain of PCE 8. SPE 52 may additionally validate flow classifiers to ensure that the flow classifiers specified for a requested path exist. If initial validation fails for either/both of these reasons, SPE 52 rejects the requested path and path manager 64 sends a path rejection message detailing the reasons to the requesting client via client interface 74.
To compute a requested path at a layer of a multi-topology network, a service path engine 52 for the layer uses MT TED 54 and the corresponding one of generated path databases 46 for the layer to determine whether there exists a path in the layer that satisfies the TE specifications for the requested path for the duration of the requested time. SPEs 52 may use the Dijkstra constrained SPF (CSPF) and/or the Bhandari edge-disjoint shortest pair (for determining disjoint main and backup paths) path computation algorithms for identifying satisfactory paths through the multi-topology network. If a satisfactory computed path for the requested path exists, the computing service path engine 52 for the layer re-validates the computed path and, if validation is successful, schedules the computed path by adding the computed path to the one of generated path databases 46 for the layer. In addition, the computing one of SPEs 52 adds the requested path start/complete times to scheduler 68 (if any). A computed path added to one of generated path databases 46 is referred to as a “scheduled” path, until such time as path manager 64 programs the scheduled path into the multi-topology network, whereupon the scheduled path becomes an “active” path. A scheduled or active path is a temporarily dedicated bandwidth channel for the scheduled time in which the path is, or is to become, operational to transport flows.
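By way of a hedged illustration only, the constrained SPF step may be approximated by the following Python sketch, in which the graph representation, the requested_bw constraint, and the link attributes are assumptions rather than details of the disclosure: links that cannot satisfy the constraint are pruned before an ordinary Dijkstra computation over the current topology snapshot.

import heapq

# Hypothetical CSPF sketch: prune links that cannot satisfy the requested
# bandwidth, then run Dijkstra over the remaining topology snapshot.
def cspf(graph, src, dst, requested_bw):
    # graph: {node: [(neighbor, te_metric, available_bw), ...]}
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            break
        if cost > dist.get(node, float("inf")):
            continue
        for nbr, metric, avail_bw in graph.get(node, []):
            if avail_bw < requested_bw:      # constraint: skip unusable edges
                continue
            new_cost = cost + metric
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    if dst not in dist:
        return None                          # no satisfactory path in this layer
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))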
As noted above, generated path databases 46 store path information for scheduled and active paths. Path information may include an ERO that specifies a list of overlay or base network nodes for a TE LSP, routes or tunnels to be configured in one or more overlay network or base network nodes, forwarding information for overlay network nodes specifying respective sets of forwarding actions (which may also be referred to as “forwarding filters” or “filters”) to apply to PDUs inbound to the overlay network nodes, and/or any other information usable by any of topology node interfaces 63 to establish and steer flows onto scheduled paths in a multi-topology network.
SPEs 52 compute scheduled paths based upon a current state (or “snapshot”) of the multi-topology network, as represented by MT TED 54 and generated path databases 46. Because multiple SPEs 52 execute simultaneously, in this example, to compute and schedule paths through the multi-topology network, multiple SPEs 52 may attempt to update generated path databases 46 simultaneously, which could in some cases result in network resource oversubscription and failure by PCE 8 to satisfy requested paths. One of SPEs 52 may therefore, having computed a path, execute a transaction that conforms to the ACID properties (atomicity, consistency, isolation, durability) or another type of atomic transaction to both re-validate and update generated path databases 46 with a scheduled path. That is, the SPE 52 may first lock generated path databases 46 to prevent other SPEs 52 from modifying generated path databases 46. The SPE 52 may then validate the computed path against the locked generated path databases 46 as well as MT TED 54. If the computed path is valid, the SPE 52 updates generated path databases 46 by adding the computed path as a scheduled path. The SPE 52 then unlocks generated path databases 46. In this way, all affected links are updated in the same transaction, and subsequent path validations by other SPEs 52 account for the updates. SPEs 52 may use any suitable data structure locking mechanism, such as monitors, mutexes, or semaphores, to lock generated path databases 46.
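The re-validate-and-commit transaction may be pictured with the following minimal Python sketch, assuming hypothetical validate and commit helpers and a single lock guarding the generated path databases; it is not the actual implementation of SPEs 52.

import threading

generated_path_db_lock = threading.Lock()    # guards the shared generated path databases

# Hypothetical sketch of the transaction: lock, re-validate against the
# locked snapshot, commit only if still valid, then unlock.
def schedule_path(computed_path, generated_path_db, mt_ted, validate, commit):
    with generated_path_db_lock:             # no other SPE may modify the DB here
        if not validate(computed_path, generated_path_db, mt_ted):
            return False                      # caller re-computes against a fresh snapshot
        commit(computed_path, generated_path_db)
        return True

Because all affected links are updated under the same lock, any path validated afterwards sees the committed reservation, which is the property the transaction is meant to preserve.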
If the SPE 52 fails to validate a previously computed path, the SPE 52 attempts to re-compute the path. Upon identifying a satisfactory path against the current snapshot of the multi-topology network, the SPE 52 again attempts to validate the computed path and update generated path databases 46.
In some cases, SPEs 52 may be unable to identify a path through an overlay network with which to satisfy a path request. This failure may be due to any of a number of factors. For example, sufficient network resources with which to satisfy the path request may be unavailable for the scheduled time due, for instance, to previously scheduled paths that include one or more links of the base network layer for any possible paths between the endpoints of the path request at an overlapping time. In this example, path computation fails. In other words, one or more paths between the endpoints of the path request exist, but the paths are already sufficiently subscribed to prevent the additional reservation of adequate resources for the requested path. As another example, SPEs 52 may be unable to identify any paths through an overlay network between the endpoints of the path request because the computation failed due to a missing link in the overlay network. In other words, the computed overlay network graph, after removing unusable edges unable to satisfy path request constraints, includes two disjoint subgraphs of the overlay network. However, in this case, a suitable path may be generated by creating a tunnel through the base layer between the subgraphs for the overlay network.
Where path computation fails because sufficient network resources do not exist at the requested time, the computing SPE 52 may consider policies 48, set by an operator via operator interface 50, that establish priorities among clients of PCE 8 and/or among path request parameters, including bandwidth, hosts, time, and QoS parameters as well as flow classifiers. A policy of policies 48 may prioritize the requested path for which path computation failed over and against one or more scheduled paths of generated path databases 46. In such instances, the computing SPE 52 may preempt one or more of these scheduled paths by removing (again, in accordance with policies 48) the paths from generated path databases 46 and scheduler 68. In addition, the computing SPE 52 in such instances enqueues the removed paths as path requests to path request queue 72. Components of PCE 8 may then again attempt to compute satisfactory paths for the path requests corresponding to paths removed from generated path databases 46. Where SPEs 52 are unable to identify a satisfactory path for such a path request, SPEs 52 direct path manager 64 to send a path rejection message to a requesting client that issued the path request via client interface 74. In effect, PCE 8 revokes a grant of scheduled multi-topology network resources made to the requesting client.
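A rough, assumption-laden sketch of such policy-driven preemption follows; the priority lookup, the scheduler interface, and the victim selection shown here are illustrative simplifications rather than the behavior of policies 48.

# Hypothetical preemption sketch: remove lower-priority scheduled paths and
# requeue them as new path requests so the higher-priority request can be retried.
def preempt_for(request, scheduled_paths, policies, path_request_queue, scheduler):
    victims = [p for p in scheduled_paths
               if policies.priority(request) > policies.priority(p.request)]
    for victim in victims:
        scheduled_paths.remove(victim)             # drop from the generated path database
        scheduler.cancel(victim)                   # drop its start/stop triggers
        path_request_queue.append(victim.request)  # try to place it again later
    return bool(victims)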
Where path computation fails due to a missing link between disjoint subgraphs of an overlay network each providing reachability to respective endpoints for a requested path, the computing SPE 52 requests one of tunnel managers 44A-44K (collectively, “tunnel managers 44”) to establish a tunnel in a lower layer of the multi-topology network. For example, one of SPEs 52 for an overlay network may request a tunnel in a lower layer overlay network or in the base network layer. Each of tunnel managers 44 is associated with one of the layers of the multi-topology network and with one of generated path databases 46. In other words, each of tunnel managers 44 manages tunnels for one of the topologies.
Tunnel managers 44 operate as intermediaries between generated path databases 46 and SPEs 52. A higher layer SPE of SPEs 52 may request a lower layer one of tunnel managers 44 to establish a tunnel between two nodes of the lower layer to create a link in the higher layer. Because a tunnel traverses two layers of the multi-topology network, each of the two nodes may straddle the two layers by having an ingress and egress interface coupling the two layers. That is, a first one of the two nodes may be an ingress network switch having an ingress interface to the base network layer, while a second one of the two nodes may be an egress network switch having an egress interface from the base network layer. The tunnel manager 44, in response, may enqueue a path request specifying the two nodes in the lower layer of the multi-topology network to path request queue 72. If a lower layer SPE 52 is able to schedule a path for the path request, this path becomes a link in the lower layer generated path database 46, and the lower layer SPE 52 notifies the requesting one of tunnel managers 44 with link tunnel information for the link. The tunnel manager 44 propagates this tunnel information to MT TED 54, which triggers the higher layer SPE 52 that a new link is available in the higher layer topology and prompts the higher layer SPE to reattempt computing a satisfactory path for the original requested path. Tunnel managers 44 may also validate tunnel setup at their respective layer of a multi-topology network.
Scheduler 68 instigates path setup by tracking scheduled start times for scheduled paths in generated path databases 46 and triggering path manager 64 to establish the scheduled paths at their respective start times. Path manager 64 establishes each scheduled path using one or more of topology node interfaces 63 including overlay controller interface 58, device management interface 60, and network switch interface 62. Different instances of PCE 8 may have different combinations of topology node interfaces 63.
Path manager 64 may invoke overlay controller interface 58 to send overlay network path setup messages, e.g., overlay network path setup messages 24 of
Device management interface 60 may represent a Simple Network Management Protocol (SNMP) interface, a Device Management Interface (DMI), a CLI, or any other network device configuration interface. Path manager 64 may invoke device management interface 60 to configure network switches (e.g., routers) with static routes, TE LSPs, or other tunnels in accordance with path information for scheduled paths in generated path databases 46. Network switch interface 62 establishes communication sessions, such as communication sessions 28 of
Path manager 64 may invoke device management interface 60 and/or network switch interface 62 to configure and direct network switches to establish paths in a base network layer or overlay network layer of a multi-topology network. For example, path manager 64 may first configure a TE LSP within a network switch at a network edge, then direct the network switch to signal a path for the TE LSP using RSVP with traffic engineering extensions (RSVP-TE) or another signaling protocol. In this way, PCE 8 may program paths, including TE LSPs, into the network according to a soft PVC (SPVC) model. In this model, the network presents a programming interface that PCE 8 invokes to dynamically set up the SPVCs. In some examples, PCE 8 may use a combination of PVC and SPVC models to program paths into a multi-topology network.
Upon receiving confirmation from topology node interfaces 63 that a scheduled path setup is successful, path manager 64 transitions a status of the scheduled path in generated path databases 46 to “active.” At the scheduled end time (if any) for an active path, scheduler 68 notifies path manager 64 to tear down the active path using topology node interfaces 63. After tearing down the path, path manager 64 removes the path from generated path databases 46.
In accordance with the techniques described in this disclosure, topology server interface 56 executed by control unit 40 of PCE 8 may receive routing protocol messages, such as routing protocol message 15, that include service discovery information, where this service discovery information may include information describing services provided by service nodes and service parameters related to the services provided by the service nodes. Topology server interface 56 may parse this service discovery information and store this information to endpoint DBs 70, effectively forming what may be referred to as a “service directory.” In some instances, topology server interface 56 may receive routing protocol messages that include service discovery information from newly installed or deployed service nodes. As a result, no entry in endpoint DBs 70 may be associated with the service node identified by the routing protocol message, and topology server interface 56 may create a new entry in endpoint DBs 70, storing the service discovery information to this newly created entry in endpoint DBs 70.
Topology server interface 56 may also receive label mapping messages, such as label mapping message 17, mapping labels to one or more services advertised by service nodes. Topology server interface 56 may parse this label mapping and update service directory 70.
After updating this service directory, PCE 8 may receive SEP requests from clients via client interface 74. These SEP requests, as noted above, may request a particular service without providing any of the other data normally associated with a path request. Client interface 74 may enter this path request into path request queue 72, which when processed may cause PCE 8 to resolve the client to a particular endpoint device associated with the client. In the example of
Some examples of router 86 may not include the full functionality described and illustrated. For instance, some examples of router 86 may include different combinations of PCC 88B, OpenFlow switch 82, and IRB interface 87, rather than all such components. Moreover, while described with respect to a particular network device, e.g., a router, aspects of the techniques may be implemented by any network device or combination of network devices. The techniques should therefore not be limited to the exemplary embodiments described in this disclosure.
Router 86 includes a control unit 170 and interface cards 164A-164N (collectively, “IFCs 164”) coupled to control unit 170 via internal links. Control unit 170 may include one or more processors (not shown in
In this example, control unit 170 is divided into two logical or physical “planes” to include a first control or routing plane 172A (“control plane 172A”) and a second data or forwarding plane 172B (“data plane 172B”). That is, control unit 170 implements two separate functionalities, e.g., the routing/control and forwarding/data functionalities, either logically, e.g., as separate software instances executing on the same set of hardware components, or physically, e.g., as separate physical dedicated hardware components that either statically implement the functionality in hardware or dynamically execute software or a computer program to implement the functionality.
Control plane 172A of control unit 170 executes the routing functionality of router 86. In this respect, control plane 172A represents hardware or a combination of hardware and software of control unit 170 that implements routing protocols. In this example, routing protocol daemon (RPD) 177 is a process executed by control unit 170 that executes routing protocols 178B (illustrated as “RPs 178B”) by which routing information stored in routing information base 176 (“RIB 176”) and traffic engineering information stored in traffic engineering database 175 (“TED 175”) may be determined. In addition, RPD 177 may establish peering sessions for one or more routing protocols 178B with another router, route reflector, or routing protocol listener (e.g., an application layer traffic optimization (ALTO) server) and send L3 topology information and/or traffic engineering information stored in RIB 176 and/or TED 175 to the peers.
Routing protocols 178B may include, for example, IGPs such as OSPF-TE or IS-IS-TE and/or exterior gateway protocols such as BGP-TE. RIB 176 and TED 175 may include information defining a topology of a network, such as the base network layer of multi-topology network 3 of
Forwarding or data plane 172B represents hardware or a combination of hardware and software of control unit 170 that forwards network traffic in accordance with forwarding information 192. RIB 176 may in some aspects comprise one or more routing instances implemented by router 86, with each instance including a separate routing table and other routing information. Control plane 172A in such aspects updates forwarding information 192 with forwarding information for each of routing instances 194. In this respect, routing instances 194 each include separate forwarding information for use by data plane 172B in forwarding traffic in accordance with the corresponding routing instance. Further details of one example embodiment of a router can be found in U.S. patent application Ser. No. 12/182,619, filed Jul. 30, 2008, and entitled “STREAMLINED PACKET FORWARDING USING DYNAMIC FILTERS FOR ROUTING AND SECURITY IN A SHARED FORWARDING PLANE,” which is incorporated herein by reference.
Control plane 172A further includes management interface 174 by which a network management system or, in some instances, an administrator using a command line or graphical user interface, configures in VPN module 182 one or more VPN instances for a network to interconnect combinations of L2 networks into a single Ethernet domain. For example, an administrator may configure router 86 as a participant in a particular VPN instance, such as VPN instance 184. VPN module 182 may perform auto-discovery or other techniques to determine additional routers participating in a VPN instance and may additionally perform signaling techniques to establish a full mesh of pseudowires between router 86 and each of the additional routers.
Data plane 172B includes one or more forwarding units, such as packet forwarding engines (“PFEs”), that provide high-speed forwarding of network traffic received by interface cards 164 via inbound links 160A-160N to outbound links 162A-162N. Integrated routing and bridging interface 187 (“IRB interface 187”) of data plane 172B processes and forwards network traffic received on interfaces associated with the IRB interface 187. An administrator may configure IRB interface 187 via management interface 174 to map routing interface 188 of IRB interface 187 to one of routing instances 194 of router 86. Routing interface 188 may represent a next hop or other reference of a logical interface (IFL) of IRB interface 187, for example. In some embodiments, aspects of data plane 172B are distributed to a number of distributed forwarding units, such as packet forwarding engines, each associated with a different one or more of IFCs 164. In these embodiments, IRB interface 187 may be distributed to the distributed forwarding units to enable high-speed integrated routing and bridging within the data plane.
Router 86 implements VPN instance 184 associated with IRB interface 187 to operate as a virtual switch to interconnect multiple L2 networks. VPN instance 184 maps a gateway L2 address (e.g., a gateway MAC address) to routing interface 188, which maps to one of routing instances 194. In this respect, the gateway L2 address maps to the routing instance. IRB interface 187 classifies L2 PDUs received on an interface associated with VPN instance 184 and destined for a gateway L2 address of VPN instance 184 as L3 packets for routing using the one of routing instances 194 mapped to routing interface 188. In other words, when router 86 receives an L2 PDU on an interface associated with VPN instance 184, IRB interface 187 determines the destination L2 address of the L2 PDU. When the destination L2 address matches the gateway L2 address mapped to routing interface 188, IRB interface 187 classifies the L2 PDU as an L3 packet and provides the L2 PDU to the mapped one of routing instances 194 for L3 forwarding by data plane 172B. IRB interface 187 may decapsulate the L2 PDU by removing the L2 header and footer. When a destination L2 address of an L2 PDU does not match the gateway L2 address, VPN instance 184 may switch the L2 PDU according to a matching flow entry of flow table 186. As a result, router 86 may operate as a gateway between an L2 overlay network layer and an L3 base network layer of multi-topology network 3. In some instances, IRB interface 187 performs a prior logical operation to classify an L2 PDU as either routing traffic or bridging traffic, and then bridges the traffic or provides the traffic to a routing interface based on the result of the classification.
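The route-versus-bridge classification may be sketched as follows; the frame layout, the gateway_mac parameter, and the flow table interface are assumptions made only for illustration and do not describe the actual data plane implementation.

# Hypothetical IRB classification sketch: frames addressed to the gateway MAC
# are stripped of their L2 encapsulation and handed to the mapped routing
# instance; all other frames are bridged via the flow table.
def classify_and_forward(frame, gateway_mac, routing_instance, flow_table):
    if frame["dst_mac"] == gateway_mac:
        packet = frame["payload"]             # decapsulate: drop L2 header/footer
        routing_instance.route(packet)        # L3 forwarding
    else:
        entry = flow_table.match(frame)       # L2 switching per installed flow entries
        entry.apply_actions(frame)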
Router 86 implements OpenFlow switch 82 to control switching of L2 PDUs among the set of virtual and/or physical interfaces of router 86 that are associated with VPN instance 184. Such interfaces may include attachment circuits for attaching L2 networks to VPN instance 184. OpenFlow protocol interface (IF) 183 of control plane 172A establishes an OpenFlow protocol session with an OpenFlow controller to provide L2 topology information and to receive forwarding information. OpenFlow protocol IF 183 installs flow entries received in the OpenFlow protocol session to flow table 186 to direct forwarding of PDUs received on interfaces associated with VPN instance 184. In some instances, VPN instance 184 includes an L2 learning table and performs L2 learning with respect to interfaces of router 86 associated with VPN instance 184.
A network management system or, in some instances, an administrator using a command line or graphical user interface may invoke management interface 174 to configure label switched paths described in LSP database 196 (illustrated as “LSP DB 196”). LSP database 196 includes LSP configuration data, for example, an LSP destination, path (e.g., a Reported Route Object), and LSP attributes such as setup priority and hold priority, number of hops, the reserved bandwidth, and/or a metric that has been optimized for the LSP (e.g., an IGP metric, a TE metric, or hop counts). LSP database 196 may also include information designating zero or more attributes of each configured LSP as delegable parameters that may be set/modified by a PCE using extended PCEP to modify the operation of the LSP when set up in the network. LSP attributes may be divided into three categories: (1) non-delegable parameters that RPD 177 applies immediately using RSVP 178A and that are neither re-signalled nor overridden by a PCE, (2) delegable parameters that RPD 177 applies when the LSP is re-signaled due, e.g., to LSP failure, and (3) delegable parameters that may be overridden by a PCE and trigger re-signaling by RPD 177. All delegable LSP parameters may include a configured default value that RPD 177 applies when, for example, a PCEP session terminates, the PCE otherwise becomes unavailable, or the PCE returns a delegation.
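One hedged way to model configured LSP attributes with delegable parameters and configured defaults is sketched below in Python; the field names are assumptions and do not reflect the actual schema of LSP database 196.

from dataclasses import dataclass, field
from typing import Optional

# Hypothetical model of one configured LSP: a delegable parameter carries a
# configured default that is restored when the PCEP session or delegation ends.
@dataclass
class DelegableParam:
    default: float
    pce_value: Optional[float] = None          # value set by the PCE, if any

    def effective(self, pcep_session_up: bool) -> float:
        if pcep_session_up and self.pce_value is not None:
            return self.pce_value
        return self.default                    # fall back to the configured default

@dataclass
class LspConfig:
    destination: str
    setup_priority: int                        # non-delegable in this sketch
    reserved_bandwidth: DelegableParam = field(
        default_factory=lambda: DelegableParam(default=0.0))

In such a model, returning a delegation or losing the PCEP session simply means the effective value falls back to the configured default, mirroring the fallback behavior described above.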
RPD 177 sets up LSPs described in LSP database 196 by executing a resource reservation protocol, which in this instance is RSVP 178A, that signals other routers in the network to reserve resources and provide MPLS forwarding information to RPD 177 for use in forwarding MPLS packets. Various instances of router 86 may also, or alternatively, use RSVP-TE or another Label Distribution Protocol (LDP) to signal LSPs. In addition, RPD 177 executes RPs 178B to receive traffic engineering information that affects the state of LSPs, such as failed links and preempted resources that may result in a down state for LSPs. RPD 177 may associate such LSP state information with corresponding LSPs in LSP database 196 and may further direct path computation client 88B to send one or more LSP state reports to a PCE in response, as described in further detail below.
Path computation client (PCC) 88B of control plane 172A mediates communication between RPD 177 and a path computation element (e.g., PCE 8 of
Path computation client 88B establishes extended PCEP sessions with a PCE and sends, via the extended PCEP sessions, LSP state reports that include up-to-date LSP state for LSPs described in LSP state information. LSP state reports may be included in PCRpt messages. In this way, PCC 88B maintains strict LSP state synchronization between router 86 and the PCE, which the PCE may use when computing paths for an overlay network that make use of the LSPs.
In addition, PCC 88B may advertise router 86 as allowing modification of delegable parameters. As a result, LSP state reports sent by PCC 88B may in some cases include a delegation that provides access rights to a PCE to modify parameters of the target LSP. In some instances, the delegation may specify the particular parameters of the target LSP that are exposed for modification. PCC 88B may, after delegating LSPs, receive LSP update requests that specify LSP parameter modifications for one or more of the LSPs. LSP update requests may be included in PCUpd messages. PCC 88B, in response, notifies RPD 177 of new parameters for target LSPs identified in LSP update requests. RPD 177 may re-signal the target LSPs, in turn, and as new LSPs are established, switch traffic over to the new LSPs and send a notification to PCC 88B that the new LSPs have been successfully signaled. PCC 88B provides this updated LSP state in LSP status reports to a PCE with which router 86 has extended PCEP sessions. Router 86 thus extends existing RSVP-TE functionality with an extended PCEP protocol that enables a PCE to set parameters for a TE LSP configured within the router. In this way, router 86 may implement an SPVC-like model to allow a PCE to signal computed paths through a multi-topology network, thereby dynamically setting up end-to-end paths as requested by clients.
For example, PCE 8 may configure router 86 to forward traffic along a path from an ingress network device, e.g., router 86 in this example, to a service node, such as service node 13, in accordance with configuration data specified by PCE 8, which may represent an external orchestrating device that computes and configures paths through the network and that does not forward any traffic through the network other than that used to compute and configure the paths through the network. The configuration data may be stored to various databases, such as RIB 176, TED 175, and LSP DB 196 to provide a few examples. The configuration data may define filters and other parameters that admit or deny traffic to path 11, as described above, including a label associated with a service to be applied by, as one example, service node 13. These filters may then be installed in flow table 186 of VPN instance 184 as filters 196. Filters 196 may be associated with individual entries or may represent catch-all filters to be applied for making classification decisions as noted above. In this sense, flow table 186 may represent a collection of filters for making decisions with respect to flows as to whether to admit or deny flows to VPN instance 184 and perform other actions, such as push a label onto a label stack. Data plane 172B, which includes at least one interface (e.g., IFCs 164), then forwards the traffic along path 11 to enable the service node to apply one or more services to the traffic.
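As an illustration only, the effect of such filters on a packet may be sketched as follows; the filter dictionary, packet fields, and label-push action are assumptions for the sketch, not the actual contents of flow table 186 or filters 196.

# Hypothetical sketch: a filter matches a flow, admits or denies it, and on
# admission pushes the label associated with the requested service instance.
def apply_filters(packet, filters):
    for flt in filters:
        if flt["match"](packet):
            if flt["action"] == "deny":
                return None                    # traffic is not admitted to the path
            packet["label_stack"].insert(0, flt["service_label"])  # push service label
            packet["next_hop"] = flt["path_next_hop"]              # steer onto the path
            return packet
    return packet                              # no filter matched; default forwarding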
While not shown explicitly in the example of
Additionally, RPs 178B of router 86 may receive routing protocol message 15 and label mapping messages 17. Router 86 may itself, without being directed by PCE 8, establish a path similar to path 11 for forwarding traffic originating from enterprise network 10 to service node 13 so that one or more services advertised or identified by the service discovery information are applied by service node 13 to the traffic originating from enterprise network 10. In this respect, ingress network device 86 may be the same as the network device that receives the routing protocol message that includes the service discovery information and configures path 11 through network 3 to service node 13. To configure path 11, router 86 may configure a label switched path or establish a VPN through network 3 to service node 13. The techniques should therefore not be limited to software-defined networks but may be implemented with respect to any type of network in which network devices may establish paths through a network so that services are applied to traffic sent via the path.
Additionally, router 86 may receive service-specific information related to the application of the one or more services to the traffic received via the path from service node 13, where this service-specific information may be received via what may be referred to as a feedback loop. Router 86 may then adapt path 11 based on this service-specific information in a number of ways. More information regarding this feedback loop, the service-specific information, and the ways in which paths may be adapted is described in co-pending U.S. patent application Ser. No. 13/534,121, entitled “FEEDBACK LOOP FOR SERVICE ENGINEERED PATHS,” filed Jun. 27, 2012, the entire contents of which are incorporated by reference herein.
Some examples of service node 120 may not include the full functionality described and illustrated. For instance, some examples of service node 120 may include different combinations of PCC 88B, OpenFlow switch 86B, and IRB interface 87, rather than all such components. Moreover, while described with respect to a particular network device, e.g., a router, aspects of the techniques may be implemented by any network device or combination of network devices. The techniques should therefore not be limited to the exemplary embodiments described in this disclosure.
Service node 120 includes a control unit 171 and interface cards 164A-164N (collectively, “IFCs 164”) coupled to control unit 171 via internal links. Control unit 171 may include, similar to control unit 170 of router 86, one or more processors (not shown in
In this example and different from router 86 described above with respect to the example of
Service plane 172C executes or otherwise provides one or more service modules 124. Service modules 124 may each perform the same service or different services, depending on the configuration of service plane 172C. These services may include a firewall service, a NAT service, an IDP service, a URL filtering service, an encryption service, and any other type of service that may be performed in a network. In accordance with the techniques described in this disclosure, control plane 172A includes a service monitoring module 126 that monitors these service modules 124, determining the services provided by service modules 124 and service-related parameters, such as an operational state of service modules 124, capacity information of service modules 124 (e.g., a number of flows that each of service modules 124 may service), types of transport supported by service modules 124, and the like. Service monitoring module 126 may generate service discovery information 128 that includes the services provided by service modules 124 and the service-related parameters describing various aspects of the services provided by service modules 124.
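The kind of record that a monitoring module might assemble for each monitored service can be sketched as follows; the attribute names on the hypothetical module objects are assumptions chosen to mirror the parameters listed above and do not describe service monitoring module 126 itself.

# Hypothetical sketch of the record a monitoring module could assemble for
# each service instance it discovers on the service plane.
def build_service_discovery_info(service_modules):
    records = []
    for module in service_modules:
        records.append({
            "service_name": module.name,               # e.g., "IDP" or "Firewall"
            "service_instance_id": module.instance_id, # unique per service node
            "operational_state": module.state,         # e.g., "up" or "down"
            "capacity_flows": module.max_flows,        # number of flows it can service
            "transports": module.transports,           # supported transport types
        })
    return records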
Upon determining or otherwise generating this service discovery information 128, service monitoring module 126 may then pass this information 128 to routing protocol daemon 177, which may invoke one or more of RPs 178B. The invoked one or more of RPs 178B may process the service discovery information, generating a routing protocol message that includes this service discovery information. The invoked one or more of RPs 178B may then forward this routing protocol message (often by flooding) to other routers and/or switches that also implement the corresponding routing protocol, thereby effectively advertising the services provided by service node 120 to any device in network 3, including PCE 8 by way of topology server 4 that snoops routing protocol messages sent within network 3.
RPs 178B may also generate a label mapping message mapping services to MPLS labels using service discovery information 128. For example, each instance of a service may be identified by a service name (e.g., “IDP” or “Firewall”) and a service instance identifier, which may comprise a number uniquely identifying this instance with respect to a given service node. Service discovery information 128 may provide this service name and service instance identifier. The label mapping message may map a label provided by an MPLS protocol to this service instance, such that service node 120 associates the service instance with the label. The label may then be used to indicate to service node 120 which traffic is associated with the particular service instance.
The MPLS protocol that provides the label may comprise a label distribution protocol (LDP). That is, LDP module 178C may associate a label with each service instance and generate a label mapping message. Alternatively, one of RPs 178B, such as a border gateway protocol (BGP), may carry label mapping information mapping the service instance to a label. In any event, service node 120 may transmit this label mapping message to network 3, where topology server 4 may forward this label mapping information to PCE 8. In this manner, PCE 8 may learn of this mapping between service instance and label based on the label mapping message.
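A minimal sketch of building such a service-instance-to-label mapping follows; the starting label value and the dictionary form of the mapping are assumptions for illustration and do not reflect the LDP or BGP message encodings.

import itertools

# Hypothetical sketch: allocate one MPLS label per advertised service instance
# and build a mapping suitable for carrying in a label mapping message.
def build_label_mapping(service_records, first_label=300000):
    labels = itertools.count(first_label)      # assumed label pool, illustrative only
    return {
        (rec["service_name"], rec["service_instance_id"]): next(labels)
        for rec in service_records
    }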
After advertising these services, service node 120 may receive forwarding information from PCE 8 configuring a SEP, such as SEP 11, to terminate or otherwise traverse service node 120 so that one or more of the services provided (and advertised) by service node 120 is applied to the traffic received via SEP 11. The path may be configured in a manner generally consistent with that described above with respect to PCE 8 and router 86. Service node 120 may then receive traffic via this path, re-directing the traffic received via path 11 to one or more of service modules 124, which then apply the corresponding one or more services to the traffic. The traffic received via SEP 11 may include the label service node 120 advertised as being associated with the corresponding service instance. Based on this label, service node 120 may redirect the traffic to the corresponding one of service modules 124, which may apply the service to the traffic. Service node 120 may then forward the traffic to its intended destination or possibly another service node via another SEP, which may apply additional services to the traffic.
While not shown explicitly in the example of
The routing protocol message may conform to any of a number of routing protocols. In some instances, the routing protocol message may comprise an Intermediate System-to-Intermediate System (IS-IS) routing protocol message that includes a type-length-value (TLV) field generally referred to as a generic information (GENINFO) TLV, which is described in more detail in an Internet Draft provided by the Network Working Group of the Internet Engineering Task Force (IETF), entitled “Advertising Generic Information in IS-IS,” dated Nov. 10, 2010, the entire contents of which are hereby incorporated by reference. The IS-IS routing protocol message may include this GENINFO TLV, which is partitioned into a set of APP sub-TLVs for specifying the service, service-related parameters, and operational state of the service.
In some instances, the routing protocol message may comprise an Open Shortest Path First (OSPF) routing protocol message referred to as an opaque link state advertisement (LSA). The opaque LSA may include a TLV that stores the service discovery information. Again, this TLV may include additional sub-TLVs for specifying the service, service-related parameters, and operational state of the service. The opaque LSA may be advertised through an OSPF “service discovery” transport instance using the methods described in Internet Drafts provided by the Network Working Group of the Internet Engineering Task Force (IETF), entitled “OSPF Transport Instance Extensions,” dated Oct. 10, 2011, and “OSPF Multi-Instance Extensions,” dated Oct. 29, 2011, the entire contents of each of which are hereby incorporated by reference as if set forth in their entirety herein.
RPs 178B may also generate a label mapping message mapping services to MPLS labels using service discovery information 128 (214). For example, each instance of a service may be identified by a service name (e.g., “IDP” or “Firewall”) and a service instance identifier, which may comprise a number uniquely identifying this instance with respect to a given service node. The service discovery information may provide this service name and service instance identifier. The label mapping message may map a label provided by an MPLS protocol to this service instance, such that service node 120 associates the service instance with the label. The label may then be used to indicate to service node 120 which traffic is associated with the particular service instance.
The MPLS protocol that provides the label may comprise a label distribution protocol (LDP). That is, LDP module 178C may associate a label with each service instance and generate a label mapping message in accordance with Request for Comments (RFC) 5036 provided by the Network Working Group of the IETF, entitled “LDP Specification,” dated October 2007, the entire contents of which are incorporated by reference. Alternatively, one of RPS 178B, such as a border gateway protocol (BGP), may carry label mapping information mapping the service instance to a label. More information concerning BGP and carrying label mapping information may be found in RFC 3107 of the IETF Network Working Group, entitled “Carrying Label Information in BGP-4,” dated May 2001, the entire contents of which are hereby incorporated by reference.
In any event, RPs 178B and/or LDP module 178C may transmit the RPM and label mapping message to network 3 (216). For some interior gateway protocols (IGPs), such as OSPF and IS-IS, or other flooding or link-state protocols, transmitting the RPM may involve flooding the RPM to every device in a portion of network 3 or to every device in network 3. In this way, topology server 4 may receive the RPM and relay this RPM (or at least the service discovery information) to PCE 8 such that PCE 8 may receive the RPM. Moreover, label mapping messages may be forwarded throughout network 3, where again topology server 4 may provide this label mapping message to PCE 8. In this manner, PCE 8 may receive both the RPM and the label mapping message (218).
PCE 8 may extract the service discovery information from the RPM and update service directory 70 with the extracted service discovery information (220). Likewise, PCE 8 may extract the label mapping from the label mapping message and update service directory 70 with the extracted label mapping (222). That is, for each service instance, PCE 8 may associate a label with the service instance in service directory 70. At some point after receiving the RPM and label mapping message, PCE 8 may receive a request for one of the services advertised via the RPM from one of clients 18 (
Service node 120 may then receive traffic via SEP 11 and apply the requested service to the traffic received via SEP 11 (228, 230). Service node 120 may determine that one or more services provided by service modules 124 are to be applied to the traffic received via SEP 11 using the inner label appended to each packet of the network traffic. Service node 120 may be configured to associate this inner label with one of service modules 124 and forward packets having inner labels to one or more of service modules 124. The service module 124 may then apply the requested services to the network traffic. Service node 120 may then forward the traffic to its intended destination or possibly via another SEP to another service node (232).
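The inner-label dispatch described above can be sketched as follows, assuming a hypothetical label_to_module table and packet representation that are not part of the disclosure.

# Hypothetical sketch: pop the inner label, look up the associated service
# module, apply the service, then forward toward the destination (or next SEP).
def service_packet(packet, label_to_module, forward):
    inner_label = packet["label_stack"].pop(0)
    module = label_to_module.get(inner_label)
    if module is not None:
        packet = module.apply(packet)          # apply the requested service
    forward(packet)                            # forward to destination or next service node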
Service node 120 may continue to monitor service modules 124 and generate RPMs having updated service discovery information. This updated service discovery information may contain updates about operational state and parameters. Service node 120 may transmit these RPMs to network devices of network 3, which may update their service directories accordingly and configure SEPs to service node 120 to make use of the services provided by service node 120 in view of the updated service discovery information.
PDU length field 252L stores a value indicating the entire length of the PDU (including variable fields) in octets. Remaining lifetime field 252M stores a value indicating a number of seconds before the link state protocol data unit (LSP) is considered expired. LSP ID field 252N stores a value identifying the system ID of the source of the LSP. Sequence number field 252O stores a sequence number of the LSP. Checksum field 252P stores a value defining a checksum of the contents of the LSP from the source ID to the end. P/ATT/LSPDBOL/IS type fields 252Q-T store a number of bits that are used to signal various functionalities and attributes of the source of the routing protocol message. More information regarding IS-IS and IS-IS PDUs can be found in International Standard ISO/IEC 10589, entitled “Information Technology-Telecommunications and Information Exchange Between Systems—Intermediate System to Intermediate System Intra-domain Routeing Information Exchange Protocol for Providing the Connectionless-Mode Network Service (ISO 8473),” dated Nov. 15, 2002, the entire contents of which are hereby incorporated by reference.
While IS-IS provides for extensions to the PDU in the form of variable length fields specified in accordance with a type-length-value (TLV) field format, these extensible fields are reserved for communicating information related to the IS-IS routing protocol. An extension to the IS-IS routing protocol adapts IS-IS for communicating generic information (GENINFO) that is not directly related to the operation of the IS-IS protocol. This GENINFO extension is defined in the Network Working Group Internet Draft entitled “Advertising Generic Information in IS-IS,” dated Nov. 10, 2010, which was incorporated by reference above. Using this extension to IS-IS, as one example, the service discovery information described above may be encoded in a TLV denoted as service discovery information field 254. Service discovery information field 254 may include sub-fields defining a service name, a service instance identifier, an optional service instance VPN identifier, a service instance parameter length, service instance parameters, and a service instance operational state. Using this information, routers, switches, and other network devices (and orchestrating devices) may automatically determine the services provided by a service node, parameters related to the services provided by the service node, and the operational state of the services provided by the service node.
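Purely as an illustration of the idea, the sub-fields listed above might be serialized into a TLV along the following lines; the type code, field widths, and layout are assumptions and are not the encodings defined by the GENINFO draft or its APP sub-TLVs.

import struct

# Hypothetical TLV serialization of service discovery information; type codes
# and layout are illustrative, not the actual GENINFO/APP sub-TLV formats.
def encode_service_discovery_tlv(name, instance_id, params, oper_state, tlv_type=251):
    name_bytes = name.encode("ascii")
    value = struct.pack("!B", len(name_bytes)) + name_bytes        # service name
    value += struct.pack("!I", instance_id)                        # service instance id
    value += struct.pack("!H", len(params)) + params               # param length + params
    value += struct.pack("!B", oper_state)                         # operational state
    return struct.pack("!BB", tlv_type, len(value)) + value        # TLV header: type, length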
While described above with respect to IS-IS routing protocol, the techniques may be implemented with respect to other routing protocols that provide for TLV or other fields to which service-specific information may be stored. These other routing protocols may include OSPF and BGP, to name a few examples. In this respect, the techniques should not be limited to the IS-IS routing protocol described above.
With respect to OSPF, an opaque LSA may be defined that is referred to as a service enabling technology (SET) Discovery Opaque LSA having a SET discovery TLV. The SET discovery TLV may include sub-TLVs defining the service name, the service instance identifier, the optional service instance VPN identifier, the service instance parameter length, the service instance parameters and the service instance operational state.
Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.