The present disclosure provides a separate Ethernet forwarding and control plane system, method, network, and architecture with a Link State Interior Gateway route reflector for the control plane system and a layer two network architecture for the forwarding system. The present invention optionally utilizes a cloud implementation for the Designated Router (DR) or designated peering node, reducing peering requirements and distributing the functionality. Through the architecture of the present invention, the provider router is obviated by the use of layer two switches and servers. Such an architecture provides improved scaling, performance, and cost reduction.
1. A network, comprising:
a plurality of interconnected switches;
a plurality of Provider Edge (PE) routers interconnected via the plurality of interconnected switches;
one or more Link State Interior Gateway Route Reflectors interconnected to the PE routers;
a control plane formed between the PE routers and the one or more Link State Interior Gateway Route Reflectors; and
a forwarding plane between the PE routers over the plurality of interconnected switches;
wherein the one or more Link State Interior Gateway Route Reflectors comprise a plurality of geographically diverse servers, and wherein each of the PE routers is communicatively coupled to one of the plurality of geographically diverse servers.
2. The network of
3. The network of
4. The network of
5. The network of
6. The network of
7. The network of
8. The network of
9. The network of
10. The network of
The present invention relates generally to networking. More particularly, the present invention relates to a separate Ethernet forwarding and control plane system, method, network, and architecture with an Interior Gateway route reflector associated with a Link State Routing System, such as Open Shortest Path First (OSPF) or Intermediate System to Intermediate System (IS-IS), for the control plane system, and a layer two network architecture for the forwarding system.
In Multi-Protocol Label Switching (MPLS), a P router or Provider Router is a Label Switch Router (LSR) that functions as a transit router of the core network. The P router is typically connected to one or more Provider Edge (PE) routers. In conventional embodiments, P routers and PE routers each operate a control plane and a forwarding plane, and each of the routers forms a direct adjacency with every other router to which it is directly attached at the IP layer. An important function of the P router transit role is to limit the number of direct IP adjacencies required, by connecting each of the numerous PE routers only to a subset of the much smaller number of P routers, and connecting the P routers to each other. It would be advantageous to eliminate the need for the P routers, but this would require every PE router to form a direct adjacency with many if not all other PE routers. Disadvantageously, this requirement for direct adjacency poses scaling challenges. For example, with thousands of PE routers in an area, the adjacency count per PE router would be of the same order, substantially in excess of the adjacency count that can be supported by a conventional router implementation using an embedded control plane. What is needed is an alternative architecture, system, method, and the like that replaces the P router architecture while enabling scaling and efficiency in operation between PE routers.
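To make the scaling contrast concrete, the following minimal sketch (all figures are illustrative assumptions, not taken from this disclosure) compares the per-router adjacency count of a full PE-to-PE mesh with that of a conventional P router hierarchy:

```python
def full_mesh_adjacencies(num_pe: int) -> int:
    """Adjacencies per PE router if every PE peers directly with every other."""
    return num_pe - 1

def hierarchical_adjacencies(num_pe: int, num_p: int, uplinks_per_pe: int) -> dict:
    """Adjacencies per router when each PE homes onto a few P routers."""
    return {
        "per_pe": uplinks_per_pe,
        # Each P router terminates its share of PE uplinks plus a P-P mesh.
        "per_p": (num_pe * uplinks_per_pe) // num_p + (num_p - 1),
    }

print(full_mesh_adjacencies(2000))            # 1999 adjacencies per PE router
print(hierarchical_adjacencies(2000, 20, 2))  # {'per_pe': 2, 'per_p': 219}
```

The arithmetic shows why eliminating the P routers naively is untenable: the per-PE adjacency count grows with the size of the area rather than remaining a small constant.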
Referring to
In an exemplary embodiment, a network includes a plurality of interconnected switches; a plurality of Provider Edge (PE) routers interconnected via the plurality of interconnected switches; one or more Distributed Link State Interior Gateway Route Reflectors interconnected to the PE routers; a control plane comprising the PE routers and the one or more Interior Gateway Route Reflectors; and a forwarding plane between the PE routers over the plurality of interconnected switches. In this description, the phrase “Link State Interior Gateway Route Reflectors” is equivalent to “Interior Gateway Route Reflectors” or IGRR. Optionally, the one or more Interior Gateway Route Reflectors include a single server disposed at one of the plurality of interconnected switches, with each of the plurality of PE routers communicatively coupled to the single server through the plurality of interconnected switches. Alternatively, the one or more Interior Gateway Route Reflectors include a plurality of geographically diverse servers, with each of the PE routers communicatively coupled to one of the plurality of geographically diverse servers. The forwarding plane may utilize traffic engineered Ethernet over Shortest Path Bridging-Media Access Control (SPBM). The plurality of geographically diverse servers are configured to appear as a single designated peering node to the plurality of PE routers. A logical layer two network server-server interconnect extends a messaging fabric between the geographically diverse servers to create the single designated peering node. At each of the PE routers, the forwarding plane appears as various interconnects through the plurality of interconnected switches, while the control plane appears as interconnects to the one or more Interior Gateway Route Reflectors. The forwarding plane is logically separated from the control plane. The plurality of interconnected switches and the one or more Interior Gateway Route Reflectors replace functionality associated with a Provider router such that the network does not include the Provider router.
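The separation described above can be pictured with a short, purely illustrative model (the class names and the two-site layout are hypothetical, not part of the disclosure): each PE holds exactly one control-plane attachment to an IGRR server, while the forwarding plane is simply the shared switch fabric:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IGRRServer:
    site: str                                       # geographically diverse location
    peers: List[str] = field(default_factory=list)  # PE control-plane sessions

@dataclass
class PERouter:
    name: str
    igrr: Optional[IGRRServer] = None               # single control-plane attachment

    def attach_control_plane(self, server: IGRRServer) -> None:
        # Each PE couples to exactly one of the geographically diverse
        # servers; collectively the servers present one peering node.
        self.igrr = server
        server.peers.append(self.name)

# Forwarding plane: PEs reach each other directly over the switch fabric;
# no IGRR server ever sits in the traffic path.
servers = [IGRRServer("site-A"), IGRRServer("site-B")]
pes = [PERouter(f"pe{i}") for i in range(6)]
for i, pe in enumerate(pes):
    pe.attach_control_plane(servers[i % len(servers)])
```

The point of the model is the asymmetry: the servers accumulate control-plane sessions but never appear in the forwarding topology.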
In another exemplary embodiment, a server includes one or more processing components; an interconnect communicatively coupling the one or more processing components; and a plurality of network interfaces communicatively coupled to a plurality of Provider Edge (PE) routers through a plurality of interconnected switches; wherein the server is configured to operate as a designated peering node to the plurality of PE routers. The server is communicatively coupled via the plurality of interconnected switches to a second server that is geographically diverse from the server. The server and the second server function together as the designated peering node. The server, the second server, and the PE routers mutually communicate over the plurality of interconnected switches; this communication is preferably achieved using Shortest Path Bridging-Media Access Control (SPBM).
In yet another exemplary embodiment, a method includes connecting a plurality of Provider Edge (PE) routers over a plurality of interconnected switches; providing one or more servers configured to operate as a designated peering node; operating a forwarding plane between the plurality of PE routers over the plurality of interconnected switches; and operating a control plane between the plurality of PE routers and the designated peering node. The PE routers and the one or more servers are communicatively coupled via any of Shortest Path Bridging-Virtual Local Area Network Identification (SPB-V), Virtual Private LAN Service (VPLS), and any other network technology capable of emulating Ethernet LAN service.
The present invention is illustrated and described herein with reference to the various drawings, in which like reference numbers denote like method steps and/or system components, respectively, and in which:
In various exemplary embodiments, the present invention provides a separate Ethernet forwarding and control plane system, method, network, and architecture with a distributed Interior Gateway route reflector for the control plane system and a layer two network architecture for the forwarding system. The present invention utilizes a cloud implementation for the DR, reducing peering requirements on individual components and distributing the functionality. The use of an Interior Gateway Route Reflector (via the special treatment of the “pseudo-node” and “designated router” mechanisms inherent in IS-IS and OSPF) enables a layer two cloud (SPB-M, SPB-V, Virtual Private LAN Service (VPLS), conventional IEEE 802.1) to provide scalable and robust connectivity within a routed network. The distribution of this entity improves robustness and scalability. Through the architecture of the present invention, the P router is eliminated and is replaced by layer two switches and servers. Such an architecture provides improved scaling, performance, and cost reduction.
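The pseudo-node/designated router mechanism referenced above can be sketched as a simple adjacency-count comparison (illustrative only; actual DR election, pseudo-node LSP generation, and flooding behavior are omitted):

```python
from typing import List, Tuple

def full_mesh(routers: List[str]) -> List[Tuple[str, str]]:
    """Without a DR: every pair of routers on the LAN forms an adjacency."""
    return [(a, b) for i, a in enumerate(routers) for b in routers[i + 1:]]

def with_designated_router(routers: List[str], dr: str) -> List[Tuple[str, str]]:
    """With a DR/pseudo-node: each router peers only with the DR, which
    advertises the LAN (the pseudo-node) on behalf of all attached routers."""
    return [(r, dr) for r in routers if r != dr]

lan = [f"pe{i}" for i in range(8)]
print(len(full_mesh(lan)))                      # 28 adjacencies for 8 routers
print(len(with_designated_router(lan, "pe0")))  # 7 adjacencies
```

This quadratic-to-linear reduction is what lets a layer two cloud serve thousands of PE routers when the DR role is hosted on the Interior Gateway Route Reflector.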
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
The processor 702 is a hardware device for executing software instructions. The processor 702 may be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the blade 701, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions. When the blade 701 is in operation, the processor 702 is configured to execute software stored within the memory 706, to communicate data to and from the memory 706, and to generally control operations of the blade 701 pursuant to the software instructions. The data store 704 may be used to store data, and may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store 704 may incorporate electronic, magnetic, optical, and/or other types of storage media. Additionally, the data store 704 may be located on other blades 701 or on separate blades operating as shared data storage.
The memory 706 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory 706 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 706 may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor 702. The software in memory 706 may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory 706 includes a suitable operating system (O/S) 714 and one or more programs 716. The operating system 714 essentially controls the execution of other computer programs, such as the one or more programs 716, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs 716 may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein with respect to the blade server 700. Further, the blade 701 may include the management unit 708 configured to control operations of the blade 701 within the blade server 700 and on the high-speed interface 712.
The blade server 700 may include other types of blades besides the blades 701. For example, the blade server 700 may include network interface blades 720. The network interface blades 720 may be used to enable the blade server 700 to communicate on a network, such as the Internet, a data communications network, etc. For example, the blade server 700 can utilize the network interface blades 720 to communicate to/from the PE routers 102, the switches 202, etc. The network interface blades 720 may include a plurality of interfaces, for example, an Ethernet adapter (e.g., 10 BaseT, Fast Ethernet, Gigabit Ethernet, 10 GE) or a wireless local area network (WLAN) adapter (e.g., 802.11a/b/g/n). The network interface blades 720 may include address, control, and/or data connections to enable appropriate communications on the network. An I/O interface blade 722 may be used to receive user input from and/or for providing system output to one or more devices or components. User input may be provided via, for example, a keyboard, touch pad, and/or a mouse. System output may be provided via a display device and a printer (not shown). I/O interfaces can include, for example, a serial port, a parallel port, a small computer system interface (SCSI), an infrared (IR) interface, a radio frequency (RF) interface, and/or a universal serial bus (USB) interface. Also, the blade server 700 may include a management/control module 724 providing overall management and control of the blade server 700.
Referring to
Based on its role, and hence connectivity, the Interior Gateway route reflector 204 is guaranteed to be outside the traffic forwarding path, i.e. the forwarding path 120. This may preferably be achieved by presenting all IGRR components to all PEs as if each has only a single connection to the traffic forwarding topology 120, and therefore cannot provide a useful route to anywhere for traffic. The workload of the Interior Gateway route reflector 204 is receiving/processing and generating protocol messages. The bulk of the processing is associated with maintaining routing adjacencies with the PE routers 102. Of note, processing within Link State Routing Protocols readily partitions across several functional boundaries. With inter-process message-based interfaces across those boundaries, the workload becomes straightforward to place across multiple processors.
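As a purely hypothetical illustration of the partitioning the preceding paragraph describes, the sketch below hashes each PE adjacency to a fixed worker process, so that all protocol messages for a given PE cross a single inter-process boundary (the worker count, dispatcher, and message format are assumptions made for the example):

```python
import hashlib
from multiprocessing import Queue

NUM_WORKERS = 4  # assumed processor count for the example

def worker_for(pe_router_id: str) -> int:
    """Stable hash so every message from a given PE lands on the same worker."""
    return hashlib.sha256(pe_router_id.encode()).digest()[0] % NUM_WORKERS

def dispatch(queues, pe_router_id: str, protocol_message: bytes) -> None:
    # The dispatcher performs no protocol processing itself; it only
    # passes messages across the inter-process boundary.
    queues[worker_for(pe_router_id)].put((pe_router_id, protocol_message))

if __name__ == "__main__":
    work_queues = [Queue() for _ in range(NUM_WORKERS)]
    dispatch(work_queues, "pe17", b"IS-IS hello")  # hypothetical adjacency message
```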
In
Referring to
The line modules 1004 may be communicatively coupled to the switch modules 1006, such as through a backplane, mid-plane, or the like. The line modules 1004 are configured to provide ingress and egress to the switch modules 1006, and are configured to provide interfaces for the services described herein. In an exemplary embodiment, the line modules 1004 may form ingress and egress switches with the switch modules 1006 as center stage switches for a three-stage switch, e.g., a three-stage Clos switch. The line modules 1004 may include optical transceivers, such as, for example, 2.5 Gb/s (OC-48/STM-16, OTU1, ODU1), 10 Gb/s (OC-192/STM-64, OTU2, ODU2), 40 Gb/s (OC-768/STM-256, OTU3, ODU3), GbE, 10 GbE, etc. Further, the line modules 1004 may include a plurality of optical connections per module, and each module may include flexible rate support for any type of connection, such as, for example, 155 Mb/s, 622 Mb/s, 1 Gb/s, 2.5 Gb/s, 10 Gb/s, 40 Gb/s, and 100 Gb/s. The line modules 1004 may include DWDM interfaces, short reach interfaces, and the like, and may connect to other line modules 1004 on remote nodes 1000, NEs, end clients, and the like. From a logical perspective, the line modules 1004 provide ingress and egress ports to the node 1000, and each line module 1004 may include one or more physical ports.
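The transceiver rates listed above can be captured in a small lookup table; the helper below is hypothetical (not from the disclosure) and uses approximate standard line rates in Gb/s:

```python
# Approximate line rates for the interface types named above
# (table and helper are illustrative, not part of the disclosure).
LINE_RATES_GBPS = {
    "OC-48/STM-16": 2.5, "OTU1": 2.7, "ODU1": 2.5,
    "OC-192/STM-64": 10.0, "OTU2": 10.7, "ODU2": 10.0,
    "OC-768/STM-256": 40.0, "OTU3": 43.0, "ODU3": 40.0,
    "GbE": 1.0, "10GbE": 10.0,
}

def module_supports(transceivers: list, requested_gbps: float) -> bool:
    """True if any transceiver on the line module can carry the request."""
    return any(LINE_RATES_GBPS[t] >= requested_gbps for t in transceivers)

print(module_supports(["GbE", "10GbE", "OTU2"], 2.5))  # True: 10GbE or OTU2 fits
```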
The switch modules 1006 are configured to switch services between the line modules 1004. For example, the switch modules 1006 may provide wavelength granularity; SONET/SDH granularity such as Synchronous Transport Signal-1 (STS-1), Synchronous Transport Module level 1 (STM-1), Virtual Container 3 (VC3), etc.; OTN granularity such as Optical Channel Data Unit-1 (ODU1), Optical Channel Data Unit-2 (ODU2), Optical Channel Data Unit-3 (ODU3), Optical Channel Data Unit-4 (ODU4), Optical Channel Payload Virtual Containers (OPVCs), etc.; Ethernet granularity including SPBM support; and the like. Specifically, the switch modules 1006 may include both Time Division Multiplexed (TDM) and packet switching engines. The switch modules 1006 may include redundancy as well, such as 1:1, 1:N, etc. Collectively, the line modules 1004 and the switch modules 1006 may provide connections across the domains 102, 104, 106. Those of ordinary skill in the art will recognize that the present invention contemplates use with any type of node, network element, etc., with the switch 202 illustrated as one exemplary embodiment.
Although the present invention has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present invention and are intended to be covered by the following claims.
Bragg, Nigel L., Duncan, Ian H.