Forwarding liveness, such as the ability of an interface to send and receive packets and the forwarding capabilities of the interface, is determined. The determined forwarding liveness may be sent in a single message, allowing forwarding liveness information to be sent more frequently, which permits fast detection of failures. The message may also aggregate liveness information for multiple protocols.
1. For use with a node, a computer-implemented method comprising:
a) receiving, using the node, an aggregated message including, for a first set of at least two different interfaces of a neighbor node, at least two indicators, each indicator identifying a different one of the at least two different interfaces of the neighbor node and corresponding forwarding liveness status information for each of the at least two different interfaces of the first set of the at least two different interfaces as data within the aggregated message; and
b) updating, using the node, neighbor node forwarding liveness status information using the aggregated message,
wherein the forwarding liveness status information includes an integrity and correct operation of a forwarding table of the neighbor node, and
wherein the act of updating neighbor node forwarding liveness status information includes
i) determining, by the node, whether the first set of at least two different interfaces is the same as a second set of at least two different interfaces of the neighbor node included in an earlier message,
ii) if the first set of at least two different interfaces is determined to be the same as the second set of at least two different interfaces, then for each of the at least two different interfaces of both the first and second sets having a changed status, informing, by the node, a local interface of the changed status of its peer interface of the neighbor node, and
iii) if the first set of at least two different interfaces is determined to be different from the second set of at least two different interfaces, then
A) for any interface in the first set but not in the second set, informing, by the node, a local interface of the status indicated in the aggregated message of its peer interface of the neighbor node, and
B) for any interface in the second set but not in the first set, informing, by the node, a local interface that the status of its peer interface of the neighbor node is down.
25. Apparatus comprising:
a) one or more processors;
b) at least one input device; and
c) one or more storage devices storing processor-executable instructions which, when executed by one or more processors, perform a method of:
i) receiving an aggregated message including, for a first set of at least two different interfaces of a neighbor node, at least two indicators, each indicator identifying a different one of the at least two different interfaces of the neighbor node and corresponding forwarding liveness status information for the at least two different interfaces of the first set of the at least two different interfaces as data within the aggregated message; and
ii) updating neighbor node forwarding liveness status information using the aggregated message, wherein the forwarding liveness status information includes an integrity and correct operation of a forwarding table of the neighbor node, and
wherein the act of updating neighbor node forwarding liveness status information includes
A) determining, by the node, whether the first set of at least two different interfaces is the same as a second set of at least two different interfaces of the neighbor node included in an earlier message,
B) if the first set of at least two different interfaces is determined to be the same as the second set of at least two different interfaces, then for each of the at least two different interfaces of both the first and second sets having a changed status, informing, by the node, a local interface of the changed status of its peer interface of the neighbor node, and
C) if the first set of at least two different interfaces is determined to be different from the second set of at least two different interfaces, then
1) for any interface in the first set but not in the second set, informing, by the node, a local interface of the status indicated in the aggregated message of its peer interface of the neighbor node, and
2) for any interface in the second set but not in the first set, informing, by the node, a local interface that the status of its peer interface of the neighbor node is down.
10. A computer-implemented method for monitoring interface forwarding liveness, the method comprising:
a) determining, at a first node, forwarding liveness status information for a first set of at least two different interfaces of the first node;
b) sending, from the first node, an aggregated message including, for the first set of at least two different interfaces, at least two indicators, each indicator identifying a different one of the at least two different interfaces and the corresponding determined status information for the at least two different interfaces as data within the aggregated message;
c) receiving, at a second node, the aggregated message; and
d) updating, by the second node, first node forwarding liveness status information using the aggregated message,
wherein the forwarding liveness status information includes an integrity and correct operation of a forwarding table of the first node, and
wherein the act of updating first node forwarding liveness status information includes
i) determining, by the second node, whether the first set of at least two different interfaces of the first node is the same as a second set of at least two different interfaces of the first node included in an earlier message,
ii) if the first set of at least two different interfaces is determined to be the same as the second set of at least two different interfaces, then for each of the at least two different interfaces of both the first and second sets having a changed status, informing, by the second node, a local interface of the changed status of its peer interface of the first node, and
iii) if the first set of at least two different interfaces is determined to be different from the second set of at least two different interfaces, then
A) for any interface in the first set but not in the second set, informing, by the second node, a local interface of the status indicated in the aggregated message of its peer interface of the first node, and
B) for any interface in the second set but not in the first set, informing, by the second node, a local interface that the status of its peer interface of the first node is down.
32. A system comprising:
a) a first node including
i) one or more processors;
ii) at least one input device; and
iii) one or more storage devices storing processor-executable instructions which, when executed by one or more processors, perform a method of:
A) determining, at the first node, forwarding liveness status information for a first set of at least two different interfaces of the first node, and
B) sending, from the first node, an aggregated message including, for the first set of at least two different interfaces, at least two indicators, each indicator identifying a different one of the at least two different interfaces and the corresponding determined status information for the at least two different interfaces as data within the aggregated message,
wherein the forwarding liveness status information includes an integrity and correct operation of a forwarding table of the first node; and
b) a second node including
i) one or more processors;
ii) at least one input device; and
iii) one or more storage devices storing processor-executable instructions which, when executed by one or more processors, perform a method of:
A) receiving, at the second node, the aggregated message sent by the first node, and
B) updating first node forwarding liveness status information using the aggregated message,
wherein the act of updating first node forwarding liveness status information includes
1) determining, by the second node, whether the first set of at least two different interfaces is the same as a second set of at least two different interfaces of the first node included in an earlier message,
2) if the first set of at least two different interfaces is determined to be the same as the second set of at least two different interfaces, then for each of the at least two different interfaces of both the first and second sets having a changed status, informing, by the second node, a local interface of the changed status of its peer interface of the first node, and
3) if the first set of at least two different interfaces is determined to be different from the second set of at least two different interfaces, then
(a) for any interface in the first set but not in the second set, informing, by the second node, a local interface of the status indicated in the aggregated message of its peer interface of the first node, and
(b) for any interface in the second set but not in the first set, informing, by the second node, a local interface that the status of its peer interface of the first node is down.
2. The computer-implemented method of
setting a first timer to the time interval and starting the first timer,
if the first timer expires, setting a status of each of the at least two different interfaces of the neighbor node to down; and
if a further message, sourced from the neighbor node, and including
A) for a third set of at least two different interfaces, at least two indicators, each indicator identifying a different one of the at least two different interfaces of the neighbor node and corresponding forwarding liveness status information for each of the interfaces of the third set, and
B) a new time interval, is received, then resetting the first timer to the new time interval and restarting the first timer.
3. The computer-implemented method of
4. The computer-implemented method of
5. The computer-implemented method of
6. The computer-implemented method of
7. The computer-implemented method of
8. The computer-implemented method of
9. The computer-implemented method of
11. The method of
maintaining, using the first node, a first timer for tracking a send time interval, wherein the act of sending the aggregated message is performed after expiration of the first timer; and
restarting, using the first node, the first timer after the aggregated message is sent.
12. The computer-implemented method of
13. The computer-implemented method of
14. The computer-implemented method of
15. The computer-implemented method of
16. The computer-implemented method of
17. The computer-implemented method of
18. The computer-implemented method of
19. The computer-implemented method of
setting a timer to the dead interval;
starting the timer;
determining whether or not a further message including forwarding liveness status information is received from the first node before the expiration of the timer; and
if it is determined that a further message including forwarding liveness status information is not received from the first node by the second node before the expiration of the timer, then informing the second node that the at least two different interfaces of the first node are down.
20. The computer-implemented method of
21. The computer-implemented method of
22. The computer-implemented method of
23. The computer-implemented method of
24. The computer-implemented method of
26. The apparatus of
setting a first timer to the time interval and starting the first timer,
setting a status of each of the at least two different interfaces of the neighbor node to down if the first timer expires; and
if a further message, sourced from the neighbor node, and including
1) for a third set of at least two different interfaces, at least two indicators, each indicator identifying a different one of the at least two different interfaces of the neighbor node and corresponding forwarding liveness status information for each of the interfaces of the third set, and
2) a new time interval, is received, resetting the first timer to the new time interval and restarting the first timer.
27. The apparatus of
28. The apparatus of
29. The apparatus of
30. The apparatus of
31. The apparatus of
33. The apparatus of
maintaining a first timer for tracking a send time interval, wherein the aggregated message is composed and sent after expiration of the first timer; and
restarting the first timer after the aggregated message is sent.
34. The apparatus of
35. The apparatus of
37. The apparatus of
39. The apparatus of
40. The apparatus of
41. The system of
setting a timer to the dead interval;
starting the timer;
determining whether or not a further message including forwarding liveness status information is received from the first node before the expiration of the timer; and
informing the second node that the at least two different interfaces of the first node are down if it is determined that a further message including forwarding liveness status information is not received from the first node by the second node before the expiration of the timer.
42. The system of
43. The system of
44. The system of
This application claims the benefit of U.S. Provisional Application No. 60/472,859, entitled “DETERMINING LIVENESS OF MULTIPLE PROTOCOLS AND/OR INTERFACES,” filed on May 23, 2003 and listing Kireeti Kompella as the inventor. That application is expressly incorporated herein by reference. The scope of the invention is not limited to any requirements of the specific embodiments in that application.
§1.1 Field of the Invention
The invention concerns detecting errors in connections, protocols, data plane components, or any combination of these.
§1.2 Background Information
The description of art in this section is not, and should not be interpreted to be, an admission that such art is prior art to the invention.
A protocol is a specific set of rules, procedures, or conventions relating to the format and timing of data transmission between two devices. Accordingly, a protocol is a standard set of procedures that two data devices use to work with each other. Nodes, such as routers, in communications networks may use protocols to exchange information. For example, routers may use routing protocols to exchange information used to determine routes.
Conventional routing protocols may include some form of liveness detection. For example, the intermediate system-intermediate system protocol (IS-IS) and the open shortest path first protocol (OSPF) include a “hello” mechanism that lets a router running IS-IS or OSPF know whether nodes sharing a communications link with the router are still up. Some protocols, such as a border gateway protocol (BGP), use the underlying transport to determine the liveness of their neighbors. In the case of BGP, transmission control protocol (TCP) keepalives are used. Other protocols, such as routing information protocols (RIP), have intrinsic liveness mechanisms. In most cases, once an adjacency with a neighbor node running the same protocol is established with an initial hello message, subsequent hello messages don't need to carry a lot of information.
In most, if not all, of these liveness detection mechanisms, the time needed to conclude that one's neighbor is down ranges from seconds to tens, or even hundreds, of seconds. For example, with IS-IS, hellos are normally sent every nine (9) seconds. A node determines a neighbor to be down only after three (3) consecutive hellos have gone unanswered. Accordingly, a node running IS-IS normally needs at least 27 seconds before it determines that a neighbor node is down. Similarly, with the point-to-point protocol (PPP), hellos are normally sent every ten (10) seconds. A node determines a neighbor to be down only after three (3) consecutive hellos have gone unanswered. Accordingly, a node running PPP normally needs at least 30 seconds before it determines that a neighbor node is down.
Since routers and other nodes on the Internet are predominantly used for communicating data for applications (such as e-mail) that are tolerant of some delay or packets received out of sequence, the conventional liveness detection schemes are acceptable. However, as more demanding applications (such as voice over IP) use the Internet or other packet-switched networks, there are instances where detecting that a neighbor is down in a few tenths of a second, or even hundredths of a second may be necessary. Such fast liveness detection is needed, for example, where failover needs to occur quickly so that an end user doesn't perceive, or at least isn't unduly annoyed by, the failure of an adjacency (e.g., due to any one of a node failure, a link failure, or a protocol failure).
One approach to determining liveness faster is to allow faster (e.g., sub-second) protocol hello timers. This is feasible for some protocols, but might require changes to the protocol. Implementing these protocol changes on new nodes, and propagating these protocol changes to nodes previously deployed in a communications network is not trivial. Moreover, for some other protocols faster protocol hello timers are simply not feasible.
Even if all protocols could implement fast protocol hello timers, at least two additional issues make such a simple, brute force change unattractive. First, routers often implement multiple routing protocols, each having its own liveness detection mechanism. Consequently, updating each routing protocol to enable fast detection can lead to a considerable amount of work. Second, hello messages often carry more than just liveness information, and can therefore be fairly large and require non-trivial computational effort to process. Consequently, running fast liveness detection between a pair of neighbor nodes, each running multiple protocols, can be expensive in terms of communications and computational resources required to communicate and process the frequent, lengthy messages for liveness detection.
Additionally, it is desirable to check interface forwarding liveness (i.e., the ability to forward data over an interface). Forwarding liveness may be a function of various components in the “data plane” of a data forwarding device such as a router. For example, data plane components may include a forwarding table (sometimes referred to as a forwarding information base), switch fabric, forwarding lookup engine, traffic scheduler, traffic classifier, buffers, segmenters, reassemblers, resequencers, etc. Such components may be embodied as memory, processors, ASICs, etc.
In view of the foregoing, there is a need to detect liveness faster than conventional liveness detection schemes do. It is desirable that such liveness detection (i) have minimal impact on existing protocols, (ii) not waste communications resources, and (iii) not be computationally expensive.
Apparatus, data structures, and methods consistent with the principles of the invention may also be applied for determining liveness of a data plane used by, and including, an interface (simply referred to as “interface forwarding liveness”). This is especially useful for interfaces whose failure detection mechanisms at the physical or link layer are slow (such as PPP) or presently non-existent (such as Ethernet).
Alternatively, or in addition, apparatus, data structures, and methods consistent with the principles of the invention may also be applied for determining interface forwarding liveness.
In one embodiment consistent with the principles of the present invention, a sending node may (a) accept forwarding liveness status information, (b) compose a message including the forwarding liveness status information, and (c) send the message towards a neighbor node. In at least one embodiment, the sending node may further (d) maintain a first timer for tracking a send time interval, where the acts of composing a message and sending the message are performed after expiration of the first timer, and (e) restart the first timer after the message is sent.
In one embodiment consistent with the principles of the present invention, a receiving node may (a) receive a message including forwarding liveness status information and a time interval, and (b) update neighbor node forwarding liveness status information using the message. In at least one embodiment of the invention, the receiving node may update the neighbor node liveness status information by (a) setting a first timer to the time interval and starting the first timer, (b) if the first timer expires, setting a status of an interface of the neighbor node to down, and (c) if a further message including a new time interval is received, resetting the first timer to the new time interval and restarting the first timer.
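The receive-side timer behavior described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the class and method names are hypothetical, and a simple polling loop driven by a monotonic clock is assumed rather than real timer interrupts.

```python
import time

class ApflReceiver:
    """Tracks a neighbor's dead interval and declares it down on expiry."""

    def __init__(self):
        self.deadline = None      # monotonic time at which the neighbor is declared dead
        self.neighbor_up = False

    def on_hello(self, dead_interval_us, now=None):
        # (a)/(c): set or reset the timer to the interval carried in the message.
        now = time.monotonic() if now is None else now
        self.deadline = now + dead_interval_us / 1e6
        self.neighbor_up = True

    def poll(self, now=None):
        # (b): if the timer has expired, mark the neighbor's interfaces down.
        now = time.monotonic() if now is None else now
        if self.deadline is not None and now >= self.deadline:
            self.neighbor_up = False
        return self.neighbor_up
```

A caller would invoke on_hello() whenever an APFL message arrives from the neighbor and poll() periodically to detect expiry.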
Elements, apparatus, systems, computer-implemented code, data structures and methods consistent with the principles of the invention permit the liveness of protocols, interfaces, or both to be monitored. The following description is presented to enable one skilled in the art to make and use the invention, and is provided in the context of particular applications and their requirements. Various modifications to the disclosed embodiments will be apparent to those skilled in the art, and the general principles set forth below may be applied to other embodiments and applications. Thus, the invention is not limited to the embodiments shown and the inventors regard their invention as the following disclosed methods, apparatus and data structures and any other patentable subject matter.
An exemplary environment in which the invention may operate is described in §4.1. Then, elements, apparatus, systems, computer-implemented code, methods and data structures that may be used to perform operations and store information in manners consistent with the principles of the invention are described in §4.2. An example illustrating operations performed by an exemplary embodiment of the invention is then provided in §4.3. Finally, some conclusions regarding the invention are set forth in §4.4.
§4.1 Environment in which the Invention May Operate
The invention may be used in communication systems including nodes for forwarding data, such as packets. Such nodes may be routers. For example, the invention may be used to quickly detect a down connection (e.g., a down link, node, or interface), protocol, or both. The invention may be used in conjunction with a fast reroute technique, a graceful (or hitless) restart technique, or some other failover technique.
The invention can be used in an exemplary communications environment, such as the one illustrated in
Once node 105 and node 110 have established various routing protocol “sessions” between themselves, they can begin exchanging aggregated protocol and/or forwarding liveness (APFL) hellos in a manner consistent with the principles of the invention. In one embodiment of the invention, an APFL hello contains a list of protocols that it is reporting on (in this case, IS-IS and RSVP), as well as the status of those protocols (e.g., up or down). The APFL hello message may also contain a dead interval. Node 105 is essentially saying “If I don't send you an APFL hello within the dead interval of my previous APFL hello, declare all protocols reported in the last received APFL hello as dead.”
Note that the regular IS-IS hellos should also be running. Thus, node 110 will declare its IS-IS adjacency with node 105 dead if any of the following occur:
Moreover, Node 105 may include means for monitoring forwarding liveness for each of one or more of its interfaces 130, 132, 134, 135. Similarly, Node 110 may include means for monitoring forwarding liveness for each of one or more of its interfaces 140, 142, 144 and 146. Forwarding liveness refers to the ability to forward traffic over an interface.
§4.2 Exemplary Methods, Elements, Apparatus, Systems and Data Structures
Exemplary methods, elements, apparatus, systems and data structures for performing APFL operations will now be described.
APFL operations 240 use hello interval information 210, information 230 identifying protocols, interfaces, or both that use aggregated liveness, {dead interval, neighbor node} pair information 220, and APFL neighbor information 250 to determine the liveness of various peer protocols, the forwarding liveness of interfaces of various neighbor nodes, or both. Configuration operations 205 may be used to configure hello interval information 210, the information 230 identifying protocols, interfaces, or both that use aggregated liveness, and {dead interval, neighbor node} pair information 220.
§4.2.1 Exemplary APFL Method
Returning to block 302, if a lost hello timer has expired, the last received protocol status for each protocol in the last received protocol registration is set to “down” and the protocol is notified (Loop 310-314, including 312). The lost hello timer is then stopped (316) and the method is left (330, 390).
Again referring back to block 302, if APFL information, such as a packet, is received from another node, it is determined whether or not to discard the information. This determination may be based on configuration or rate-limiting (318). If it is decided to discard the APFL information, the APFL information is discarded (329) and the method is left (330 and 390). If, on the other hand, it is decided not to discard the APFL information, the contents of the APFL packet may be checked for sanity, such as self-consistency, as well as consistency with previously received APFL packets (320). If the sanity check fails, the APFL information should be discarded (329). Stored neighbor information (described below with reference to
If the new protocol registration is the same as the last protocol registration, it is determined whether or not the status of any of the protocols has changed (Loop 350-358). For each protocol with a changed status, it is determined whether the status of the protocol is up or down (352). If the status of the protocol changed to “down,” the last received protocol status for the protocol is set to down, and the local instance of the protocol is notified (358). If, on the other hand, the status of the protocol changed to “up”, the last received protocol status for the protocol is set to up, and the local instance of the protocol is notified (356). After any protocols with changed status are processed, the last received sequence number is set to the sequence number in the received information (360). The lost hellos timer is set to the “dead interval” (362) and the method is left (390).
Referring back to block 340, if the new protocol registration is not the same as the last received protocol registration (i.e., if the status for each of one or more protocols has been added and/or removed), processing is performed for each newly added protocol, if any, and for each deleted protocol, if any. More specifically, for each added protocol (Loop 370-380), the last received status for the protocol is set to "down" (372) and it is determined whether the status of the new protocol is up or down (372). If the status of the new protocol is determined to be up, the last received protocol status for the protocol is set to "up" and the local instance of the protocol is notified (376). If, on the other hand, the status of the new protocol is determined to be down, the last received protocol status for the protocol is set to "down" and the local instance of the protocol is notified (378). For each deleted or removed protocol (Loop 382-386), the last received protocol status is set to "down" and the local instance of the protocol is notified (384). Once processing is performed for each newly added protocol, if any, and for each deleted protocol, if any, as described above, the last received sequence number is set to the sequence number in the received information (360). The lost hellos timer is set to the "dead interval" (362) and the method is left (390).
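The registration-comparison logic of blocks 340-390 amounts to the following sketch. All names here are hypothetical, and the notify callback stands in for informing the local protocol instance; this is an illustration of the described flow, not the patented implementation.

```python
def process_registration(state, new_reg, new_status, seq, dead_interval, notify):
    """Apply a received APFL registration/status pair to stored neighbor state.

    state: {'reg': set, 'status': dict, 'seq': int, 'lost_hello_timer': float}
    new_status: dict mapping each protocol in new_reg to "up" or "down"
    notify: callback informing the local protocol instance of a status
    """
    if new_reg == state["reg"]:
        # Same registration: act only on protocols whose status changed (350-358).
        for proto in new_reg:
            if state["status"].get(proto) != new_status[proto]:
                state["status"][proto] = new_status[proto]
                notify(proto, new_status[proto])
    else:
        for proto in new_reg - state["reg"]:        # newly added protocols (370-380)
            state["status"][proto] = new_status[proto]
            notify(proto, new_status[proto])
        for proto in state["reg"] - new_reg:        # deleted protocols (382-386)
            state["status"][proto] = "down"
            notify(proto, "down")
        state["reg"] = set(new_reg)
    state["seq"] = seq                              # block 360
    state["lost_hello_timer"] = dead_interval       # block 362
    return state
```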
Although the foregoing description referred to aggregated protocol status, the aggregated protocol status may include forwarding liveness status. Such forwarding liveness status may be tracked per interface. Thus, the forwarding liveness of an interface may be treated as just another protocol. Consistent with the principles of the invention, some embodiments may simply track forwarding liveness status, yet have utility even without tracking any protocol status information.
§4.2.2 Exemplary APFL Information Messages
APFL information processed as described above may be carried in a packet, such as an Internet protocol (IP) packet.
In one exemplary embodiment, all APFL packets contain a single APFL message, and each APFL message may include a common header 410, a message 420, and extensions 430. The total length of the APFL message, i.e., the sum of the lengths of all sections, may be provided in common header 410. Each section may be zero-padded so that its length is a multiple of four octets. Common header 410 has a length of 12 octets. The length of message section 420 may be fixed for each message type. The length of extensions section 430 may be inferred as the length of the APFL message 400 minus the lengths of the other sections 410 and 420. It is expected that APFL messages will be small enough so as not to require fragmentation. However, fragmentation and re-assembly should be supported. Naturally, the APFL information may be carried in ways other than the described APFL message in an APFL packet.
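The length bookkeeping described above (four-octet zero-padding, and inferring the extensions length from the total) can be expressed concisely. This sketch assumes the 12-octet common header stated above and that the message-section length is supplied pre-padding; the function names are illustrative only.

```python
COMMON_HEADER_LEN = 12  # octets, per the description above

def padded_len(n):
    """Round a section length up to a multiple of four octets (zero-padding)."""
    return (n + 3) & ~3

def extensions_len(total_len, message_len):
    """Infer the extensions section length: total minus the other two sections.

    Assumes message_len is the unpadded message-section length.
    """
    return total_len - COMMON_HEADER_LEN - padded_len(message_len)
```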
An exemplary format 410a of common header 410 has the following structure. An R bit 411 indicates whether the APFL message is being sent to a directly attached node (R=0), or to a remote node (R=1). A Version field 412 (7 bits) indicates an APFL version number. A Message Type field 413 (8 bits) may include the following values:
Type     Message
0        Unused
1        Hello
2-255    Reserved for future use
A Length field 414 (16 bits) indicates the combined lengths of common header 410, message 420 and extensions 430, if any, in octets. A Router ID field 415 (32 bits) is set to the sender's four octet router ID. APFL messages sent to directly attached neighbors (R=0) are associated with an interface. If the interface is numbered, i.e., configured with a unique IP address, an Interface Index field 416 (32 bits) may be set to zero, and the interface is identified by the source IPv4 or IPv6 address in the IP header. Otherwise, Interface Index field 416 is set to the index allocated by the sending node for this interface, and the source IP address is an address identifying the sender. For APFL messages sent to a node not directly attached (R=1), Interface Index field 416 is set to zero, and the source IPv4 or IPv6 address is a routable address identifying the sending node.
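A sketch of packing and unpacking this 12-octet common header follows. Network (big-endian) byte order is assumed, as the text does not state the octet order explicitly, and the function names are hypothetical.

```python
import struct

def pack_common_header(r, version, msg_type, length, router_id, if_index):
    """Pack the 12-octet APFL common header.

    r: 1 if sent to a remote node, 0 if directly attached (the R bit).
    version: 7-bit APFL version number.
    """
    first = (r << 7) | (version & 0x7F)   # R bit followed by the 7-bit version
    return struct.pack("!BBHII", first, msg_type, length, router_id, if_index)

def unpack_common_header(data):
    """Unpack the first 12 octets into the header fields described above."""
    first, msg_type, length, router_id, if_index = struct.unpack("!BBHII", data[:12])
    return {"r": first >> 7, "version": first & 0x7F, "msg_type": msg_type,
            "length": length, "router_id": router_id, "if_index": if_index}
```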
An exemplary format 420a of the message section 420 has the following structure. A Session field 421 (8 bits) can be used to identify several independent APFL sessions between a pair of nodes. Dead Interval field (24 bits) 422 is specified in microseconds. A node sending a Hello with a Dead Interval of N tells its APFL neighbor node to consider all the protocols that the node is reporting on as dead if the neighbor node doesn't receive another Hello from the sending node in N microseconds. (Recall, e.g., 310, 312, and 314 of
Bit position    Protocol
0               BGP
1               IS-IS
2               OSPF v2
3               OSPF v3
4               RIP v1/v2
5               RIP NG
6               PIM
7               DVMRP
8               LDP
9               RSVP
10              LMP
11              Reserved (should be zero)
30              Forwarding liveness
31              Layer-2 (or interface liveness)
Finally, a Protocol Status field 425 is a 32-bit vector that parallels Protocol Registry field 424. For example, if the ith bit of Protocol Status field 425 is set (i.e., is 1), this indicates that the protocol represented by the ith bit of Protocol Registry field 424 is down. Note that bit i in the Protocol Status field 425 vector should not be set if bit i in Protocol Registry field 424 is not set; any bit so set should be ignored by the receiving node.
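Decoding the paired registry and status vectors can be sketched as follows. The bit-numbering convention (bit i as the 2**i position) and the helper names are assumptions, and only a few registry bits from the table above are included for illustration.

```python
# A few of the bit assignments from the table above (hypothetical helper).
PROTOCOLS = {0: "BGP", 1: "IS-IS", 2: "OSPF v2", 9: "RSVP", 30: "Forwarding liveness"}

def decode_protocol_state(registry, status):
    """Return {protocol: "up"/"down"} for each registered protocol.

    registry, status: 32-bit vectors; a set status bit means "down".
    Status bits without a matching registry bit are ignored, per the text.
    """
    states = {}
    for bit, name in PROTOCOLS.items():
        if registry & (1 << bit):
            states[name] = "down" if status & (1 << bit) else "up"
    return states
```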
Notice that the forwarding liveness of an interface, i.e., the ability of a node to forward packets received on that interface to other interfaces, or to forward packets received on other interfaces to that interface, may be indicated consistent with the principles of the invention. Forwarding liveness may include the ability to receive and process packets from an interface, the integrity and correct operation of forwarding (route lookup) tables, and the ability to rewrite and send packets on the interface.
Although message segment data structure 420a is compact and permits a short simple message, the use of a separate Protocol Registry field 424 and a separate Protocol Status field 425 conveys three states—(i) not reporting, (ii) reporting and up, and (iii) reporting and down—for each protocol using two bits. In an alternative embodiment, two bits are provided per protocol to indicate one of four, not only three, possible states. These four states are (i) not reporting, (ii) reporting and up, (iii) reporting and down, and (iv) reporting and up, but in restart mode. This fourth state can be used with so-called “graceful restart” techniques, such as those described in U.S. patent application Ser. No. 10/095,000 entitled “GRACEFUL RESTART FOR USE IN NODES EMPLOYING LABEL SWITCHED PATH SIGNALING PROTOCOLS,” filed on Mar. 11, 2002, and listing Kireeti Kompella, Manoj Leelanivas, Ping Pan, and Yakov Rekhter as inventors (incorporated herein by reference). More specifically, under some graceful restart techniques, a protocol may restart, but the node may continue forwarding data using existing forwarding information. If the restart of the protocol is not complete within a certain time, however, the forwarding information may be considered too stale to be used. The fact that a node is restarting is known by a peer (e.g., an adjacent node), but is generally not distributed beyond peers, so that the rest of the network is not aware that a node is restarting. This prevents a large number of nodes from updating network topology information, re-computing routes, and re-computing forwarding information when doing so may be unnecessary.
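A minimal sketch of the alternative two-bits-per-protocol encoding (the numeric codes and helper names are assumptions; the specification defines only the four states):

```python
# Pack a 2-bit state per protocol into a single integer vector.
# The code values assigned to each state are illustrative.
NOT_REPORTING, UP, DOWN, RESTARTING = 0, 1, 2, 3

def set_state(vector: int, proto: int, state: int) -> int:
    """Store a 2-bit state for protocol `proto` in a packed integer."""
    shift = 2 * proto
    return (vector & ~(0b11 << shift)) | (state << shift)

def get_state(vector: int, proto: int) -> int:
    """Read back the 2-bit state for protocol `proto`."""
    return (vector >> (2 * proto)) & 0b11

v = 0
v = set_state(v, 0, UP)          # BGP reporting and up
v = set_state(v, 2, RESTARTING)  # OSPF v2 up, but in restart mode
```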
An exemplary format of extensions section 430 includes a list of type-length-value (TLV) triplets. Each TLV 430a may include a Flags field 431, a Type field 432, a Length field 433 and a Value field 434. Each message type 413 defines the set of types it supports in Type field 432. That is, the message is parsed first in order to interpret Type. Each type defines its own flags found in Flags field 431. That is, Type field 432 is parsed first in order to interpret Flags field 431. Length field 433 indicates the length of Value field 434 in octets. Value field 434 is padded with octets of zero so that the total length of TLV 430a is a multiple of four octets.
Extensions section 430 can have multiple TLV 430a fields. If parsing the TLVs occurs beyond the end of message 400 (as defined by Length field 414 in common header 410a), it is assumed that the APFL message has been corrupted and is to be discarded. (Recall, e.g., 318 of
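The TLV walk described above, including the discard-on-overrun check, might be sketched as follows (the 8-bit Flags, 8-bit Type, and 16-bit Length widths are assumptions where the text does not fix them):

```python
import struct

def parse_tlvs(data: bytes, msg_end: int):
    """Parse TLV triplets laid out as in the text: a Flags octet, a
    Type octet, a 2-octet Length (of Value, in octets), then Value
    padded with zero octets to a 4-octet boundary. Raises ValueError
    if parsing runs past msg_end, in which case the whole APFL
    message should be discarded."""
    tlvs, off = [], 0
    while off < msg_end:
        if off + 4 > msg_end:
            raise ValueError("corrupt APFL message: truncated TLV header")
        flags, typ, length = struct.unpack_from("!BBH", data, off)
        padded = (length + 3) & ~3          # round up to a multiple of 4
        if off + 4 + padded > msg_end:
            raise ValueError("corrupt APFL message: TLV overruns message")
        value = data[off + 4 : off + 4 + length]
        tlvs.append((flags, typ, value))
        off += 4 + padded
    return tlvs
```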
Although the APFL packet and APFL message may be used to carry APFL information, alternatives are possible. Such alternatives may convey the status of multiple protocols in a compact form. In yet another alternative, the information may include interface forwarding liveness status information, without any protocol status information. In at least one embodiment, interface forwarding liveness status information may only convey whether the interface can forward data or not. In another embodiment, interface forwarding liveness status information may convey additional information such as (i) the integrity and correct operation of forwarding (route lookup) tables, (ii) the integrity and correct operation of switch fabric, (iii) the integrity and correct operation of a forwarding lookup engine, (iv) the integrity and correct operation of a traffic scheduler, (v) the integrity and correct operation of a traffic (flow) classifier, (vi) the integrity and correct operation of buffers in the data plane, (vii) the integrity and correct operation of packet segmentation modules, (viii) the integrity and correct operation of packet reassembly modules, (ix) the integrity and correct operation of packet re-sequencing modules, (x) whether or not a node is restarting, (xi) whether or not the forwarding plane is congested, (xii) the integrity and correct operation of fragmentation modules, (xiii) bit error rate at a link interface, (xiv) clock synchronization at a link interface, and/or (xv) various quantitative values reflecting some quality of forwarding, or qualitative information derived therefrom. Alternatively, or in addition, interface forwarding liveness status information may convey when a data plane component is operating in a particular manner, such as at a predetermined capacity (e.g., buffers >75% full, N packets outstanding in the switch fabric, etc.). Additional bits may be used to convey quantitative forwarding plane status information.
Alternatively, or in addition, forwarding liveness status information may convey whether or not a link terminated by the interface can forward data, or whether or not the link can forward data under certain conditions.
§4.2.3 Exemplary Neighbor APFL Information
Recall from 322 of
§4.2.4 Exemplary Apparatus
Machine 600 may be a router for example. In an exemplary router, processor 610 may include a microprocessor, a network processor, (e.g., custom) integrated circuits, or any combination of these. In the exemplary router, storage device 620 may include one or more ROM, RAM, SDRAM, SRAM, SSRAM, DRAM, flash drive, hard disk drive, flash card, other types of memory, or any combination of these. Storage device 620 may include program instructions defining an operating system (OS), a protocol module (e.g., daemon), other modules, or any combination of these. In one embodiment, methods of the invention may be performed by processor 610 executing stored program instructions (e.g., defining a part of the protocol module or daemon). At least a portion of the machine executable instructions may be stored (temporarily or more permanently) on storage device 620, may be received from an external source via an input/output interface unit 630, or both. Finally, in the exemplary router, input/output interface unit 630, input device 632 and output device 634 may include interfaces to terminate communications links.
Operations consistent with the principles of the invention may be performed on systems other than routers. Such other systems may employ different hardware, different software, or both. Exemplary machine 600 may include other elements in addition to, or in place of, the elements listed in
§4.2.5 Protocol Methods for Supporting APFL
Recall from
In one embodiment of the invention, each of operations 264, 266 and 268 takes two arguments—the protocol P and the APFL neighbor Y. One exemplary status check operation 264—Status_Check(P, Y)—normally returns “up,” regardless of the current state of protocol P's adjacency with Y. Status_Check(P, Y) only returns “down” when protocol P is not configured to run with neighbor Y, or if P is planning to go down shortly (graceful shutdown). If protocol P doesn't respond in a timely fashion to the Status_Check( ) query, APFL operations 240 may declare the status of protocol P as “down.” In one exemplary down callback operation 266, a call Down(P, Y) should be treated by protocol P as if its regular hellos, if any, timed out. In one exemplary up callback operation 268, a call Up(P, Y) is generally ignored. The following sections provide protocol-specific details that may be implemented.
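The contract of the status check operation might be sketched as follows (the protocol object, its status_with method, and the timeout value are all hypothetical; the point is that a non-answer is treated as "down"):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutTimeout

def status_check(protocol, neighbor, timeout=0.05):
    """Query protocol P's status with neighbor Y, defaulting to 'down'
    if the protocol does not respond in a timely fashion. The 50 ms
    timeout is an illustrative value, not from the specification."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(protocol.status_with, neighbor)
        try:
            return future.result(timeout=timeout)
        except FutTimeout:
            return "down"
    finally:
        pool.shutdown(wait=False)
```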
§4.2.5.1 BGP v4
BGP should treat a Down(BGP, Y) callback just as if the Hold Timer of the session with neighbor Y had expired (See Section 6.5 of Rekhter, Y., and T. Li (Editors), “A Border Gateway Protocol 4 (BGP-4)”, RFC 1771, March 1995, incorporated herein by reference). Following a Down(BGP, Y) call, BGP may re-establish peers as usual. BGP should ignore Up(BGP, Y) callbacks.
§4.2.5.2 IS-IS, OSPF v2 and OSPF v3
IS-IS, OSPF v2 and OSPF v3 should treat a Down(P, Y) callback (where P is one of IS-IS, OSPFv2 or OSPFv3) just as they would loss of hellos from neighbor Y. Following a Down(P, Y) callback, IS-IS, OSPF v2 and OSPF v3 should re-establish adjacencies as usual. IS-IS, OSPF v2 and OSPF v3 should ignore Up( ) callbacks.
§4.2.5.3 RIP v1, RIP v2 and RIP ng
RIP should respond to a Down(P, Y) callback (where P is one of RIPv1, RIPv2 or RIP-ng) by immediately deleting all RIP routes received from Y, as if the “timeout” timer in Section 3.8 of Malkin, G., “RIP Version 2”, STD 56, RFC 2453, November 1998 (or section 2.3 of Malkin, G., “RIPng for IPv6”, RFC 2080, January 1997), both incorporated herein by reference, had expired for all those routes. RIP should ignore Up( ) callbacks.
§4.2.5.4 RSVP
RSVP should respond to a Down(RSVP, Y) callback just as it would loss of hellos from neighbor Y, or some other indication that either Y or the interface to Y is no longer working. Following a Down(RSVP, Y) callback, RSVP should attempt to re-establish the state that it had held for neighbor Y by following its normal protocol operation. RSVP should ignore Up( ) callbacks.
§4.2.5.5 Forwarding Liveness
As described earlier, APFL can be used to communicate, to a neighbor, one's ability to forward packets from or to a given interface, and to learn about a neighbor's ability for the same. An interface receiving a Down(P, Y) callback (where P is ‘forwarding liveness’) should inform all modules (such as routing protocols) interested in the forwarding capability status of neighbor Y that Y is no longer capable of forwarding packets received over the communication link attached to that interface. An interface receiving an Up(P, Y) callback should inform the modules interested in the forwarding capability status of neighbor Y that Y can forward packets received over the communication link attached to that interface.
§4.2.6 Interface Methods for Supporting APFL
Without any protocols registered, APFL operations 240 can act as an interface liveness protocol for interfaces. Thus, the principles of the invention may be applied to test the liveness of interfaces that don't have layer 2 liveness mechanisms, such as Ethernet, and other interfaces whose layer 2 liveness mechanisms may be considered too slow for some purposes, such as the point-to-point protocol (PPP) for example. Recall from
For PPP interfaces, a Down(Layer-2, Y) callback should be ignored unless PPP is in state 9 (“opened”) for the interface. If the Down callback is received while in state 9, the following actions should be taken: (i) declare “This-Layer-Down”; (ii) send a Configure Request; and (iii) transition to state 6 (in the notation of Section 4.1 of Simpson, W., (Editor), “The Point-to-Point Protocol (PPP)”, STD 51, RFC 1661, July 1994, incorporated herein by reference). Up(Layer-2, Y) callbacks should be ignored on PPP interfaces.
Ethernet interfaces are a bit more complicated since they are multipoint interfaces. A Down(Layer-2, Y) callback should tell all modules interested in the layer-2 status of the interface (such as routing protocols, SNMP agents, etc.) that neighbor Y is no longer reachable, and appropriate action should be taken. For example, a routing protocol may recompute routing information to no longer use this interface. An implementation may declare that the Ethernet interface is itself down; however, this behavior should be configurable. An Up(Layer-2, Y) callback should tell all modules that neighbor Y is again reachable (or that the Ethernet interface is up).
§4.2.6.1 Forwarding Liveness
Although forwarding liveness can be thought of as a protocol to be included in the protocol registration and status bit vectors, interface forwarding liveness status may be tracked and communicated independently of protocol status information. Such interface forwarding liveness status information may be used to communicate, to a neighbor, one's ability to forward packets from or to a given interface, and to learn about a neighbor's ability for the same. As was the case when this forwarding liveness status information is included with status information of protocols, an interface receiving a Down(P, Y) callback (where P is ‘forwarding liveness’) should inform all modules (such as routing protocols) interested in the forwarding capability status of neighbor Y that Y is no longer capable of forwarding packets received over the communication link attached to that interface. An interface receiving an Up(P, Y) callback should inform the modules interested in the forwarding capability status of neighbor Y that Y can forward packets received over the communication link attached to that interface.
§4.2.7 Configuration
Recall from
If the Hello Interval 210 or Dead Interval 220 change, APFL operations 240 may issue a Hello before hello timer T expires. If the protocols/interfaces using aggregated liveness 230 to be reported on are changed such that the new set of protocols, interfaces, or both to be reported on is a superset of the old, APFL operations 240 may issue a Hello before hello timer T expires. However, if there is any other change in the protocols/interfaces using aggregated liveness 230 to be reported on, APFL operations 240 should issue a Hello as soon as is reasonable. Moreover, multiple copies of this Hello should be issued to improve the chances of the neighbors receiving it correctly.
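The re-issue decision described above reduces to a set comparison; a sketch follows (the return strings are illustrative labels, not specified behavior):

```python
# A configuration change only forces an immediate Hello when the new
# reporting set is NOT a superset of the old (i.e., something was
# removed); pure additions may simply wait for hello timer T to expire.

def hello_urgency(old_set: set, new_set: set) -> str:
    if new_set == old_set:
        return "none"                 # nothing to announce early
    if new_set >= old_set:
        return "may send early"       # superset: additions only
    return "send asap, repeated"      # removals: neighbors must learn fast
```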
Configuration operations 205 should also permit authorized users to turn off reporting on any given protocol. Configuration operations 205 may also allow users to turn off running APFL operations over any given interface, or to any given neighbor node.
If APFL operations 240 can register to be notified by a protocol when the protocol's status changes, on receiving such a notification with a status transition from up to down, APFL operations 240 should rebuild the Hello with the latest values, and send it out as soon as is reasonable. If the status transition is down to up, APFL operations 240 may rebuild and send out a Hello before the timer T expires.
§4.2.8 Aggregating Protocol Liveness Determinations with Protocols Supporting and/or Running Graceful Restart
Graceful Restart (See, e.g., Sangli, S., Y. Rekhter, R. Fernando, J. Scudder and E. Chen, “Graceful Restart Mechanism for BGP”, work in progress; Berger, L., (Editor), “Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions”, RFC 3473; Shand, M., “Restart signaling for ISIS”, work in progress; Leelanivas, M., Y. Rekhter, and R. Aggarwal, “Graceful Restart Mechanism for LDP”, work in progress; Farrel, A. (Editor), “Fault Tolerance for the Label Distribution Protocol (LDP)”, work in progress; and Moy, J., P. Pillay-Esnault, and A. Lindem, “Hitless OSPF Restart”, work in progress, all incorporated herein by reference), also known as Hitless Restart, allows a protocol to restart while leaving the forwarding path undisturbed. If a node X and its neighbors can restart gracefully, it is not quite as urgent for X's neighbors to learn when X goes down. However, the principles of the invention can be used to assist the graceful restart process by, for example, pinpointing the time that the restarting protocol of node X goes down more accurately. This information can be used, for example, to begin restart procedures, and to permit more precise estimates of when to declare that (the protocol restarting on) node X is beyond recovery.
§4.2.9 Security Considerations
APFL messages should be authenticated, because spoofing or replaying APFL messages may deceive a router about the state of all its protocol peers. Encrypting the contents of APFL messages is not as important, although doing so may be useful in certain applications. In any event, since the invention mainly serves to provide more frequent liveness information (e.g., via more frequent hellos), a part of which is achieved by minimizing processing overhead, adding strong authentication systems may impose severe processing burdens.
§4.3 Illustrative Example
In the following example, it is assumed that liveness operations 722 on node A 710 sends liveness information about protocols S 712 and T 714 to node B 730. Thus, node A 710 acts as a sender and node B 730 acts as a receiver. Of course, when node B 730 acts as a sender, node A 710 will act as the receiver. It is assumed that the liveness operation 722 can access the protocol status of the protocols and interfaces that it has been configured to report on, as well as report back to the protocols any received change of state.
§4.3.1 Sender Processing
Recall that the left branch of
(Step 0): Liveness operation 722 creates an appropriate IP header.
(Step 1): Liveness operation 722 creates a Common Header 410a with: R field 411 set to 0 if node B 730 is directly attached, else 1. Version field 412 is set to 1. Length field 414 is set to 28. Message Type field 413 is set to 1 (Hello). Interface Index field 416 is set to <index of interface A-B 724 or zero>. Common Header 410a will not change unless the interface index of interface A-B 724 changes.
(Step 2): Liveness operation 722 creates a protocol registry vector PRV that consists of the bits corresponding to the configured protocols S, T . . . set, and leaves the rest unset. Liveness operation 722 queries each configured protocol for its status with neighbor node B 730, and creates a protocol status vector PSV. Finally, liveness operation 722 creates a Hello message with Session field 421 set to 0, Dead Interval field 422 set to D, Sequence Number field 423 set to <monotonically increasing number>, protocol registry field 424 set to PRV, protocol status field 425 set to PSV, builds an APFL packet with common header 410a and Hello message 420a, and sends it to the ALL-APL-ROUTERS multicast address. (Recall, e.g., the left branch in
(Step 3): Liveness operation 722 then sets a timer T to expire in H microseconds, and goes to sleep. (Recall, e.g., 308 of
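Sender Steps 1 and 2 can be sketched as a single packing routine (the exact bit packing of R/Version and of Session/Dead Interval, and a 32-bit on-wire Sequence Number, are assumptions chosen here to reach the stated 28-octet length):

```python
import struct

def build_hello(r, version, msg_type, router_id, if_index,
                session, dead_interval_us, seq, prv, psv) -> bytes:
    """Pack the APFL common header and Hello message. Length is fixed
    at 28 octets per Step 1 above. The flags-octet layout (R in the
    top bit, Version below it) is an assumption."""
    flags = (r << 7) | (version & 0x7F)
    # Common header: flags, Message Type, Length, Router ID, Interface Index.
    header = struct.pack("!BBHII", flags, msg_type, 28, router_id, if_index)
    # Hello message: Session (8 bits) + Dead Interval (24 bits) share one
    # 32-bit word; then Sequence Number, Registry and Status vectors.
    message = struct.pack("!IIII",
                          (session << 24) | (dead_interval_us & 0xFFFFFF),
                          seq, prv, psv)
    return header + message

pkt = build_hello(r=0, version=1, msg_type=1, router_id=0x0A000001,
                  if_index=7, session=0, dead_interval_us=300000,
                  seq=42, prv=0b101, psv=0b100)
```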
§4.3.2 Receiver Processing
Recall that the middle and right branches of
Field | Type | Initial Value
Last_Received_Sequence_Number | 64-bit integer | 0
Last_Received_Protocol_Registry | 32-bit vector | 0
Last_Received_Protocol_Status | 32-bit vector | all “down”
Lost_Hellos_Timer | time | stopped
(Recall, e.g., FIG. 5.)
(Step 0): Sanity-check the packet. (Recall, e.g., 320 of
(Step 1): Identify the APFL neighbor B by looking up the key
<Session, Source IP Address, Interface Index> in a table. (Recall, e.g., 322 of
(Step 2): Liveness operation 722 may then check that the received Sequence Number is larger than the Last_Received_Sequence_Number for this key (Recall, e.g., 328 of
(Step 3): If New_Registry==Last_Received_Protocol_Registry, go to Step 4 below. Otherwise, for each protocol P that is in New_Registry and not in Last_Received_Protocol_Registry (added protocol):
set Last_Received_Protocol_Status[P] to down;
if (New_Status[P]==up)
call Up(P, B)
else
call Down(P, B)
(Recall, e.g., loop 370-380 of
(Step 4): If New_Status != Last_Received_Protocol_Status, then for each protocol P set in New_Registry whose status changed, call Up(P, B) if New_Status[P] is up; otherwise call Down(P, B).
(Step 5): Set Last_Received_Sequence_Number=received Sequence Number; Last_Received_Protocol_Registry=New_Registry. (Recall 360 of
(Step 6): Reset the Lost_Hellos_Timer to fire after the received Dead Interval. (Recall 362 of
(Step 7): Done processing APFL Hello.
If the Lost_Hellos_Timer fires, call Down(P, B) for each protocol P that is set in Last_Received_Protocol_Registry for node B 730 (Recall, e.g., loop 310-314 of
Down(P, B) invokes protocol P's Down callback, and sets Last_Received_Protocol_Status[P] to down. Up(P, B) invokes protocol P's Up callback. The number of times this callback is actually propagated to the protocol should be subject to some maximum limit. If Up(P, B) is sent to protocol P, then liveness operation 722 sets Last_Received_Protocol_Status[P] to up.
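Receiver Steps 2 through 5, together with the per-neighbor state fields listed earlier, can be sketched as follows (the NeighborState container and the callback signatures are illustrative):

```python
class NeighborState:
    def __init__(self):
        self.last_seq = 0                # Last_Received_Sequence_Number
        self.last_registry = 0           # Last_Received_Protocol_Registry
        self.last_status = 0xFFFFFFFF    # Last_Received_Protocol_Status: all "down"

def process_hello(state, seq, new_registry, new_status, up, down):
    """up(p)/down(p) stand in for the Up(P, B)/Down(P, B) callbacks.
    A set status bit means the protocol is down."""
    if seq <= state.last_seq:            # Step 2: stale or replayed Hello
        return
    for p in range(32):
        bit = 1 << p
        if not (new_registry & bit):
            continue
        added = not (state.last_registry & bit)
        # Step 3: an added protocol is first assumed down, then reported.
        was_down = True if added else bool(state.last_status & bit)
        is_down = bool(new_status & bit)
        if added or was_down != is_down:  # Steps 3-4: report new/changed status
            (down if is_down else up)(p)
    state.last_seq = seq                  # Step 5: record what we saw
    state.last_registry = new_registry
    state.last_status = new_status
```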
§4.4 Conclusions
As can be appreciated from the foregoing disclosure, the principles of the invention may comprise elements, apparatus, systems, data structures, computer-implemented code and methods for permitting the liveness of various protocols to be determined quickly, in a scalable manner (e.g., in terms of message size, total message frequency and processing overhead). By providing a small number of bits per protocol, which relay a simple set of information (such as up, down, not reporting, restarting, etc.), a compact, simple message may be used for conveying liveness-related information. Since the messages are small and can aggregate information from more than one protocol, they can be sent frequently. Normal operations of the protocols, such as normal hellos, are not affected, but may be relaxed (i.e., run less frequently). Moreover, the APFL messages and processing of such messages are not subject to the constraints of the protocols being monitored. Furthermore, interface forwarding liveness status information may be included with the protocol status information, or may be provided independent of protocol status information.
The foregoing description of embodiments consistent with the principles of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, although a series of acts may have been described with reference to a flow diagram, the order of acts may differ in other implementations when the performance of one act is not dependent on the completion of another act. Further, non-dependent acts may be performed in parallel.
No element, act or instruction used in the description should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the claims and their equivalents.
Kompella, Kireeti, Rekhter, Yakov
Assignee: Juniper Networks, Inc. (assignment on the face of the patent, Feb. 10, 2004; assignments recorded from Kireeti Kompella and Yakov Rekhter, July 2004).