A technique for lessening the likelihood of congestion in a congestible node is disclosed. In accordance with the illustrative embodiments of the present invention, one node—a proxy node—drops protocol data units to lessen the likelihood of congestion in the congestible node. In some embodiments of the present invention, the proxy node receives a metric of a queue at a congestible node and, based on the metric, decides whether to drop protocol data units en route to the congestible node. In some other embodiments of the present invention, the proxy node estimates a metric of a queue at a congestible node and, based on the metric, decides whether to drop protocol data units en route to the congestible node.
|
1. A method comprising:
receiving a first plurality of protocol data units at a first input of a protocol-data-unit excisor, wherein all of the protocol data units received at said first input are en route to a first congestible node;
receiving at said protocol-data-unit excisor a metric of a queue in said first congestible node; and
selectively dropping, at said protocol-data-unit excisor, one or more of said first plurality of protocol data units based on said metric of said queue in said first congestible node.
2. The method of
3. The method of claim 1 further comprising:
receiving a second plurality of protocol data units at a second input of said protocol-data-unit excisor, wherein all of the protocol data units received at said second input are en route to a second congestible node;
receiving at said protocol-data-unit excisor a metric of a queue in said second congestible node; and
selectively dropping, at said protocol-data-unit excisor, one or more of said second plurality of protocol data units based on said metric of said queue in said second congestible node.
4. A protocol-data-unit excisor comprising:
a first input for receiving a first plurality of protocol data units, wherein all of the protocol data units received at said first input are en route to a first congestible node;
a second input for receiving a metric of a queue in said first congestible node; and
a processor for selectively dropping one or more of said first plurality of protocol data units based on said metric of said queue in said first congestible node.
5. The protocol-data-unit excisor of
6. The protocol-data-unit excisor of claim 4 further comprising:
a third input for receiving a second plurality of protocol data units, wherein all of the protocol data units received at said third input are en route to a second congestible node;
a fourth input for receiving a metric of a queue in said second congestible node;
wherein said processor is also for selectively dropping one or more of said second plurality of protocol data units based on said metric of said queue in said second congestible node.
7. A method comprising:
receiving a first plurality of protocol data units at a first input of a protocol-data-unit excisor, wherein all of the protocol data units received at said first input are en route to a first congestible node;
estimating in said protocol-data-unit excisor a first metric of a first queue of protocol data units in said first congestible node based on said first plurality of protocol data units; and
selectively dropping, at said protocol-data-unit excisor, one or more of said first plurality of protocol data units en route to said first congestible node based on said first metric.
8. The method of
9. The method of claim 7 further comprising:
receiving a second plurality of protocol data units at a second input of said protocol-data-unit excisor, wherein all of the protocol data units received at said second input are en route to a second congestible node;
estimating in said protocol-data-unit excisor a second metric of a second queue of protocol data units in said second congestible node based on said second plurality of protocol data units; and
selectively dropping, at said protocol-data-unit excisor, one or more of said second plurality of protocol data units en route to said second congestible node based on said second metric.
10. A protocol-data-unit excisor comprising:
a first input for receiving a first plurality of protocol data units, wherein all of the protocol data units received at said first input are en route to a first congestible node; and
a processor for estimating a first metric of a first queue of protocol data units in said first congestible node based on said first plurality of protocol data units, and for selectively dropping one or more of said first plurality of protocol data units en route to said first congestible node based on said first metric.
11. The protocol-data-unit excisor of
12. The protocol-data-unit excisor of claim 10 further comprising:
a second input for receiving a second plurality of protocol data units, wherein all of the protocol data units received at said second input are en route to a second congestible node; and
a processor for estimating a second metric of a second queue of protocol data units in said second congestible node based on said second plurality of protocol data units, and for selectively dropping one or more of said second plurality of protocol data units en route to said second congestible node based on said second metric.
|
The present invention relates to telecommunications in general, and, more particularly, to congestion management in telecommunications networks.
In a store-and-forward telecommunications network, each network node passes protocol data units to the next node, in bucket-brigade fashion, until the protocol data units arrive at their final destination. A network node can have a variety of names (e.g. “switch,” “router,” “access point,” etc.) and can perform a variety of functions, but it always has the ability to receive a protocol data unit on one input link and transmit it on one or more output links.
For the purposes of this specification, a “protocol data unit” is defined as the data object that is exchanged by entities. Typically, a protocol data unit exists at a layer of a multi-layered communication protocol and is exchanged across one or more network nodes. A “frame,” a “packet,” and a “datagram” are typical protocol data units.
In some cases, a protocol data unit might spend a relatively brief time in a network node before it is processed and transmitted on an output link. In other cases, a protocol data unit might spend a long time in the node.
One reason why a protocol data unit might spend a long time in a network node is that the output link on which the protocol data unit is to be transmitted is temporarily unavailable. Another reason is that protocol data units arrive at the node faster than the node can process and output them.
Under conditions such as these, a network node typically stores or “queues” a protocol data unit until it is transmitted. Sometimes, the protocol data units are stored in an “input queue” and sometimes the protocol data units are stored in an “output queue.” An input queue might be employed when protocol data units arrive at the network node (in the short run) more quickly than they can be processed. An output queue might be employed when protocol data units arrive and are processed (in the short run) more quickly than they can be transmitted on the output link.
A queue has a finite capacity, and, therefore, it can fill up with protocol data units. When a queue is full, the attempted addition of protocol data units causes the queue to "overflow," with the result that the newly arrived protocol data units are discarded or "dropped." Dropped protocol data units are lost forever and never leave the network node.
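The tail-drop overflow behavior just described can be sketched in a few lines of Python. The sketch below is purely illustrative (the class name and the fixed capacity are assumptions, not elements of any embodiment):

```python
from collections import deque

class BoundedQueue:
    """A finite-capacity FIFO queue; arrivals that find it full are dropped."""

    def __init__(self, capacity):
        self.capacity = capacity   # finite capacity, as described above
        self.items = deque()
        self.dropped = 0           # count of PDUs lost to overflow

    def enqueue(self, pdu):
        if len(self.items) >= self.capacity:
            self.dropped += 1      # queue overflow: the newly arrived PDU is lost
            return False
        self.items.append(pdu)
        return True

    def dequeue(self):
        return self.items.popleft() if self.items else None
```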
A network node that comprises a queue that is dropping protocol data units is called “congested.” For the purposes of this specification, a “congestible node” is defined as a network node (e.g. a switch, router, access point, etc.) that is susceptible to dropping protocol data units.
The loss of a protocol data unit has a negative impact on the intended end user of that protocol data unit, but the loss of one protocol data unit does not necessarily have the same degree of impact as the loss of another. In other words, the loss of some protocol data units is more injurious than the loss of others.
When a node is congested, or close to becoming congested, it can be prudent for the node to intentionally and proactively drop one or more protocol data units whose loss will be less consequential, rather than allow arriving protocol data units, whose loss might be more consequential, to overflow the queue and be dropped. To accomplish this, the node can employ an algorithm to intelligently identify: (i) which protocol data units to drop, (ii) how many protocol data units to drop, and (iii) when to drop them.
Some legacy nodes, however, were not designed to intentionally drop protocol data units, and it is often technically or economically difficult to retrofit them to add that functionality. Furthermore, it can be prohibitively expensive to build nodes that have the computing horsepower needed to run an algorithm such as Random Early Detection (also known as Random Early Discard).
Therefore, the need exists for a new technique for ameliorating the congestion in network nodes without some of the costs and disadvantages associated with techniques in the prior art.
The present invention is a technique for lessening the likelihood of congestion in a congestible node without some of the costs and disadvantages of doing so in the prior art. In accordance with the illustrative embodiments of the present invention, one node—a proxy node—drops protocol data units to lessen the likelihood of congestion in the congestible node.
The illustrative embodiments of the present invention are useful because they lessen the likelihood of congestion in legacy nodes. Furthermore, the illustrative embodiments are useful with new “lightweight” nodes because the proxy nodes enable the lightweight nodes to be built without the horsepower needed to run a discard algorithm such as Random Early Detection.
In some embodiments of the present invention, the proxy node receives a metric of a queue at a congestible node and, based on the metric, decides whether to drop protocol data units en route to the congestible node.
In some other embodiments of the present invention, the proxy node estimates a metric of a queue at a congestible node and, based on the metric, decides whether to drop protocol data units en route to the congestible node.
In addition to the metric, the protocol data unit dropping decision can also be made based on a queue management technique such as Random Early Detection, thus realizing the benefits of that technique even though Random Early Detection (or another queue management technique) is not performed at the congestible node.
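The division of labor described above can be made concrete with a short, non-authoritative Python sketch; the class, its method names, and the pluggable decide() policy are assumptions introduced here for illustration only:

```python
class ProxyExcisor:
    """Skeleton of a proxy node that drops PDUs on behalf of congestible nodes."""

    def __init__(self, decide):
        self.decide = decide    # policy mapping a queue metric to a drop decision
        self.metrics = {}       # latest known metric for each congestible node

    def update_metric(self, node_id, metric):
        """Record a metric that was received from, or estimated for, node_id."""
        self.metrics[node_id] = metric

    def forward(self, pdu, node_id, transmit):
        """Either drop pdu or hand it to transmit() en route to node_id."""
        if self.decide(self.metrics.get(node_id, 0)):
            return False        # dropped at the proxy; never reaches the node
        transmit(pdu)
        return True
```

Keeping the policy pluggable mirrors the two classes of embodiments: update_metric() can be fed either a metric reported by the congestible node or one estimated locally.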
In these embodiments, queue management is done on a proxy basis, that is, by one network node, not itself necessarily prone to congestion, on behalf of another network node that is prone to congestion. Because queue management is done at another network node, the congestible node can be a lightweight node or a legacy node and still receive the benefits of queue management.
An illustrative embodiment of the present invention comprises: receiving at a protocol-data-unit excisor a metric of a queue in a first congestible node; and selectively dropping, at the protocol-data-unit excisor, one or more protocol data units en route to the first congestible node based on the metric of the queue in the first congestible node.
Switch and protocol-data-unit excisor 200 has two principal functions. First, it switches protocol data units from each of inputs 201-1 through 201-T to one or more of outputs 202-1 through 202-M, and, second, it selectively drops protocol data units to ameliorate congestion in one or more of congestible nodes 204-1 through 204-N. In other words, some protocol data units enter switch and protocol-data-unit excisor 200 but do not leave it.
In accordance with the first illustrative embodiment of the present invention, both functions are performed by one mechanically-integrated node. It will be clear to those skilled in the art, however, after reading this specification, how to make and use embodiments of the present invention that perform the two functions in a plurality of non-mechanically-integrated nodes.
Each of inputs 201-1 through 201-T represents a logical or physical link on which protocol data units flow into switch and protocol-data-unit excisor 200.
Each link represented by one of inputs 201-1 through 201-T can be implemented in a variety of ways. For example, in some embodiments of the present invention such a link can be realized as a separate physical link. In other embodiments such a link can be realized as a logical channel on a multiplexed line. It will be clear to those skilled in the art, after reading this specification, how to implement the links represented by inputs 201-1 through 201-T.
Each of outputs 202-1 through 202-M represents a logical or physical link on which protocol data units flow from switch and protocol-data-unit excisor 200 toward a congestible node. In the first illustrative embodiment of the present invention, switch and protocol-data-unit excisor 200 is less susceptible to congestion than are the congestible nodes fed by switch and protocol-data-unit excisor 200.
Each link represented by one of outputs 202-1 through 202-M can be implemented in a variety of ways. For example, in some embodiments of the present invention such a link can be realized as a separate physical link. In other embodiments such a link can be realized as a logical channel on a multiplexed line. It will be clear to those skilled in the art, after reading this specification, how to implement the links represented by outputs 202-1 through 202-M.
Each of inputs 203-1 through 203-P represents a logical or physical link on which one or more metrics of a queue in a congestible node arrives at switch and protocol-data-unit excisor 200.
Each link represented by one of inputs 203-1 through 203-P can be implemented in a variety of ways. For example, in some embodiments of the present invention such a link can be realized as a separate physical link. In other embodiments such a link can be realized as a logical channel on a multiplexed line, or as an Internet Protocol address to which datagrams carrying the metrics are directed. It will be clear to those skilled in the art, after reading this specification, how to implement the links represented by inputs 203-1 through 203-P.
A metric of a queue represents information about the status of the queue. In some embodiments of the present invention, a metric can indicate the status of a queue at one moment (e.g., the current length of the queue, the greatest sojourn time of a protocol data unit in the queue, etc.). In some alternative embodiments of the present invention, a metric can indicate the status of a queue during a time interval (e.g., an average queue length, the average sojourn time of a protocol data unit in the queue, etc.). It will be clear to those skilled in the art how to formulate these and other metrics of a queue.
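As a hedged example of a metric computed over a time interval, an average queue length can be maintained as an exponentially weighted moving average that is refreshed with each new instantaneous sample; the default weight below is merely a value commonly suggested in the Random Early Detection literature, not a requirement of any embodiment:

```python
def update_average(avg, sample, weight=0.002):
    """Exponentially weighted moving average of a queue metric.

    avg    -- the previous average (e.g., average queue length)
    sample -- the newest instantaneous observation of the queue
    weight -- smoothing weight; smaller values favor the long-run trend
    """
    return (1.0 - weight) * avg + weight * sample
```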
Each of congestible nodes 204-1 through 204-N represents a network node that comprises a queue (not shown) that stores one or more protocol data units from switch and protocol-data-unit excisor 200 and generates the metric or metrics fed back to switch and protocol-data-unit excisor 200. It will be clear to those skilled in the art how to make and use each of congestible nodes 204-1 through 204-N.
In accordance with the illustrative embodiment, M=N=P. It will be clear to those skilled in the art, however, after reading this specification, how to make and use embodiments of the present invention in which M, N, and P are not all equal (because, for example, one or more congestible nodes accepts more than one of outputs 202-1 through 202-M).
In order to mitigate the occurrence of congestion at the congestible nodes, switch and protocol-data-unit excisor 200 selectively drops protocol data units which are en route to a queue in a congestible node. In the first illustrative embodiment of the present invention, switch and protocol-data-unit excisor 200 decides whether to drop a protocol data unit en route to queue 210-i in a congestible node by performing an instance of Random Early Detection using a metric received on input 203-i as a Random Early Detection parameter.
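A minimal sketch of one such Random Early Detection instance, driven by the metric received on input 203-i rather than by a locally measured queue, appears below; the threshold and probability parameters are assumptions chosen only for illustration:

```python
import random

def red_decide(avg_queue_len, min_th=5.0, max_th=15.0, max_p=0.1):
    """Random Early Detection drop test using a remotely reported metric."""
    if avg_queue_len < min_th:
        return False   # queue is comfortably short: never drop
    if avg_queue_len >= max_th:
        return True    # queue is dangerously long: always drop
    # Between the thresholds, drop with probability rising linearly to max_p.
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```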
Switching fabric 301 accepts protocol data units on each of inputs 201-1 through 201-T and switches them to one or more of links 303-1 through 303-M, in well-known fashion. It will be clear to those skilled in the art how to make and use switching fabric 301.
Each of links 303-1 through 303-M carries protocol data units from switching fabric 301 to protocol-data-unit excisor 302. Each of links 303-1 through 303-M can be implemented in various ways, for example as a distinct physical channel or as a logical channel on a multiplexed medium, such as a time-multiplexed bus. In the first illustrative embodiment of the present invention, each of links 303-1 through 303-M corresponds to one of outputs 202-1 through 202-M, such that a protocol data unit arriving at protocol-data-unit excisor 302 on link 303-m exits protocol-data-unit excisor 302 on output 202-m, unless it is dropped within protocol-data-unit excisor 302.
Furthermore, switching fabric 301 and protocol-data-unit excisor 302 are depicted in
Processor 401 is a general-purpose processor that is capable of performing the functionality described below and with respect to
Transmitter 402-m accepts a protocol data unit from processor 401 and transmits it on output 202-m, in well-known fashion, depending on the physical and logical protocol for output 202-m. It will be clear to those skilled in the art how to make and use each of transmitters 402-1 through 402-M.
Receiver 403-p receives a metric of a queue in a congestible node on input 203-p, in well-known fashion, and passes the metric to processor 401. It will be clear to those skilled in the art how to make and use receivers 403-1 through 403-P.
At task 501, protocol-data-unit excisor 302 periodically or sporadically receives one or more metrics for the queue associated with each of outputs 202-1 through 202-M.
At task 502, protocol-data-unit excisor 302 periodically or sporadically decides whether to drop a protocol data unit en route to each of outputs 202-1 through 202-M. The details of task 502 are described below and with respect to
At subtask 601, protocol-data-unit excisor 302 receives a protocol data unit on link 303-m, which is en route to output 202-m.
At subtask 602, protocol-data-unit excisor 302 decides whether to drop the protocol data unit received at subtask 601 or let it pass to output 202-m. In accordance with the illustrative embodiment, the decision is based, at least in part, on the metrics received in task 501 and on the well-known Random Early Detection algorithm.
The metric enables protocol-data-unit excisor 302 to estimate the status of the queue fed by output 202-m, and the Random Early Detection algorithm enables protocol-data-unit excisor 302 to select which protocol data units to drop. The loss of a protocol data unit has a negative impact on the intended end user of that protocol data unit, but the loss of one protocol data unit does not necessarily have the same degree of impact as the loss of another. In other words, the loss of some protocol data units is more injurious than the loss of others.
As is well known to those skilled in the art, some embodiments of the Random Early Detection algorithm intelligently identify: (i) which protocol data units to drop, (ii) how many protocol data units to drop, and (iii) when to drop those protocol data units.
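For context, the classic formulation of Random Early Detection (Floyd and Jacobson) makes these identifications by computing a drop probability that rises linearly with the average queue length between two thresholds; the equation is supplied here as well-known background rather than as part of the disclosure:

$$p_b = \begin{cases} 0 & \text{if } \overline{q} < \mathrm{min}_{th} \\ p_{\max}\,\dfrac{\overline{q} - \mathrm{min}_{th}}{\mathrm{max}_{th} - \mathrm{min}_{th}} & \text{if } \mathrm{min}_{th} \le \overline{q} < \mathrm{max}_{th} \\ 1 & \text{if } \overline{q} \ge \mathrm{max}_{th} \end{cases}$$

where $\overline{q}$ is the average queue length.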
In some alternative embodiments of the present invention, protocol-data-unit excisor 302 uses a different algorithm for selecting which protocol data units to drop. For example, protocol-data-unit excisor 302 can drop all of the protocol data units it receives on a given link when the metric associated with that link is above a threshold. In any case, it will be clear to those skilled in the art, after reading this specification, how to make and use embodiments of the present invention that use other algorithms for deciding which protocol data units to drop, how many protocol data units to drop, and when to drop those protocol data units.
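A minimal sketch of the threshold-based alternative just mentioned, assuming the metric and the threshold are expressed in the same units:

```python
def threshold_decide(metric, threshold):
    """Drop every PDU on a link while that link's metric exceeds the threshold."""
    return metric > threshold
```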
When protocol-data-unit excisor 302 decides at subtask 602 to drop a protocol data unit, control passes to subtask 603; otherwise control passes to subtask 604.
At subtask 603, protocol-data-unit excisor 302 drops the protocol data unit under consideration. From subtask 603, control passes back to subtask 601 where protocol-data-unit excisor 302 decides whether to drop or forward the next protocol data unit.
At subtask 604, protocol-data-unit excisor 302 forwards the protocol data unit under consideration. From subtask 604, control passes back to subtask 601 where protocol-data-unit excisor 302 decides whether to drop or forward the next protocol data unit.
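The control flow of subtasks 601 through 604 can be summarized as a short loop. In this sketch the four callables are hypothetical stand-ins for the link receiver, the per-link metric store, the drop policy, and the transmitter:

```python
def excise_loop(receive_pdu, latest_metric, decide, transmit):
    """Receive-decide-drop/forward loop paralleling subtasks 601 through 604."""
    while True:
        pdu, link = receive_pdu()         # subtask 601: PDU arrives on link 303-m
        if decide(latest_metric(link)):   # subtask 602: consult metric and policy
            continue                      # subtask 603: drop; the PDU never leaves
        transmit(pdu, link)               # subtask 604: forward toward output 202-m
```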
Switch and protocol-data-unit excisor 700 has two principal functions. First, it switches protocol data units from each of inputs 701-1 through 701-T to one or more of outputs 702-1 through 702-M, and, second, it selectively drops protocol data units to ameliorate congestion in one or more of congestible nodes 704-1 through 704-N. In other words, some protocol data units enter switch and protocol-data-unit excisor 700 but do not leave it.
In accordance with the second illustrative embodiment of the present invention, both functions are performed by one mechanically-integrated node. It will be clear to those skilled in the art, however, after reading this specification, how to make and use embodiments of the present invention that perform the two functions in a plurality of non-mechanically-integrated nodes.
Each of inputs 701-1 through 701-T represents a logical or physical link on which protocol data units flow into switch and protocol-data-unit excisor 700.
Each link represented by one of inputs 701-1 through 701-T can be implemented in a variety of ways. For example, in some embodiments of the present invention such a link can be realized as a separate physical link. In other embodiments such a link can be realized as a logical channel on a multiplexed line. It will be clear to those skilled in the art, after reading this specification, how to implement the links represented by inputs 701-1 through 701-T.
Each of outputs 702-1 through 702-M represents a logical or physical link on which protocol data units flow from switch and protocol-data-unit excisor 700 toward a congestible node. In the second illustrative embodiment of the present invention, switch and protocol-data-unit excisor 700 is less susceptible to congestion than are the congestible nodes fed by switch and protocol-data-unit excisor 700.
Each link represented by one of outputs 702-1 through 702-M can be implemented in a variety of ways. For example, in some embodiments of the present invention such a link can be realized as a separate physical link. In other embodiments such a link can be realized as a logical channel on a multiplexed line. It will be clear to those skilled in the art, after reading this specification, how to implement the links represented by outputs 702-1 through 702-M.
Each of congestible nodes 704-1 through 704-N represents a network node that comprises a queue (not shown) that stores one or more protocol data units from switch and protocol-data-unit excisor 700 and generates the metric or metrics fed back to switch and protocol-data-unit excisor 700. It will be clear to those skilled in the art how to make and use each of congestible nodes 704-1 through 704-N.
In accordance with the illustrative embodiment, M=N. It will be clear to those skilled in the art, however, after reading this specification, how to make and use embodiments of the present invention in which M≠N (because, for example, one or more congestible nodes accepts more than one of outputs 702-1 through 702-M).
In order to mitigate the occurrence of congestion at the congestible nodes, switch and protocol-data-unit excisor 700 selectively drops protocol data units which are en route to a queue in a congestible node. In the second illustrative embodiment of the present invention, switch and protocol-data-unit excisor 700 decides whether to drop a protocol data unit en route to queue 210-i in a congestible node by performing an instance of Random Early Detection using an estimated metric as a Random Early Detection parameter.
Switching fabric 801 accepts protocol data units on each of inputs 701-1 through 701-T and switches them to one or more of links 803-1 through 803-M, in well-known fashion. It will be clear to those skilled in the art how to make and use switching fabric 801.
Each of links 803-1 through 803-M carries protocol data units from switching fabric 801 to protocol-data-unit excisor 802. Each of links 803-1 through 803-M can be implemented in various ways, for example as a distinct physical channel or as a logical channel on a multiplexed medium, such as a time-multiplexed bus. In the second illustrative embodiment of the present invention, each of links 803-1 through 803-M corresponds to one of outputs 702-1 through 702-M, such that a protocol data unit arriving at protocol-data-unit excisor 802 on link 803-m exits protocol-data-unit excisor 802 on output 702-m, unless it is dropped within protocol-data-unit excisor 802.
Furthermore, switching fabric 801 and protocol-data-unit excisor 802 are depicted in
Processor 901 is a general-purpose processor that is capable of performing the functionality described below and with respect to
Transmitter 902-m accepts a protocol data unit from processor 901 and transmits it on output 702-m, in well-known fashion, depending on the physical and logical protocol for output 702-m. It will be clear to those skilled in the art how to make and use each of transmitters 902-1 through 902-M.
In the first illustrative embodiment of the present invention, the queue metrics were received by protocol-data-unit excisor 302 from an external source (e.g., the congestible node, etc.) that was able to calculate and transmit the metric. In contrast, the second illustrative embodiment does not receive the metric from an external source but rather generates the metric itself based on watching each flow of protocol data units. This is described below and with respect to
At task 1001, protocol-data-unit excisor 802 receives a protocol data unit on link 803-m, which is en route to output 702-m.
At task 1002, protocol-data-unit excisor 802 estimates a metric for a queue that is associated with output 702-m. In accordance with the second illustrative embodiment, this metric is estimated based on the protocol data units that protocol-data-unit excisor 802 observes en route to output 702-m.
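One plausible estimator, offered purely as a sketch under the assumption (not taken from the disclosure) that the congestible node drains its queue at a known, roughly constant rate, counts the protocol data units forwarded toward the node and subtracts the amount believed to have drained since the last observation:

```python
import time

class QueueEstimator:
    """Estimates the occupancy of a downstream queue that reports nothing."""

    def __init__(self, service_rate_pps):
        self.service_rate = service_rate_pps   # assumed drain rate, PDUs/second
        self.estimate = 0.0                    # estimated queue length
        self.last_update = time.monotonic()

    def on_forward(self):
        """Call once per PDU forwarded toward the congestible node."""
        now = time.monotonic()
        drained = self.service_rate * (now - self.last_update)
        self.estimate = max(0.0, self.estimate - drained) + 1.0
        self.last_update = now
        return self.estimate               # usable as the task 1002 metric
```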
At task 1003, protocol-data-unit excisor 802 decides whether to drop the protocol data unit received at task 1001 or let it pass to output 702-m. This decision is made in the second illustrative embodiment in the same manner as in the first illustrative embodiment, as described above. When protocol-data-unit excisor 802 decides at task 1003 to drop a protocol data unit, control passes to task 1004; otherwise control passes to task 1005.
At task 1004, protocol-data-unit excisor 802 drops the protocol data unit under consideration. From task 1004, control passes back to task 1001.
At task 1005, protocol-data-unit excisor 802 forwards the protocol data unit under consideration. From task 1005, control passes back to task 1001.
It is to be understood that the above-described embodiments are merely illustrative of the present invention and that many variations of the above-described embodiments can be devised by those skilled in the art without departing from the scope of the invention. It is therefore intended that such variations be included within the scope of the following claims and their equivalents.
Patent | Priority | Assignee | Title |
6333917, | Aug 19 1998 | RPX CLEARINGHOUSE LLC | Method and apparatus for red (random early detection) and enhancements. |
6405258, | May 05 1999 | Advanced Micro Devices, INC | Method and apparatus for controlling the flow of data frames through a network switch on a port-by-port basis |
6463068, | Dec 31 1997 | Cisco Technology, Inc | Router with class of service mapping |
6570848, | Mar 30 1999 | Hewlett Packard Enterprise Development LP | System and method for congestion control in packet-based communication networks |
6650640, | Mar 01 1999 | Oracle America, Inc | Method and apparatus for managing a network flow in a high performance network interface |
7031341, | Jul 26 2000 | Wuhan Research Institute of Post and Communications, Mii. | Interfacing apparatus and method for adapting Ethernet directly to physical channel |
20020131365 |
20020159388 |
20030065788 |
EP795991 |
EP1128610 |
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Sep 10 2003 | GARG, SACHIN | Avaya Technology Corp | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014502/0812
Sep 10 2003 | KAPPES, MARTIN | Avaya Technology Corp | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 014502/0812
Sep 15 2003 | Avaya Inc. | (assignment on the face of the patent) |
Sep 30 2005 | Avaya Technology Corp | Avaya Technology LLC | CONVERSION FROM CORP TO LLC | 022677/0550
Oct 26 2007 | VPNET TECHNOLOGIES, INC. | CITICORP USA, INC., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020166/0705
Oct 26 2007 | OCTEL COMMUNICATIONS LLC | CITICORP USA, INC., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020166/0705
Oct 26 2007 | Avaya Technology LLC | CITICORP USA, INC., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020166/0705
Oct 26 2007 | Avaya, Inc. | CITICORP USA, INC., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020166/0705
Oct 26 2007 | VPNET TECHNOLOGIES, INC. | CITIBANK, N.A., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020156/0149
Oct 26 2007 | OCTEL COMMUNICATIONS LLC | CITIBANK, N.A., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020156/0149
Oct 26 2007 | Avaya Technology LLC | CITIBANK, N.A., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020156/0149
Oct 26 2007 | Avaya, Inc. | CITIBANK, N.A., AS ADMINISTRATIVE AGENT | SECURITY AGREEMENT | 020156/0149
Jun 26 2008 | Avaya Technology LLC | AVAYA Inc | REASSIGNMENT | 021156/0082
Jun 26 2008 | AVAYA LICENSING LLC | AVAYA Inc | REASSIGNMENT | 021156/0082
Feb 11 2011 | AVAYA INC., A DELAWARE CORPORATION | THE BANK OF NEW YORK MELLON TRUST, NA, AS NOTES COLLATERAL AGENT | SECURITY AGREEMENT | 025863/0535
Mar 07 2013 | Avaya, Inc. | THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. | SECURITY AGREEMENT | 030083/0639
Nov 28 2017 | THE BANK OF NEW YORK MELLON TRUST, NA | AVAYA Inc | BANKRUPTCY COURT ORDER RELEASING ALL LIENS INCLUDING THE SECURITY INTEREST RECORDED AT REEL/FRAME 025863/0535 | 044892/0001
Dec 15 2017 | CITICORP USA, INC. | SIERRA HOLDINGS CORP. | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 045032/0213
Dec 15 2017 | CITICORP USA, INC. | Avaya Technology, LLC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 045032/0213
Dec 15 2017 | CITICORP USA, INC. | OCTEL COMMUNICATIONS LLC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 045032/0213
Dec 15 2017 | CITICORP USA, INC. | VPNET TECHNOLOGIES, INC. | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 045032/0213
Dec 15 2017 | CITICORP USA, INC. | Avaya, Inc. | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 045032/0213