Methods and apparatus are provided to determine performance of a network. A first measurement relating to a first layer of communications in the network and a second measurement relating to a second layer of communications in the network are provided. Based on the first and second measurements, a set of parameters is generated. The performance of the network is then determined based on the generated set of parameters.

Patent: 7159026
Priority: Jan 31 2002
Filed: Jan 31 2002
Issued: Jan 02 2007
Expiry: Feb 09 2024
Extension: 739 days
1. A method to determine performance of a network, the method comprising:
providing a database containing service level agreement information;
receiving a first measurement relating to a virtual connection in the network relating to a first layer of communications in the network;
receiving a second measurement relating to an attainable bit rate in a portion of the network relating to a second layer of communications in the network;
receiving at least one additional measurement including management information base parameters from one or more nodes in the network relating to a third layer of communications in the network;
generating a set of parameters for the network based on the first, second, and at least one additional measurements;
determining performance of the network based on the generated set of parameters; and
adjusting the set of parameters to meet the service level agreement.
2. The method of claim 1, wherein receiving the first measurement further comprises:
receiving information relating to transport of internet protocol packets in the network.
3. The method of claim 1, wherein determining performance of the network based on the generated set of parameters comprises:
correlating the information relating to transport of internet protocol packets with the attainable bit rate in the portion of the network.
4. An apparatus for determining performance of a network comprising:
a database containing service level agreement information;
means for receiving a first measurement comprising means for receiving information relating to a virtual connection in the network relating to a first layer of communications in a network;
means for receiving a second measurement comprising means for receiving information relating to an attainable bit rate in a portion of the network relating to a second layer of communications in the network;
means for receiving at least one additional measurement comprising means for receiving management information base parameters from one or more nodes in the network relating to a third layer of communications in the network;
means for generating a set of parameters for the network based on the first, second, and at least one additional measurements;
means for determining performance of the network based on the generated set of parameters; and
means for adjusting the set of parameters to meet the service level agreement.
5. The apparatus of claim 4, wherein the means for receiving the first measurement further comprises:
means for receiving information relating to transport of internet protocol packets in the network.
6. The apparatus of claim 4, wherein the means for determining performance of the network based on the generated set of parameters comprises:
means for correlating the information relating to transport of internet protocol packets with the attainable bit rate in the portion of the network.

The present invention relates to communication networks and, in particular, to methods and apparatus for determining performance of a network.

Today, consumers are offered a variety of technologies for accessing the Internet, such as digital subscriber lines (DSL), cable, analog dial-up connections, and integrated services digital network (ISDN). Typically, these technologies depend upon architectures that use multiple components and networks to access the Internet. For example, with DSL, a consumer connects to the Internet via a DSL modem, a DSL loop, an ATM access network, and a gateway device.

Current technologies for accessing the Internet often suffer from a variety of performance problems, including slow response, intermittent connections, and lost data. These performance problems are difficult to diagnose. Conventionally, they are diagnosed using tools and tests that measure a single parameter or analyze a single component. For example, a “ping” test is a common test used to diagnose network problems. In a ping test, a packet of data such as an Internet Protocol (“IP”) packet is sent from a source to a specified IP address. A network device at the specified IP address then returns the IP packet to the source to indicate that it was successfully received. Hence, a ping test is typically used to determine whether a network can transport an IP packet.
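
By way of illustration only, the following is a minimal sketch of such a ping-style reachability check, implemented by invoking the system ping utility (Linux-style -c/-W flags assumed); the host address in the usage note is a hypothetical placeholder, and the patent does not prescribe any particular implementation.

    import subprocess

    def ping_host(address: str, count: int = 4, timeout_s: int = 2) -> bool:
        """Send ICMP echo requests to `address` via the system ping utility
        (Linux-style flags assumed) and report whether the host replied."""
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout_s), address],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    # Hypothetical usage: check whether a gateway can transport IP packets
    # print(ping_host("198.51.100.1"))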

Unfortunately, conventional tests, such as the ping test, do not provide all of the information needed to fully diagnose a performance problem. For example, a ping test only provides information related to IP communications in a network. A ping test does not provide information related to DSL or ATM performance of a network. In order to diagnose the DSL and ATM performance of a network, a user or technician is required to perform tests specifically designed for this purpose. Thus, the root cause of a performance problem may not be discovered until after conducting numerous tests over a period of time.

It is therefore desired to provide methods and apparatus that overcome the above and other shortcomings of the prior art.

Accordingly, methods and apparatus are provided to determine performance of a network. A first measurement relating to a first layer of communications in the network and a second measurement relating to a second layer of communications in the network are provided. Based on the first and second measurements, a set of performance parameters is generated. The performance of the network may then be determined based on the generated set of parameters.

In one embodiment, the performance of the network may be determined based upon measurements relating to the transport of internet protocol packets in the network and an attainable bit rate in a portion of the network. The measurement relating to internet protocol packets may then be compared to the attainable bit rate measurement to generate a performance parameter. For example, the performance parameter may be based on whether the measurement relating to internet protocol packets is above a threshold percentage of the attainable bit rate.

Additional features and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.

In the Figures:

FIG. 1 illustrates a network, in accordance with methods and apparatus consistent with the present invention; and

FIG. 2 illustrates exemplary steps performed by a server for analyzing performance of a network, in accordance with methods and apparatus consistent with the present invention.

Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

FIG. 1 illustrates a network 100, in accordance with methods and apparatus consistent with the principles of the present invention. As shown, network 100 comprises a user device 102, a network access device 104, an access link 106, an access multiplexer 108, an access network 110, a gateway device 112, a wide area network (“WAN”) 114, a destination device 116, a monitoring server 118, and a database 120.

User device 102 may be any device capable of accessing a network, such as access network 110. User device 102 may include any one or more of a variety of known devices, such as a personal computer, laptop, personal digital assistant, or mobile phone.

Network access device 104 may provide a communications interface for user device 102. For example, network access device 104 may include a dial-up modem, a digital subscriber line (“DSL”) modem, a wireless modem, or a cable modem. Although network access device 104 is shown as being separate from user device 102, network access device 104 may be integrated within user device 102. Alternatively, any number of network elements, such as a hub, router, firewall, or switch, may be interposed between user device 102 and network access device 104.

Access link 106 provides a physical link between network access device 104 and access multiplexer 108. Access link 106 may be based on wire-line technologies, such as a telephone line, a DSL link, or a dedicated trunk, such as a T-1 trunk. Access link 106 may also be based on wireless technologies, such as radio frequency (RF), satellite, and microwave.

Access multiplexer 108 may provide an access point for user device 102 to access network 110. For example, access multiplexer 108 may be implemented as a modem bank, a DSL access multiplexer, a wireless base station, etc. using known hardware and software. Although access multiplexer 108 is shown interfacing a single user device, access multiplexer 108 may provide an access point for a plurality of user devices.

Access network 110 may allow communications between user device 102 and wide area network 114. Access network 110 may be implemented over a range of distances, such as over a local area, a metropolitan area, or a wide area using any number of nodes (not shown). Access network 110 may be implemented using a variety of technologies, including Internet Protocol (IP), Asynchronous Transfer Mode (ATM), frame-relay, Ethernet, etc. For example, in one embodiment, access network 110 may be implemented using a plurality of ATM switches. Information communicated between user device 102 and wide area network 114 may pass through access network 110 over virtual connections, such as ATM permanent virtual circuits (“PVC”).

Gateway device 112 may connect access network 110 and WAN 114 and may translate communications between access network 110 and WAN 114. For example, gateway device 112 may be a switch, such as an ATM switch that terminates a virtual connection traversing access network 110 and provides IP packets to and from WAN 114. Alternatively, gateway device 112 may comprise multiple devices coupled together, such as a switch connected to a router.

WAN 114 may be any type of wide area network, such as the Internet or a corporate intranet. Destination device 116 may provide information from, for example, an Internet web site to user device 102 via WAN 114. Destination device 116 may include a web server or an application server. Alternatively, destination device 116 may include other types of devices, such as a router or a firewall.

Monitoring server 118 may provide operations support and monitoring for network 100, including configuration management, provisioning, testing, billing, monitoring, and other management functions. Monitoring server 118 may be implemented using known hardware and software, such as one or more general purpose computers. Monitoring server 118 may be connected via links 122, 124, and 126 to management ports on access multiplexer 108, access network 110, and gateway device 112, respectively. Links 122, 124, and 126 may be dedicated physical links between monitoring server 118 and access multiplexer 108, access network 110, and gateway device 112, respectively. Alternatively, link 124 may be a physical link between monitoring server 118 and access network 110, while links 122 and 126 may be implemented as logical links, such as virtual connections over access network 110. In addition, monitoring server 118 may be connected to other components of network 100, such as access device 104 and destination device 116.

Database 120 may store information relating to the configuration and provisioning of network 100, including information relating to products and services provided over access link 106, access multiplexer 108, and access network 110, such as service level agreements, equipment descriptions, wiring information, transmission information, and information related to the physical and logical topologies of access network 110. Database 120 may also store information relating to the configuration of access device 104, access link 106, access multiplexer 108, one or more nodes (not shown) within access network 110, and gateway device 112. For example, database 120 may store information, such as a data rate capacity of access link 106, virtual connection information including, for example, virtual path identifiers (“VPI”) and virtual channel identifiers (“VCI”), IP addresses of access device 104 and gateway device 112, an address of a management port on access multiplexer 108, addresses for one or more nodes within access network 110, and an address for gateway device 112. Database 120 may also store other data relating to operations support and monitoring of network 100, such as utilization statistics from access network 110. Although shown directly connected to monitoring server 118, database 120 may be integrated with monitoring server 118, or may be remotely connected to monitoring server 118.

FIG. 2 illustrates exemplary steps performed by monitoring server 118 for analyzing performance of network 100, in accordance with methods and apparatus consistent with the present invention. Monitoring server 118 may analyze the performance of network 100 based on a request, such as from a user at user device 102. For example, a user at user device 102 may report slow response from network 100, or degraded voice quality related to voice-over-packet-over-DSL service over network 100. Alternatively, monitoring server 118 may analyze the performance of network 100 at a predetermined interval, such as every 5 minutes.

Monitoring server 118 may receive one or more measurements to analyze the performance of network 100 (stage 200). The measurements may relate to various layers of communications transported across portions of network 100, such as access network 110 or access link 106. Communications transported across network 100 may comprise a physical layer, a data link layer, and a logical layer. The physical layer may relate to transmission of information over a physical link, such as equipment descriptions, wiring information, transmission information, and transmission media information, such as the data rate capacity (or attainable bit rate) of access link 106 and the service level agreement for access link 106.

The data link layer may relate to processes and mechanisms used to transmit information over a link between two devices, such as ATM virtual connection information for access network 110 and ATM statistics, including cell delay, cell delay variation, cell loss rate, etc.

The logical layer may relate to processes and mechanisms used to transmit information over networks, such as access network 110 and WAN 114, and routable protocols, such as IP. In addition, the logical layer may relate to end-to-end integrity of transmissions, session information between user device 102 and destination device 116, encoding transmissions over WAN 114, applications running on user device 102, and applications running on destination device 116.
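
As one way to visualize the grouping described above, the following is a minimal sketch of a data structure that collects measurements by layer; the class and field names are illustrative assumptions rather than anything specified by the patent.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PhysicalLayer:
        attainable_bit_rate_kbps: Optional[float] = None  # e.g., from a metallic loop test
        signal_to_noise_db: Optional[float] = None
        sla_promised_rate_kbps: Optional[float] = None     # from the provisioning database

    @dataclass
    class DataLinkLayer:
        vpi: Optional[int] = None                          # ATM virtual path identifier
        vci: Optional[int] = None                          # ATM virtual channel identifier
        cell_delay_ms: Optional[float] = None
        cell_delay_variation_ms: Optional[float] = None
        cell_loss_rate: Optional[float] = None
        cell_transfer_rate_kbps: Optional[float] = None

    @dataclass
    class LogicalLayer:
        ip_bandwidth_kbps: Optional[float] = None          # from a loopback bandwidth test
        ip_jitter_ms: Optional[float] = None
        ip_packet_loss_rate: Optional[float] = None

    @dataclass
    class LayeredMeasurements:
        physical: PhysicalLayer = field(default_factory=PhysicalLayer)
        data_link: DataLinkLayer = field(default_factory=DataLinkLayer)
        logical: LogicalLayer = field(default_factory=LogicalLayer)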

Monitoring server 118 may initially receive measurements related to the logical layer of communications transported across network 100. For example, a user at user device 102 may initiate an application to run one or more IP protocol tests, such as a bandwidth test. The bandwidth test may comprise one or more IP packets with a given payload size of, for example, 256 kilobytes. The IP packets are time stamped and looped between user device 102 and gateway device 112. The application may then calculate an IP bandwidth by dividing the payload size by the time difference indicated by the time stamps. In addition, monitoring server 118 may receive other logical layer measurements, such as IP packet delay, IP jitter, and IP packet loss, based on information retrieved from database 120, or tests, such as a ping or traceroute, between user device 102, gateway device 112, and destination device 116.
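
A minimal sketch of that calculation follows, assuming only the relationship described above (payload size divided by the elapsed time between the time stamps); the function name and the sample numbers are illustrative, not taken from the patent.

    def ip_bandwidth_kbps(payload_bytes: int, sent_at_s: float, received_at_s: float) -> float:
        """Estimate IP bandwidth from a time-stamped loopback test: payload size
        divided by the elapsed time between sending and receiving the packets."""
        elapsed_s = received_at_s - sent_at_s
        if elapsed_s <= 0:
            raise ValueError("received_at_s must be later than sent_at_s")
        return (payload_bytes * 8) / elapsed_s / 1000.0  # bits per second -> kilobits per second

    # Illustrative usage: a 256-kilobyte payload looped in 4.2 seconds
    # print(ip_bandwidth_kbps(256 * 1024, 0.0, 4.2))  # roughly 499 kbps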

Monitoring server 118 may also receive data link layer and physical layer measurements. For example, monitoring server 118 may use the simple network management protocol (“SNMP”) to retrieve data link layer information from access network 110, access multiplexer 108, and gateway device 112. RFC-1157, J. Case et al., (1990), titled “A Simple Network Management Protocol (SNMP),” describes, inter alia, the SNMP protocol and is incorporated herein by reference in its entirety. Using SNMP, monitoring server 118 may retrieve data link layer measurements, such as virtual connection information including the VPI of a virtual connection, the VCI of the virtual connection, and traffic statistics, such as cell delay, cell delay variation, cell transfer rate, and cell loss rate. In addition to SNMP, monitoring server 118 may use ATM operations and maintenance (“OAM”) cells, or proprietary management schemes from particular manufacturers to receive measurements from access network 110.
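
For illustration, a hedged sketch of such an SNMP retrieval is shown below; it assumes the net-snmp command-line tools are installed, and the host address, community string, and OID in the usage note are hypothetical placeholders.

    import subprocess

    def snmp_get(host: str, oid: str, community: str = "public") -> str:
        """Retrieve a single MIB variable from a managed node using the net-snmp
        `snmpget` tool (SNMPv2c assumed) and return the raw response line."""
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", community, host, oid],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Hypothetical usage: read an interface octet counter from a management port
    # print(snmp_get("198.51.100.10", "IF-MIB::ifInOctets.1"))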

Monitoring server 118 may retrieve physical layer measurements relating to various links, such as access link 106 within network 100. Monitoring server 118 may use the SNMP protocol to retrieve physical layer measurements from either access multiplexer 108 or access device 104. For example, monitoring server 118 may send a command via the SNMP protocol to multiplexer 108 to initiate a metallic loop test on access link 106. Multiplexer 108 may then perform the metallic loop test by sending a variety of signals over a range of frequencies through access link 106. Based on the frequency response, access multiplexer 108 may then determine various physical layer parameters for access link 106, such as a data rate capacity (or attainable bit rate), a downstream data rate, an upstream data rate, a signal to noise ratio, an output power gauge, an impedance, and a capacitance.

Monitoring server 118 may also receive measurements based on information from database 120. For example, monitoring server 118 may query database 120 to receive information, such as equipment descriptions, equipment configurations, buffer sizes, wiring information, transmission information, and transmission media information for access link 106, and service level agreement information for access link 106. The service level agreement may indicate the nature of service provided over access link 106 and metrics used to measure the service, such as a promised data rate.

Monitoring server 118 may then generate one or more performance parameters with which to analyze the performance of network 100 (stage 202). Monitoring server 118 may generate performance parameters based on one or more measurements from different layers of communications transported across network 100.

For example, monitoring server 118 may correlate the result of the IP bandwidth test between user device 102 and gateway device 112 with the attainable bit rate measured for access link 106. Monitoring server 118 may then determine whether the measured IP bandwidth is above a threshold percentage of the attainable bit rate, such as 30% of the attainable bit rate. Monitoring server 118 may also determine whether the attainable bit rate is above a threshold percentage of the rate specified in the service level agreement, such as 80% of the promised data rate.
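
A minimal sketch of these two comparisons follows; the 30% and 80% thresholds come from the description above, while the function name, return format, and sample values are illustrative assumptions.

    def evaluate_performance(ip_bandwidth_kbps: float,
                             attainable_bit_rate_kbps: float,
                             sla_promised_rate_kbps: float,
                             ip_threshold: float = 0.30,
                             sla_threshold: float = 0.80) -> dict:
        """Correlate measurements from different layers: the IP bandwidth test result
        against the attainable bit rate, and the attainable bit rate against the
        promised data rate in the service level agreement."""
        return {
            "ip_vs_attainable_ok": ip_bandwidth_kbps >= ip_threshold * attainable_bit_rate_kbps,
            "attainable_vs_sla_ok": attainable_bit_rate_kbps >= sla_threshold * sla_promised_rate_kbps,
        }

    # Illustrative usage:
    # evaluate_performance(ip_bandwidth_kbps=400.0,
    #                      attainable_bit_rate_kbps=1500.0,
    #                      sla_promised_rate_kbps=1536.0)
    # -> {'ip_vs_attainable_ok': False, 'attainable_vs_sla_ok': True}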

As another example, for voice-over-packet-over-DSL communications, monitoring server 118 may correlate IP jitter with ATM cell transfer rate, attainable bit rate, buffer size, and an equipment configuration, such as echo cancellation. Monitoring server 118 may correlate other measurements to generate performance parameters.

Monitoring server 118 may then determine a root cause of a problem in network 100 based on the performance parameters (stage 204). Monitoring server 118 may analyze for the following exemplary root causes: access link 106 too long; loss of synchronization signal on access link 106; excessive noise on access link 106; virtual connection mis-configured in access network 110; congestion at access multiplexer 108; congestion in access network 110; congestion at gateway device 112; congestion in WAN 114; congestion at user device 102; congestion at destination device 116; configuration problem at access device 104; IP parameters mis-configured at user device 102; configuration problem at access multiplexer 108; and a failure within network 100. Monitoring server 118 may analyze for other root causes in addition to those discussed above.

For example, if the measured IP bandwidth is above 30% of the attainable bit rate for access link 106, then monitoring server 118 may consider this normal performance for network 100. However, if the measured IP bandwidth is below 30% of the attainable bit rate for access link 106, then monitoring server 118 may identify this as a performance issue and retrieve additional information to identify the root cause. Monitoring server 118 may retrieve SNMP protocol information from access multiplexer 108 and physical layer information from database 120. Monitoring server 118 may then verify the configuration of access device 104 and access multiplexer 108 based on the retrieved information.

As another example, if the measured attainable bit rate of access link 106 is above 80% of the promised data rate in the service level agreement, then monitoring server 118 may consider this normal performance for network 100. However, if the attainable bit rate is below 80% of the promised data rate, then monitoring server 118 may identify this as a performance issue for a portion of network 100, such as access network 110. Monitoring server 118 may then receive SNMP protocol information from access network 110 to determine the location of potential congestion in access network 110. Congestion within access network 110 may be indicated based on a variety of information, such as high cell delay, high cell delay variation, and high cell loss. Monitoring server 118 may also use other traffic statistics, such as IP packet loss at gateway device 112, to locate the congestion within access network 110.
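
The following sketch illustrates how such statistics might be screened for a congestion suspect; the specific threshold values are illustrative assumptions and are not taken from the patent.

    def congestion_suspected(cell_delay_ms: float,
                             cell_delay_variation_ms: float,
                             cell_loss_rate: float,
                             delay_limit_ms: float = 5.0,        # illustrative limits,
                             variation_limit_ms: float = 2.0,    # not values from the patent
                             loss_limit: float = 1e-4) -> bool:
        """Flag a node as a congestion suspect when its ATM traffic statistics show
        high cell delay, high cell delay variation, or high cell loss."""
        return (cell_delay_ms > delay_limit_ms
                or cell_delay_variation_ms > variation_limit_ms
                or cell_loss_rate > loss_limit)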

As another example, for voice-over-packet-over-DSL service, if monitoring server 118 generates a performance parameter for IP jitter of 30 milliseconds, then monitoring server 118 may check whether echo cancellation is configured in equipment in network 100, whether a buffer size of at least 50 milliseconds is provisioned, whether an ATM cell transfer rate of at least 64 kilobits per second is available, and whether the attainable bit rate for access link 106 is approximately 1 megabit per second upstream, or 300 kilobits per second downstream. Monitoring server 118 may then receive SNMP protocol information, such as from access network 110, to determine a root cause for degradation in the voice-over-packet-over-DSL communications over network 100.
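
A minimal sketch of those checks follows; the 50-millisecond buffer, 64-kilobit-per-second cell transfer rate, and 1-megabit/300-kilobit attainable-rate figures come from the description above, while the function name, field names, and the exact interpretation of the upstream/downstream figures are assumptions.

    def voice_over_dsl_checks(echo_cancellation_enabled: bool,
                              buffer_size_ms: float,
                              cell_transfer_rate_kbps: float,
                              upstream_attainable_kbps: float,
                              downstream_attainable_kbps: float) -> list:
        """Return the checks that fail for voice-over-packet-over-DSL service when
        the measured IP jitter is high (for example, around 30 milliseconds)."""
        failures = []
        if not echo_cancellation_enabled:
            failures.append("echo cancellation not configured")
        if buffer_size_ms < 50:
            failures.append("buffer size below 50 milliseconds")
        if cell_transfer_rate_kbps < 64:
            failures.append("ATM cell transfer rate below 64 kilobits per second")
        if upstream_attainable_kbps < 1000:
            failures.append("upstream attainable bit rate below roughly 1 megabit per second")
        if downstream_attainable_kbps < 300:
            failures.append("downstream attainable bit rate below roughly 300 kilobits per second")
        return failures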

Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Inventors: Cisneros, Arturo; Lau, Richard C.; Tsai, Frank C. D.
