A method and apparatus to analyze real-time data transmissions across a network are described. The method may comprise transmitting a sample data stream between source and destination endpoints across a test data path which includes network devices. The method may then compare a measured quality of the received sample data stream with pre-defined quality criteria associated with the network. If the measured quality fails to meet the pre-defined quality criteria, the network devices in the test data path may be identified, device performance data may be obtained, and a network report may be generated based on the device performance data. The device performance data may comprise processor utilization, memory utilization, bandwidth oversubscription, buffer overrun, and/or a number of non-error packets that are discarded at the network device.
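Purely as a non-limiting illustration, the overall flow summarized in the abstract may be sketched as follows; the helper callables (transmit_sample_stream, measure_quality, get_device_performance) are hypothetical placeholders and are not defined by the disclosure.

```python
# Hypothetical sketch of the analysis flow described in the abstract.
# All helper callables are placeholders supplied by the caller; the
# disclosure does not prescribe any particular implementation of them.

def analyze_real_time_path(source, destination, path_devices, quality_criteria,
                           transmit_sample_stream, measure_quality,
                           get_device_performance):
    """Transmit a sample stream, compare its measured quality with the
    pre-defined quality criteria, and report on the devices in the path."""
    transmit_sample_stream(source, destination)
    measured_quality = measure_quality(destination)

    if measured_quality >= quality_criteria:
        return None  # quality criteria met; no report is generated

    # Quality criteria not met: identify the devices in the test data path
    # and gather their performance data (CPU, memory, discards, and so on).
    return [{"device": device, "performance": get_device_performance(device)}
            for device in path_devices]
```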
11. An apparatus comprising:
a network interface unit configured to enable communications over a network; and
a processor configured to:
transmit a sample data stream at a known first quality in the network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
compare a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream and determine that the measured second quality fails to meet the known first quality;
identify at least one network device in the test data path;
obtain device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
use the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generate a network report based on the device performance data, the network report relating the at least one device in the test data path to a failure of the measured second quality to meet the known first quality.
1. A method comprising:
transmitting a sample data stream at a known first quality in a network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
comparing a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream;
determining that the measured second quality is less than the known first quality; and
in response to the determination that the measured second quality fails to meet the known first quality, performing operations including:
identifying at least one network device in the test data path;
obtaining device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
using the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generating a network report based on the device performance data, the network report relating the at least one device in the test data path to a failure of the measured second quality to meet the known first quality.
6. A non-transitory machine-readable storage medium storing instructions that, when executed by a machine, cause the machine to perform operations comprising:
transmitting a sample data stream at a known first quality in a network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
comparing a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream;
determining that the measured second quality fails to meet the known first quality; and
in response to the determination that the measured second quality fails to meet the known first quality, performing operations including:
identifying at least one network device in the test data path;
obtaining device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
using the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generating a network report based on the device performance data, the network report relating the at least one device in the test data path to a failure of the measured second quality to meet the known first quality.
2. The method of
3. The method of
monitoring a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream;
identifying the at least one network device in the test data path from the monitored network devices; and
obtaining the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
4. The method of
interrogating a WAN edge router in the test data path to obtain device performance data for the WAN edge router.
7. The non-transitory machine-readable storage medium of
8. The non-transitory machine-readable storage medium of
monitoring a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream;
identifying the at least one network device in the test data path from the monitored network devices; and
obtaining the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
9. The non-transitory machine-readable storage medium of
interrogating a WAN edge router in the test data path to obtain device performance data for the WAN edge router.
10. The non-transitory machine-readable storage medium of
12. The apparatus of
13. The apparatus of
monitor a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream,
identify the at least one network device in the test data path from the monitored network devices, and
obtain the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
14. The apparatus of
This application is a continuation of U.S. patent application Ser. No. 11/466,390, filed on Aug. 22, 2006, which is incorporated herein by reference in its entirety for all purposes.
This application relates generally to computer network communications, and particularly to a method of and system for identifying a particular network device which contributes to poor quality of service of real-time data transmission across a network.
The popularity of IP (Internet Protocol) telephony (e.g., VoIP, video calls, etc.) is increasing, and deployments of IP telephony are correspondingly growing in the number of subscribers and the size of networks. The increasing number of subscribers using IP telephony for their day-to-day communication places a greater load on network infrastructure, which leads to poorer voice quality where capacity is inadequate or infrastructure is faulty.
IP telephony places strict requirements on IP packet loss, packet delay, and delay variation (or jitter). In complex, multi-site customer networks, there may be many WAN edge routers that interconnect the many branches of an enterprise, or of many small businesses managed by a service provider.
Probable causes of poor voice quality at the WAN edge router are codec conversions, mismatched link speeds, and bandwidth oversubscription owing to the number of users, the number of links, and/or the link speed. Each of these causes results in buffer overruns, leading to packet discards, which in turn degrade the quality of voice or service.
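As a rough, hypothetical illustration of the oversubscription case only (the per-call bandwidth figure is an approximation for G.711 voice with typical IP/UDP/RTP overhead, not a value taken from this disclosure):

```python
# Approximate bandwidth of one G.711 call with 20 ms packetization and
# IP/UDP/RTP overhead (about 80 kbps); actual overhead varies by link layer.
PER_CALL_KBPS = 80
T1_LINK_KBPS = 1544  # nominal T1 line rate

concurrent_calls = 30
offered_load_kbps = concurrent_calls * PER_CALL_KBPS  # 2400 kbps

# The offered voice load exceeds the WAN link rate, so the edge router's
# output buffers fill and eventually overrun, discarding non-error packets.
print(f"offered: {offered_load_kbps} kbps, link: {T1_LINK_KBPS} kbps, "
      f"oversubscribed: {offered_load_kbps > T1_LINK_KBPS}")
```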
The present invention is illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like references indicate similar elements.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of an embodiment of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
A plurality of IP telephones 124 to 132 are connected via the switches 110 to 114 and routers 104 to 108 to the WAN 102. The IP telephones 124 to 132 may be fixed or mobile telephones, e.g. VoIP telephones. In addition, the system 100 may include a voice application server 120, such as a voicemail system, an IVR (Interactive Voice Response) system, or the like, and also includes a computer system in the form of a call manager 122, in accordance with an example embodiment. It should, however, be noted that the example embodiments are not limited to voice-only transmission but also extend to any real-time (or time-critical) communications, such as video.
It is to be understood that the example IP telephones 124 to 132 communicate with one another and/or with other telephones by digitising voice or other sound (or even video with video telephones) and by sending the voice data in a stream of IP packets across the WAN 102 or other network. It is important for networks carrying voice streams to provide a high quality of service (QoS) so that the voice is clear, or at least audible, when received by a receiver telephone. Thus, packet loss or delay is undesirable as it lowers the QoS. This is not necessarily a problem with conventional data packet transmission, e.g., with non-voice or non-real-time data, as dropped packets can be retransmitted and delayed packets reassembled in due course.
The computer system 200 includes a processor 202 and a network interface device 206 (e.g. a network card) for communication with a network. The processor 202 comprises a plurality of conceptual modules, which correspond to functional tasks performed by the processor 202. To this end, the computer system 200 may include a machine-readable medium, e.g. the processor 202, main memory, and/or a hard disk drive, which carries a set of instructions to direct the operation of the processor 202, for example in the form of a computer program. More specifically, the processor 202 is shown by way of example to include: a monitoring module 210 to monitor network devices connected to the system 200; a generating module 212 to generate a sample real-time data stream; a comparing module 214 to compare quality of the sample real-time data stream with pre-defined quality criteria; a detecting module 216 to detect network devices in a network of which the system 200 forms part; and a determining module 218 to determine whether or not any detected network devices are contributing to a poor QoS. It is to be understood that the processor 202 may be one or more microprocessors, controllers, or any other suitable computing device, resource, hardware, software, or embedded logic. Furthermore, the functional modules 210 to 218 may be distributed among several processors, or alternatively may be consolidated within a single processor, as shown.
It is important to note that the computer system 200 need not include all the modules 210 to 218. Accordingly, some of the conceptual modules 210 to 218 may also be distributed across different network devices. For example, in an example embodiment, the monitoring module 210, the detecting module 216, the determining module 218, and a reporting module may be provided in a network management system. Further, in an example embodiment, the generating module 212 and the detecting module 216 may be provided in a call agent. It should also be noted that multiple modules (e.g., duplicate modules) may be provided in different devices across the network.
The monitoring module 210 monitors L3 network devices in a network to which the computer system 200 is connected. The monitoring module 210 is configured to poll or interrogate the network devices intermittently, e.g. at pre-defined monitoring intervals, to determine performance data or statistics for at least one but preferably for all interfaces on the network devices. The monitoring module 210 is particularly configured to monitor performance statistics for network routers. The performance statistics which are monitored include processor utilisation and memory utilisation of each monitored network device, for example expressing the memory utilisation of each device as a percentage of maximum memory utilisation. The monitoring module 210 may further monitor non-error IP packets which are dropped or discarded, e.g. also in the form of a percentage. The monitoring module 210 may thus, for instance, record that 10% of non-error data packets are being dropped by a particular network device (e.g., due to buffer overruns).
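A minimal sketch of this kind of intermittent polling is given below, assuming Python; how the statistics are actually retrieved from a device (e.g. SNMP or a CLI) is not prescribed here, so get_interface_stats() is a hypothetical placeholder supplied by the caller.

```python
import time

def monitor_devices(devices, get_interface_stats, interval_s=60, rounds=5):
    """Poll each network device at a pre-defined monitoring interval and
    record, per device, statistics such as processor utilisation, memory
    utilisation (as a percentage of maximum), and the percentage of
    non-error packets discarded (e.g. due to buffer overruns)."""
    history = {device: [] for device in devices}
    for round_index in range(rounds):
        for device in devices:
            stats = get_interface_stats(device)  # hypothetical retrieval call
            history[device].append(stats)
        if round_index + 1 < rounds:
            time.sleep(interval_s)  # wait for the next monitoring interval
    return history
```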
The performance statistics which are monitored provide an indication of whether or not a particular network device, such as one of the routers 104 to 108, is coping satisfactorily with traffic on each of its interfaces, e.g. an ATM (Asynchronous Transfer Mode) interface, a T1 interface, etc. The traffic statistics may be temporarily stored for later use, e.g. on a memory module connected to the computer system 200.
The generating module 212 may be operable to generate and send a sample real-time data stream (e.g., a known voice clip) to a remote network device or other computer system. The sample real-time data stream may be of a known quality, so that quality degradation can be measured. It is to be appreciated that the generating module 212 may be remote from the other modules, e.g. hosted by a router or switch. The sample stream is transmitted between two endpoints, namely a source endpoint and a destination endpoint (which may be randomly selected). The generating module 212 may serve as the source endpoint, while the destination endpoint may be a remote computer system, e.g. a router or switch. One or more network devices (e.g. WAN edge routers 104 to 108) are in the path of the sample stream, so that the quality of the data stream after transmission is influenced by the network device(s). In other embodiments, the generating module 212 can be located on a system separate from the computer system 200, the computer system 200 optionally serving as a destination endpoint.
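The disclosure does not specify a transport or packet format for the sample stream; purely as a hypothetical sketch, a sample stream could be emitted as timestamped, sequence-numbered UDP packets so that the destination endpoint can later estimate loss, delay, and jitter:

```python
import socket
import struct
import time

def send_sample_stream(dest_ip, dest_port, packets=100, interval_s=0.02,
                       payload_size=160):
    """Send an illustrative sample stream: each packet carries a sequence
    number and a send timestamp followed by filler payload. The packet
    format is purely illustrative and is not taken from the disclosure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for seq in range(packets):
            header = struct.pack("!Id", seq, time.time())
            sock.sendto(header + b"\x00" * payload_size, (dest_ip, dest_port))
            time.sleep(interval_s)  # ~20 ms spacing, as for a voice frame
    finally:
        sock.close()
```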
The comparing module 214 may compare the quality of the sample real-time data stream after transmission with pre-defined quality criteria which, for example, include impairment factors such as codec type, network topology, etc. The quality of the sample stream after transmission may be measured by a measuring module (refer further to the measuring module 262 of the destination endpoint, described below).
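A complementary, hypothetical receiver for the sender sketch above is shown below; it derives simple quality indicators, namely packet loss and mean one-way delay (the latter assumes roughly synchronised endpoint clocks, which is an assumption of this sketch and not a feature of the embodiment):

```python
import socket
import struct
import time

def receive_sample_stream(listen_port, expected_packets=100, timeout_s=5.0):
    """Receive the illustrative sample stream and return (loss %, mean delay).
    One-way delay here relies on the sender and receiver clocks being
    roughly synchronised (an assumption of this sketch only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", listen_port))
    sock.settimeout(timeout_s)
    received, delays = set(), []
    try:
        while len(received) < expected_packets:
            try:
                data, _ = sock.recvfrom(2048)
            except socket.timeout:
                break  # stop once the stream appears to have ended
            seq, sent_at = struct.unpack("!Id", data[:12])
            received.add(seq)
            delays.append(time.time() - sent_at)
    finally:
        sock.close()
    loss_pct = 100.0 * (expected_packets - len(received)) / expected_packets
    mean_delay_s = sum(delays) / len(delays) if delays else float("nan")
    return loss_pct, mean_delay_s
```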
The determining module 218 then determines whether or not any of the detected network devices in the sample stream path are over-loaded or are performing poorly based on the performance statistics gathered by the monitoring module 210.
Referring now to
An example embodiment is further described in use with reference to
Referring to
Although the call manager 122 may be used for measuring the quality of any real-time data stream, the example embodiment may find particular application in measuring the quality of sound or voice streams, for example, voice streams used for IP telephony. Thus, at block 354, the generating module 212 may generate a sample voice stream of known quality (e.g. having a MOS of 5), and may transmit the sample voice stream to a destination endpoint, for example switch 112. It will be noted that in the given example, because the call manager 122 is the source endpoint and the switch 112 is the destination endpoint, WAN edge routers 104 and 106 both lie in the path of the sample voice stream. Thus, the quality of the sample stream as received by switch 112 will be affected by the performance of WAN edge routers 104 to 106. In addition, the generating module 212 may transmit the sample voice stream to other destination endpoints, for example switch 114, to gauge the performance of WAN edge router 108. Thus, there may be a plurality of destination endpoints in the system 250 so that the sample voice streams pass through as many WAN edge routers as possible.
In another example embodiment, one of the WAN edge routers 104 to 106 may be the destination endpoint. Instead, or in addition, the call manager 122 may be a destination endpoint, and a router or switch may be a source endpoint. In such a case, the call manager 122 may include the measuring module 262, and the router or switch used as the source endpoint may include the generating module 212. Thus, there may be a plurality of source endpoints, and a single destination endpoint.
The measuring module 262 of switch 112 may measure, at block 356, the quality of the received sample voice stream in accordance with the MOS estimation algorithm, and transmit data indicative of the measured voice quality back to the call manager 122. The comparing module 214 may thereafter compare, at block 358, the measured value of the sample voice stream against an expected quality value. For example, the comparing module 214 may determine what quality value is to be expected based on the network topology and/or the codec used for transmitting the sample voice stream. The comparing module 214, using impairment factors, may thus determine an expected quality of the sample voice signal after transmission. For example, if the sample voice stream was transmitted using the G.711 codec, the expected MOS is 4.10 (refer to table 280).
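To illustrate the comparison at block 358 only, a minimal sketch follows; the expected MOS for G.711 (4.10) is the figure from the example above, and any further codec entries would be populated from the deployment's own impairment factors rather than from values given here.

```python
# Expected MOS per codec. The G.711 entry (4.10) is the value used in the
# example above; other codecs would be added from the deployment's own
# impairment factors (codec, network topology, and so on).
EXPECTED_MOS = {"G.711": 4.10}

def quality_meets_expectation(codec, measured_mos, expected=EXPECTED_MOS):
    """Compare the MOS measured by the destination endpoint with the MOS
    expected for the codec used to transmit the sample voice stream."""
    return measured_mos >= expected[codec]

# Example: a measured MOS of 3.2 for a G.711 sample stream fails to meet the
# expected 4.10, so the WAN edge routers in the path are examined further.
print(quality_meets_expectation("G.711", 3.2))  # False
```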
Using the traffic statistics gathered at block 352 by the monitoring module 210, the determining module 218 may then determine, at block 366, which of the detected WAN edge routers 104 to 106 in the sample stream path, if any, are contributing to the poor quality of service and, more specifically, which of these routers' interfaces are contributing to a poor quality of service. For example, if the traffic statistics show that an ATM interface of WAN edge router 104 had (and/or has) a very high memory or CPU usage (for example 80% to 100%), or was (and/or is) discarding an unusually high number of non-error packets (e.g. one in ten non-error packets was (and/or is) being discarded), it is likely, or at least possible, that the ATM interface of WAN edge router 104 is contributing to a poor quality of service.
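A minimal sketch of this interface-level check is given below; the 80% utilisation and one-in-ten discard thresholds simply mirror the example figures above and are not limits prescribed by the embodiment.

```python
def flag_suspect_interfaces(router_stats, util_threshold=80.0,
                            discard_threshold=10.0):
    """Return the interfaces whose statistics suggest they contribute to a
    poor quality of service. router_stats maps an interface name (e.g.
    "ATM0/0", a hypothetical name) to a dict with 'cpu_pct', 'mem_pct',
    and 'non_error_discard_pct' values."""
    suspects = []
    for interface, stats in router_stats.items():
        overloaded = (stats["cpu_pct"] >= util_threshold
                      or stats["mem_pct"] >= util_threshold)
        discarding = stats["non_error_discard_pct"] >= discard_threshold
        if overloaded or discarding:
            suspects.append(interface)
    return suspects

# Example: an ATM interface at 85% CPU that discards 10% of its non-error
# packets is flagged; a lightly loaded T1 interface is not.
stats = {
    "ATM0/0": {"cpu_pct": 85.0, "mem_pct": 40.0, "non_error_discard_pct": 10.0},
    "T1-0/1": {"cpu_pct": 20.0, "mem_pct": 30.0, "non_error_discard_pct": 0.0},
}
print(flag_suspect_interfaces(stats))  # ['ATM0/0']
```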
It is to be understood that the order of some of the steps/operations described above may be changed and the same result may still be achieved. For example, the step of monitoring the routers, at block 352, may be performed later in the process, for example before or after the quality of the sample voice stream is measured at block 356, or before or after the WAN edge routers 104 to 108 are identified, at block 364.
The reporting module 226 may generate a report (e.g. in the form of a dashboard), at block 368, which summarizes the performance of each interface of each of the identified potentially faulty WAN edge routers 104 to 106 insofar as it relates to transmission quality of real-time data such as voice streams. The network administrator, after seeing the report, may be in a better position to correct the problem, for example by adjusting or bypassing the WAN edge router 104 to 108 which is causing the low quality of service.
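A hypothetical sketch of the kind of per-interface summary the reporting module 226 might assemble is shown below; the field names are placeholders and do not represent a prescribed report or dashboard format.

```python
def build_report(flagged_interfaces):
    """Format flagged router interfaces into simple report lines. Each entry
    is a dict with 'router', 'interface', 'cpu_pct', 'mem_pct', and
    'non_error_discard_pct' keys (placeholder field names)."""
    lines = ["Real-time transmission quality report"]
    for entry in flagged_interfaces:
        lines.append(
            f"{entry['router']} {entry['interface']}: "
            f"CPU {entry['cpu_pct']}%, memory {entry['mem_pct']}%, "
            f"non-error discards {entry['non_error_discard_pct']}%")
    return "\n".join(lines)
```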
The example computer system 400 includes a processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 404 and a static memory 406, which communicate with each other via a bus 408. The computer system 400 may further include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 400 also includes an alphanumeric input device 412 (e.g., a keyboard), a user interface (UI) navigation device 414 (e.g., a mouse), a disk drive unit 416, a signal generation device 418 (e.g., a speaker) and a network interface device 420.
The disk drive unit 416 includes a machine-readable medium 422 on which is stored one or more sets of instructions and data structures (e.g., software 424) embodying or utilized by any one or more of the methodologies or functions described herein. The software 424 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting machine-readable media.
The software 424 may further be transmitted or received over a network 426 via the network interface device 420 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
While the machine-readable medium 422 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
Although an embodiment of the present invention has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The call manager 122 and/or switch 112, or any other computer system or network device in accordance with an example embodiment may be in the form of computer system 400.
The example methods, devices and systems described herein may be used for troubleshooting voice quality issues in a network environment. A network administrator may, based on the generated report, identify which network devices are contributing to a poor quality of service. The network administrator may therefore not need to check the performance of every network device in the network, but rather is provided with a shortlist of network devices which are potentially degrading voice quality.
Inventors: Goyal, Dinesh; Saswade, Sachin Purushottam