A signal monitoring apparatus and method involving devices for monitoring signals representing communications traffic, devices for identifying at least one predetermined parameter by analyzing the content of the at least one monitored signal, a device for recording the occurrence of the identified parameter, a device for identifying the traffic stream associated with the identified parameter, a device for analyzing the recorded data relating to the occurrence, and a device, responsive to the analysis of the recorded data, for controlling the handling of communications traffic within the apparatus.

Patent: RE41608
Priority: Sep 26 1996
Filed: Aug 24 2006
Issued: Aug 31 2010
Expiry: Sep 24 2017
Entity: unknown
Status: EXPIRED
0. 33. A system operable to acquire audio data packets for recording and analysis, comprising:
an audio data recorder operable to acquire data packets associated with a voice interaction transmitted over a computer network to a call center, the audio data packets comprising packet headers and packet bodies;
an analysis module operable to analyze the data packets by identifying at least one speaker-independent predetermined parameter associated with two-way voice communication; and
a storage module operable to store at least a portion of the audio data packets in accordance with the at least one speaker-independent predetermined parameter.
0. 18. A method for recording audio data packets, comprising:
monitoring audio data packets received at a switch associated with a call center, the audio data packets being transmitted over a computer network and comprising packet headers and packet bodies;
examining data within the audio data packets by using a processor that is communicatively connected to the switch to identify at least one speaker-independent predetermined parameter associated with two-way voice communication; and
storing at least a portion of the audio data packets in a storage device communicatively connected to the processor in accordance with the at least one speaker-independent predetermined parameter.
0. 1. A signal monitoring system for monitoring and analyzing communications passing through a monitoring point, the system comprising:
a digital voice recorder (18) for monitoring two-way conversation traffic streams passing through the monitoring point, said digital voice recorder having connections (20) for being operatively attached to the monitoring point;
a digital processor (30) connected to said digital voice recorder for identifying at least one predetermined parameter by analyzing the voice communication content of at least one monitored signal taken from the traffic streams;
a recorder (38) attached to said digital processor for recording occurrences of the predetermined parameter;
a traffic stream identifier (36) for identifying the traffic stream associated with the predetermined parameter;
a data analyzer (36) connected to said digital processor for analyzing the recorded data relating to the occurrences; and
a communication traffic controller (34) operatively connected to said data analyzer and, operating responsive to the analysis of the recorded data, for controlling the handling of communications traffic within said monitoring system.
0. 2. The monitoring system of claim 1, wherein said at least one predetermined parameter includes a frequency of keywords identified in the voice communication content of the at least one monitored signal.
0. 3. The monitoring system of claim 1, wherein said digital processor further identifies episodes of anger or shouting by analyzing amplitude envelope.
0. 4. The monitoring system of claim 1, wherein said at least one predetermined parameter is a prosody of the voice communication content of the at least one monitored signal.
0. 5. The monitoring system of claim 1, wherein said connections for being operatively attached to the telephony exchange switch are attached via high impedance taps (20) to telephone signal lines (24, 26) attached to said telephony exchange switch.
0. 6. The monitoring system of claim 1, wherein said communication traffic controller serves to identify at least one section of traffic relative to another so as to identify a source of the predetermined parameter.
0. 7. The monitoring system of claim 1, wherein said communication traffic controller serves to influence further monitoring actions within the apparatus.
0. 8. The monitoring system of claim 1, wherein the analyzed contents of the at least one monitored signal comprise the interaction between at least two signals representing an at least two-way conversation.
0. 9. The monitoring system of claim 1, wherein the recorder operates in real time to provide a real-time indication of the occurrence.
0. 10. The monitoring system of claim 1, wherein said digital voice recorder comprises an analog/digital convertor (18) for converting analog voice into a digital signal.
0. 11. The monitoring system of claim 1, wherein said digital processor is a Digital Signal Processor (30) arranged to operate in accordance with an analyzing algorithm.
0. 12. The monitoring system of claim 1, wherein the digital processor is arranged to operate in real time.
0. 13. The monitoring system of claim 1, further comprising a replay station (32) connected to said digital processor and arranged such that the voice communication content of the at least one monitored signal can be recorded and monitored by said digital processor for identifying the at least one parameter at some later time.
0. 14. The monitoring system of claim 1, wherein the at least one predetermined parameter comprises plural predetermined parameters and wherein said recorder records the occurrence of the plural predetermined parameters in each of the two directions of traffic separately.
0. 15. The monitoring system of claim 1, wherein said traffic stream identifier comprises a means for receiving an identifier tagged onto the traffic so as to identify its source.
0. 16. The monitoring system of claim 1, wherein said digital voice recorder for monitoring the traffic streams is operative responsive to an output from said traffic stream identifier identifying the source of the conversation in which the predetermined parameter has been identified, or a threshold occurrence of the predetermined parameter has been exceeded.
0. 17. The monitoring system of claim 1, wherein said digital voice recorder, said digital processor, said recorder, said traffic stream identifier, and said data analyzer reside on an add-in card to a telecommunications system.
0. 19. The method of claim 18, wherein the processor analyzes the packet headers.
0. 20. The method of claim 18, wherein examining includes determining telephone interactions to which the audio data packets belong.
0. 21. The method of claim 18, wherein examining includes sorting the audio data packets in accordance with a timestamp.
0. 22. The method of claim 18, further comprising identifying, by the processor, the voice communication content included in the packet bodies of the audio data packets.
0. 23. The method of claim 22, wherein identifying voice communication content includes identifying a frequency of keywords identified in the audio data packets received over the computer network.
0. 24. The method of claim 22, wherein identifying voice communication content includes identifying episodes of anger or shouting based upon an amplitude envelope associated with the audio data packets.
0. 25. The method of claim 22, wherein identifying voice communication content includes identifying a prosody associated with the voice communication content of the audio data packets.
0. 26. The method of claim 22, wherein storing in the storage device is based upon identification of voice communication content that includes the at least one speaker-independent predetermined parameter.
0. 27. The method of claim 22, wherein identifying voice communication content includes examining incoming and outgoing traffic streams to identify whether a talk-over condition exists with respect to the audio data packets.
0. 28. The method of claim 22, wherein identifying voice communication content includes identifying whether one or more of a predetermined group of words exists with respect to the audio data packets.
0. 29. The method of claim 22, wherein identifying voice communication content includes identifying stress in voice content associated with the audio data packets.
0. 30. The method of claim 29, wherein stress is identified by determining changes in volume, speed and tone of voice content associated with the audio data packets.
0. 31. The method of claim 22, wherein identifying voice communication content includes identifying a delay between audio data packet transmissions in opposite directions.
0. 32. The method of claim 18, wherein the examining includes analyzing the packet bodies by the processor.
0. 34. The system of claim 33, wherein the analysis module is operable to extract data from the packet header, and to analyze the packet body.
0. 35. The system of claim 33, wherein the audio data recorder is further operable to determine telephone interactions to which the data packets belong.
0. 36. The system of claim 33, wherein the audio data packets are sorted in accordance with a timestamp.
0. 37. The system of claim 33, wherein analysis of the packet bodies comprises identifying voice communication content included in the packet bodies of the audio data packets.
0. 38. The system of claim 37, wherein identifying voice communication content includes identifying a frequency of keywords identified in the audio data packets received over the computer network.
0. 39. The system of claim 37, wherein identifying voice communication content includes identifying episodes of anger or shouting based upon an amplitude envelope associated with the audio data packets.
0. 40. The system of claim 37, wherein identifying voice communication content includes identifying a prosody associated with the voice communication content of the audio data packets.
0. 41. The system of claim 37, wherein the storage module stores the portion of the audio data packets based upon identification of voice communication content that includes the at least one speaker-independent predetermined parameter.
0. 42. The system of claim 37, wherein identifying voice communication content includes examining incoming and outgoing traffic streams to identify whether a talk-over condition exists with respect to the audio data packets.
0. 43. The system of claim 37, wherein identifying voice communication content includes identifying whether one or more of a predetermined group of words exists with respect to the audio data packets.
0. 44. The system of claim 37, wherein identifying voice communication content includes identifying stress in voice content associated with the audio data packets.
0. 45. The system of claim 37, wherein stress is identified by determining changes in volume, speed and tone of voice content associated with the audio data packets.
0. 46. The system of claim 37, wherein identifying voice communication content includes identifying a delay between data packet transmissions in opposite directions.
BRIEF DESCRIPTION OF THE DRAWINGS

(Step 302; FIG. 3). As will be appreciated from the arrows on the signal lines 24, 26, the high impedance tap 20 is arranged to monitor outgoing voice signals from the call-centre 10, whereas the high impedance tap 22 is arranged to monitor incoming signals to the call-centre 10. The voice traffic on the lines 24, 26 therefore forms a two-way conversation between a call-centre operative using one of the terminals 12 and a customer (not illustrated).

The monitoring apparatus 16 embodying the present invention further includes a computer telephone link 28 whereby data traffic appearing at the exchange switch 14 can be monitored as required.

The digital voice recorder 18 is connected to a network connection 30 which can be in the form of a wide area network (WAN), a local area network (LAN) or an internal bus of a central processing unit of a computer.

Also connected to the network connection 30 are a replay station 32, a configuration and arrangement application station 34, a station 36 providing speech and/or data analysis engine(s), and storage means comprising a first storage means 38 for the relevant analysis rules and the results obtained, and a second storage means 40 for storage of the monitored data and/or speech.

FIG. 2 illustrates the typical format of a data packet 42 used in accordance with the present invention, which comprises a packet header 44 of typically 48 bytes and a packet body 46 of typically 2000 bytes.

The packet header is formatted so as to include the packet identification 48, the data format 50, a date and time stamp 52, the channel number on which the data arises 54, the gain applied to the signal 56 and the data length 58.

The speech, or other data captured in accordance with the apparatus of the present invention, is found within the packet body 46 and within the format specified within the packet header 44.
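
By way of illustration only, the packet layout described above could be represented in software roughly as follows. This is a sketch, not the patented wire format: the field types, and the assumptions that the gain is stored in decibels and the timestamp as a date/time value, are guided only by the sizes quoted above (a header of about 48 bytes and a body of up to about 2000 bytes).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PacketHeader:
    """Fields named after the description; widths and types are assumptions."""
    packet_id: int        # packet identification (48)
    data_format: str      # e.g. "PCM" or "ADPCM" (50)
    timestamp: datetime   # date and time stamp (52)
    channel: int          # channel number on which the data arises (54)
    gain_db: float        # gain applied to the signal (56), assumed to be in dB
    data_length: int      # length of the packet body in bytes (58)

@dataclass
class AudioDataPacket:
    header: PacketHeader  # typically about 48 bytes on the wire
    body: bytes           # captured speech or data, typically up to about 2000 bytes

    def __post_init__(self) -> None:
        # Keep the declared length consistent with the actual body.
        if self.header.data_length != len(self.body):
            raise ValueError("data_length does not match body size")
```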

The high impedance taps 20, 22 have little or no effect on the transmission lines 24, 26 and, if the monitored signal is not already in digital form, it is converted into digital form. For example, when the monitored signal comprises a speech signal, the signal is typically converted to a pulse code modulated (PCM) signal or is compressed as an Adaptive Differential PCM (ADPCM) signal.

Further, where signals are transmitted at a constant rate, the time of the start of the recordings is identified, for example by voltage or activity detection, i.e. so-called “vox” level detection, and the time is recorded. With asynchronous data signals, the start time of a data burst, and optionally the intervals between characters, may be recorded in addition to the data characters themselves.
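
A minimal sketch of the "vox"-style activity detection mentioned above, operating on a digitised (e.g. PCM) sample stream, might look like the following; the frame size and energy threshold are arbitrary assumptions rather than values taken from this disclosure.

```python
def find_recording_start(samples, frame_size=160, threshold=500.0):
    """Return the sample index of the first frame whose mean absolute amplitude
    exceeds the threshold, i.e. a crude 'vox' level detector marking the start
    time of a recording. Frame size and threshold are assumed values."""
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(abs(s) for s in frame) / frame_size
        if energy >= threshold:
            return start  # divide by the sample rate to obtain a start time
    return None  # no activity detected
```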

The purpose of this is to allow a computer system to model the original signal to appropriate values of time, frequency and amplitude so as to allow the subsequent identification of one or more of the various parameters arising in association with the signal (see FIG. 4). The digital information describing the original signals is then analysed at station 36 (Step 304; FIG. 3), in real time or later, to determine the required set of metrics, i.e., parameters, appropriate to the particular application.

FIG. 3 is a flowchart of an example process 300 for monitoring communications traffic. At stage 302, signals representing communications traffic are monitored. For example, the digital voice recorder 18 can monitor two-way conversation traffic associated with the exchange switch 14. At stage 304, a predetermined parameter is identified by analyzing the content. For example, a digital signal processor programmed with an appropriate algorithm can identify the predetermined parameter. At stage 306, the occurrence of the identified parameter is recorded. For example, the first storage 38 (analysis rules and results) can store the occurrence of the identified parameter. At stage 308, the traffic stream associated with the parameter is identified. For example, the speech/data analysis engine 36 can identify the traffic stream. At stage 310, the recorded data relating to the occurrence is analyzed. For example, the speech/data analysis engine 36 can analyze the recorded data stored in the first storage 38.
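
Purely as a structural outline of stages 302-310, and not as the actual implementation, the process could be sketched as below; `switch_tap`, `analysis_engine` and `rules_store` are hypothetical stand-ins for the taps 20/22, the analysis station 36 and the first storage 38.

```python
def run_monitoring_pass(switch_tap, analysis_engine, rules_store):
    """Illustrative outline of stages 302-310; all objects are hypothetical."""
    for packet in switch_tap.capture():                  # stage 302: monitor traffic
        for param in analysis_engine.identify(packet):   # stage 304: identify parameters
            stream = switch_tap.stream_of(packet)        # stage 308: identify the traffic stream
            rules_store.record(param, stream)            # stage 306: record the occurrence
    return analysis_engine.analyse(rules_store)          # stage 310: analyse the recorded data
```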

A particular feature of the system is in recording the two directions of data transmission separately so allowing further analysis of information sent in each direction independently (Step 306; FIG. 3). In analogue telephone systems, this may be achieved by use of a four-wire (as opposed to two-wire) circuit whilst in digital systems, it is the norm to have the two directions of transmission separated onto separate wire pairs. In the data world, the source of each data packet is typically stored alongside the contents of the data packet.

A further feature of the system is in recording the level of amplification or attenuation applied to the original signal. This may vary during the monitoring of even a single interaction (e.g. through the use of Automatic Gain Control Circuitry). This allows the subsequent reconstruction and analysis of the original signal amplitude.
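
Assuming the stored gain is expressed in decibels, the reconstruction amounts to dividing the recorded amplitude by the applied linear gain, as in this one-function sketch:

```python
def reconstruct_amplitude(recorded_sample: float, gain_db: float) -> float:
    """Undo the recorded gain to estimate the original signal amplitude
    (assumes the stored gain is expressed in dB)."""
    return recorded_sample / (10.0 ** (gain_db / 20.0))
```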

Another feature of the system is that monitored data may be “tagged” with additional information such as customer account numbers by an external system (e.g. the delivery of additional call information via a call logging port or computer telephony integration (CTI) port).

The importance of each of the parameters and the way in which they can be combined to highlight particularly good or bad interactions is defined by the user of the system. One or more such analysis profiles can be held in the system. These profiles determine the weighting given to each of the above parameters.

The profiles are normally used to rank a large number of monitored conversations and to identify trends, extremes, anomalies and norms. “Drill-down” techniques are used to permit the user to examine the individual call parameters that result in an aggregate or average score, and, further, allow the user to select individual conversations to be replayed to confirm or reject the hypothesis presented by the automated analysis.
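
As a concrete, invented example of such a profile, the sketch below scores each monitored conversation as a weighted sum of its parameters and ranks the calls for subsequent drill-down; the parameter names and weights are illustrative assumptions, not values from this disclosure.

```python
def score_call(params: dict, profile: dict) -> float:
    """Weighted sum of the call's parameters under one analysis profile."""
    return sum(weight * params.get(name, 0.0) for name, weight in profile.items())

# Hypothetical profile: weights chosen purely for illustration.
profile = {"talk_over_ratio": -2.0, "customer_delay_s": -0.5, "keyword_hits": 1.5}

calls = {
    "call-001": {"talk_over_ratio": 0.10, "customer_delay_s": 3.0, "keyword_hits": 4},
    "call-002": {"talk_over_ratio": 0.40, "customer_delay_s": 9.0, "keyword_hits": 1},
}

# Rank calls from best to worst score for later "drill-down" and replay.
ranking = sorted(calls, key=lambda cid: score_call(calls[cid], profile), reverse=True)
print(ranking)
```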

A particular variant that can be employed in any embodiment of the present invention uses feedback from the user's own scoring of the replayed calls to modify its own analysis algorithms. This may be achieved using neural network techniques or similar, giving a system that learns from the user's own view of the quality of recordings.
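
A minimal sketch of this feedback idea is shown below. It uses a single delta-rule style weight update in place of a full neural network, and the learning rate is an arbitrary assumption.

```python
def update_profile(profile: dict, params: dict, user_score: float,
                   learning_rate: float = 0.01) -> dict:
    """Nudge the profile weights so that the automated score moves towards the
    user's own score for a replayed call (a simple delta-rule update standing
    in for the neural-network techniques mentioned above)."""
    predicted = sum(w * params.get(name, 0.0) for name, w in profile.items())
    error = user_score - predicted
    return {name: w + learning_rate * error * params.get(name, 0.0)
            for name, w in profile.items()}
```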

A variant of the system uses its own and/or the user's scoring/ranking information to determine its further patterns of operation, i.e.:

    • determining which recorded calls to retain for future analysis,
    • determining which agents/lines to monitor and how often, and
    • determining which of the monitored signals to analyse and to what depth.

In many systems it is impractical to analyse all attributes of all calls hence a sampling algorithm may be defined to determine which calls will be analysed. Further, one or more of the parties can be identified (e.g. by calling-line identifier for the external party or by agent log-on identifiers for the internal party). This allows analysis of the call parameters over a number of calls handled by the same agent or coming from the same customer.
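
One simple way such a sampling policy and per-party grouping could be realised is sketched below; the sampling rate and the 'agent_id'/'params' field names are assumptions made for the example.

```python
import random
from collections import defaultdict

def sample_calls(calls, rate=0.1, seed=None):
    """Select roughly `rate` of the calls for full analysis (assumed policy)."""
    rng = random.Random(seed)
    return [call for call in calls if rng.random() < rate]

def group_by_party(calls, key="agent_id"):
    """Aggregate call parameters per agent (or per caller with key='caller_id')."""
    groups = defaultdict(list)
    for call in calls:
        groups[call[key]].append(call["params"])
    return dict(groups)
```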

The system can use spare capacity on the digital signal processors (DSPs) that control the monitoring, compression or recording of the monitored signals to provide some or all of the analysis required. This allows analysis to proceed more rapidly during those periods when fewer calls are being monitored.

Spare CPU capacity on a PC at an agent's desk could be used to analyse the speech. This would comprise a secondary tap into the speech path being recorded as well as using “free” CPU cycles. Such an arrangement advantageously allows for the separation of the two parties, e.g. by tapping the headset/handset connection at the desk. This allows parameters relating to each party to be stored even if the main recording point can only see a mixed signal.

A further variant of the system is an implementation in which the systems recording and analysing the monitored signals are built into the system providing the transmission of the original signals (e.g. as an add-in card to an Automatic Call Distribution (ACD) system).

The apparatus illustrated is particularly useful for identifying the following parameters (a sketch of how such parameters might be computed from per-direction speech activity appears after the list):

    • degree of interruption (i.e. overlap between agent talking and customer talking),
    • comments made during music or on-hold periods,
    • delays experienced by customers (i.e. the period from the end of their speech to an agent's response), and
    • caller/agent talk ratios, i.e. which agents might be talking too much.
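
The per-direction recording described earlier makes these parameters straightforward to compute from speech-activity intervals. The sketch below assumes each direction is summarised as a list of (start, end) talk intervals in seconds; that representation is an assumption for the example.

```python
def overlap_seconds(agent_talk, customer_talk):
    """Total time both parties talk at once (degree of interruption / talk-over).
    Each argument is a list of (start, end) intervals in seconds."""
    total = 0.0
    for a_start, a_end in agent_talk:
        for c_start, c_end in customer_talk:
            total += max(0.0, min(a_end, c_end) - max(a_start, c_start))
    return total

def response_delays(customer_talk, agent_talk):
    """Delay from the end of each customer utterance to the next agent response."""
    delays = []
    for _, c_end in customer_talk:
        later = [a_start for a_start, _ in agent_talk if a_start >= c_end]
        if later:
            delays.append(min(later) - c_end)
    return delays

def talk_ratio(agent_talk, customer_talk):
    """Agent talk time divided by customer talk time (who talks too much?)."""
    agent = sum(end - start for start, end in agent_talk)
    customer = sum(end - start for start, end in customer_talk)
    return agent / customer if customer else float("inf")
```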

However, it should be appreciated that the invention could be adapted to identify parameters such as:

    • “relaxed/stressed” profile of a caller or agent, i.e. determined from changes in volume, speed and tone of speech (see the sketch after this list),
    • frequency of keywords heard (separately from agents and from callers), e.g. are agents remembering to ask follow-up questions about a certain product/service; how often do customers swear at each agent; or how often do agents swear at customers?
    • frequency of repeat calls. A combination of line ID and caller ID can be used to distinguish different people calling from a single switchboard/business number,
    • languages used by callers, and
    • abnormal speech patterns of agents. For example, if the speech recognition applied to an agent is consistently and unusually inaccurate for, say, half an hour, the agent should be checked for drug abuse, excessive tiredness, drunkenness, stress, a rush to get away, etc.
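
The "relaxed/stressed" profile in the first item above could, for instance, be approximated by comparing short-term volume, speaking rate and pitch against a per-speaker baseline. The following sketch is illustrative only; the feature names and the threshold are assumptions, not part of this disclosure.

```python
def stress_score(window, baseline):
    """Average relative change in volume, speaking rate and pitch versus the
    speaker's baseline; `window` and `baseline` are dicts with the assumed keys
    'volume', 'words_per_min' and 'pitch_hz'."""
    return sum(abs(window[k] - baseline[k]) / baseline[k]
               for k in ("volume", "words_per_min", "pitch_hz")) / 3.0

def is_stressed(window, baseline, threshold=0.3):
    """Flag a window as 'stressed' when the average relative change exceeds an
    arbitrary threshold (an assumed value, not taken from the patent)."""
    return stress_score(window, baseline) > threshold
```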

It will be appreciated that the illustrated and indeed any embodiments of the present invention can be set up as follows.

The Digital Trunk Lines (e.g. T1/E1) can be monitored trunk side and the recorded speech tagged with the direction of speech. A MediStar Voice Recorder chassis can be provided, typically with one or two E1/T1 cards plus a number of DSP cards for the more intense speech processing requirements.

Much of this work can be done overnight and, in time, some could be done by the DSPs on the MediStar's own cards. It is also necessary to remove, or at least recognise, periods of music, on-hold periods, IVR prompts rather than real agents speaking, etc. Thus, bundling with Computer Integrated Telephony Services such as the Telephony Services API (TSAPI) is appropriate in many cases.

Analysis and parameter identification as described above can then be conducted. However, as noted, if it is not possible to analyse all speech as it arrives, the analysis can instead be conducted on the recorded signal at a later time.

In any case, the monitoring apparatus may be arranged initially to search for only a few keywords, although replay can be conducted so as to look for other keywords.

It should be appreciated that the invention is not restricted to the details of the foregoing embodiment. For example, any appropriate form of telecommunications network, or signal transmission media, can be monitored by apparatus according to this invention and the particular parameters identified can be selected, and varied, as required.

Inventors: Blair, Christopher Douglas; Keenan, Roger Louis

Patent Priority Assignee Title
10028056, Sep 12 2006 Sonos, Inc. Multi-channel pairing in a media system
10031715, Jul 28 2003 Sonos, Inc. Method and apparatus for dynamic master device switching in a synchrony group
10063202, Apr 27 2012 Sonos, Inc. Intelligently modifying the gain parameter of a playback device
10097423, Jun 05 2004 Sonos, Inc. Establishing a secure wireless network with minimum human intervention
10120638, Jul 28 2003 Sonos, Inc. Synchronizing operations among a plurality of independently clocked digital data processing devices
10133536, Jul 28 2003 Sonos, Inc. Method and apparatus for adjusting volume in a synchrony group
10136218, Sep 12 2006 Sonos, Inc. Playback device pairing
10140085, Jul 28 2003 Sonos, Inc. Playback device operating states
10146498, Jul 28 2003 Sonos, Inc. Disengaging and engaging zone players
10157033, Jul 28 2003 Sonos, Inc. Method and apparatus for switching between a directly connected and a networked audio source
10157034, Jul 28 2003 Sonos, Inc. Clock rate adjustment in a multi-zone system
10157035, Jul 28 2003 Sonos, Inc Switching between a directly connected and a networked audio source
10175930, Jul 28 2003 Sonos, Inc. Method and apparatus for playback by a synchrony group
10175932, Jul 28 2003 Sonos, Inc Obtaining content from direct source and remote source
10185540, Jul 28 2003 Sonos, Inc. Playback device
10185541, Jul 28 2003 Sonos, Inc. Playback device
10209953, Jul 28 2003 Sonos, Inc. Playback device
10216473, Jul 28 2003 Sonos, Inc. Playback device synchrony group states
10228898, Sep 12 2006 Sonos, Inc. Identification of playback device and stereo pair names
10228902, Jul 28 2003 Sonos, Inc. Playback device
10282164, Jul 28 2003 Sonos, Inc. Synchronizing operations among a plurality of independently clocked digital data processing devices
10289380, Jul 28 2003 Sonos, Inc. Playback device
10296283, Jul 28 2003 Sonos, Inc. Directing synchronous playback between zone players
10303431, Jul 28 2003 Sonos, Inc. Synchronizing operations among a plurality of independently clocked digital data processing devices
10303432, Jul 28 2003 Sonos, Inc Playback device
10306364, Sep 28 2012 Sonos, Inc. Audio processing adjustments for playback devices based on determined characteristics of audio content
10306365, Sep 12 2006 Sonos, Inc. Playback device pairing
10324684, Jul 28 2003 Sonos, Inc. Playback device synchrony group states
10359987, Jul 28 2003 Sonos, Inc. Adjusting volume levels
10365884, Jul 28 2003 Sonos, Inc. Group volume control
10387102, Jul 28 2003 Sonos, Inc. Playback device grouping
10439896, Jun 05 2004 Sonos, Inc. Playback device connection
10445054, Jul 28 2003 Sonos, Inc Method and apparatus for switching between a directly connected and a networked audio source
10448159, Sep 12 2006 Sonos, Inc. Playback device pairing
10462570, Sep 12 2006 Sonos, Inc. Playback device pairing
10469966, Sep 12 2006 Sonos, Inc. Zone scene management
10484807, Sep 12 2006 Sonos, Inc. Zone scene management
10541883, Jun 05 2004 Sonos, Inc. Playback device connection
10545723, Jul 28 2003 Sonos, Inc. Playback device
10555082, Sep 12 2006 Sonos, Inc. Playback device pairing
10606552, Jul 28 2003 Sonos, Inc. Playback device volume control
10613817, Jul 28 2003 Sonos, Inc Method and apparatus for displaying a list of tracks scheduled for playback by a synchrony group
10613822, Jul 28 2003 Sonos, Inc. Playback device
10613824, Jul 28 2003 Sonos, Inc. Playback device
10635390, Jul 28 2003 Sonos, Inc. Audio master selection
10642889, Feb 20 2017 GONG IO LTD Unsupervised automated topic detection, segmentation and labeling of conversations
10720896, Apr 27 2012 Sonos, Inc. Intelligently modifying the gain parameter of a playback device
10747496, Jul 28 2003 Sonos, Inc. Playback device
10754612, Jul 28 2003 Sonos, Inc. Playback device volume control
10754613, Jul 28 2003 Sonos, Inc. Audio master selection
10848885, Sep 12 2006 Sonos, Inc. Zone scene management
10897679, Sep 12 2006 Sonos, Inc. Zone scene management
10908871, Jul 28 2003 Sonos, Inc. Playback device
10908872, Jul 28 2003 Sonos, Inc. Playback device
10911322, Jun 05 2004 Sonos, Inc. Playback device connection
10911325, Jun 05 2004 Sonos, Inc. Playback device connection
10949163, Jul 28 2003 Sonos, Inc. Playback device
10956119, Jul 28 2003 Sonos, Inc. Playback device
10963215, Jul 28 2003 Sonos, Inc. Media playback device and system
10965545, Jun 05 2004 Sonos, Inc. Playback device connection
10966025, Sep 12 2006 Sonos, Inc. Playback device pairing
10970034, Jul 28 2003 Sonos, Inc. Audio distributor selection
10979310, Jun 05 2004 Sonos, Inc. Playback device connection
10983750, Apr 01 2004 Sonos, Inc. Guest access to a media playback system
11025509, Jun 05 2004 Sonos, Inc. Playback device connection
11080001, Jul 28 2003 Sonos, Inc. Concurrent transmission and playback of audio information
11082770, Sep 12 2006 Sonos, Inc. Multi-channel pairing in a media system
11106424, May 09 2007 Sonos, Inc. Synchronizing operations among a plurality of independently clocked digital data processing devices
11106425, Jul 28 2003 Sonos, Inc. Synchronizing operations among a plurality of independently clocked digital data processing devices
11132170, Jul 28 2003 Sonos, Inc. Adjusting volume levels
11200025, Jul 28 2003 Sonos, Inc. Playback device
11223901, Jan 25 2011 Sonos, Inc. Playback device pairing
11265652, Jan 25 2011 Sonos, Inc. Playback device pairing
11276407, Apr 17 2018 GONG IO LTD Metadata-based diarization of teleconferences
11294618, Jul 28 2003 Sonos, Inc. Media player system
11301207, Jul 28 2003 Sonos, Inc. Playback device
11314479, Sep 12 2006 Sonos, Inc. Predefined multi-channel listening environment
11317226, Sep 12 2006 Sonos, Inc. Zone scene activation
11347469, Sep 12 2006 Sonos, Inc. Predefined multi-channel listening environment
11385858, Sep 12 2006 Sonos, Inc. Predefined multi-channel listening environment
11388532, Sep 12 2006 Sonos, Inc. Zone scene activation
11403062, Jun 11 2015 Sonos, Inc. Multiple groupings in a playback system
11418408, Jun 05 2004 Sonos, Inc. Playback device connection
11429343, Jan 25 2011 Sonos, Inc. Stereo playback configuration and control
11456928, Jun 05 2004 Sonos, Inc. Playback device connection
11467799, Apr 01 2004 Sonos, Inc. Guest access to a media playback system
11481182, Oct 17 2016 Sonos, Inc. Room association based on name
11540050, Sep 12 2006 Sonos, Inc. Playback device pairing
11550536, Jul 28 2003 Sonos, Inc. Adjusting volume levels
11550539, Jul 28 2003 Sonos, Inc. Playback device
11556305, Jul 28 2003 Sonos, Inc. Synchronizing playback by media playback devices
11625221, May 09 2007 Sonos, Inc Synchronizing playback by media playback devices
11635935, Jul 28 2003 Sonos, Inc. Adjusting volume levels
11650784, Jul 28 2003 Sonos, Inc. Adjusting volume levels
11758327, Jan 25 2011 Sonos, Inc. Playback device pairing
11894975, Jun 05 2004 Sonos, Inc. Playback device connection
11907610, Apr 01 2004 Sonos, Inc. Guest access to a media playback system
11909588, Jun 05 2004 Sonos, Inc. Wireless device connection
8295469, May 13 2005 AT&T Intellectual Property I, L.P. System and method of determining call treatment of repeat calls
8724521, Jul 30 2007 Verint Americas Inc. Systems and methods of recording solution interface
8751232, Aug 12 2004 RUNWAY GROWTH FINANCE CORP System and method for targeted tuning of a speech recognition system
8817964, Feb 11 2008 KYNDRYL, INC Telephonic voice authentication and display
8824659, Jan 10 2005 Microsoft Technology Licensing, LLC System and method for speech-enabled call routing
8879714, May 13 2005 AT&T Intellectual Property I, L.P. System and method of determining call treatment of repeat calls
9088652, Jan 10 2005 Microsoft Technology Licensing, LLC System and method for speech-enabled call routing
9112972, Dec 06 2004 RUNWAY GROWTH FINANCE CORP System and method for processing speech
9141645, Jul 28 2003 Sonos, Inc. User interfaces for controlling and manipulating groupings in a multi-zone media system
9158327, Jul 28 2003 Sonos, Inc. Method and apparatus for skipping tracks in a multi-zone system
9164531, Jul 28 2003 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
9164532, Jul 28 2003 Sonos, Inc. Method and apparatus for displaying zones in a multi-zone system
9164533, Jul 28 2003 Sonos, Inc. Method and apparatus for obtaining audio content and providing the audio content to a plurality of audio devices in a multi-zone system
9170600, Jul 28 2003 Sonos, Inc. Method and apparatus for providing synchrony group status information
9176519, Jul 28 2003 Sonos, Inc. Method and apparatus for causing a device to join a synchrony group
9176520, Jul 28 2003 Sonos, Inc Obtaining and transmitting audio
9182777, Jul 28 2003 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
9189010, Jul 28 2003 Sonos, Inc. Method and apparatus to receive, play, and provide audio content in a multi-zone system
9189011, Jul 28 2003 Sonos, Inc. Method and apparatus for providing audio and playback timing information to a plurality of networked audio devices
9195258, Jul 28 2003 Sonos, Inc System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
9207905, Jul 28 2003 Sonos, Inc Method and apparatus for providing synchrony group status information
9213356, Jul 28 2003 Sonos, Inc. Method and apparatus for synchrony group control via one or more independent controllers
9213357, Jul 28 2003 Sonos, Inc Obtaining content from remote source for playback
9218017, Jul 28 2003 Sonos, Inc Systems and methods for controlling media players in a synchrony group
9348354, Jul 28 2003 Sonos, Inc. Systems and methods for synchronizing operations among a plurality of independently clocked digital data processing devices without a voltage controlled crystal oscillator
9350862, Dec 06 2004 RUNWAY GROWTH FINANCE CORP System and method for processing speech
9354656, Jul 28 2003 Sonos, Inc. Method and apparatus for dynamic channelization device switching in a synchrony group
9368111, Aug 12 2004 RUNWAY GROWTH FINANCE CORP System and method for targeted tuning of a speech recognition system
9374607, Jun 26 2012 Sonos, Inc. Media playback system with guest access
9563394, Jul 28 2003 Sonos, Inc. Obtaining content from remote source for playback
9569170, Jul 28 2003 Sonos, Inc. Obtaining content from multiple remote sources for playback
9569171, Jul 28 2003 Sonos, Inc. Obtaining content from local and remote sources for playback
9569172, Jul 28 2003 Sonos, Inc. Resuming synchronous playback of content
9658820, Jul 28 2003 Sonos, Inc. Resuming synchronous playback of content
9665343, Jul 28 2003 Sonos, Inc. Obtaining content based on control by multiple controllers
9727302, Jul 28 2003 Sonos, Inc. Obtaining content from remote source for playback
9727303, Jul 28 2003 Sonos, Inc. Resuming synchronous playback of content
9727304, Jul 28 2003 Sonos, Inc. Obtaining content from direct source and other source
9729115, Apr 27 2012 Sonos, Inc Intelligently increasing the sound level of player
9733891, Jul 28 2003 Sonos, Inc. Obtaining content from local and remote sources for playback
9733892, Jul 28 2003 Sonos, Inc. Obtaining content based on control by multiple controllers
9733893, Jul 28 2003 Sonos, Inc. Obtaining and transmitting audio
9734242, Jul 28 2003 Sonos, Inc. Systems and methods for synchronizing operations among a plurality of independently clocked digital data processing devices that independently source digital data
9740453, Jul 28 2003 Sonos, Inc. Obtaining content from multiple remote sources for playback
9749760, Sep 12 2006 Sonos, Inc. Updating zone configuration in a multi-zone media system
9756424, Sep 12 2006 Sonos, Inc. Multi-channel pairing in a media system
9766853, Sep 12 2006 Sonos, Inc. Pair volume control
9778897, Jul 28 2003 Sonos, Inc. Ceasing playback among a plurality of playback devices
9778898, Jul 28 2003 Sonos, Inc. Resynchronization of playback devices
9778900, Jul 28 2003 Sonos, Inc. Causing a device to join a synchrony group
9781513, Feb 06 2014 Sonos, Inc. Audio output balancing
9787550, Jun 05 2004 Sonos, Inc. Establishing a secure wireless network with a minimum human intervention
9794707, Feb 06 2014 Sonos, Inc. Audio output balancing
9813827, Sep 12 2006 Sonos, Inc. Zone configuration based on playback selections
9860657, Sep 12 2006 Sonos, Inc. Zone configurations maintained by playback device
9866447, Jun 05 2004 Sonos, Inc. Indicator on a network device
9928026, Sep 12 2006 Sonos, Inc. Making and indicating a stereo pair
9960969, Jun 05 2004 Sonos, Inc. Playback device connection
9977561, Apr 01 2004 Sonos, Inc Systems, methods, apparatus, and articles of manufacture to provide guest access
Patent Priority Assignee Title
3855418,
4093821, Jun 14 1977 WELSH, JOHN GREEN Speech analyzer for analyzing pitch or frequency perturbations in individual speech pattern to determine the emotional state of the person
4142067, Jun 14 1977 WELSH, JOHN GREEN Speech analyzer for analyzing frequency perturbations in a speech pattern to determine the emotional state of a person
4567512, Jun 01 1982 World Video Library, Inc. Recorded program communication system
4837804, Jan 14 1986 Mitsubishi Denki Kabushiki Kaisha Telephone answering voiceprint discriminating and switching apparatus
4924488, Jul 28 1987 ENFORCEMENT SUPPORT INCORPORATED, AN OH CORP Multiline computerized telephone monitoring system
4969136, Aug 08 1986 DICTAPHONE CORPORATION, A CORP OF DE Communications network and method with appointment information communication capabilities
4975896, Aug 08 1986 DICTAPHONE CORPORATION, A CORP OF DE Communications network and method
5036539, Jul 06 1989 ITT Corporation Real-time speech processing development system
5070526, Aug 08 1990 Cisco Technology, Inc Signal analyzing system
5101402, May 24 1988 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Apparatus and method for realtime monitoring of network sessions in a local area network
5166971, Sep 02 1988 Siemens Aktiengesellschaft Method for speaker recognition in a telephone switching system
5260943, Jun 16 1992 Motorola Mobility, Inc TDM hand-off technique using time differences
5274572, Dec 02 1987 Schlumberger Technology Corporation Method and apparatus for knowledge-based signal monitoring and analysis
5390243, Nov 01 1993 AT&T Corp.; American Telephone and Telegraph Company Telemarketing complex with automatic threshold levels
5511165, Oct 23 1992 International Business Machines Corporation Method and apparatus for communicating data across a bus bridge upon request
5535261, Aug 20 1993 SECURUS TECHNOLOGIES, INC Selectively activated integrated real-time recording of telephone conversations
5544176, Feb 13 1990 Canon Kabushiki Kaisha Information recording apparatus which eliminates unnecessary data before recording
5581614, Aug 19 1991 Rovi Guides, Inc Method for encrypting and embedding information in a video program
5623539, Jan 27 1994 THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT Using voice signal analysis to identify authorized users of a telephone system
5696811, Sep 22 1993 e-talk Corporation Method and system for automatically monitoring the performance quality of call center service representatives
5787253, May 28 1996 SILKSTREAM CORPORATION Apparatus and method of analyzing internet activity
5818907, Sep 22 1993 e-talk Corporation Method and system for automatically monitoring the performance quality of call center service representatives
5946375, Sep 22 1993 e-talk Corporation Method and system for monitoring call center service representatives
5960063, Aug 23 1996 KDDI Corporation Telephone speech recognition system
5983186, Aug 21 1995 Seiko Epson Corporation Voice-activated interactive speech recognition device and method
6035017, Jan 24 1997 AVAYA Inc Background speech recognition for voice messaging applications
6058163, Sep 22 1993 e-talk Corporation Method and system for monitoring call center service representatives
6108782, Dec 13 1996 Hewlett Packard Enterprise Development LP Distributed remote monitoring (dRMON) for networks
6115751, Apr 10 1997 Cisco Technology, Inc Technique for capturing information needed to implement transmission priority routing among heterogeneous nodes of a computer network
6314094, Oct 29 1998 Apple Inc Mobile wireless internet portable radio
6418214, Sep 25 1996 Cisco Technology, Inc Network-based conference system
6538684, Nov 29 1994 Canon Kabushiki Kaisha Television conference system indicating time data
20050240656,
20060165003,
EP510412,
GB2257872,
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Sep 15 1997 | BLAIR, CHRISTOPHER DOUGLAS | Eyretel Limited | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018850/0067
Sep 16 1997 | KEENAN, ROGER LOUIS | Eyretel Limited | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018850/0067
Jan 17 2006 | EYRETEL LIMITED, DBA WITNESS SYSTEMS LTD | WITNESS SYSTEMS, INC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 018850/0108
Aug 24 2006 | | Verint Americas Inc. | assignment on the face of the patent |
May 25 2007 | WITNESS SYSTEMS, INC | VERINT AMERICAS INC | CHANGE OF NAME (SEE DOCUMENT FOR DETAILS) | 030112/0585
Apr 29 2011 | VERINT AMERICAS INC | CREDIT SUISSE AG | SECURITY AGREEMENT | 026207/0203
Sep 18 2013 | CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT | VERINT VIDEO SOLUTIONS INC | RELEASE OF SECURITY INTEREST IN PATENT RIGHTS | 031448/0373
Sep 18 2013 | CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT | VERINT SYSTEMS INC | RELEASE OF SECURITY INTEREST IN PATENT RIGHTS | 031448/0373
Sep 18 2013 | CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT | VERINT AMERICAS INC | RELEASE OF SECURITY INTEREST IN PATENT RIGHTS | 031448/0373
Sep 18 2013 | VERINT AMERICAS INC | CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT | GRANT OF SECURITY INTEREST IN PATENT RIGHTS | 031465/0450
Jun 29 2017 | Credit Suisse AG, Cayman Islands Branch | VERINT AMERICAS INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 043066/0473
Jun 29 2017 | VERINT AMERICAS INC | JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT | GRANT OF SECURITY INTEREST IN PATENT RIGHTS | 043293/0567
Date Maintenance Schedule
Aug 31 2013 | 4 years fee payment window open
Mar 03 2014 | 6 months grace period start (w surcharge)
Aug 31 2014 | patent expiry (for year 4)
Aug 31 2016 | 2 years to revive unintentionally abandoned end (for year 4)
Aug 31 2017 | 8 years fee payment window open
Mar 03 2018 | 6 months grace period start (w surcharge)
Aug 31 2018 | patent expiry (for year 8)
Aug 31 2020 | 2 years to revive unintentionally abandoned end (for year 8)
Aug 31 2021 | 12 years fee payment window open
Mar 03 2022 | 6 months grace period start (w surcharge)
Aug 31 2022 | patent expiry (for year 12)
Aug 31 2024 | 2 years to revive unintentionally abandoned end (for year 12)