A method and system for indicating in real time that an interaction is associated with a problem or issue, comprising: receiving a segment of an interaction in which a representative of an organization participates; extracting a feature from the segment; extracting a global feature associated with the interaction; aggregating the feature and the global feature; and classifying the segment or the interaction in association with the problem or issue by applying a model to the feature and the global feature. The method and system may also use features extracted from earlier segments within the interaction. The method and system can also evaluate the model based on features extracted from training interactions and manual tagging assigned to the interactions or segments thereof.
1. A computerized method for indicating in real time that an interaction in which a representative of an organization participates is associated with a problem or issue, comprising:
capturing the interaction in a storage device by a computing platform comprising a processing apparatus, the interaction comprising at least speech;
receiving a segment of the captured interaction;
extracting a feature from the segment, wherein the feature is a word spoken in the segment of the interaction, obtained by a speech to text mechanism comprising hardware;
extracting a global feature associated with the whole interaction in progress;
extracting features from segments of the interaction that are earlier than the segment;
aggregating the feature and the global feature and features similar to the feature that are related to the segments;
classifying the segment of the interaction in association with the problem or issue by applying a model to the feature and the global feature and the features similar to the feature that are related to the segments,
wherein extracting the feature from the segment comprises:
generating a phoneme graph from the segment of the interaction;
flushing a part of the graph by removing graph edges that have no continuity or do not lead to a final state of a Hidden Markov Model (HMM), wherein the edges represent candidate phonemes between two time points;
coarsely examining the flushed graph and determining a point in the flushed graph of high probability, wherein the point comprises a most probable sub-path of a path;
determining a limited time segment window based on the most probable sub-path that covers the sub-paths above or below the most probable sub-path; and
performing a thorough search only over the limited time segment window, wherein the thorough search comprises all edges which appear in the limited time window,
determining one or more words based on the thorough search,
wherein the recited operations are carried out in real time by a computing platform executing one or more computer applications.
2. The method of
3. The method of
receiving a training corpus;
framing an interaction of the training corpus into segments;
extracting features from the segments; and
evaluating the model based on the features.
4. The method of
5. The method of
6. The method of claim wherein the aggregation step aggregates a feature related to a second segment preceding the segment, the feature selected from the group consisting of: a spotted word; a feature or indication associated with emotion; an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters; an agent burst position; a customer burst position; a number of bursts; agent talk percentage; customer talk percentage; a silence duration; a word extracted by a speech to text engine; number of holds; hold duration; hold position; number of transfers; part of speech tagging or stemming; segment position within the interaction in absolute time; segment position within the interaction in number of words; speaker side within the segment; and average duration of silence between words in the segment.
7. The method of
8. The method of
9. The method of claim wherein the segment of the interaction overlaps with a previous or following segment of the interaction.
The present disclosure relates to call centers in general, and to a method and apparatus for obtaining business insight from interactions in real-time, in particular.
Large organizations, such as commercial organizations, financial organizations or public safety organizations conduct numerous interactions with customers, users, suppliers or other persons on a daily basis. A large part of these interactions are vocal, or at least comprise a vocal component.
Many of the interactions proceed in a satisfactory manner. The callers receive the information or service they require, and the interaction ends successfully. However, other interactions may not proceed as expected, and some help or guidance from a supervisor may be required. In even worse scenarios, the agent or another person handling the call may not even be aware that the call is problematic and that some assistance may be helpful. In some cases, when things become clearer, it may already be too late to remedy the situation, and the customer may have already decided to leave the organization.
Similar scenarios may occur in other business interactions, such as unsatisfied customers who do not immediately leave the organization but may do so when the opportunity presents itself, sales interactions in which some help from a supervisor can make the difference between success and failure, or similar cases.
Yet another category in which immediate assistance or observation can make a difference is fraud detection, wherein if a caller is suspected to be fraudulent, extra care should be taken to avoid operations that may cause losses for the organization.
For cases such as those described above, an early alert or notification can let a supervisor or another person join the interaction or take any other step, when it is still possible to provide assistance, remedy the situation, or otherwise reduce the damage. Even if an immediate response is not feasible, a near real-time alert, i.e., an alert provided a short time after the interaction finishes, may also be helpful and enable some damage reduction.
There is therefore a need in the art for a method and system that will enable real-time or near-real-time alert or notification about interactions in which there is a need for intervention by a supervisor, or another remedial step to be taken. Such steps may be required for preventing customer churn, keeping customers satisfied, providing support for sales interactions, identifying fraud or fraud attempts, or any other scenario that may pose a problem to a business.
A method and apparatus for classifying interactions captured in an environment according to problems or issues, in real time.
One aspect of the disclosure relates to a method for indicating in real time that an interaction in which a representative of an organization participates is associated with a problem or issue, comprising: receiving a segment of the interaction; extracting a feature from the segment; extracting a global feature associated with the interaction; aggregating the feature and the global feature; and classifying the segment or the interaction in association with the problem or issue by applying a model to the feature and the global feature. The method can further comprise determining an action to be taken when the classification indicates that the segment or interaction is associated with the problem or issue. The method can further comprise: receiving a training corpus; framing an interaction of the training corpus into segments; extracting features from the segments; and evaluating the model based on the features. Within the method, the feature optionally relates to the segment and is selected from the group consisting of: a spotted word; a feature or indication associated with emotion; an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters; an agent burst position; a customer burst position; a number of bursts; agent talk percentage; customer talk percentage; a silence duration; a word extracted by a speech to text engine; number of holds; hold duration; hold position; number of transfers; part of speech tagging or stemming; segment position within the interaction in absolute time; segment position within the interaction in number of words; speaker side within the segment; and average duration of silence between words in the segment. Within the method, the global feature is optionally selected from the group consisting of: average agent activity within the interaction; average customer activity within the interaction; average silence; number of bursts per participant side; total burst duration per participant side; an emotion detection feature; total probability score for an emotion within the interaction; number of emotion bursts; total emotion burst duration; a stemmed word, a stemmed keyphrase; number of word repetitions; and number of keyphrase repetitions. Within the method, the aggregation step optionally aggregates a feature related to a second segment preceding the segment, the feature selected from the group consisting of: a spotted word; a feature or indication associated with emotion; an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters; an agent burst position; a customer burst position; a number of bursts; agent talk percentage; customer talk percentage; a silence duration; a word extracted by a speech to text engine; number of holds; hold duration; hold position; number of transfers; part of speech tagging or stemming; segment position within the interaction in absolute time; segment position within the interaction in number of words; speaker side within the segment; and average duration of silence between words in the segment. Within the method, the action is optionally selected from the group consisting of: popping a message on a display device used by a person; generating an alert; routing the interaction; conferencing the interaction; and adding data to a statistical storage. Within the method, the problem or issue is selected from the group consisting of: customer dissatisfaction; churn prediction; sales assistance; and fraud detection.
Within the method, the feature is optionally a word spoken in the segment or interaction, the method comprising: generating a phoneme graph from the segment or interaction, and flushing a part of the graph so as to enable searching for a word within the graph. Within the method, the segment of the interaction optionally overlaps with a previous or following segment of the interaction. The method can further comprise performing a fast search over the part of the graph.
Another aspect of the disclosure relates to a system for indicating in real time that an interaction in which a representative of an organization participates is associated with a problem or issue, comprising: an extraction component for extracting a feature from a segment of an interaction in which a representative of the organization participates; an interaction feature extraction component for extracting a global feature from the interaction; and a classification component for determining, by applying a model to the feature and the global feature, whether a problem or issue is presented by the segment or interaction. The apparatus can further comprise an action determination component for determining an action to be taken after determining that the problem or issue is presented by the segment or interaction. The apparatus can further comprise an application component for taking an action upon determining that the problem or issue is presented by the segment or interaction. The apparatus can further comprise a model evaluation component for evaluating the model used by the classification component. The apparatus can further comprise a tagging component for assigning tags associated with the problem or issue to interactions or segments. Within the apparatus, the feature optionally relates to the segment and is selected from the group consisting of: a spotted word; a feature or indication associated with emotion; an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters; an agent burst position; a customer burst position; a number of bursts; agent talk percentage; customer talk percentage; a silence duration; a word extracted by a speech to text engine; number of holds; hold duration; hold position; number of transfers; part of speech tagging or stemming; segment position within the interaction in absolute time; segment position within the interaction in number of words; speaker side within the segment; and average duration of silence between words in the segment. Within the apparatus, the global feature is optionally selected from the group consisting of: average agent activity within the interaction; average customer activity within the interaction; average silence; number of bursts per participant side; total burst duration per participant side; an emotion detection feature; total probability score for an emotion within the interaction; number of emotion bursts; total emotion burst duration; a stemmed word, a stemmed keyphrase; number of word repetitions; and number of keyphrase repetitions. Within the apparatus, the classification component optionally also uses a feature extracted from a second segment within the interaction, the feature selected from the group consisting of: a spotted word; a feature or indication associated with emotion; an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters; an agent burst position; a customer burst position; a number of bursts; agent talk percentage; customer talk percentage; a silence duration; a word extracted by a speech to text engine; number of holds; hold duration; hold position; number of transfers; part of speech tagging or stemming; segment position within the interaction in absolute time; segment position within the interaction in number of words; speaker side within the segment; and average duration of silence between words in the segment.
Yet another aspect of the disclosure relates to a computer readable storage medium containing a set of instructions for a general purpose computer, the set of instructions comprising: receiving a segment of an interaction in which a representative of an organization participates; extracting a feature from the segment; extracting a global feature associated with the interaction; aggregating the feature and the global feature; and classifying the segment or the interaction in association with a problem or issue by applying a model to the feature and the global feature.
Exemplary non-limiting embodiments of the disclosed subject matter will be described, with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are designated by the same numerals or letters.
A method and apparatus for identifying, in real time, interactions in a call center, a trade floor, or a public safety organization that require additional attention, routing, or other handling. Identifying such an interaction in real time means identifying that the interaction requires special handling while the interaction is still going on, or a short time after it ends, so that such handling is efficient.
The method and apparatus are based on extracting multiple types of information from the interaction while it is progressing. A feature vector is extracted from the interaction for example every predetermined time frame, such as a number of seconds. In some embodiments, a feature vector extracted during an interaction can also contain or otherwise relate to features extracted at earlier points of time during the interaction, i.e., the feature vector can carry some of the history of the interaction.
The features may include textual features, such as phonetic indexing or speech-to-text (S2T) output, comprising words detected within the audio of the interaction; acoustical or prosody features extracted from the interaction; emotion detected within the interaction; CTI data; CRM data; talk analysis information; or the like.
The method and apparatus employ a training step and training engine. During training, vectors of features extracted from training interactions are associated with tagging data. The tagging can contain an indication of a problem the organization wishes to relate to, for example “customer dissatisfaction”, “predicted churn”, “sales assistance required”, “fraud detected”, or the like. The tagging data can be created manually or in any other manner, and can relate to a particular point of time in the interaction, or to the interaction as a whole.
By processing the set of pairs, wherein each pair comprises the feature vector and one or more associated tags, a model is estimated which provides association of feature vectors with tags.
Then at production time, also referred to as testing, runtime, or real time, features are extracted from an ongoing interaction, and based on the model or rule deduced during training, real-time indications are determined for the interaction. The indications can be used in any required manner. For example, the indication can cause a popup message to be displayed on a display device used by a user; the message can also comprise a link through which the supervisor can join the interaction. In another embodiment, the indication can be used for routing the interaction to another destination, used for statistics, or the like.
The models, and optionally the features extracted from the interactions are stored, and can be used for visualization, statistics, further analysis of the interactions, or any other purpose.
The extraction of features to be used for providing real-time indications is adapted to efficiently provide results based on the latest feature vector extracted from the interaction, and on previously extracted feature vectors, so that the total indication is provided as early as possible, and preferably when the interaction is still going on.
Referring now to
The interactions are captured using capturing or logging components 100. The vocal interactions usually include telephone or voice over IP sessions 112. Telephone of any kind, including landline, mobile, satellite phone or others, is currently the main channel for communicating with users, colleagues, suppliers, customers and others in many organizations. The voice typically passes through a PABX (not shown), which in addition to the voice of two or more sides participating in the interaction collects additional information discussed below. A typical environment can further comprise voice over IP channels, which possibly pass through a voice over IP server (not shown). It will be appreciated that voice messages are optionally captured and processed as well, and that the handling is not limited to two-sided conversations. The interactions can further include face-to-face interactions, such as those recorded in a walk-in-center 116, video conferences 124 which comprise an audio component, and additional sources of data 128. Additional sources 128 may include vocal sources such as microphone, intercom, vocal input by external systems, broadcasts, files, streams, or any other source. Additional sources may also include non-vocal sources such as e-mails, chat sessions, screen events sessions, facsimiles which may be processed by Optical Character Recognition (OCR) systems, or others, information from Computer-Telephony-Integration (CTI) systems, information from Customer-Relationship-Management (CRM) systems, or the like.
Data from all the above-mentioned sources and others is captured and may be logged by capturing/logging component 132. Capturing/logging component 132 comprises a computing platform executing one or more computer applications as detailed below. The captured data may be stored in storage 134, which is preferably a mass storage device, for example an optical storage device such as a CD, a DVD, or a laser disk; a magnetic storage device such as a tape, a hard disk, Storage Area Network (SAN), a Network Attached Storage (NAS), or others; or a semiconductor storage device such as a Flash device, memory stick, or the like. The storage can be common or separate for different types of captured segments and different types of additional data. The storage can be located onsite where the segments or some of them are captured, or in a remote location. The capturing or the storage components can serve one or more sites of a multi-site organization. A part of storage 134, or storage additional to it, is storage 136, which stores the real-time analytic models which are determined via training as detailed below, and used in run-time for feature extraction and classification in further interactions. Storage 134 can comprise a single storage device or a combination of multiple devices.
Real-time feature extraction and classification component 138 receives the captured or logged interactions, processes them and identifies interactions that should be noted, for example, by generating an alert, routing a call, or the like. Real-time feature extraction and classification component 138 optionally employs one or more engines, such as word spotting engines, transcription engines, emotion detection engines, or the like for processing the input interactions.
It will be appreciated that in order to provide indications related to a particular interaction as it is still going on or a short time later, classification component 138 may receive short segments of the interaction and not the full interaction when it is done. The segments can vary in length between a fraction of a second to one or a few tens of seconds. It will also be appreciated that the segments do not have to be of uniform length. Thus, at the beginning of an interaction, longer segments can be used. This may serve two purposes: first, the likelihood of a problem at the beginning of an interaction is relatively low. Second, since the beginning of the interaction can be assumed to be more neutral, it can be used as a baseline for constructing relevant models to which later parts of the interaction can be compared. Large deviation between the beginning and the continuation of the interaction can indicate a problematic interaction.
The apparatus further comprises analytics training component 140 for training models upon training data 142.
The output of real-time feature extraction and classification component 138 and optionally additional data may be sent to alert generation component 146 for alerting a user such as a supervisor in any way the user prefers, or as indicated for example by a system administrator. The alerts can include for example various graphic alerts such as a screen popup, an SMS, an e-mail, a textual indication, a vocal indication, or the like. Alert generation component 146 can also comprise a mechanism for the user to automatically take action, such as connect to an ongoing call. The alert can also be presented as a dedicated user interface that provides the ability to examine and listen to relevant areas of the interaction or of previous interactions, or the like. The results can further be transferred to call routing component 148, for routing a call to an appropriate representative or another person. The results can also be transferred to statistics component 150 for collecting statistics about interactions in general, and in particular about interactions for which an alert was generated and the reasons thereof.
The data can also be transferred to any additional usage component which may include further analysis, for example performing root cause analysis. Additional usage components may also include playback components, report generation components, or others. The real-time classification results can be further fed back and update the real-time analytic models generated by analytic training component 140.
The apparatus may comprise one or more computing platforms, executing components for carrying out the disclosed steps. The computing platform can be a general purpose computer such as a personal computer, a mainframe computer, or any other type of computing platform that is provisioned with a memory device (not shown), a CPU or microprocessor device, and several I/O ports (not shown). The components are preferably components comprising one or more collections of computer instructions, such as libraries, executables, modules, or the like, programmed in any programming language such as C, C++, C#, Java or others, and developed under any development environment, such as .Net, J2EE or others. Alternatively, the apparatus and methods can be implemented as firmware ported for a specific processor such as digital signal processor (DSP) or microcontrollers, or can be implemented as hardware or configurable hardware such as field programmable gate array (FPGA) or application specific integrated circuit (ASIC). The software components can be executed on one platform or on multiple platforms wherein data can be transferred from one computing platform to another via a communication channel, such as the Internet, Intranet, Local area network (LAN), wide area network (WAN), or via a device such as CDROM, disk on key, portable disk or others.
Referring now to
The system comprises four main layers, each comprising multiple components: extraction layer 200 for extracting features from the interactions or from additional data; real-time (RT) classification layer 204 for receiving the data extracted by extraction layer 200 and generating indications for problematic or other situations upon the extracted features; RT action determination component 206 for determining the desired action upon receiving the problem indication; and RT application layer 208 for utilizing the indications generated by RT classification layer 204 by providing alerts to sensitive interactions, enabling call routing or conferencing, or the like.
The system can further comprise training components 262 for training the models upon which classification layer 204 identifies the interaction for which an alert or other indication should be provided.
Extraction components 200 comprise RT phonetic search component 212, for phoneme-based extraction of textual data from an interaction, RT emotion detection component 216 for extracting features related to emotions expressed during the interactions, RT talk analysis component 220 for extracting features related to the interaction flow, such as silence periods vs. talk periods on either side, crosstalk events, talkover parameters, or the like.
Extraction components 200 further comprise RT speech to text engine 224 for extracting the full text spoken within the interaction so far, and RT Computer Telephony Integration (CTI) and Customer Relationship Management (CRM) data extraction component 228 for extracting data related to the interaction from external systems, such as CTI, CRM, or the like. Extraction components 200 further comprise RT interaction feature extraction component 230 for extracting features related to the interaction as a whole rather than to a single segment within the interaction.
Components of extraction components 200 are detailed in association with the following figures below.
RT classification layer 204 comprises components for utilizing the data extracted by extraction components 200 in order to identify interactions indicating business problems or business aspects which are of interest to the organization. Layer 204 can comprise, for example, RT customer satisfaction component 232 for identifying interactions or parts thereof in which a customer is dissatisfied, and some help may be required for the agent handling the interaction; RT churn prediction component 236 for identifying interactions which provide indications that the customer may leave the organization; RT sales assistance component 240 for identifying sales calls in which the representative may require help in order to complete the call successfully; and RT fraud detection component 244 for identifying calls associated with a fraud, a fraud attempt or a fraudster.
It will be appreciated that any other or different components can be used by RT classification layer 204, according to the business scenarios or problems which the organization wants to identify. Such problems or scenarios may be general, domain specific, vertical specific, business specific, or the like.
The components of RT classification layer 204 use trained models which are applied to the feature vectors extracted by extraction components 200.
A flowchart of an exemplary method in which any of the components of RT classification layer 204 operates is provided in association with
The system further comprises RT application layer 208, which comprises components for using the indications provided by classification layer 204. For example, RT application layer 208 can comprise RT screen popup component 248 for popping a message on a display device used by a relevant person, such as a supervisor, that an interaction is in progress for which help may be required. The message can comprise a link to be clicked, or an application which enables the viewer of the message to join the call, or automatically call the customer or the agent if the interaction has ended.
RT application layer 208 can also comprise RT alert component 252 for generating any type of alert regarding an interaction requiring extra attention. The alert can take the form of an e-mail, short message, telephone call, fax, visual alert, vocal alert, or the like.
Yet another application can be provided by RT routing component 256 for routing or conferencing a call for which a problematic situation has been detected.
A further application can be provided by RT statistics component 260, which collects, analyzes and presents statistical data related to all interactions, to interactions for which a condition or situation has been identified by RT classification layer, or the like.
It will be appreciated by a person skilled in the art that some of the options above can be offered by a single application or module, or by multiple applications which may share components. It will also be appreciated that multiple other applications can be designed and used in order to utilize the indications provided by classification layer 204.
The various applications can be developed using any programming language and under any development environment, using any proprietary or off the shelf user interface tools.
Yet another group of components in the system is training components 264 for generating the models upon which classification is performed. Training components 264 optionally include tagging component 268, which may provide a user interface for a user to tag interactions or certain time frames or time points within interactions, in accordance with the required tags, such as “unsatisfied customer”, “churning customer” or the like. Alternatively, tagging information can be received from an external source.
Training components 264 further comprise model evaluation component 272 for evaluating a model based on the received tags, and the features extracted by extraction components 200. Training components 264 can further comprise model updating component 276 for enhancing an existing model when more pairs of feature vectors and tags are available.
It will be appreciated that in order for a real time analytics system to be effective, it should analyze as many input interactions as possible. Such interactions may arrive from multiple channels. Each such channel represents independent data coming from an independent source, and should therefore be treated as such.
As detailed in association with
In some embodiments, some of the engines, such as the components of extraction layer 200 of
For multiplexing the engines, the data collected from each channel is divided into buffers, being audio clips of a predetermined length, such as a few seconds each. The buffers of a particular interaction are sent to the engine for analysis one by one, as they are captured. The engine “scans” the input channels, takes a buffer from each channel, and analyzes it. This arrangement keeps the independence of the channel processing, while utilizing the engine for simultaneous processing of multiple channels.
In addition, in order to avoid missing an event in cases where an event starts at a certain buffer, and continues or ends at the next buffer, subsequent buffers of the same channel overlap each other by a predetermined number of seconds (ΔT). Thus, a buffer starts ΔT seconds before the preceding buffer ends.
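By way of a non-limiting illustration, the multiplexing described above may be sketched as a round-robin scheduler that feeds one pending buffer per channel to a shared analysis engine. The `EngineMultiplexer` class, its per-channel queues, and the `engine.analyze` interface are assumptions of this sketch rather than part of the disclosure.

```python
from collections import deque

class EngineMultiplexer:
    """Round-robin multiplexing of independent channels onto one analysis engine."""

    def __init__(self, engine, channel_ids):
        self.engine = engine
        self.queues = {ch: deque() for ch in channel_ids}  # one buffer queue per channel

    def push(self, channel_id, audio_buffer):
        # Called by the capture side whenever a new buffer of a channel becomes available.
        self.queues[channel_id].append(audio_buffer)

    def scan_once(self):
        # One "scan" over the input channels: take at most one buffer from each
        # channel and analyze it, keeping the channels independent of each other.
        results = {}
        for channel_id, queue in self.queues.items():
            if queue:
                results[channel_id] = self.engine.analyze(queue.popleft())
        return results
```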
Referring now to
The interaction starts at time 0 (relative to the beginning of the interaction). Buffer 1 (304) comprises the part of the interaction between time 0 and time T, buffer 2 (308) comprises the part of the interaction between time T−ΔT and time 2T, and buffer 3 (312) comprises the part of the interaction between time 2T−ΔT and time 3T. Thus, each buffer (optionally excluding the first one and the last one) is of length T+ΔT, and overlaps the preceding and the following buffers by ΔT.
It will be appreciated that the division into buffers takes place separately for every channel, such that buffers taken from multiple channels are processed in parallel.
It will be appreciated that T, ΔT, and the number of channels that can be processed simultaneously depend on the particular engine being used, the required accuracy, and the required notification delay. In some embodiments, for some engines, between 5 and 100 channels can be processed simultaneously, with T varying between about a second and about a minute, and ΔT varying between 0.1 second and 5 seconds. In an exemplary embodiment, 30 channels are processed simultaneously, with T=5 seconds and ΔT=1 second.
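The division of one channel into overlapping buffers may be illustrated, under the exemplary setting of T=5 seconds and ΔT=1 second, by the following sketch; the function name and the sample-array representation of the audio are illustrative only.

```python
def overlapping_buffers(samples, sample_rate, t_sec=5.0, dt_sec=1.0):
    """Yield buffers of an audio channel: each buffer ends T seconds after the
    previous one, and (except for the first) starts dT seconds earlier than the
    previous buffer ended, so consecutive buffers overlap by dT seconds."""
    step = int(t_sec * sample_rate)       # a new buffer ends every T seconds
    overlap = int(dt_sec * sample_rate)   # each later buffer reaches dT seconds back
    start, end = 0, step
    while start < len(samples):
        yield samples[max(0, start - overlap):min(end, len(samples))]
        start, end = end, end + step
```

For example, with T=5 seconds, ΔT=1 second and 8,000 samples per second, the second buffer spans samples 32,000 through 80,000, i.e., seconds 4 through 10 of the interaction, matching buffer 2 in the figure.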
Referring now to
Automatic Speech Recognition (ASR) machines based on Hidden Markov Models (HMM) create a phoneme graph comprising vertices and edges, and accumulate results during their operation. Only at a later stage are some edges kept while less probable ones are removed.
In real time applications, it is required to decide before the interaction is over, for example at intervals of a few seconds, what the most probable “branches” or edges are. To this end, a “force-flush” of the accumulated graph is performed, which may cause removal of graph edges that have no continuity or do not lead to a final state of the HMM.
It is possible that such removal will result in loss of edges that may have later proven to be useful. Yet, the buffer overlap described in association with
Thus, the method for phoneme search in real time comprises phoneme graph generation steps 400, followed by real time specific steps 420. Real time specific steps 420 comprise graph force flush step 424 which outputs the phoneme graph generated by phoneme graph generation steps 400 for the last part of the interaction, e.g., the last 4 seconds. On step 428, the graph, which may be incomplete and different from the graph that would have been generated had the full interaction been available, is searched for words belonging to a precompiled list. The detected words are used as features output by extraction layer 200 to RT classification layer 204.
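As a hedged illustration of the force-flush, the following sketch prunes a partial lattice represented as (start time, end time, phoneme, score) tuples; this tuple layout and the notion of "final times" stand in for the engine's internal HMM states, which are not detailed here.

```python
def force_flush(edges, final_times):
    """Keep only edges that either continue into another edge or reach a time
    point treated here as a final state; all other edges are flushed away."""
    continuation_points = {start for (start, _end, _phoneme, _score) in edges}
    return [edge for edge in edges
            if edge[1] in continuation_points or edge[1] in final_times]
```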
Since it is required to search the graph in real time, a fast search algorithm may be used, which utilizes partial graph searching. The fast search first performs a coarse examination of selected branches of the flushed results graph. Once a “suspicious” point in the graph is found, i.e., there is a probability exceeding a threshold that the searched word is in the relevant section of the graph, a more thorough search is performed by the engine, over this section only. The thorough search comprises taking all edges of the graph which appear in a limited time range.
Thus, it will be appreciated that the fast mechanism may comprise two main options for effective search: searching along the entire time axis, but only on some edges of the graph, or searching all edges, but only on limited part of the time axis.
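The two-stage search may be sketched as follows: a coarse pass walks only the best-scoring edge per time point (the most probable sub-path), and a thorough pass re-examines all edges, but only inside a limited window around each suspicious point. The edge representation, the greedy matcher, and the window length are assumptions of this sketch, not the engine's actual algorithm.

```python
def search_word(edges, word_phonemes, threshold, window_sec=2.0):
    """Coarse-then-thorough search for one word over a flushed phoneme lattice.
    edges: iterable of (start_t, end_t, phoneme, score) tuples."""
    # Coarse stage: keep only the single best edge per start time.
    best = {}
    for edge in edges:
        start = edge[0]
        if start not in best or edge[3] > best[start][3]:
            best[start] = edge
    backbone = [best[t] for t in sorted(best)]

    # A point is "suspicious" if the backbone carries the word's first phoneme
    # with a score above the threshold.
    suspicious = [e[0] for e in backbone
                  if e[2] == word_phonemes[0] and e[3] >= threshold]

    # Thorough stage: all edges, but only inside the limited time window.
    hits = []
    for t0 in suspicious:
        local = [e for e in edges if t0 <= e[0] and e[1] <= t0 + window_sec]
        if spells_word(local, word_phonemes):
            hits.append(t0)
    return hits


def spells_word(local_edges, word_phonemes):
    """Depth-first chaining of time-consecutive edges spelling out the word."""
    by_start = {}
    for start, end, phoneme, _score in local_edges:
        by_start.setdefault(start, []).append((end, phoneme))

    def chain(t, idx):
        if idx == len(word_phonemes):
            return True
        return any(phoneme == word_phonemes[idx] and chain(end, idx + 1)
                   for end, phoneme in by_start.get(t, []))

    return any(chain(start, 0) for start in by_start)
```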
Referring now to
In an exemplary situation, a particular partial path within the graph has the highest score, i.e., it is most probable. A coarse search is performed along the path.
Referring now to
Referring now to
Thus, in
Referring now back to
It will be appreciated that other implementations for phoneme graph generation steps 400 can be used. It will also be appreciated that steps 400 and 420 can be replaced with any other algorithm or engine that enables word spotting, or an efficient search for a particular word within the audio.
Other engines used by RT extraction layer 204 do not require significant changes due to their operation on parts of the interactions rather than the whole interactions. Thus, emotion detection component 216 can be implemented, for example, as described in U.S. patent application Ser. No. 11/568,048, filed on Mar. 14, 2007 published as US20080040110, titled “Apparatus and methods for the detection of emotions in audio interactions” incorporated herein by reference, speech to text engine 224 can use any proprietary or third party speech to text engine, CTI and CRM data extraction component 228 can obtain CTI and CRM data through any available interface, or the like.
Referring now to
On training corpus receiving step 600, captured or logged interactions are received for processing. The interactions should represent as closely as possible the interactions regularly captured in the environment. If the system is supposed to be used in multiple sites, such as multiple branches of an organization, the training corpus should represent as closely as possible all types of interactions that can be expected in all sites.
On framing step 604 an input audio of the training corpus is segmented into consecutive frames. The length of each segment is substantially the same as the length of the segments that will be fed into the runtime system, i.e., the intervals at which the system will search for the conditions for issuing a notification, thus simulating off-line the RT environment.
On feature extraction step 608, various features are extracted directly or indirectly from the input segment or from external sources. The features may include spotted words extracted by RT phonetic search component 212; emotion indications or features as extracted by RT emotion detection component 216, which provide an emotion score that represents the probability of emotion to exist within the segment; talk analysis parameters extracted by RT talk analysis component 220, such as agent/customer bursts position, number of bursts, activity statistics such as agent talk percentage, customer talk percentage, silence durations on either side, or the like; text extracted by RT speech to text engine 224 from the segment; CTI and CRM data extracted by RT CTI and CRM extraction component 228, such as number of holds, hold durations, hold position within the interaction or the segment, number of transfers, or the like. It will be appreciated that any other features that can be extracted from the audio or from an external source can be used as well. Feature extraction step 608 may also extract indirect data, such as by Natural Language Processing (NLP) analysis, in which linguistic processing is performed on text retrieved by RT speech to text engine 224 or by RT phonetic search component 212, including for example Part of Speech (POS) tagging and stemming, i.e., finding the base form of a word. NLP analysis can be performed using any proprietary, commercial, or third party tool, such as LinguistxPlatform™ manufactured by Inxight.
Further features may relate to the segment as a whole, such as segment position, in terms of absolute time within an interaction, in terms of number of tokens (segments or words) within an interaction; speaker side of the segment, for example 1 for the agent, 2 for the customer; the average duration of silence between words in the segment, or the like.
On global feature extraction step 612, interaction level features are also extracted, which may relate to the interaction as a whole, from the beginning of the interaction until the latest available frame. The features can include any of the following: average agent activity and average customer activity in percentage, as related to the whole interaction; average silence; number of bursts per participant side; total burst duration per participant side in percentage or in absolute time; emotion detection features, such as: total probability score for an emotion within the interaction, number of emotion bursts, total emotion burst duration, or the like; speech to text and NLP features such as stemmed words, stemmed keyphrases, number of word repetitions, number of keyphrases repetitions, or the like.
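The interaction-level features can be accumulated from the per-segment features gathered so far; the following is a minimal sketch in which the dictionary keys are illustrative names, not the system's actual schema.

```python
from collections import Counter

def interaction_level_features(segment_features):
    """Features relating to the interaction as a whole, computed from the
    beginning of the interaction up to the latest available segment."""
    n = max(len(segment_features), 1)
    word_counts = Counter(word for seg in segment_features
                          for word in seg.get("stemmed_words", []))
    return {
        "avg_agent_activity": sum(s["agent_talk_pct"] for s in segment_features) / n,
        "avg_customer_activity": sum(s["customer_talk_pct"] for s in segment_features) / n,
        "avg_silence": sum(s["silence_pct"] for s in segment_features) / n,
        "emotion_bursts": sum(s["emotion_bursts"] for s in segment_features),
        "total_emotion_burst_duration": sum(s["emotion_burst_dur"] for s in segment_features),
        "max_word_repetitions": max(word_counts.values(), default=0),
    }
```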
On feature aggregation step 616 all features obtained on feature extraction step 608, as well as the same or similar features relating to earlier segments within the interaction, and features extracted at global feature extraction step 612 are concatenated or otherwise aggregated into one feature vector. Thus the combined feature vector can include any subset of the following, and optionally additional parameters, for each segment of the interaction accumulated to that time, and to the interaction as a whole: emotion related features, such as the probability score of the segment to contain emotion and its intensity, number of emotion bursts, total emotion burst duration, or the like; speech to text features including words as spoken or their base form, wherein stop words are optionally excluded; NLP data, such as part of speech information for a word, number of repetitions of all words in the segment, term frequency-inverse document frequency (TF-IDF) of all words in the segment; talk analysis features, such as average agent activity percentage, average customer activity percentage, average silence, number of bursts per participant side, total burst duration per participant side, or the like; CTI or CRM features; or any other linguistic, acoustic or meta-data feature associated with the timeframe.
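A minimal sketch of the aggregation step follows, assuming the per-segment features have already been encoded as numbers (word and text features as counts or scores); the fixed history depth and the zero padding used for short interactions are choices of the sketch, not of the disclosure.

```python
def aggregate_feature_vector(current, history, global_features, history_depth=3):
    """Concatenate the current segment's features, the same features for the
    most recent preceding segments, and the interaction-level features into a
    single flat feature vector."""
    keys = sorted(current)
    vector = [current[k] for k in keys]
    # Carry part of the interaction history: most recent preceding segments
    # first, zero-padded while the interaction is still short.
    recent = history[-history_depth:][::-1]
    for i in range(history_depth):
        segment = recent[i] if i < len(recent) else {}
        vector.extend(segment.get(k, 0.0) for k in keys)
    vector.extend(global_features[k] for k in sorted(global_features))
    return vector
```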
On model training step 620, a model is trained using pairs, wherein each pair relates to one segment and consists of the feature vector related to the timeframe and to the interaction as aggregated on step 616, and a manual tag assigned to the segment and received on step 624, which represents the assessment of a human evaluator as related to the segment and to a particular issue, such as customer satisfaction, churn probability, required sales assistance, fraud probability, or others. Training is preferably performed using methods such as neural networks or Support Vector Machines (SVM), as described for example in “Support Vector Machines” by Marti A. Hearst, published in IEEE Intelligent Systems, vol. 13, no. 4, pp. 18-28, July/August 1998, doi:10.1109/5254.708428, incorporated herein by reference in its entirety, or other methods. The output of training step 620 is a model that will be used in production stage by the classification layer, as discussed in association with
On step 628 the model is stored in any permanent storage, such as a storage device accessed by a database system, or the like.
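Training on the (feature vector, manual tag) pairs can be sketched with a support vector machine, one of the methods named above; scikit-learn and the label strings are assumptions of this sketch rather than part of the disclosure. The fitted model is what would then be persisted on step 628.

```python
from sklearn.svm import SVC

def train_issue_model(feature_vectors, tags):
    """Fit a classifier on pairs of aggregated feature vectors and manual tags,
    one pair per segment of the training interactions."""
    model = SVC(kernel="rbf", probability=True)  # probability=True enables confidence scores at runtime
    model.fit(feature_vectors, tags)             # tags e.g. "dissatisfied" / "neutral"
    return model
```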
It will be appreciated that the method of
Referring now to
On testing interactions receiving step 700, captured or logged interaction segments are received for processing. At least one side of the interaction is an agent, a sales person or another person associated with the organization. The other party can be a customer, a prospective customer, a supplier, another person within the organization, or the like.
The interactions are received in segments having predetermined length, such as between a fraction of a second and a few tens of seconds. In some embodiments, the interactions are received in a continuous manner such as a stream, and after a predetermined period of time, the accumulated segment is passed for further processing. As detailed in association with
On feature extraction step 708, various features are extracted directly or indirectly from the input segment or from external sources, similarly to step 608 of
On global feature extraction step 712, similarly to step 612 of
On feature aggregation step 716, similarly to step 616 of
On classification step 720, model 724 trained on model training step 620 of
Optionally, a confidence score is assigned to the particular segment, indicating the certainty that the relevant issue exists in the current segment.
The features used on classification step 720 may include features of the current segment, features of previous segments, and features at the interaction level. Features related to previous segments can include all or some of the features related to the current segment.
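Applying the model at classification step 720 may be sketched as follows, assuming the model from the training sketch above; the 0.5 decision threshold and the label name are illustrative.

```python
def classify_segment(model, feature_vector, issue_label="dissatisfied", threshold=0.5):
    """Return (is_issue, confidence) for one aggregated feature vector."""
    probabilities = model.predict_proba([feature_vector])[0]
    labels = list(model.classes_)
    confidence = probabilities[labels.index(issue_label)] if issue_label in labels else 0.0
    return confidence >= threshold, confidence
```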
It will be appreciated that in some embodiments, feature extraction steps 708 and 712 can take place once, while feature aggregation step 716 and classification step 720 can be repeated for optionally different subsets of the extracted and aggregated features and different models such as model 724, in order to check for different problems or issues.
On RT action determination step 728, an appropriate action to be taken is determined, such as sending a notification to a supervisor; popping a message on a display device of a user, with or without a link or a connection to the interaction; routing or conferencing the interaction; initiating a call to the customer if the interaction has already ended; storing the notification; updating statistics measures; or the like.
It will be appreciated that the action to be taken may depend on any one or more of the features. For example, the action may depend on the time of the segment within the call. Upon a problem detected in the beginning of the call, an alert may be raised, but if the problem persists or worsens at later segments, a conferencing may be suggested to enable a supervisor to join the call.
Alternatively, the action may be predetermined for a certain type of problem or issue, or for all types of problems. For example, an organization may require that every problem associated with an unsuccessful sale is reported and stored, but problems relating to unsatisfied customers require immediate popup on a display device of a supervisor.
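The mapping from a detected problem to an action might be expressed as a small rule table, as in the sketch below; the specific rules, issue names and thresholds are purely illustrative, since the disclosure leaves the policy to the organization.

```python
def determine_action(issue, confidence, segment_start_sec, persisted):
    """Pick an action for a detected issue, taking the position of the segment
    within the call and the persistence of the problem into account."""
    if issue == "fraud":
        return "route_to_security"
    if issue == "dissatisfied":
        if persisted and segment_start_sec > 120:
            return "conference_supervisor"     # problem persisted into later segments
        return "popup_supervisor_alert"
    if issue == "sales_assistance":
        return "popup_supervisor_alert" if confidence > 0.7 else "log_statistics"
    return "log_statistics"
```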
On RT action performing step 732, the action determined in step 728 is carried out by invoking or using the relevant applications such as the applications of RT application layer 208 of
The disclosed methods and system provide real time notifications or alerts regarding interactions taking place in a call center or another organization, while the interaction is still going on or a short time afterwards.
The disclosed method and system receive substantially consecutive and optionally overlapping segments of an interaction as the interaction progresses. Multiple features are extracted, which may relate to a particular segment, to preceding segments, or to the accumulated interaction as a whole.
The method and system classify the segments in association with a particular problem or issue, and apply a trained model to the extracted features. If the classification indicates that the segment or the interaction is associated with the problem or issue, an action is taken, such as sending an alert, routing or conferencing a call, popping a message or the like. The action is taken while the interaction is still in progress, or a short time afterwards, when the chances for improving the situation are highest.
It will be appreciated that the disclosure also relates to a computer readable storage medium containing a set of instructions for a general purpose computer, the set comprising instructions for performing the methods detailed above including: receiving a segment of an interaction in which a representative of the organization participates; extracting a feature from the segment; extracting a global feature associated with the interaction; aggregating the feature and the global feature; and classifying the segment or the interaction in association with the problem or issue by applying a model to the feature and the global feature.
It will be appreciated that multiple enhancements can be devised in accordance with the disclosure. For example, multiple different, fewer or additional features can be extracted. Different algorithms can be used for evaluating the models and applying them, and different actions can be taken upon detection of problems.
It will be appreciated by a person skilled in the art that multiple variations and options can be designed along the guidelines of the disclosed methods and system.
While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular situation, material, step or component to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but only by the claims that follow.
Shapira, Dori, Wasserblat, Moshe, Laperdon, Ronen, Pereg, Oren, Lubowich, Yuval, Feigin, Vladislav, Fox-Kahana, Oz
Patent | Priority | Assignee | Title |
6173260, | Oct 29 1997 | Vulcan Patents LLC | System and method for automatic classification of speech based upon affective content |
6185527, | Jan 19 1999 | HULU, LLC | System and method for automatic audio content analysis for word spotting, indexing, classification and retrieval |
6219639, | Apr 28 1998 | Nuance Communications, Inc | Method and apparatus for recognizing identity of individuals employing synchronized biometrics |
6480826, | Aug 31 1999 | Accenture Global Services Limited | System and method for a telephonic emotion detection that provides operator feedback |
6542602, | Feb 14 2000 | Nice Systems Ltd. | Telephone call monitoring system |
6574595, | Jul 11 2000 | WSOU Investments, LLC | Method and apparatus for recognition-based barge-in detection in the context of subword-based automatic speech recognition |
6922466, | Mar 05 2001 | CX360, INC | System and method for assessing a call center |
7624012, | Dec 17 2002 | SONY EUROPE B V | Method and apparatus for automatically generating a general extraction function calculable on an input signal, e.g. an audio signal to extract therefrom a predetermined global characteristic value of its contents, e.g. a descriptor |
7729914, | Oct 05 2001 | Sony Deutschland GmbH | Method for detecting emotions involving subspace specialists |
7752043, | Sep 29 2006 | VERINT AMERICAS INC | Multi-pass speech analytics |
7801055, | Sep 29 2006 | VERINT AMERICAS INC | Systems and methods for analyzing communication sessions using fragments |
7801280, | Dec 15 2004 | Verizon Patent and Licensing Inc | Methods and systems for measuring the perceptual quality of communications |
7940914, | Aug 31 1999 | Accenture Global Services Limited | Detecting emotion in voice signals in a call center |
7983910, | Mar 03 2006 | International Business Machines Corporation | Communicating across voice and text channels with emotion preservation |
8140330, | Jun 13 2008 | Robert Bosch GmbH | System and method for detecting repeated patterns in dialog systems |
8195449, | Jan 31 2006 | TELEFONAKTIEBOLAGET LM ERICSSON PUBL | Low-complexity, non-intrusive speech quality assessment |
8260614, | Sep 28 2000 | Intel Corporation | Method and system for expanding a word graph to a phone graph based on a cross-word acoustical model to improve continuous speech recognition |
20050108775 |
20070043608 |
20070071206 |
20070179784 |
20080040110 |
20080195385 |
20080256033 |
20090076811 |
20090164302 |
20090210226 |
20110040554 |
20110196677 |