A technology for assessing client experience in incident management can be implemented. The technology can fetch an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry is associated with a timestamp, an event descriptor, and a prescribed target time to close the event log entry. The technology can extract a communication message sent by the client from the event descriptor, determine a polarity score based on sentiment analysis of the communication message, determine a client experience index (CEI) based on the polarity score, save the CEI in an event record in a second database, determine an aggregated CEI based on an average of a plurality of CEIs determined for the corresponding plurality of event log entries, and output the aggregated CEI.
|
1. A computer-implemented method comprising:
fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry is associated with a timestamp, an event descriptor, and a prescribed target time to close the event log entry;
extracting a communication message sent by the client from the event descriptor;
determining a polarity score based on sentiment analysis of the communication message;
determining a client experience index based on the polarity score;
saving the client experience index in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry;
determining an aggregated client experience index based on an average of a plurality of client experience indexes determined for the corresponding plurality of event log entries;
outputting the aggregated client experience index;
disabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is the same as the timestamp in the event record stored in the second database and the event log entry is closed by the prescribed target time; and
enabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is different from the timestamp in the event record stored in the second database or the event log entry is not closed by the prescribed target time.
9. A computing system comprising:
memory;
one or more hardware processors coupled to the memory; and
one or more computer readable storage media storing instructions that, when loaded into the memory, cause the one or more hardware processors to perform operations comprising:
fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry is associated with a timestamp, an event descriptor, and a prescribed target time to close the event log entry;
extracting a communication message sent by the client from the event descriptor;
determining a polarity score based on sentiment analysis of the communication message;
determining a client experience index based on the polarity score;
saving the client experience index in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry;
determining an aggregated client experience index based on an average of a plurality of client experience indexes determined for the corresponding plurality of event log entries;
outputting the aggregated client experience index;
disabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is the same as the timestamp in the event record stored in the second database and the event log entry is closed by the prescribed target time; and
enabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is different from the timestamp in the event record stored in the second database or the event log entry is not closed by the prescribed target time.
16. One or more non-transitory computer-readable media having encoded thereon computer-executable instructions causing one or more processors to perform a method comprising:
fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry comprises a timestamp, an event descriptor, and a prescribed target time to close the event log entry;
extracting a communication message sent by the client from the event descriptor;
determining a polarity score based on sentiment analysis of the communication message;
determining a client experience index based on the polarity score;
saving the client experience index in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry;
determining an aggregated client experience index based on an average of a plurality of client experience indexes determined for the corresponding plurality of event log entries;
outputting the aggregated client experience index;
disabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is the same as the timestamp in the event record stored in the second database and the event log entry is closed by the prescribed target time; and
enabling the fetching responsive to a determination that the timestamp associated with the event log entry stored in the first database is different from the timestamp in the event record stored in the second database or the event log entry is not closed by the prescribed target time,
wherein determining the client experience index comprises reducing the polarity score if the event log entry remains open after passing the prescribed target time for a predefined duration or if the event log entry remains open at the prescribed target time and a number of communications sent to the client corresponding to the event log entry is less than a predefined value.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The system of
11. The system of
12. The system of
13. The system of
14. The system of
15. The system of
|
Incident management is an important feature in any computing system. Modern enterprise systems involve many different hardware components and software applications, the scale of which can be too complex to be incident free. An incident can cause unplanned interruptions or reductions in the quality of information technology (IT) services. Such incidents can occur in internal services (e.g., a broken printer, a computer crash, etc.) or external services (e.g., an email server cannot be connected, a network connection is down, etc.). After an incident occurs, a client or user negatively affected by the incident can file an incident ticket, which is delivered to responsible administrators (e.g., service desk administrators) who can troubleshoot the incident and take appropriate actions to resolve the underlying problems. Typically, an incident management system can manage the lifecycle of incidents in an enterprise environment, tracking and/or logging incidents from the generation of an incident ticket to its resolution. Client experience of interacting with the incident management system often plays an important role in winning customer-centric opportunities. However, assessing client experience in incident management can be problematic. Accordingly, room for improvement exists.
An incident management system is utilized by many organizations, through engagement with the service desk or self-help technology, for rapid service restoration after unplanned interruptions or reductions in the quality of IT services caused by incidents. In a typical incident management system, after an incident is reported by a client, an incident ticket can be opened by the client or the organization's service desk. Communications between the client and the service desk (and/or other technical support members) on the open incident ticket can be stored in an incident log. Generally, open incident tickets are monitored/tracked until they are resolved and/or closed.
Efficient and responsive incident management is critical to organizations because it not only can reduce or mitigate service interruptions caused by incidents, but also can play an important role in client relationship management. Clients interact with the incident management system when they need support, help, customization, and/or bug fixing. After receiving an incident ticket, the service desk, sometimes working along with a product support team and/or an engineering team, is expected to provide timely and effective technical support (e.g., a workaround, document-related help, workflow optimization, bug fixes, etc.) in order to close the incident ticket in a timely manner.
Client experience of interacting with the incident management system of an organization often plays an important role in retaining existing clients and/or attracting new clients. In one example, positive client experience can often lead to more licenses and/or subscriptions of the organization's products. For instance, a highly satisfied client who has received excellent incident support for a product may upgrade from trial licenses to subscriptions/licenses, or increase/expand its subscriptions/licenses for the product. On the other hand, a client who has had a negative experience when interacting with the incident management system may decide to drop the product or switch to a competitor's product.
Thus, assessing client experience in incident management can help an organization to improve the quality of its customer service and/or guide its decision-making process. For example, by assessing client experience in incident management, the organization can identify if any of its products leads to client dissatisfaction, if more resources need to be added to a support team, and/or if there is a recent trend of client experience that justifies certain organizational decisions (e.g., to upgrade or discontinue a product, to increase or reduce training for the client, etc.).
However, assessing client experience in incident management has mostly relied on surveys, which are often time consuming and generally provide only anecdotal, incomplete, or inaccurate information about client experience. Thus, an improved system and method that supports more objective, quantitative, and efficient assessment of client experience in incident management would be advantageous. Such improved technologies for assessing client experience in incident management can be applied across a wide variety of enterprise software environments.
The cloud user management stack 130 and the user interface suite 120 can be specific to a line-of-business (LOB) of an organization, e.g., the LOB can be related to a set of products and/or services offered by the organization. The cloud stack 110 can be an open source computing platform that allows IT service providers to offer public cloud services. In the depicted example, the cloud stack 110 includes a cloud server 116 in communication with a cloud database 114. In addition, the cloud stack 110 can have a dedicated client experience (CE) database 112, as described further below.
The user interface suite 120 includes a set of tools that allows a user to interact with the cloud stack 110 and the cloud user management stack 130. For example, the user interface suite 120 can include a query builder interface 126 which allows the user to search and filter database objects, select objects, create relationships between objects, etc. As another example, the user interface suite 120 can also include a story designer 124 (e.g., Scenes provided by SAP SE, of Walldorf, Germany or others) which allows the user to create storyboards about products and services offered by the organization. As shown, the user interface suite 120 can further include a client experience interface 122 interacting with both the cloud stack 110 and the LOB cloud user management stack 130.
The LOB cloud user management stack 130 includes an application router 160 (e.g., AppRouter provided by SAP SE, of Walldorf, Germany or others), which interacts with both the LOB user interface suite 120 and the cloud stack 110. In any of the examples herein, the application router 160 can provide a single entry point to an enterprise application that comprises several different applications or microservices. Specifically, the application router 160 can be configured to dispatch requests to backend microservices, authenticate users, serve static content, among other capabilities.
In any of the examples herein, microservices (also known as microservices architecture) refers to an architectural approach in which a single application is composed of suites of loosely coupled and independently deployable smaller components, or services. In contrast to the conventional approach where a monolith application is built as a single, autonomous unit, the microservices are built to be as modular as possible. Generally, the microservices can be written in different programming languages and may use different data storage techniques, yet they can be connected to each other via an application programming interface (API) such as a RESTful API. As a result, the microservices are more scalable and flexible, and are particularly useful for deployment in a cloud computing environment.
In the depicted example, the LOB cloud user management stack 130 includes a plurality of applications 170, which can be deployed as microservices. In one example, the applications 170 can include a query builder service (e.g., BusinessObjects Query Builder provided by SAP SE, of Walldorf, Germany or others), and the application router 160 can forward a user's query request (e.g., entered via the query builder interface 126) to the query builder service. In other examples, the applications 170 can include an information access service (e.g., the INA query model provided by SAP SE, of Walldorf, Germany or others) configured to enable applications to perform analytics, planning, and search on data stored in the cloud database 114, a proxy service, and other services.
As shown, the LOB cloud user management stack 130 also includes a plurality of cloud services 150 such as analytics on cloud 156, a LOB layer 154, and a LOB incident database 152 (also referred to as “incident database”). As described herein, the incident database 152 is a data repository of incidents that occurred in the LOB of the organization. In other words, the incident database 152 can log incident tickets reported by clients in the LOB, and an incident ticket can include a log of events or message communications which run from the opening of the incident ticket until the incident ticket is closed. Thus, as described herein, an incident ticket can also be referred to as an “incident log entry” or an “event log entry.” The message communications in an incident ticket can be communications between a client and a service desk administrator, between a client and a support team, between a support team and a technical team, or between any parties, so long as the message communications are relevant to the incident ticket (e.g., they share the same incident identifier, as described below). Opening incident tickets in the incident database 152 and logging message communications in corresponding incident tickets can be implemented by any incident management system embedded within or in connection with the LOB cloud user management stack 130.
In addition, the LOB cloud user management stack 130 also includes a client experience microservice stack 140, which can include four microservices: a database polling microservice 142, a client experience analysis microservice 144, a client experience reporting microservice 146, and a client experience scheduling service 148. In certain embodiments, the client experience reporting microservice 146 and the client experience scheduling service 148 can be combined. As shown, the client experience microservice stack 140 can communicate with the client experience interface 122 and the client experience database 112 via the application router 160. As described below, these microservices can be deployed independently but work together to assess client experience in incident management.
Generally, the database polling microservice 142 can poll or retrieve incident tickets from the incident database 152 and compare the retrieved incident tickets with corresponding incident records stored in the client experience database 112. As described herein, the incident database 152 can be referred to as the “first database” and the client experience database 112 can be referred to as the “second database.” The client experience analysis microservice 144 can analyze the retrieved incident ticket and calculate a corresponding client experience index (CEI) which provides an objective and quantitative measure of client experience associated with the incident ticket. The client experience analysis microservice 144 can further calculate one or more aggregated CEIs for a plurality of retrieved incident tickets based on certain aggregation criteria. The calculated CEIs corresponding to the retrieved incident tickets can be stored or updated in the client experience database 112. The client experience reporting microservice 146 can be configured to generate client experience reports containing the calculated CEIs and/or aggregated CEIs according to certain predefined reporting formats. The client experience scheduling microservice 148 can allow a user to schedule the generation of client experience reports.
In practice, the systems shown herein, such as system 100, can vary in complexity, with additional functionality, more complex components, and the like. For example, there can be additional functionality within the cloud stack 110, the LOB user interface suite 120, and/or the cloud user management stack 130. Additional components can be included to implement security, redundancy, load balancing, report design, and the like.
The described computing systems can be networked via wired or wireless network connections, including the Internet. Alternatively, systems can be connected through an intranet connection (e.g., in a corporate environment, government environment, or the like).
The system 100 and any of the other systems described herein can be implemented in conjunction with any of the hardware components described herein, such as the computing systems described below (e.g., processing units, memory, and the like). In any of the examples herein, the incident tickets, the communication messages, the priority and/or weight factors, the CEIs and/or aggregated CEIs, and the like can be stored in one or more computer-readable storage media or computer-readable storage devices. The technologies described herein can be generic to the specifics of operating systems or hardware and can be applied in any variety of environments to take advantage of the described features.
A request to access client experience, e.g., in the form of a query for CEI 210, can be handled by the database polling microservice 220. Specifically, the database polling microservice 220 can have an incident data retriever 222 configured to retrieve an incident ticket from an incident database 224 (similar to 152) and a corresponding incident record from a client experience database 226 (similar to 112). As described further below, the incident record includes a previously calculated CEI associated with the incident ticket and other relevant information about the incident ticket. The database polling microservice 220 can compare (e.g., at 228) the retrieved incident ticket with the incident record to determine if there has been any update on the incident ticket since the last assessment of client experience with respect to the incident ticket.
If there has been an update on the incident ticket since the last assessment of client experience with respect to the incident ticket, or if the incident ticket remains open at a prescribed target time (e.g., the comparison at 228 returns “yes”), the client experience analysis microservice 230 can be deployed to perform a series of operations including data preparation and synthesis (e.g., at 232), sentiment analysis (e.g., at 234), calculating the CEI associated with the incident ticket (e.g., at 236), aggregating CEIs (e.g., at 238), and updating the client experience database 226 (e.g., at 240). After such updating operation, the incident record in the client experience database 226 contains the newly calculated CEI associated with the incident ticket. In addition, the aggregated CEIs can also be updated in the client experience database 226. Then, the client experience reporting and scheduling microservice 250 can be deployed to generate and distribute client experience reports (e.g., at 252) based on the calculated CEIs and/or aggregated CEIs.
On the other hand, if there has been no update on the incident ticket since the last assessment of client experience with respect to the incident ticket, and the condition that the incident ticket remains open at the prescribed target time is not met (e.g., the comparison at 228 returns “no”), the client experience reporting and scheduling microservice 250 can be deployed directly without calling the client experience analysis microservice 230. Instead, the client experience reporting and scheduling microservice 250 can generate and distribute client experience reports based on CEI data previously calculated and stored in the client experience database 226.
Example 4—Example Overall Method of Assessing Client Experience in Incident Management
At 310, the database polling service (e.g., 142 or 220) can fetch an incident ticket (i.e., incident log entry, or event log entry) from a first database (e.g., the incident database 152 or 224) comprising a plurality of incident tickets generated by a client. The incident ticket is associated with (e.g., the incident ticket can comprise or point to) a timestamp, an incident descriptor (also referred to as “event descriptor”), and a prescribed target time to close the incident ticket. In certain embodiments, the incident ticket can be associated with (e.g., comprise or point to) other relevant information. For example, the incident ticket can be associated with an incident response frequency (IRF) prescribed for the incident ticket (e.g., by a service-level agreement), wherein the IRF can specify the number of times the client should be communicated with before the incident ticket is closed.
To illustrate,
Typically, an incident management system can automatically generate an incident ID when a new incident ticket is opened. The priority of the incident ticket can be selected from a predefined, ranked list (e.g., “very low,” “low,” “medium,” “high,” and “very high”). In certain embodiments, the priority of the incident ticket can be directly assigned by the client when opening the incident ticket. In other embodiments, the priority of the incident ticket can be generated or modified by a service desk professional based on evaluation of the incident ticket. The incident management system can also automatically prescribe an MTP for any new incident ticket based on the time the incident ticket is opened and the assigned priority of the incident ticket (e.g., the default MTP can be set to be 10 days after the time when the incident ticket is opened). In certain embodiments, the MTP can be modified by the service desk professional.
When there is an update on an incident ticket, the incident management system can automatically update the last-modified timestamp of the incident ticket. The initial last-modified timestamp is the time when the incident ticket is opened. Afterwards, anytime a communication related to the incident ticket is logged in the incident database, the time of the communication can be used to update the last-modified timestamp of the incident ticket. For example, when the client sends an email to the service desk, the incident management system can log such email to the incident ticket and update the last-modified timestamp of the incident ticket to the timestamp of the client's email.
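As an illustrative sketch (not part of the claimed embodiments), the timestamp-update behavior described above can be expressed as follows; the field names ("communications", "last_modified", "timestamp") are assumed for illustration and do not reflect an actual incident schema.

```python
def log_communication(ticket: dict, message: dict) -> dict:
    """Append a communication to an incident ticket and refresh the
    ticket's last-modified timestamp to the communication's time.
    A simplified sketch with illustrative field names."""
    ticket.setdefault("communications", []).append(message)
    # The time of the logged communication becomes the ticket's
    # last-modified timestamp, as described above.
    ticket["last_modified"] = message["timestamp"]
    return ticket
```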
The incident descriptor includes a textual description of the incident ticket. Generally, the incident descriptor can include a client problem statement and threads of communications related to the incident ticket. The client problem statement can be a subject line entered when the incident ticket is opened, or can be extracted from the threads of communications related to the incident ticket.
For illustrative purposes,
Refer again to the flowchart 300 in
At 330, the client experience analysis microservice (e.g., 144 or 230) can determine a polarity score based on sentiment analysis of the communication message, as described further below.
At 340, the client experience analysis microservice (e.g., 144 or 230) can determine a client experience index (CEI) based on the polarity score.
At 350, the client experience analysis microservice (e.g., 144 or 230) can save or update the CEI in an incident record in a second database (e.g., the client experience database 112 or 226). The incident record corresponds to the incident ticket and can further comprise the last-modified timestamp and the prescribed target time to close the incident ticket.
To illustrate,
At 360, the client experience analysis microservice (e.g., 144 or 230) can determine an aggregated CEI based on an average of a plurality of CEIs determined for the corresponding plurality of incident tickets, as described below.
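The averaging at 360 can be sketched as follows; this is a minimal illustration assuming each incident record carries a "cei" field, with an optional grouping function standing in for the aggregation criteria (e.g., by product or client) mentioned above.

```python
from collections import defaultdict
from statistics import mean

def aggregate_cei(records, key=None):
    """Average the CEIs of a plurality of incident records.
    `key` optionally groups records (e.g., by product or client)
    before averaging; field names are illustrative."""
    if key is None:
        return mean(r["cei"] for r in records)
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r["cei"])
    return {k: mean(v) for k, v in groups.items()}
```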
Then, at 370, the client experience reporting microservice (e.g., 146 or 240) can output the aggregated CEI, e.g., in a client experience report, as described below.
The method shown in
The illustrated actions can be described from alternative perspectives while still implementing the technologies. For example, “send” can also be described as “receive” from a different perspective.
In any of the examples herein, the database polling microservice (e.g., 142 or 222) can fetch the incident tickets from the incident database (e.g., 152 or 224) via on-demand polling or dynamic polling mechanisms.
In the case of on-demand polling, fetching an incident ticket from the incident database is triggered by a request from a user. For example, an administrator can enter a command to trigger the database polling microservice when the administrator needs to prepare a latest report on client experience across different clients, multiple products, and/or different departments/teams. In the case of dynamic polling, fetching the incident ticket from the incident database can be performed according to a predefined timetable. For example, the database polling microservice can have configurable scheduling tasks (e.g., using Cron jobs to configure the fetching task to be run based on a predefined frequency, e.g., every 1 day, every 12 hours, etc.). A POST request can then fetch the relevant data from an incident ticket (e.g., with a specified incident ID) based on the scheduled time/frequency.
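Dynamic polling on a fixed schedule can be sketched with the standard-library scheduler as follows. This is an illustrative assumption rather than the actual microservice implementation: the function and parameter names are invented, and `max_runs` bounds the loop only so the sketch terminates, whereas a deployed cron-style job would run indefinitely.

```python
import sched

def schedule_polling(scheduler, fetch, interval_s, max_runs):
    """Run `fetch` repeatedly at a fixed interval on the given
    scheduler, mimicking a cron-style dynamic polling job.
    All names are illustrative."""
    state = {"runs": 0}
    def run():
        fetch()
        state["runs"] += 1
        if state["runs"] < max_runs:
            scheduler.enter(interval_s, 1, run)  # reschedule the next poll
    scheduler.enter(interval_s, 1, run)  # first poll after one interval
```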
For either on-demand polling or dynamic polling, the database polling microservice can be optimized to improve efficiency. Specifically, data polling is enabled only when there is a need to update the CEI corresponding to an incident ticket. Such a need can arise under two conditions: (1) when there is an update on the incident ticket, e.g., due to a new client communication, and (2) when the MTP associated with the incident ticket has expired but the incident ticket still remains open (see e.g., condition check 228 in
In certain embodiments, fetching an incident ticket from the incident database can be conditioned on the last-modified timestamp of the incident ticket stored in the incident database being different from the last-modified timestamp in the incident record stored in the client experience database. Thus, fetching the incident ticket from the incident database is responsive to comparing the last-modified timestamp of the incident ticket and the last-modified timestamp in the incident record. In other words, the database polling microservice can compare the last-modified timestamp of the incident ticket stored in the incident database with the last-modified timestamp of the incident record stored in the client experience database. The database polling microservice can fetch the incident ticket from the incident database if its associated last-modified timestamp is later (i.e., newer) than the last-modified timestamp of the corresponding incident record stored in the client experience database. Unless the incident ticket remains open at the MTP (which is another condition that can enable data polling, as described below), no fetching is performed if the last-modified timestamp of the incident ticket is the same as the last-modified timestamp of the incident record (i.e., the incident ticket has not been updated with new client communication since the last calculation of CEI corresponding to the incident ticket).
In certain embodiments, the database polling microservice can be configured to fetch an incident ticket from the incident database when the prescribed MTP expires and the incident ticket remains open. Thus, expiration of the MTP while the incident ticket is still open can enable the database polling microservice to fetch the incident ticket and recalculate/update the CEI corresponding to the incident ticket. In other words, the database polling microservice is enabled to fetch the incident ticket from the incident database if the associated last-modified timestamp has been updated compared to the last-modified timestamp of the corresponding incident record stored in the client experience database or if the incident ticket is not closed at the expiration of MTP.
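The enabling/disabling conditions described above reduce to a single predicate, sketched below. Field names ("last_modified", "status", "mtp") are illustrative assumptions, not the actual database schema.

```python
from datetime import datetime

def polling_enabled(ticket, record, now):
    """Return True when the polling microservice should fetch the
    incident ticket: either its last-modified timestamp differs from
    the one in the stored incident record, or the MTP has expired
    while the ticket remains open. Field names are illustrative."""
    updated = ticket["last_modified"] != record["last_modified"]
    overdue = ticket["status"] == "open" and now >= ticket["mtp"]
    return updated or overdue
```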
As shown, an incident ticket 610, which is fetched by a database polling service described above, can be fed to a message filter 620. The message filter 620 can extract the client's communication message 630 from the incident descriptor contained in the incident ticket 610, e.g., by finding tags associated with the client (e.g., based on the client's user ID and/or user description), as described above.
The extracted communication message 630 can be sent to a message parser 640, which can be configured to parse the communication message 630 into separate parts including English text 642, emojis 644, emoticons 646, and foreign language text 648. In the depicted example, the English text 642 can be fed directly to a sentiment analyzer 650 for sentiment analysis. On the other hand, the emojis 644, emoticons 646, and foreign language text 648 can be translated into plain English text via respective dictionaries 645, 647, and 649 before providing input to the sentiment analyzer 650. Such translation is needed because client sentiment (e.g., happy, frustrated, etc.) may be expressed in non-English text, for example, through emojis, emoticons, and/or foreign language.
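A minimal sketch of the parser and dictionary translation is shown below. The tiny dictionaries are illustrative stand-ins for the dictionaries 645 and 647; a real service would use much larger emoji/emoticon lexicons and a translation service for foreign language text.

```python
import re

# Illustrative stand-ins for the emoji and emoticon dictionaries.
EMOJI_WORDS = {"\U0001F600": "happy", "\U0001F620": "angry"}
EMOTICON_WORDS = {":)": "happy", ":(": "sad"}

def parse_message(text):
    """Translate emojis and emoticons into English words and return
    plain text suitable as sentiment analyzer input. A simplified
    sketch of the message parser described above."""
    for emoticon, word in EMOTICON_WORDS.items():
        text = text.replace(emoticon, f" {word} ")
    for emoji, word in EMOJI_WORDS.items():
        text = text.replace(emoji, f" {word} ")
    return re.sub(r"\s+", " ", text).strip()  # collapse extra spaces
```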
The sentiment analyzer 650 can perform sentiment analysis of the input data and generate a normalized polarity score as output. Based on the calculated polarity score, a client experience index calculator 660 can calculate the CEI corresponding to the incident ticket. Through a client experience database updater 660, the newly calculated CEI can be used to update the corresponding incident record stored in the client experience database. In addition, the calculated CEI can be used to generate one or more aggregated CEIs by a CEI aggregator 670. In certain embodiments, the aggregated CEIs can also be stored/updated in the client experience database so that they can be used to generate client experience reports.
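The CEI calculation, including the polarity reductions recited in the claims (ticket open past the MTP for a predefined duration, or open at the MTP with fewer than a predefined number of communications sent to the client), can be sketched as follows. The parameter names and the size of the penalty are illustrative assumptions; the description does not fix concrete values.

```python
def compute_cei(polarity, *, days_open_past_mtp, grace_days,
                open_at_mtp, comms_sent, min_comms, penalty=0.2):
    """Derive a CEI from a normalized polarity score, reducing it
    when the ticket stayed open past its MTP beyond a grace duration,
    or when it was open at the MTP with too few communications sent
    to the client. Penalty size and names are illustrative."""
    cei = polarity
    if days_open_past_mtp > grace_days or (open_at_mtp and comms_sent < min_comms):
        cei -= penalty  # reduce the score per the conditions above
    return max(-1.0, min(1.0, cei))  # keep the index normalized
```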
In any of the examples herein, a client experience analysis microservice (e.g., 144, 230, or 690) can quantify client sentiment from an incident ticket, as illustrated in
In the depicted example, an example incident ticket 710 includes threads of communications, including a plurality of communication messages sent from a client whose user description is "Jorge Melman." The incident descriptor of the incident ticket 710 includes the text of message lines of the incident ticket. As noted above, when the client sends a communication message, the last-modified timestamp of the incident ticket can be updated in the incident database (i.e., it will be different from the last-modified timestamp in the incident record stored in the client experience database). Thus, the database polling microservice can fetch the corresponding incident ticket for further analysis.
As noted above, a message filter (e.g., 620) can extract the client's communication messages from the incident descriptor, e.g., by finding tags associated with the client (e.g., based on the client's user ID and/or user description). In the depicted example, the client's communication messages include message lines corresponding to "Jorge Melman." Such extracted communication messages (which can be translated to English text if they contain emojis, emoticons, and/or foreign language text) can be sent to a sentiment analyzer (e.g., 650) for sentiment analysis.
As described herein, sentiment analysis refers to a natural language processing or computational linguistic technique that is configured to determine whether a given text carries positive, neutral, or negative sentiment. Various approaches (e.g., a lexicon-based approach, a machine learning approach, etc.) can be used by the sentiment analyzer to conduct sentiment analysis. In certain embodiments, the sentiment analyzer can generate a sentiment word cloud 720 (where the words can be shown in various colors, fonts, sizes, and/or themes), which shows the most frequent words that appear in positive/neutral/negative sentiments embodied by the client's communication messages. The sentiment word cloud 720 can provide an intuitive visualization of the client's sentiment expressed in communication messages.
The basic task in sentiment analysis is to determine the sentiment polarity (positivity, neutrality or negativity) of a given text. In certain embodiments, the sentiment analyzer can be configured to generate a pie chart 730 showing sentiment polarity distribution. In the depicted example, positive polarity sentiment is dominant (71.95%), followed by neutral polarity sentiment (26.83%), while the negative polarity sentiment is negligible (1.22%).
In certain embodiments, the sentiment analyzer can quantify the sentiment polarity by calculating a polarity score (PS), which can be a normalized metric ranging from −1 to 1, where a more negative PS indicates more negative sentiment, a more positive PS indicates more positive sentiment, and a PS close to 0 indicates neutral sentiment.
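A toy lexicon-based polarity scorer illustrates how a PS normalized to the range −1 to 1 can be produced. The word lists and scoring formula are purely illustrative assumptions; a production sentiment analyzer would use a full lexicon or a trained model:

```python
# Hypothetical sentiment lexicons for illustration only.
POSITIVE = {"great", "thanks", "resolved", "happy", "good"}
NEGATIVE = {"frustrated", "broken", "delay", "bad", "unhappy"}

def polarity_score(text):
    """Toy lexicon-based polarity score (PS), normalized to [-1, 1].

    PS near 1 indicates positive sentiment, near -1 negative, near 0 neutral.
    """
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(polarity_score("thanks the issue is resolved"))  # 1.0 (all sentiment words positive)
print(polarity_score("still broken and frustrated"))   # -1.0 (all sentiment words negative)
```

Text containing no lexicon words scores 0.0, corresponding to neutral sentiment.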
In certain embodiments, message lines of the client in the incident ticket are used for calculating the PS. In certain embodiments, the sentiment analyzer maintains the sentiment analysis results based on previous communication messages of the client contained in the incident ticket. When there is a new client communication message in the incident ticket, the sentiment analyzer can perform a "delta" sentiment analysis based on the new client communication message only. Then, the sentiment analyzer can calculate the PS by aggregating the results of such "delta" sentiment analysis with sentiment analysis results based on previous communication messages.
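The "delta" aggregation described above can be sketched as a running average over per-message scores, so that only the new message is analyzed when the ticket is updated. The class and method names are assumptions, and the score function is pluggable:

```python
class DeltaSentimentAggregator:
    """Maintain a running PS; analyze only newly arrived messages ("delta").

    score_fn is any per-message polarity scorer returning a value in [-1, 1].
    A running sum and count stand in for the stored prior analysis results.
    """
    def __init__(self, score_fn):
        self.score_fn = score_fn
        self.total = 0.0   # sum of per-message polarity scores seen so far
        self.count = 0     # number of messages analyzed so far

    def add_message(self, message):
        # "Delta" analysis: score only the new message, then fold it in.
        self.total += self.score_fn(message)
        self.count += 1
        return self.polarity_score()

    def polarity_score(self):
        return self.total / self.count if self.count else 0.0

agg = DeltaSentimentAggregator(score_fn=lambda m: 1.0 if "thanks" in m else -1.0)
agg.add_message("thanks for the fix")
print(agg.add_message("still broken"))  # 0.0: average of +1.0 and -1.0
```

This avoids re-running sentiment analysis over the full message history each time the incident ticket is updated.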
In any of the examples herein, a client experience index (CEI) corresponding to an incident ticket can be determined based on the polarity score (PS) obtained through sentiment analysis of client communication messages, as described above. Determination of CEI can be performed by a CEI calculator (e.g., 660) which performs one or more calculation steps, as described below. In certain embodiments, the CEI calculator can have a rule engine, which has predefined rules and/or parameters, based on which the calculation steps are performed.
As illustrated in
To calculate the CEI 830, the priority adjusted polarity score (PPS) 820 can be further adjusted by assessing the delay past MTP 822. In certain embodiments, when the incident ticket has not been closed by MTP and there is a further delay past the MTP, the PPS can be adjusted downwardly. In other words, the rule engine assumes that the client experience will decrease when there is a prolonged delay past the MTP.
In one particular embodiment, for every X days of delay (where X can be a predefined number, e.g., X can be 1, 2, 5, 10, etc.) past the MTP, the CEI can be calculated by subtracting a positive Δ1 from PPS, where Δ1 is a predefined value which can be dependent on the priority of the incident ticket. For example, it can be predefined that for a "very high" priority incident ticket, for every X days of delay past the MTP, the CEI can be calculated by subtracting Δ1=0.20 from PPS (the subtracted Δ1 can be smaller, e.g., 0.10, for a "high" priority incident ticket). While specific examples of adjusting PPS based on the delay past MTP are described above, it is to be understood that other methods of downward adjustment of PPS based on delay past MTP can be implemented. For example, when the PPS is positive, downward adjustment of PPS can be implemented by multiplying the PPS with a coefficient that is between 0 and 1, and when the PPS is negative, downward adjustment of PPS can be implemented by multiplying the PPS with a coefficient that is greater than 1.
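The subtractive example above can be sketched as follows. The Δ1 values mirror the example (0.20 for "very high", 0.10 for "high"); the default for other priorities and the function name are assumptions of the sketch, not prescribed rules:

```python
def delay_adjusted_cei(pps, days_past_mtp, priority, x_days=5):
    """Downward-adjust the priority-adjusted polarity score (PPS) for delay past MTP.

    For every x_days of delay past the MTP, subtract a priority-dependent
    delta (Δ1) from the PPS, per the example rule-engine parameters above.
    """
    delta1_by_priority = {"very high": 0.20, "high": 0.10}
    delta1 = delta1_by_priority.get(priority, 0.05)  # assumed default for other priorities
    periods = max(0, days_past_mtp) // x_days        # completed X-day periods of delay
    return pps - delta1 * periods

# 10 days past MTP at X=5 -> two periods -> subtract 2 * 0.20 from a PPS of 0.8.
print(delay_adjusted_cei(pps=0.8, days_past_mtp=10, priority="very high"))
```

The multiplicative variant described above (a coefficient between 0 and 1 for positive PPS, greater than 1 for negative PPS) could be substituted without changing the calling code.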
In one particular embodiment, the downward adjustment of the PPS 820 can be further impacted by the resolution status 824 of the incident ticket. For example, if after X days of delay past the MTP, the incident ticket still has not been closed, then the amount of downward adjustment can be increased (e.g., the subtracted value Δ1 can be doubled, or increased by a certain percentage, etc.). In other words, the rule engine assumes that client experience will further decrease if, after a certain delay past the MTP, the incident ticket still remains open.
In addition, to calculate the CEI 830, the priority adjusted polarity score (PPS) 820 can be further downwardly adjusted if the incident ticket remains open when the MTP expires and the client receives fewer responses or communications from the service desk and/or other technical/support teams than the prescribed incident response frequency (IRF) 826. For example, when the MTP expires and the incident ticket is open, if the prescribed IRF is 10 (i.e., the client is expected to receive 10 responses or communications from the service desk and/or technical/support teams) but the client only receives 3 such responses or communications, then the PPS can be further reduced by a positive Δ2. Similar to Δ1, Δ2 can be a predefined value which is dependent on the priority of the incident ticket (e.g., Δ2 can be 0.2 for "very high" priority and 0.1 for "high" priority incident tickets). In certain embodiments, the value of Δ2 can be proportional to the difference between the actual number of responses or communications the client received and the prescribed IRF. In other words, the rule engine assumes that, if the incident ticket has not been closed by the MTP, the fewer responses or communications the client received, the worse the client experience. Likewise, while specific examples are described above, it is to be understood that other methods of downward adjustment of PPS based on IRF and MTP status can be implemented.
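One way to combine the example rules above, with Δ2 scaled by the shortfall relative to the prescribed IRF, can be sketched as follows. The proportional-scaling formula and the default delta are assumptions consistent with, but not mandated by, the description:

```python
def irf_adjusted_cei(pps, responses_received, prescribed_irf, priority,
                     mtp_expired, ticket_open):
    """Reduce PPS when the ticket is open past MTP and responses fall short of the IRF.

    Delta2 is a priority-dependent rule-engine parameter, here scaled by the
    shortfall relative to the prescribed incident response frequency (IRF).
    """
    if not (mtp_expired and ticket_open) or responses_received >= prescribed_irf:
        return pps  # no IRF-based adjustment applies
    delta2_by_priority = {"very high": 0.2, "high": 0.1}
    delta2 = delta2_by_priority.get(priority, 0.05)  # assumed default
    shortfall = (prescribed_irf - responses_received) / prescribed_irf
    return pps - delta2 * shortfall

# IRF of 10 but only 3 responses received while open past MTP: shortfall 0.7.
print(irf_adjusted_cei(0.5, responses_received=3, prescribed_irf=10,
                       priority="very high", mtp_expired=True, ticket_open=True))
```

A fixed (non-proportional) Δ2 subtraction, as in the first example above, is an equally valid variant.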
In any of the examples herein, the calculated CEIs corresponding to different incident tickets can be aggregated based on certain aggregation criteria to generate one or more aggregated CEIs, which can be included in incident report(s), as described below. Generation of aggregated CEIs can be performed by a CEI aggregator (e.g., 670) which performs one or more calculation steps, as described below. In certain embodiments, the CEI aggregator can have a rule engine, which has predefined rules and/or parameters, based on which the calculation steps are performed.
As shown in
As shown in
The average CEI 930 can be further adjusted to generate the aggregated CEI 940 for reporting purposes. Such adjustment can be performed by applying one or more weightages or factors to the average CEI 930. The weightages or factors can be predefined based on one or more attributes of the corresponding incident tickets 910.
In certain embodiments, the incident tickets 910 within the incident ticket group are related to one specific product. To generate the aggregated CEI 940, one example adjustment is to apply a deviation factor 922 to the average CEI 930 (e.g., multiplying the average CEI by the deviation factor) based on incoming variation 922 of incident tickets 910. As described herein, the incoming variation of incident tickets can be characterized by a predefined metric that measures temporal variation of the number of new incident tickets 910 within the incident ticket group. For example, the incoming variation can be simply defined as the difference between the total number of new incident tickets in the last quarter and the total number of new incident tickets in the same quarter one year ago. Other definitions of incoming variation (e.g., based on ratio, percentage difference, etc.) can also be used.
If the specific product is newly released, the deviation factor 922 can be initialized to 1. If the specific product has been used for a period of time, the deviation factor 922 can be adjusted based on the measured incoming variation of incident tickets. In certain embodiments, the deviation factor 922 is increased if there is a significant increase or decrease in the number of new incident tickets based on a trend analysis. This can be implemented by the rule engine, based on the assumption that a significant increase in the number of new incident tickets may indicate that the product has incurred more complaints, whereas a significant decrease in the number of new incident tickets may indicate that the product has been used less frequently. For example, assume that, based on a trend analysis of new incident tickets that occurred in previous periods, between 200 and 500 new incident tickets per quarter is considered a normal range. If the number of new incident tickets in the present quarter suddenly increases to 1,000 (or suddenly decreases to 50), the deviation factor 922 can be increased (e.g., doubled to 2.0) accordingly.
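The example deviation-factor rule can be sketched as follows. The "far outside the normal range" thresholds (double the upper bound, half the lower bound) are assumptions chosen to reproduce the 1,000/50 example; an actual rule engine would define its own trend-analysis criteria:

```python
def deviation_factor(new_tickets_this_quarter, normal_range=(200, 500), base=1.0):
    """Illustrative rule: double the deviation factor when incoming ticket volume
    falls far outside the normal range established by trend analysis."""
    low, high = normal_range
    significant_increase = new_tickets_this_quarter >= high * 2   # e.g., 1,000 vs 500
    significant_decrease = new_tickets_this_quarter <= low / 2    # e.g., 50 vs 200
    if significant_increase or significant_decrease:
        return base * 2.0
    return base

print(deviation_factor(1000))  # 2.0: significant increase past the normal range
print(deviation_factor(300))   # 1.0: within the normal range
```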
In certain embodiments, when the incident tickets 910 within the incident ticket group are related to one specific product, a product weightage 934 can be applied to the average CEI 930 (e.g., multiplying the average CEI by the product weightage). The product weightage 934 can be predefined based on priority or importance of the specific product. For example, the product weightage 934 can be predefined in a range from 1 to 10, where a higher value indicates higher priority and a lower value indicates lower priority of the product. By applying a product weightage 934 to the average CEI 930, the rule engine assumes that high-priority products have more impact on client experience than low-priority products.
In certain embodiments, a client weightage 936 can also be applied to the average CEI 930 (e.g., multiplying the average CEI by the client weightage) for reporting purposes. The client weightage 936 can be predefined based on priority or importance of the specific client. For example, the client weightage 936 can be predefined in a range from 1 to 10, where a higher value indicates higher priority and a lower value indicates lower priority of the client. By applying a client weightage 936 to the average CEI 930, the rule engine assumes that the client experience of high-priority clients is more important (and thus needs more attention) than that of low-priority clients.
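Putting the aggregation steps together, the average of the per-ticket CEIs can be adjusted by the deviation factor, product weightage, and client weightage. The multiplicative combination shown here mirrors the "multiplying the average CEI by" examples above, but the exact combination is an assumption of the sketch:

```python
def aggregated_cei(ceis, deviation_factor=1.0, product_weightage=1.0,
                   client_weightage=1.0):
    """Average the per-ticket CEIs, then apply the predefined adjustment factors.

    ceis: list of CEIs for the incident tickets in one aggregation group.
    The weightages are rule-engine parameters (e.g., 1-10 by priority).
    """
    if not ceis:
        return 0.0
    average = sum(ceis) / len(ceis)
    return average * deviation_factor * product_weightage * client_weightage

# Average of 0.4, 0.6, 0.8 is 0.6; a product weightage of 2.0 doubles it.
print(aggregated_cei([0.4, 0.6, 0.8], product_weightage=2.0))
```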
In any of the examples herein, a client experience reporting microservice (e.g., 146 or 240) can be configured to generate a client experience report which embeds one or more aggregated CEIs according to a predefined format, based on who receives such report. In certain embodiments, the CEI aggregator (e.g., 670) can be implemented in the client experience reporting microservice instead of the client experience analysis microservice.
As an example,
As another example,
In yet another example,
The client experience report(s) can be presented in other formats that are different from the examples shown in
In any of the examples herein, the client experience report can be delivered in electronic form and have interactive features (e.g., hyperlinks, control buttons, etc.) that allow the recipient to easily switch between different representation formats, to filter what information is displayed and what information is hidden, to allow information overview and support drilling down to details, to export the data and/or graphs into external files, etc.
In any of the examples herein, a client experience scheduling microservice (e.g., 148 or 240) can be configured to schedule the generation of client experience reports. For example, the client experience scheduling microservice can be configured so that the client experience reports are generated according to a predefined schedule (e.g., once a day, once a week, etc.) for a particular recipient. Different recipients can receive different formats of client experience reports, and/or according to different schedules. For example, a president or a CEO of the organization may want to receive a high-level report, whereas a mid-level manager may receive a more detailed client experience report.
In certain embodiments, the client experience report(s) can be generated on demand. For example, an administrator can send a command through a user interface (e.g., 122), which triggers the generation of client experience report(s).
In certain embodiments, the client experience report(s) can be automatically generated based on one or more predefined triggering events. For example, one predefined triggering event can be that an aggregated CEI corresponding to one specific client (or one specific product, one specific team, etc.) is below a predefined threshold (e.g., 0), which indicates the client experience is becoming negative. Another predefined triggering event can be that an aggregated CEI corresponding to one specific client (or one specific product, one specific team, etc.) has shown a downward trend for a period of time (e.g., decreasing by 0.5 within 1 month) based on slope analysis. Other triggering events can be similarly defined. By automatically generating client experience report(s) based on triggering events, alerts on degrading client experience (or other concerning issues about client experience) can be timely sent to responsible recipients, who can then take corrective or remedial actions.
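The two example triggering events can be sketched as a simple predicate over a history of aggregated CEI values. The end-to-end drop check is a simplified stand-in for the slope analysis mentioned above, and the parameter names are assumptions:

```python
def should_trigger_report(cei_history, threshold=0.0, drop_limit=0.5):
    """Trigger report generation on either example event: the latest aggregated
    CEI is below the threshold, or the CEI has dropped by more than drop_limit
    over the observed period (a simplified stand-in for slope analysis)."""
    if not cei_history:
        return False
    latest = cei_history[-1]
    below_threshold = latest < threshold
    downward_trend = (cei_history[0] - latest) > drop_limit
    return below_threshold or downward_trend

print(should_trigger_report([0.9, 0.6, 0.3]))  # True: dropped 0.6 over the period
print(should_trigger_report([0.5, 0.6]))       # False: trending upward, above threshold
```

A production rule engine would likely fit a regression slope over timestamped CEI samples rather than comparing endpoints.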
The client experience reports can be used in a number of ways. For example, they can be used to evaluate and identify the overall experience (positive, negative, or neutral) and/or trend of experience of a client in different phases of a product. They can be used by the management team to track client experience during different phases of client interaction with different teams, which may reveal if the transition of incident ticket from one team to another team led to improvement or degradation of client experience. Importantly, they can be used to identify gaps in services that lead to lower client experience so that remedial and/or corrective actions (e.g., providing more resources, enhanced education, etc.) can be proactively taken.
A number of advantages can be achieved via the technology described herein.
For example, the client experience microservices described herein (e.g., the database polling microservice, the client experience analysis microservice, the client experience reporting microservice, and the client experience scheduling microservice) can be implemented on top of any cloud-based analytics module (e.g., the analytics on cloud 156). Thus, these microservices can be easily integrated with existing incident management systems. For example, the client experience microservices described herein can be integrated with the incident management system of an e-commerce platform or a retail platform to assess customer satisfaction with listed products, or they can be integrated with an incident management system of an organization's enterprise software platform to assess customer experience of using various types of IT services provided by the organization.
By accessing the incident database of an incident management system, these microservices can objectively and quantitatively assess client experience in incident management, and generate comprehensive yet intuitive client experience report(s). High efficiency of assessing client experience (e.g., a simple software command can generate client experience reports for hundreds of clients involving thousands of products within a few seconds) is achieved because of the optimization of the database polling microservice (i.e., data polling is performed only when there is a need to update the CEI corresponding to an incident ticket). Thus, the technology described herein for objective and quantitative assessment of client experience in incident management can take away any guesswork, and is more accurate and efficient than conventional survey-based assessment methods.
While calculation of aggregated CEI follows a bottom-up approach (e.g., perform sentiment analysis of client communications, then determine CEI for individual incident ticket, before deriving the aggregated CEI), presenting the client experience can follow a top-down approach. For example, a client experience report can be customized to show aggregated CEI for a plurality of clients. It can be drilled down to an individual client to show the aggregated CEI for a plurality of products used by the client, and can be further drilled down to an individual product to show the aggregated CEI associated with different teams with whom the client has interactions.
Based on objective and quantitative assessment of client experience in incident management, an organization can promptly identify issues (e.g., a trend of degrading client experience, a lack of client support by a team, etc.) in incident management that otherwise would be hidden or unrecognized, and thus take corrective measures to resolve the issues. Insights gained from such client experience assessment can also help other decision makings, e.g., whether to upgrade or discontinue a particular product, whether to provide additional training or bring additional resources to a team, etc.
With reference to
A computing system 1300 can have additional features. For example, the computing system 1300 includes storage 1340, one or more input devices 1350, one or more output devices 1360, and one or more communication connections 1370, including input devices, output devices, and communication connections for interacting with a user. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing system 1300. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing system 1300, and coordinates activities of the components of the computing system 1300.
The tangible storage 1340 can be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing system 1300. The storage 1340 stores instructions for the software implementing one or more innovations described herein.
The input device(s) 1350 can be an input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, touch device (e.g., touchpad, display, or the like) or another device that provides input to the computing system 1300. The output device(s) 1360 can be a display, printer, speaker, CD-writer, or another device that provides output from the computing system 1300.
The communication connection(s) 1370 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier.
The innovations can be described in the context of computer-executable instructions, such as those included in program modules, being executed in a computing system on a target real or virtual processor (e.g., which is ultimately executed on one or more hardware processors). Generally, program modules or components include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules can be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules can be executed within a local or distributed computing system.
For the sake of presentation, the detailed description uses terms like “determine” and “use” to describe computer operations in a computing system. These terms are high-level descriptions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
Any of the computer-readable media herein can be non-transitory (e.g., volatile memory such as DRAM or SRAM, nonvolatile memory such as magnetic storage, optical storage, or the like) and/or tangible. Any of the storing actions described herein can be implemented by storing in one or more computer-readable media (e.g., computer-readable storage media or other tangible media). Any of the things (e.g., data created and used during implementation) described as stored can be stored in one or more computer-readable media (e.g., computer-readable storage media or other tangible media). Computer-readable media can be limited to implementations not consisting of a signal.
Any of the methods described herein can be implemented by computer-executable instructions in (e.g., stored on, encoded on, or the like) one or more computer-readable media (e.g., computer-readable storage media or other tangible media) or one or more computer-readable storage devices (e.g., memory, magnetic storage, optical storage, or the like). Such instructions can cause a computing device to perform the method. The technologies described herein can be implemented in a variety of programming languages.
The cloud computing services 1410 are utilized by various types of computing devices (e.g., client computing devices), such as computing devices 1420, 1422, and 1424. For example, the computing devices (e.g., 1420, 1422, and 1424) can be computers (e.g., desktop or laptop computers), mobile devices (e.g., tablet computers or smart phones), or other types of computing devices. For example, the computing devices (e.g., 1420, 1422, and 1424) can utilize the cloud computing services 1410 to perform computing operations (e.g., data processing, data storage, and the like).
In practice, cloud-based, on-premises-based, or hybrid scenarios can be supported.
Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, such manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. For example, operations described sequentially can in some cases be rearranged or performed concurrently.
Any of the following embodiments can be implemented.
Clause 1. A computer-implemented method comprising: fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry is associated with a timestamp, an event descriptor, and a prescribed target time to close the event log entry; extracting a communication message sent by the client from the event descriptor; determining a polarity score based on sentiment analysis of the communication message; determining a client experience index (CEI) based on the polarity score; saving the CEI in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry; determining an aggregated CEI based on an average of a plurality of CEIs determined for the corresponding plurality of event log entries; and outputting the aggregated CEI.
Clause 2. The method of clause 1, wherein extracting the communication message comprises searching the event descriptor based on one or more predefined tags.
Clause 3. The method of any one of clauses 1-2, wherein fetching the event log entry from the first database is triggered by a request from a user.
Clause 4. The method of any one of clauses 1-2, wherein fetching the event log entry from the first database is performed according to a predefined timetable.
Clause 5. The method of any one of clauses 1-4, wherein fetching the event log entry from the first database is enabled if the timestamp associated with the event log entry stored in the first database is different from the timestamp in the event record stored in the second database.
Clause 6. The method of any one of clauses 1-4, wherein fetching the event log entry from the first database is enabled if the event log entry is not closed by the prescribed target time.
Clause 7. The method of any one of clauses 1-6, wherein determining the CEI comprises applying a priority factor to the polarity score, wherein the priority factor is determined based on a priority indicator assigned to the event log entry by the client.
Clause 8. The method of any one of clauses 1-7, wherein determining the CEI comprises reducing the polarity score if the event log entry remains open after passing the prescribed target time for a predefined duration.
Clause 9. The method of any one of clauses 1-8, wherein determining the CEI comprises reducing the polarity score if the event log entry remains open at the prescribed target time and a number of communications sent to the client corresponding to the event log entry is less than a predefined value.
Clause 10. The method of any one of clauses 1-9, wherein determining the aggregated CEI comprises applying one or more weightages to the average of the plurality of CEIs, wherein the one or more weightages are predefined based on one or more attributes of the corresponding plurality of event log entries.
Clause 11. A computing system comprising: memory; one or more hardware processors coupled to the memory; and one or more computer readable storage media storing instructions that, when loaded into the memory, cause the one or more hardware processors to perform operations comprising: fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry is associated with a timestamp, an event descriptor, and a prescribed target time to close the event log entry; extracting a communication message sent by the client from the event descriptor; determining a polarity score based on sentiment analysis of the communication message; determining a CEI based on the polarity score; saving the CEI in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry; determining an aggregated CEI based on an average of a plurality of CEIs determined for the corresponding plurality of event log entries; and outputting the aggregated CEI.
Clause 12. The system of clause 11, wherein fetching the event log entry from the first database is triggered by a request from a user.
Clause 13. The system of clause 11, wherein fetching the event log entry from the first database is performed according to a predefined timetable.
Clause 14. The system of any one of clauses 11-13, wherein fetching the event log entry from the first database is enabled if the timestamp associated with the event log entry stored in the first database is different from the timestamp in the event record stored in the second database.
Clause 15. The system of any one of clauses 11-13, wherein fetching the event log entry from the first database is enabled if the event log entry is not closed by the prescribed target time.
Clause 16. The system of any one of clauses 11-15, wherein determining the CEI comprises applying a priority factor to the polarity score, wherein the priority factor is determined based on a priority indicator assigned to the event log entry by the client.
Clause 17. The system of any one of clauses 11-16, wherein determining the CEI comprises reducing the polarity score if the event log entry remains open after passing the prescribed target time for a predefined duration.
Clause 18. The system of any one of clauses 11-17, wherein determining the CEI comprises reducing the polarity score if the event log entry remains open at the prescribed target time and a number of communications sent to the client corresponding to the event log entry is less than a predefined value.
Clause 19. The system of any one of clauses 11-18, wherein extracting the communication message comprises searching the event descriptor based on one or more predefined tags.
Clause 20. One or more computer-readable media having encoded thereon computer-executable instructions causing one or more processors to perform a method comprising: fetching an event log entry from a first database comprising a plurality of event log entries generated by a client, wherein the event log entry comprises a timestamp, an event descriptor, and a prescribed target time to close the event log entry; extracting a communication message sent by the client from the event descriptor; determining a polarity score based on sentiment analysis of the communication message; determining a client experience index (CEI) based on the polarity score; saving the CEI in an event record in a second database, wherein the event record corresponds to the event log entry and further comprises the timestamp and the prescribed target time to close the event log entry; determining an aggregated CEI based on an average of a plurality of CEIs determined for the corresponding plurality of event log entries; and outputting the aggregated CEI; wherein fetching the event log entry from the first database is enabled if the timestamp of the event log entry stored in the first database is different from the timestamp in the event record stored in the second database or if the event log entry is not closed by the prescribed target time; and wherein determining the CEI comprises reducing the polarity score if the event log entry remains open after passing the prescribed target time for a predefined duration, or if the event log entry remains open at the prescribed target time and a number of communications sent to the client corresponding to the event log entry is less than a predefined value.
The technologies from any example can be combined with the technologies described in any one or more of the other examples. In view of the many possible embodiments to which the principles of the disclosed technology can be applied, it should be recognized that the illustrated embodiments are examples of the disclosed technology and should not be taken as a limitation on the scope of the disclosed technology. Rather, the scope of the disclosed technology includes what is covered by the scope and spirit of the following claims.
Inventors: Rahul Tiwari; Devashish Biswas
Assigned to SAP SE by Rahul Tiwari (Apr. 30, 2021) and Devashish Biswas (May 2, 2021); application filed May 7, 2021.