Systems and methods for classifying a dialogue act in a chat log are provided. Each word of the dialogue act is mapped to a word vector representation. An utterance vector representation of the dialogue act is computed based on the word vector representations. An additional utterance vector representation of the dialogue act is computed based on the utterance vector representation. The additional utterance vector representation is mapped to a classification of the dialogue act.
1. A computer implemented method for classifying a dialogue act in a chat log, comprising:
mapping each word of the dialogue act to a word vector representation;
computing an utterance vector representation of the dialogue act based on the word vector representations using a bidirectional long short-term memory (LSTM) architecture;
computing an additional utterance vector representation of the dialogue act based on the utterance vector representation, the computing the additional utterance vector representation comprising:
applying a skip connection between the dialogue act of a participant and an immediately prior dialogue act of the same participant; and
computing the additional utterance vector representation based on an utterance vector representation of the skip connection to the immediately prior dialogue act of the same participant;
mapping the additional utterance vector representation, with a directed-acyclic-graph long short-term memory network (DAG-LSTM), to a classification of the dialogue act; and
outputting the classification of the dialogue act.
9. A non-transitory computer readable medium storing computer program instructions for classifying a dialogue act in a chat log, the computer program instructions when executed by a processor cause the processor to perform operations comprising:
mapping each word of the dialogue act to a word vector representation;
computing an utterance vector representation of the dialogue act based on the word vector representations using a bidirectional long short-term memory (LSTM) architecture;
computing an additional utterance vector representation of the dialogue act based on the utterance vector representation, the computing the additional utterance vector representation comprising:
applying a skip connection between the dialogue act of a participant and an immediately prior dialogue act of the same participant; and
computing the additional utterance vector representation based on an utterance vector representation of the skip connection to the immediately prior dialogue act of the same participant;
mapping the additional utterance vector representation, with a directed-acyclic-graph long short-term memory network (DAG-LSTM), to a classification of the dialogue act; and
outputting the classification of the dialogue act.
5. An apparatus comprising:
a processor; and
a memory to store computer program instructions for classifying a dialogue act in a chat log, the computer program instructions when executed on a neural network of the processor cause the processor to perform operations comprising:
mapping each word of the dialogue act to a word vector representation;
computing an utterance vector representation of the dialogue act based on the word vector representations using a bidirectional long short-term memory (LSTM) architecture;
computing an additional utterance vector representation of the dialogue act based on the utterance vector representation, the computing the additional utterance vector representation comprising:
applying a skip connection between the dialogue act of a participant and an immediately prior dialogue act of the same participant; and
computing the additional utterance vector representation based on an utterance vector representation of the skip connection to the immediately prior dialogue act of the same participant;
mapping the additional utterance vector representation, with a directed-acyclic-graph long short-term memory network (DAG-LSTM), to a classification of the dialogue act; and
outputting the classification of the dialogue act.
2. The computer implemented method of
computing the additional utterance vector representation based on utterance vector representations of all prior dialogue acts in the chat log.
3. The computer implemented method of
computing the additional utterance vector representation of the dialogue act using a modified tree long short-term memory (LSTM) based architecture.
4. The computer implemented method of
6. The apparatus of
computing the additional utterance vector representation based on utterance vector representations of all prior dialogue acts in the chat log.
7. The apparatus of
computing the additional utterance vector representation of the dialogue act using a modified tree long short-term memory (LSTM) based architecture.
8. The apparatus of
10. The non-transitory computer readable medium of
computing the additional utterance vector representation based on utterance vector representations of all prior dialogue acts in the chat log.
11. The non-transitory computer readable medium of
computing the additional utterance vector representation of the dialogue act using a modified tree long short-term memory (LSTM) based architecture.
12. The non-transitory computer readable medium of
This application claims the benefit of U.S. Provisional Application No. 63/016,601, filed Apr. 28, 2020, the disclosure of which is herein incorporated by reference in its entirety.
The present invention relates generally to classification of dialogue acts, and in particular to classification of dialogue acts in group chats with DAG-LSTMs (directed-acyclic-graph long short-term memory).
A dialogue act is an utterance in conversational dialogue that serves a function in the dialogue. Examples of a dialogue act include a question, an answer, a request, or a suggestion. Classification of dialogue acts is an important task for workflow automation and conversational analytics. Conventionally, machine learning techniques have been applied for classification of dialogue acts. Such conventional machine learning techniques typically predict classifications of dialogue acts based on textual content of the dialogue acts, the user who generated the dialogue acts, and contextual information of the dialogue acts.
With the increasing prevalence of chat and messaging applications, classification of dialogue acts in group chats is of particular importance. However, classification of dialogue acts in group chats has a number of challenges. Group chats may include multiple participants simultaneously conversing, leading to entanglements of utterances. Further, unlike spoken conversations, written conversations do not have any prosodic cues. In addition, due to the informal nature of group chats, they tend to include domain-specific jargon, abbreviations, and emoticons. Accordingly, group chats do not include sufficient information for classification of dialogue acts using conventional techniques.
In accordance with one or more embodiments, systems and methods for classifying a dialogue act in a chat log are provided. Each word of the dialogue act is mapped to a word vector representation. An utterance vector representation of the dialogue act is computed based on the word vector representations. An additional utterance vector representation of the dialogue act is computed based on the utterance vector representation. The additional utterance vector representation is mapped to a classification of the dialogue act.
In one embodiment, the additional utterance vector representation is computed based on utterance vector representations of all prior dialogue acts in the chat log and an utterance vector representation of an immediately prior dialogue act of the same participant as the dialogue act.
In one embodiment, the utterance vector representation is computed using a bidirectional long short-term memory (LSTM) architecture and the additional utterance vector representation is computed using a modified tree long short-term memory (LSTM) based architecture.
In one embodiment, the chat log is a transcript of a conversation between a plurality of participants.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
Embodiments described herein provide for the classification of dialogue acts in a chat log using directed-acyclic-graph long short-term memory networks (DAG-LSTMs). Such DAG-LSTMs are implemented with skip connections to incorporate contextual information from all prior dialogue acts in the chat log and from prior dialogue acts of the same participant. Advantageously, by incorporating such contextual information, embodiments described herein provide for the classification of dialogue acts with higher accuracy as compared to conventional approaches.
Formally, a chat log U comprises a set of dialogue acts (or utterances) {u_k}_{k=1}^K of a chat session, where each dialogue act u_k comprises a set of words {w_t^k}_{t=1}^T. Each dialogue act u_k is generated by one of the participants p_k ∈ P, where P denotes the set of participants in the chat session. Given the set of dialogue acts {u_k}_{k=1}^K, embodiments described herein assign each dialogue act u_k a classification y_k ∈ Y, where Y denotes the set of predefined classifications.
The classification of dialogue acts u_k is formulated as a sequence modeling task solved using a variant of Tree-LSTMs. First, each word w_t^k of a dialogue act u_k is mapped to a dense fixed-size word vector representation ω_t^k. Then, an utterance vector representation v_k of the dialogue act u_k is computed using an utterance model based on the set of word vector representations {ω_t^k}_{t=1}^T. Next, an additional utterance vector representation ϕ_k of the dialogue act u_k is computed using a conversation model based on the utterance vector representation v_k of the dialogue act u_k, all previously computed utterance vector representations {v_j}_{j=1}^k, and the prior utterance vector representation from the same participant, thereby contextualizing dialogue act u_k and summarizing the state of the conversation at that point in time. Finally, the additional utterance vector representation ϕ_k is mapped to a classification y_k using a classifier to classify the dialogue act.
In summary, classification of dialogue acts uk is performed based on the following operations:
ω_t^k = WordLookup(w_t^k)
v_k = UtteranceModel({ω_t^k}_{t=1}^T)
ϕ_k = ConversationModel({v_j}_{j=1}^k)
y_k = Classifier(ϕ_k)
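For illustration, the four operations above may be sketched end to end as follows. The utterance and conversation models are passed in as callables, and all names here (e.g. `classify_dialogue_acts`, the zero-vector fallback for unknown words) are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def word_lookup(word, embeddings, dim=50):
    # Map a word to its dense vector; unknown words fall back to a zero vector
    # (an assumption for this sketch).
    return embeddings.get(word, np.zeros(dim))

def classify_dialogue_acts(chat_log, embeddings, utterance_model,
                           conversation_model, W_y, b_y):
    """chat_log: list of dialogue acts, each a list of words."""
    utterance_vecs = []
    labels = []
    for utterance in chat_log:
        omegas = [word_lookup(w, embeddings) for w in utterance]  # WordLookup
        v_k = utterance_model(omegas)                             # UtteranceModel
        utterance_vecs.append(v_k)
        phi_k = conversation_model(utterance_vecs)                # ConversationModel
        logits = W_y @ phi_k + b_y
        y_hat = np.exp(logits) / np.exp(logits).sum()             # softmax classifier
        labels.append(int(np.argmax(y_hat)))
    return labels
```

Any utterance model (e.g. the BiLSTM below) and conversation model (e.g. the DAG-LSTM below) can be plugged into this skeleton.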
At step 202, a chat log U comprising one or more dialogue acts {uk}k=1K, is received. As shown in framework 100 of
Steps 204-210 of
At step 204, each word wtk of the particular dialogue act uk is mapped to a word vector representation ωtk. For example, in framework 100 of
A bidirectional LSTM is used to represent the particular dialogue act. Accordingly, let lstm({x_j}_{j=1}^t) be recursively defined as follows:
lstm({x_j}_{j=1}^t) = step_lstm(x_t, lstm({x_j}_{j=1}^{t−1}))  (Equation 1)
(h_t, c_t) = step_lstm(x_t, (h_{t−1}, c_{t−1}))  (Equation 2)
where the step function is defined such that:
i_t = sigmoid(W_ix x_t + W_ih h_{t−1} + W_ic c_{t−1} + b_i)  (Equation 3)
f_t = sigmoid(W_fx x_t + W_fh h_{t−1} + W_fc c_{t−1} + b_f)  (Equation 4)
o_t = sigmoid(W_ox x_t + W_oh h_{t−1} + W_oc c_t + b_o)  (Equation 5)
g_t = tanh(W_cx x_t + W_ch h_{t−1} + b_c)  (Equation 6)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t  (Equation 7)
h_t = o_t ⊙ tanh(c_t)  (Equation 8)
where the W matrices are weight matrices and the b vectors are bias vectors; i_t, f_t, and o_t are the input, forget, and output gates, respectively, and ⊙ denotes the elementwise product.
When the recurrence is defined in terms of the past (as in Equations 1-8), the LSTM is a forward directed LSTM, denoted lstm^f. Alternatively, the LSTM may be defined in terms of the future, referred to as a backward directed LSTM and denoted lstm^b:
lstm^b({x_j}_{j=t}^T) = step_lstm(x_t, lstm^b({x_j}_{j=t+1}^T))  (Equation 9)
(h_t^b, c_t^b) = step_lstm(x_t, (h_{t+1}^b, c_{t+1}^b))  (Equation 10)
Concatenating the forward hidden state h_t^f and the backward hidden state h_t^b results in a contextualized representation h_t of word w_t^k inside the dialogue act u_k:
(h_t^f, c_t^f) = lstm^f({ω_j}_{j=1}^t)  (Equation 11)
(h_t^b, c_t^b) = lstm^b({ω_j}_{j=t}^T)  (Equation 12)
h_t = [h_t^f; h_t^b]  (Equation 13)
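A minimal numpy sketch of this bidirectional pass (Equations 1-13), assuming randomly initialized parameters and omitting the peephole terms (W_ic, W_fc, W_oc) of Equations 3-5 for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_lstm(d_in, d_h):
    """Random parameters for Equations 3-8 (peephole terms omitted)."""
    p = {}
    for gate in ("i", "f", "o", "g"):
        p[f"W{gate}x"] = rng.standard_normal((d_h, d_in)) * 0.1
        p[f"W{gate}h"] = rng.standard_normal((d_h, d_h)) * 0.1
        p[f"b{gate}"] = np.zeros(d_h)
    return p

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(p, x_t, state):
    h_prev, c_prev = state
    i = sigmoid(p["Wix"] @ x_t + p["Wih"] @ h_prev + p["bi"])  # input gate
    f = sigmoid(p["Wfx"] @ x_t + p["Wfh"] @ h_prev + p["bf"])  # forget gate
    o = sigmoid(p["Wox"] @ x_t + p["Woh"] @ h_prev + p["bo"])  # output gate
    g = np.tanh(p["Wgx"] @ x_t + p["Wgh"] @ h_prev + p["bg"])  # candidate
    c = f * c_prev + i * g                                     # Equation 7
    h = o * np.tanh(c)                                         # Equation 8
    return h, c

def bilstm(p_fwd, p_bwd, xs, d_h):
    """Return contextualized representations [h_t^f ; h_t^b] (Equation 13)."""
    h, c = np.zeros(d_h), np.zeros(d_h)
    fwd = []
    for x in xs:                    # forward pass, Equation 11
        h, c = lstm_step(p_fwd, x, (h, c))
        fwd.append(h)
    h, c = np.zeros(d_h), np.zeros(d_h)
    bwd = []
    for x in reversed(xs):          # backward pass, Equation 12
        h, c = lstm_step(p_bwd, x, (h, c))
        bwd.append(h)
    bwd.reverse()
    return [np.concatenate([hf, hb]) for hf, hb in zip(fwd, bwd)]
```

Each output vector has twice the hidden size, since forward and backward states are concatenated.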
At step 206, an utterance vector representation vk of the particular dialogue act uk is computed based on the word vector representations ωtk. For example, as shown in framework 100 of
The contextualized representations h_t are (affinely) transformed into a feature space, which is then max-pooled across all the words in the dialogue act:
z_t = W_u h_t + b_u  (Equation 14)
v = max({z_t}_{t=1}^T)  (Equation 15)
where max denotes the elementwise maximum across multiple vectors, W_u is a weight matrix, and b_u is a bias vector. At the end of this operation, a single fixed-size utterance vector representation v represents the dialogue act u = {w_t^k}_{t=1}^T.
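This pooling step may be sketched in one function, assuming a plain affine map with no nonlinearity (the exact form of the transform is not spelled out here):

```python
import numpy as np

def utterance_vector(H, W_u, b_u):
    """H: list of contextualized word vectors h_t for one dialogue act.
    Affinely transform each h_t, then take the elementwise maximum across
    all words, yielding a single fixed-size utterance vector."""
    Z = np.stack([W_u @ h + b_u for h in H])  # shape: (T, d_out)
    return Z.max(axis=0)                      # elementwise max pooling
```

The output size is fixed by W_u regardless of the number of words T in the dialogue act.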
At step 208, an additional utterance vector representation ϕk of the particular dialogue act uk is computed based on the utterance vector representation vk of the particular dialogue act uk. In one embodiment, the additional utterance vector representation is also computed based on utterance vector representations of all prior dialogue acts in the chat log and an utterance vector representation of an immediately prior dialogue act of the same participant. For example, as shown in framework 100 of
One approach for computing the additional utterance vector representation from the set of utterance vector representations {v_j}_{j=1}^k is to use another LSTM model and feed the contextualized (given the history of past dialogue acts) utterance vector representations to a final classifier layer as follows:
(ϕ_k, γ_k) = lstm_v({v_i}_{i=1}^k)  (Equation 16)
ŷ_k = softmax(W_y ϕ_k + b_y)  (Equation 17)
y_k = argmax ŷ_k  (Equation 18)
where W_y is a weight matrix, b_y is a bias vector, and ŷ_k denotes the predicted probability distribution over the classification set Y.
In this approach, a conversation would be represented as a flat sequence of utterances with no information about which dialogue act is generated by which participant. To address this, skip connections are added between consecutive dialogue acts generated by the same participant. Accordingly, a dialogue act may have two antecedents: 1) all past dialogue acts, and 2) the past dialogue act from the same participant. The model can thus build up a user history and link each dialogue act to a user's particular history within a conversation. Dialogue acts from the same participant are also closer in the computation graph.
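One way to sketch this graph construction, under the reading that conversation-wide context reaches a dialogue act through a chain edge to the immediately preceding act (and thus, transitively, all prior acts), plus the same-participant skip edge. The function and edge-type names are illustrative assumptions:

```python
def build_dag(participants):
    """participants[k] = author of dialogue act k. Returns children[k]: the
    antecedent nodes of act k, each tagged with an edge type. 'conv' is the
    conversation chain edge to act k-1; 'same' is the skip connection to the
    same participant's previous act (omitted when it is already act k-1)."""
    last_seen = {}
    children = []
    for k, p in enumerate(participants):
        edges = []
        if k > 0:
            edges.append((k - 1, "conv"))      # conversation chain edge
        prev = last_seen.get(p)
        if prev is not None and prev != k - 1:
            edges.append((prev, "same"))       # same-participant skip connection
        children.append(edges)
        last_seen[p] = k
    return children
```

For a conversation A, B, A the third act gets both a chain edge to the second act and a skip edge back to the first.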
Accordingly, Tree-LSTM equations are utilized. Let tlstm({x_η'}_{η'∈Sp(η)}) denote a Tree-LSTM, where η is a node in a given tree or graph, Sp(η) denotes the index set of the subtree (subgraph) spanned by η, and {x_η'}_{η'∈Sp(η)} denotes the nodes spanned by η. Then, tlstm is recursively defined in terms of the children of η, denoted ch(η), as follows:
tlstm({x_η'}_{η'∈Sp(η)}) = step_tlstm(x_η, ∪_{η'∈ch(η)} tlstm({x_η''}_{η''∈Sp(η')}))  (Equation 19)
(h_η, c_η) = step_tlstm(x_η, ∪_{η'∈ch(η)} (h_η', c_η'))  (Equation 20)
where the step function is defined such that:
i_η = sigmoid(W_ix x_η + Σ_{η'∈ch(η)} W_ih^{e(η',η)} h_η' + b_i)  (Equation 21)
f_{ηη'} = sigmoid(W_fx x_η + Σ_{η''∈ch(η)} W_fh^{e(η',η),e(η'',η)} h_η'' + b_f)  (Equation 22)
o_η = sigmoid(W_ox x_η + Σ_{η'∈ch(η)} W_oh^{e(η',η)} h_η' + b_o)  (Equation 23)
g_η = tanh(W_gx x_η + Σ_{η'∈ch(η)} W_gh^{e(η',η)} h_η' + b_g)  (Equation 24)
c_η = i_η ⊙ g_η + Σ_{η'∈ch(η)} f_{ηη'} ⊙ c_η'  (Equation 25)
h_η = o_η ⊙ tanh(c_η)  (Equation 26)
where e(η′,η) ∈ E denotes the edge type (or label) that connects η′ to η. In general, E can be an arbitrary fixed-size set. In one embodiment, E is of size two: 1) edges that connect all prior dialogue acts to a current dialogue act, and 2) edges that connect an immediately prior dialogue act from the same participant to the current dialogue act. Since the weights are parameterized by the edge types e(η′,η), the contributions of past dialogue acts and of past dialogue acts from the same participant are computed differently.
It is noted that Tree-LSTM equations are applied even though the computation graphs are not trees but directed acyclic graphs (DAGs), since each node feeds into two parents (i.e., a next dialogue act and a next dialogue act from the same participant).
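Because chat order is a topological order of this DAG, node states can be computed in a single forward pass, memoizing each node's (h, c). The `run_dag` helper and its callback signature are illustrative:

```python
def run_dag(xs, children, step_fn):
    """xs[k]: input vector for node k; children[k]: list of
    (child_index, edge_type) pairs; step_fn(x, child_states, edge_types)
    returns the node state (h, c). Nodes are processed in chat order,
    which is a topological order of the DAG, so every child's state is
    available before its parents are visited."""
    states = []
    for k, x in enumerate(xs):
        ch_states = [states[j] for j, _ in children[k]]
        edge_types = [e for _, e in children[k]]
        states.append(step_fn(x, ch_states, edge_types))
    return states
```

Any node-update function, such as the DAG-LSTM step sketched below, fits this interface.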
Since each node cell c_η contributes additively to two other cells c_η' and c_η'', recursively unfolding Equation 25 for c_sink, the cell corresponding to the last dialogue act in the chat log, gives exponentially many additive terms of c_η in the length of the shortest path from η to the sink. This very quickly causes state explosions as the conversation grows longer, which was experimentally confirmed. To address this, Equation 25 is modified as follows:
c_η = i_η ⊙ g_η + max_{η'∈ch(η)} f_{ηη'} ⊙ c_η'  (Equation 27)
where max denotes the elementwise maximum over multiple vectors, which effectively picks (in an elementwise fashion) a path through either one of the children. Thus, cell growth is at worst linear in the conversation length. Since the modified equations are more appropriate for DAGs than Tree-LSTMs are, the modified model is referred to as a DAG-LSTM.
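A hedged numpy sketch of one DAG-LSTM node update with edge-typed hidden-state weights (in the spirit of Equations 21-24) and the elementwise-max cell combination (Equation 27). The forget gate here is simplified to depend only on its own child rather than the full pairwise edge-typed sum of Equation 22, and the parameter-dictionary layout is an assumption of this sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dag_lstm_step(p, x, child_states, edge_types):
    """One DAG-LSTM node update. child_states: list of (h, c) pairs for the
    node's children; edge_types: the edge label for each child. Hidden-state
    weight matrices are indexed by edge type, e.g. p["Wih", "conv"]."""
    d_h = p["bi"].shape[0]
    zero = np.zeros(d_h)
    pairs = list(zip(child_states, edge_types))
    sum_i = sum((p["Wih", e] @ h for (h, _), e in pairs), zero)
    sum_o = sum((p["Woh", e] @ h for (h, _), e in pairs), zero)
    sum_g = sum((p["Wgh", e] @ h for (h, _), e in pairs), zero)
    i = sigmoid(p["Wix"] @ x + sum_i + p["bi"])
    o = sigmoid(p["Wox"] @ x + sum_o + p["bo"])
    g = np.tanh(p["Wgx"] @ x + sum_g + p["bg"])
    c = i * g
    if pairs:
        # Per-child forget gate (simplified from Equation 22), then the
        # elementwise max of gated child cells replaces the sum (Equation 27).
        gated = [sigmoid(p["Wfx"] @ x + p["Wfh", e] @ h + p["bf"]) * c_child
                 for (h, c_child), e in pairs]
        c = i * g + np.max(np.stack(gated), axis=0)
    h = o * np.tanh(c)
    return h, c
```

Because the child contribution enters through a max rather than a sum, repeated unfolding does not multiply the number of additive cell terms.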
At step 210, the additional utterance vector representation ϕk is mapped to a classification yk of the particular dialogue act uk. For example, as shown in framework 100 of
The additional utterance vector representation is mapped to a classification as follows:
(ϕ_k, γ_k) = daglstm_v({v_i}_{i=1}^k)  (Equation 28)
ŷ_k = softmax(W_y ϕ_k + b_y)  (Equation 29)
At step 212, it is determined whether there are any dialogue acts remaining in the chat log. If it is determined that there is at least one dialogue act remaining in the chat log at step 212, method 200 returns to step 204 and steps 204-212 are repeated using the next dialogue act in the chat log as the particular dialogue act. Accordingly, steps 204-212 are repeatedly performed for each dialogue act in the chat log.
If it is determined that there are not any dialogue acts remaining in the chat log at step 212, method 200 proceeds to step 214, where the classifications of the dialogue acts are output. For example, the classifications of the dialogue acts can be output by displaying the classifications of the dialogue acts on a display device of a computer system, storing the classifications of the dialogue acts on a memory or storage of a computer system, or transmitting the classifications of the dialogue acts to a remote computer system.
While embodiments described herein are described for classification of dialogue acts, it should be understood that embodiments described herein may also be utilized for classification of emotions, sentiment analysis, thread disentanglement, or any other aspect of dialog modeling.
Embodiments described herein were experimentally validated and compared with four baseline models. The first baseline model utilized convolutional neural networks (CNNs) for both dialogue act and context representation. The second baseline model utilized bidirectional LSTMs (BiLSTMs) for dialogue act representation and LSTMs for context representation. The third baseline model utilized CNNs for dialogue act representation and LSTMs for context representation. The fourth baseline model utilized BiLSTMs for dialogue act representation and had no context representation. Embodiments described herein were implemented with BiLSTMs for dialogue act representation and DAG-LSTMs for context representation. BiLSTMs were not utilized for context representation because such architectures are not suitable for live systems.
The evaluation of embodiments described herein against the baseline models was performed on a dataset comprising conversations from an online version of a game in which trade negotiations were carried out in a chat interface. The dataset comprises over 11,000 utterances from 41 games annotated for various tasks, such as anaphoric relations, discourse units, and dialog acts. For the experimental evaluation, only the dialog act annotations were utilized. The dataset comprises six different dialogue acts, but one of those dialogue acts, named Preference, had very low prevalence (only 8 dialogue acts) and was therefore excluded from the evaluation.
The dataset was randomly split into three groups: a training group (29 games with 8,250 dialog acts), a dev group (4 games with 851 dialog acts), and a test group (8 games with 2,329 dialog acts). The dialog acts were tokenized using the Stanford PTBTokenizer and the tokens were represented by GloVe (Global Vectors for Word Representation) embeddings.
The Adam optimizer in the stochastic gradient descent setting was used to train all models. A patience value of 15 epochs was used (i.e., training was stopped after not observing an improvement on the validation data for 15 epochs) and each model was trained for a maximum of 300 epochs. The best iteration was selected based on the validation macro-F1 score. All models were hyperparameter-tuned on validation set macro-F1 using simple random search. A total of 100 experiments were performed to evaluate random hyperparameter candidates based on the following distributions (whenever applicable to a particular architecture):
Learning rate ~ 10^Uniform(−1, −3)
Dropout rate ~ Uniform(0, 0.5)
Word dropout rate ~ Uniform(0, 0.3)
Word vector update mode ~ Uniform{fixed, fine-tune}
#Units in utterance layer ~ Uniform{50, 75, 100, 200}
#Units in conversation layer ~ Uniform{50, 75, 100, 200}
#Filters in CNNs ~ Uniform{50, 75, 100, 200}
Window size for CNNs ~ Uniform{2, 3, 4}
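The random-search sampling above can be reproduced as follows; the dictionary keys are illustrative:

```python
import random

def sample_hyperparameters(rng=random):
    """Draw one random-search candidate from the distributions listed above."""
    return {
        "learning_rate": 10 ** rng.uniform(-3, -1),
        "dropout": rng.uniform(0.0, 0.5),
        "word_dropout": rng.uniform(0.0, 0.3),
        "word_vector_mode": rng.choice(["fixed", "fine-tune"]),
        "utterance_units": rng.choice([50, 75, 100, 200]),
        "conversation_units": rng.choice([50, 75, 100, 200]),
        "cnn_filters": rng.choice([50, 75, 100, 200]),  # CNN architectures only
        "cnn_window": rng.choice([2, 3, 4]),            # CNN architectures only
    }

# 100 candidates, matching the number of experiments described above.
candidates = [sample_hyperparameters() for _ in range(100)]
```

Each candidate is then trained and scored on validation macro-F1, and the best candidate is kept.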
The owners of the dataset utilized for the experimental validation presented results using CRFs (conditional random fields) on a preliminary version of the dataset, which included dialog acts from only 10 games. Their CRF model was reported to achieve 83% accuracy and a 73% macro-F1 score. Though these results are not directly comparable with the results shown in table 400, they are presented herein for context.
Systems, apparatuses, and methods described herein may be implemented using digital circuitry, or using one or more computers using well-known computer processors, memory units, storage devices, computer software, and other components. Typically, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more mass storage devices, such as one or more magnetic disks, internal hard disks and removable disks, magneto-optical disks, optical disks, etc.
Systems, apparatus, and methods described herein may be implemented using computers operating in a client-server relationship. Typically, in such a system, the client computers are located remotely from the server computer and interact via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers.
Systems, apparatus, and methods described herein may be implemented within a network-based cloud computing system. In such a network-based cloud computing system, a server or another processor that is connected to a network communicates with one or more client computers via a network. A client computer may communicate with the server via a network browser application residing and operating on the client computer, for example. A client computer may store data on the server and access the data via the network. A client computer may transmit requests for data, or requests for online services, to the server via the network. The server may perform requested services and provide data to the client computer(s). The server may also transmit data adapted to cause a client computer to perform a specified function, e.g., to perform a calculation, to display specified data on a screen, etc. For example, the server may transmit a request adapted to cause a client computer to perform one or more of the steps or functions of the methods and workflows described herein, including one or more of the steps or functions of
Systems, apparatus, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor; and the method and workflow steps described herein, including one or more of the steps or functions of
A high-level block diagram of an example computer 902 that may be used to implement systems, apparatus, and methods described herein is depicted in
Processor 904 may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer 902. Processor 904 may include one or more central processing units (CPUs), for example. Processor 904, data storage device 912, and/or memory 910 may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs).
Data storage device 912 and memory 910 each include a tangible non-transitory computer readable storage medium. Data storage device 912, and memory 910, may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices.
Input/output devices 908 may include peripherals, such as a printer, scanner, display screen, etc. For example, input/output devices 908 may include a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer 902.
Any or all of the systems and apparatus discussed herein may be implemented using one or more computers such as computer 902.
One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and that
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
Zhang, Haimin, Irsoy, Ozan, Gosangi, Rakesh, Wei, Mu-Hsin, Lund, Peter John, Pappadopulo, Duccio, Fahy, Brendan Michael, Nephytou, Neophytos, Diaz, Camilo Ortiz
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Apr 19 2021 | Bloomberg Finance L.P. | (assignment on the face of the patent) | / | |||