A method of identifying a root cause in a distributed computing environment is provided including collecting metric data of a plurality of application components, collecting metric correlation relationship data, collecting topology relationship data, and collecting transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction. A graph is generated including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the topology relationship data, and the transaction tracking relationship data. The nodes of the graph are traversed in an order based on a bi-directional weight for each of a plurality of edges connecting neighboring nodes among the plurality of nodes. A recommendation list is generated including at least one abnormal application component. The recommendation includes an instruction to repair the abnormal application component acting as a system bottleneck in the distributed computing environment.
7. A method of identifying a root cause in a distributed computing environment, comprising:
collecting metric data generated by each of a plurality of application components;
collecting metric correlation relationship data that indicates a relationship between the metric data generated by the application components;
collecting physical topology relationship data that indicates a spatial relationship between the application components;
collecting transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction;
generating a graph including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the physical topology relationship data, and the transaction tracking relationship data;
calculating a bi-directional weight for each of a plurality of edges connecting neighboring nodes among the plurality of nodes based on a rate of occurrence of abnormal events detected between pairs of neighboring nodes among the plurality of nodes;
traversing the plurality of nodes in the graph in an order based on the bi-directional weight for each of the plurality of edges connecting neighboring nodes among the plurality of nodes;
identifying target nodes from among the plurality of nodes that correspond to application components having a throughput that is below a predefined threshold;
traversing a plurality of paths along the graph that include the target nodes to identify at least one node corresponding to an abnormal application component; and
generating a recommendation including the at least one abnormal application component, wherein the at least one abnormal application component acts as a system bottleneck in the distributed computing environment,
wherein the application components are a plurality of computers in the distributed computing environment, and
wherein at least some of the plurality of computers represented in the physical topology relationship data are communicatively coupled to perform the requested transaction.
1. A method of identifying a root cause in a distributed computing environment, comprising:
collecting metric data generated by each of a plurality of application components;
collecting metric correlation relationship data that indicates a relationship between the metric data generated by the application components;
collecting physical topology relationship data that indicates a spatial relationship between the application components;
collecting transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction;
generating a graph including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the physical topology relationship data, and the transaction tracking relationship data;
calculating a bi-directional weight for each of a plurality of edges connecting neighboring nodes among the plurality of nodes based on a rate of occurrence of abnormal events detected between pairs of neighboring nodes among the plurality of nodes;
traversing the plurality of nodes in the graph in an order based on the bi-directional weight for each of the plurality of edges connecting neighboring nodes among the plurality of nodes;
identifying target nodes from among the plurality of nodes that correspond to application components having a response time that is above a predefined threshold;
traversing a plurality of paths along the graph that include the target nodes to identify at least one node corresponding to an abnormal application component; and
generating a recommendation including the at least one abnormal application component, wherein the recommendation includes an instruction to repair the at least one abnormal application component, and wherein the at least one abnormal application component acts as a system bottleneck in the distributed computing environment,
wherein the application components are a plurality of computers in the distributed computing environment, and
wherein at least some of the plurality of computers represented in the physical topology relationship data are communicatively coupled to perform the requested transaction.
13. A computer system configured to identify a root cause in a distributed computing environment, the system comprising:
a memory storing a computer program; and
a processor configured to execute the computer program, wherein the computer program is configured to:
collect metric data generated by each of a plurality of application components;
collect metric correlation relationship data that indicates a relationship between the metric data generated by the application components;
collect physical topology relationship data that indicates a spatial relationship between the application components;
collect transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction;
generate a graph including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the physical topology relationship data, and the transaction tracking relationship data;
calculate a bi-directional weight for each of a plurality of edges connecting neighboring nodes among the plurality of nodes based on a rate of occurrence of abnormal events detected between pairs of neighboring nodes among the plurality of nodes;
traverse the plurality of nodes in the graph in an order based on the bi-directional weight for each of the plurality of edges connecting neighboring nodes among the plurality of nodes;
identify target nodes from among the plurality of nodes that correspond to application components having a response time that is above a predefined threshold;
traverse a plurality of paths along the graph that include the target nodes to identify at least one node corresponding to an abnormal application component; and
generate a recommendation including the at least one abnormal application component, wherein the recommendation includes an instruction to repair the at least one abnormal application component, and wherein the at least one abnormal application component acts as a system bottleneck in the distributed computing environment,
wherein the application components are a plurality of computers in the distributed computing environment, and
wherein at least some of the plurality of computers represented in the physical topology relationship data are communicatively coupled to perform the requested transaction.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
14. The computer system of
15. The computer system of
Exemplary embodiments of the present invention relate to root cause recommendation. More particularly, exemplary embodiments of the present invention relate to a system and method for relationship based root cause recommendation.
Computer systems, such as cloud applications, may include a distributed computing environment. Cloud applications may include a distributed dynamic environment with linked computers and servers existing in a number of geographic locations. Application problems or errors may occur at any number of the linked computers and servers. Thus, monitoring cloud applications for desired functioning may include identifying one or more root causes of problems or errors. For example, a single server in a particular geographic location may have a problem or error which may impact other servers linked to the server having the problem or error. However, in a large-scale distributed dynamic environment, a relatively large number of errors or problems may be detected. Thus, it may be difficult to identify a particular server that is demonstrating abnormal behavior and it may be difficult to prioritize individual servers or computers for maintenance or repair. Generally, identifying the root cause of a problem in a large-scale distributed dynamic environment will reduce the time elapsed between an occurrence of a problem or error and the resolution of the problem or error.
Exemplary embodiments of the present invention provide a method of identifying a root cause in a distributed computing environment including collecting metric data generated by each of a plurality of application components, collecting metric correlation relationship data that indicates a relationship between the metric data generated by the application components, collecting topology relationship data that indicates a spatial relationship between the application components, and collecting transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction. A graph is generated including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the topology relationship data, and the transaction tracking relationship data. Target nodes are identified among the plurality of nodes that correspond to application components having a response time that is above a predefined threshold. A plurality of paths along the graph that include the target nodes are traversed to identify at least one node corresponding to an abnormal application component. A recommendation list is generated including the at least one abnormal application component.
According to an exemplary embodiment of the present invention, the at least one node corresponding to the abnormal application component may be present in each of the traversed plurality of paths.
According to an exemplary embodiment of the present invention, the metric data may include at least one of a response time, a throughput, a latency, and an error count.
According to an exemplary embodiment of the present invention, the application components may be services executed by a plurality of computers in the distributed computing environment.
According to an exemplary embodiment of the present invention, the topology relationship data may further indicate a traversed path along the group of the application components taken to execute the requested transaction.
According to an exemplary embodiment of the present invention, the group of the application components used to execute the requested transaction may include some of the application components.
According to an exemplary embodiment of the present invention, the group of the application components used to execute the requested transaction may include all of the application components.
According to an exemplary embodiment of the present invention, the at least one abnormal application component may function as a system bottleneck.
Exemplary embodiments of the present invention provide a method of identifying a root cause in a distributed computing environment including collecting metric data generated by each of a plurality of application components, collecting metric correlation relationship data that indicates a relationship between the metric data generated by the application components, collecting topology relationship data that indicates a spatial relationship between the application components, and collecting transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction. A graph is generated including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the topology relationship data, and the transaction tracking relationship data. Target nodes are identified among the plurality of nodes that correspond to application components having a throughput that is below a predefined threshold. A plurality of paths along the graph that include the target nodes are traversed to identify at least one node corresponding to an abnormal application component. A recommendation list is generated including the at least one abnormal application component.
Exemplary embodiments of the present invention provide a computer system configured to identify a root cause in a distributed computing environment. The system includes a memory storing a computer program, and a processor configured to execute the computer program. The computer program performs the following steps. Collect metric data generated by each of a plurality of application components. Collect metric correlation relationship data that indicates a relationship between the metric data generated by the application components. Collect topology relationship data that indicates a spatial relationship between the application components. Collect transaction tracking relationship data that indicates a group of the application components used to execute a requested transaction. Generate a graph including a plurality of nodes corresponding to the application components by merging the metric correlation relationship data, the topology relationship data, and the transaction tracking relationship data. Identify target nodes from among the plurality of nodes that correspond to application components having a response time that is above a predefined threshold. Traverse a plurality of paths along the graph that include the target nodes to identify at least one node corresponding to an abnormal application component and generate a recommendation including the at least one abnormal application component.
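For illustration only, the overall flow summarized above (collect relationship data, merge it into a graph, identify target nodes whose response time exceeds a threshold, traverse paths through those nodes, and recommend the component common to the traversed paths) might be sketched as follows. The sketch is a non-authoritative example: the data layouts, function names, and the RESPONSE_TIME_THRESHOLD value are assumptions, and Python is used purely for illustration.

```python
# Illustrative sketch (not the patented implementation): merge relationship
# data into a graph, flag slow components, and walk paths to a candidate
# root cause. All names, layouts, and thresholds are hypothetical.
from collections import defaultdict

RESPONSE_TIME_THRESHOLD = 2.0  # seconds; hypothetical threshold

def build_graph(correlation_edges, topology_edges, transaction_edges):
    """Merge the three relationship data sets into one adjacency map."""
    graph = defaultdict(set)
    for src, dst in correlation_edges + topology_edges + transaction_edges:
        graph[src].add(dst)
        graph[dst]  # touching the key ensures destination-only nodes also appear
    return graph

def enumerate_paths(graph, target, max_depth=5):
    """Depth-limited enumeration of simple paths that start at the target node."""
    results = []
    def walk(node, path):
        if len(path) > max_depth:
            return
        results.append(path)
        for nxt in graph[node]:
            if nxt not in path:
                walk(nxt, path + [nxt])
    walk(target, [target])
    return results

def find_target_nodes(metrics):
    """Target nodes are components whose response time exceeds the threshold."""
    return {c for c, m in metrics.items()
            if m.get("response_time", 0.0) > RESPONSE_TIME_THRESHOLD}

def recommend(graph, metrics):
    """Return components present on every traversed path through a target node."""
    targets = find_target_nodes(metrics)
    paths = [p for t in targets for p in enumerate_paths(graph, t)]
    if not paths:
        return []
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    return sorted(common)  # recommendation list of candidate abnormal components
```

In this sketch, a component that appears on every traversed path through the slow (target) components becomes the recommendation, mirroring the observation above that the abnormal application component may be present in each of the traversed paths.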
The above and other features of the present invention will become more apparent by describing in detail exemplary embodiments thereof, with reference to the accompanying drawings.
Exemplary embodiments of the present invention described herein generally include identifying a root cause in a distributed computing environment. Accordingly, while the exemplary embodiments of the present invention may be susceptible to various modifications and alternative forms, specific exemplary embodiments are shown by way of example in the drawings and will herein be described in more detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention.
Exemplary embodiments of the present invention will be described more fully hereinafter with reference to the accompanying drawings. Like reference numerals may refer to like elements throughout the specification and drawings.
Exemplary embodiments of the present invention provide a method of identifying a root cause in a distributed computing environment.
The generated graph may include two data sets. A first data set may include a list of each of the edges of the distributed computing environment. A second data set may include a list of each of the servers in the distributed computing environment. Duplicate nodes included in the list of nodes may be removed. Thus, a single node may be used to identify each of the servers in the distributed computing environment. A duplicate node may be a node that shares an identical name with another identified node. That is, the same server may be identified twice and only a single node may be included in the generated graph to represent the single server.
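As a hedged illustration of the two data sets described above, the following sketch keeps every edge in a first list and deduplicates server nodes by name in a second list; the (source, destination) record layout and the server names are assumptions.

```python
# Illustrative sketch: build the edge list and the deduplicated node list.
# The (source, destination) tuple layout and server names are assumptions.
def build_data_sets(raw_edges):
    edge_list = list(raw_edges)            # first data set: every edge
    node_list = []                         # second data set: unique servers
    seen_names = set()
    for src, dst in edge_list:
        for name in (src, dst):
            if name not in seen_names:     # a duplicate shares an identical name
                seen_names.add(name)
                node_list.append(name)
    return edge_list, node_list

edges, nodes = build_data_sets([("server-A", "server-B"),
                                ("server-B", "server-A"),
                                ("server-A", "server-B")])
# nodes == ["server-A", "server-B"]; each server appears as a single node
```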
The generated graph may include both outgoing edges (e.g., an outgoing vertex) and incoming edges (e.g., an incoming vertex) between each individual server in the distributed computing environment.
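The claims recite a bi-directional weight for each edge, calculated from a rate of occurrence of abnormal events detected between pairs of neighboring nodes. One possible, purely illustrative reading is sketched below; the counting window, the rate formula, and the server names are assumptions.

```python
# Illustrative sketch: weight each directed edge by the rate of abnormal
# events observed between the two neighboring servers. The counting window
# and the rate formula are assumptions for illustration only.
def bidirectional_weights(abnormal_events, window_seconds=3600.0):
    """abnormal_events: iterable of (source, destination) pairs seen in the window."""
    counts = {}
    for src, dst in abnormal_events:
        counts[(src, dst)] = counts.get((src, dst), 0) + 1
    # One weight per direction, so the edge A->B and the edge B->A can differ.
    return {edge: count / window_seconds for edge, count in counts.items()}

weights = bidirectional_weights([("server-A", "server-B"),
                                 ("server-B", "server-A"),
                                 ("server-A", "server-B")])
# weights[("server-A", "server-B")] > weights[("server-B", "server-A")]
```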
According to exemplary embodiments of the present invention, each of the plurality of servers (e.g., servers 201, 202, 203, 204, 205, 206, 207 and 208) may be linked with or may communicate with at least one other server. For example, as illustrated in
Each of the servers (e.g., servers 201, 202, 203, 204, 205, 206, 207 and 208) of the distributed computing environment may be disposed in a different geographic location. For example, each of the servers (e.g., servers 201, 202, 203, 204, 205, 206, 207 and 208) of the distributed computing environment may be disposed in different countries or regions from each other. Distances between each of the servers (e.g., servers 201, 202, 203, 204, 205, 206, 207 and 208) may vary. Alternatively, some or all of the servers (e.g., servers 201, 202, 203, 204, 205, 206, 207 and 208) may be disposed in a same geographic location.
The collected application level or transaction level throughput and response time 301 may be evaluated by an application abnormal detector 302. The application abnormal detector 302 may determine if an application is functioning normally. For example, the application abnormal detector 302 may determine whether one or more components of the application are functioning normally or abnormally 303. If the application is found to be functioning normally, then a new collection of application level or transaction level throughput and response time 301 may be determined. This process may be repeated, as desired. For example, the process may be performed at predetermined time intervals, or a predetermined number (e.g., 2,500) of processes may be performed to detect application abnormalities, as desired. According to an exemplary embodiment of the present invention, the at least one abnormal application component may function as a system bottleneck. If an abnormality is detected, the collected throughput and/or response times may be evaluated by a recommendation analyzer 304, which may recommend a next step 311 to a user.
According to an exemplary embodiment of the present invention, the application abnormal detector 302 may be a threshold abnormality detector. For example, the application abnormal detector 302 may detect an abnormality when a throughput is below a predetermined threshold or when a response time is above a predetermined threshold. If an abnormality is detected, the collected throughput and response times may be evaluated by the recommendation analyzer 304, which may recommend a next step 311 to a user.
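A minimal sketch of such a threshold abnormality detector is shown below; the metric units and the threshold values are illustrative assumptions only.

```python
# Illustrative sketch of a threshold abnormality detector (302): an
# abnormality is flagged when throughput falls below, or response time rises
# above, a predetermined threshold. Threshold values are assumptions.
THROUGHPUT_THRESHOLD = 100.0     # requests per second (assumed)
RESPONSE_TIME_THRESHOLD = 2.0    # seconds (assumed)

def is_abnormal(throughput, response_time):
    return (throughput < THROUGHPUT_THRESHOLD
            or response_time > RESPONSE_TIME_THRESHOLD)

# Example: low throughput triggers the recommendation analyzer step.
assert is_abnormal(throughput=40.0, response_time=0.3)
assert not is_abnormal(throughput=250.0, response_time=0.3)
```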
The recommendation analyzer 304 may receive a graph 310 from a path builder 309. The path builder 309 may build a graphical representation (e.g., the graph 310) of each of the components of the application. The path builder 309 may receive transaction data 306, topology data 307 and correlation analysis (causality) data 308 from a raw data collector 305 that monitors each of these types of data. The path builder may combine the transaction data 306, the topology data 307 and the correlation analysis (causality) data 308 from the raw data collector 305 to form the graph 310 and may provide the graph to the recommendation analyzer 304. The path builder 309 will be described in more detail below with reference to
The path builder may receive topology relationship data 407 (T(k) 402), transaction tracking relationship data 406 (TT(k) 403) and metric correlation relationship data 408 (PI(k) 401). The path builder 409 may provide combined topology and transaction tracking data (T(k)+TT(k)) 404 for multivariate correlation analysis 408. The topology relationship data 407 (T(k) 402), the transaction tracking relationship data 406 (TT(k) 403) and the metric correlation relationship data 408 (PI(k) 401) may be combined 405 by the path builder 409 to generate the graph. That is, the generated graph may include the combined topology relationship data 407 (T(k) 402), transaction tracking relationship data 406 (TT(k) 403) and metric correlation relationship data 408 (PI(k) 401), which may be represented by the formula Cp(k)=PI(k)+T(k)+TT(k) 410. The topology relationship data 407 (T(k) 402), the transaction tracking relationship data 406 (TT(k) 403) and the metric correlation relationship data 408 (PI(k) 401) will be described in more detail below.
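One way to picture the combination Cp(k)=PI(k)+T(k)+TT(k) is as a union of the edge sets contributed by the three relationship data sources. The sketch below is an assumed illustration of such a merge and does not reproduce the path builder itself; the edge representation and component names are hypothetical.

```python
# Illustrative sketch: Cp(k) as the union of the edges contributed by the
# metric correlation data PI(k), the topology data T(k), and the transaction
# tracking data TT(k). The edge representation is an assumption.
def combine(pi_edges, t_edges, tt_edges):
    combined = {}
    for source_name, edges in (("PI", pi_edges), ("T", t_edges), ("TT", tt_edges)):
        for edge in edges:
            combined.setdefault(edge, set()).add(source_name)
    return combined  # edge -> which relationship data sets contributed it

cp = combine(pi_edges=[("db", "app")],
             t_edges=[("app", "web"), ("db", "app")],
             tt_edges=[("web", "app")])
# cp[("db", "app")] == {"PI", "T"}; an edge seen in several sources is kept once.
```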
According to an exemplary embodiment of the present invention, the transaction tracking relationship data 406 may indicate a traversed path along the group of the application components taken to execute the requested transaction. The topology relationship data 407 may indicate the spatial relationship between application components (e.g., the physical distance between geographic components).
According to an exemplary embodiment of the present invention, the path builder 702 may communicate with the information cube 701. The information cube 701 may store analytic and monitoring solutions. For example, the information cube 701 may store executable software for analysis and monitoring of the distributed computing environment, and the executable software may be utilized by the path builder 702. The monitoring and analysis solutions in the information cube 701 may capture workload and bottleneck dynamics of the components of the application including the distributed computing environment. Workload variations and an occurrence of bottlenecks in the application components may occur dynamically, and solutions for analyzing and monitoring the workload and bottlenecks may be learned and stored in the information cube 701. For example, predictive insight (PI) of the multivariate correlation analysis unit 709 may be increased by learning relationships between the application components and the timing of communication between the application components.
The recommendation analyzer 703 may generate a recommendation list.
The deep root cause analysis unit 707 may identify the root cause of performance degradation in the distributed computing environment. For example, the root cause of performance degradation may include a database deadlock, running out of JVM memory, or running out of a database connection pool. Thus, an individual server may be identified as not functioning as desired. The root cause identified by the deep root cause analysis unit 707 may be correlated with the degradation of throughput and/or response time to determine causality in the edges between individual servers. The deep root cause analysis unit 707 may perform dynamic code path analytics. The deep root cause analysis unit 707 may determine a particular line of code that is causing degradation in a CPU or IO consumer. However, exemplary embodiments of the present invention are not limited thereto, and any root cause analysis tool may be utilized, as desired.
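For purposes of illustration only, correlating an identified deep root cause with throughput or response-time degradation, as described above, could be approached as a simple correlation over time-aligned samples. The sketch below is an assumption about one such calculation; the deadlock indicator, the sample values, and the choice of Pearson correlation are hypothetical and are not taken from the embodiments.

```python
# Illustrative sketch: correlate an identified deep root cause (e.g., a
# database deadlock indicator sampled over time) with response-time
# degradation on an edge. Pearson correlation is an assumed choice here.
def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / ((var_x * var_y) ** 0.5) if var_x and var_y else 0.0

deadlock_indicator = [0, 0, 1, 1, 1, 0]                 # assumed samples per interval
response_time_secs = [0.2, 0.3, 2.1, 2.4, 1.9, 0.4]     # assumed samples per interval
print(pearson(deadlock_indicator, response_time_secs))  # close to 1.0 -> likely causal link
```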
The phrase “real system behavior” may refer to the average throughput and average response time that are measured for a particular application or system.
Exemplary embodiments of the present invention provide a method of identifying a root cause in a distributed computing environment.
The computer system, referred to generally as system 1200, may include, for example, a central processing unit (CPU) 1201, random access memory (RAM) 1204, a printer interface 1210, a display unit 1211, a local area network (LAN) data transmission controller 1205, a LAN interface 1206, a network controller 1203, an internal bus 1202, and one or more input devices 1209, for example, a keyboard, a mouse, etc. As shown, the system 1200 may be connected to a data storage device 1208, for example, a hard disk, via a link 1207.
Root cause scores may be determined for each of the nodes. A higher root cause score may indicate a higher likelihood that a particular node includes an error. The root cause scores may be used to identify a potentially abnormal node, and the recommendation list may be generated.
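As a purely illustrative sketch of how root cause scores might be computed from the traversed paths and the bi-directional edge weights, consider the following; the specific scoring rule, function names, and example servers are assumptions rather than the scoring used by the embodiments.

```python
# Illustrative sketch: score each node by how often it appears on the
# traversed paths, weighted by the bi-directional edge weights leading into
# it. This particular scoring rule is an assumption for illustration.
def root_cause_scores(paths, weights):
    scores = {}
    for path in paths:
        for prev, node in zip(path, path[1:]):
            scores[node] = scores.get(node, 0.0) + weights.get((prev, node), 1.0)
        scores[path[0]] = scores.get(path[0], 0.0) + 1.0
    return scores

def recommendation_list(paths, weights, top_n=3):
    scores = root_cause_scores(paths, weights)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]   # higher score -> more likely root cause

# Example: server-C appears on both paths and receives the highest score.
paths = [["server-A", "server-C"], ["server-B", "server-C"]]
print(recommendation_list(paths, weights={("server-A", "server-C"): 2.0,
                                          ("server-B", "server-C"): 1.5}))
```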
The descriptions of the various exemplary embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the exemplary embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described exemplary embodiments. The terminology used herein was chosen to best explain the principles of the exemplary embodiments, or to enable others of ordinary skill in the art to understand exemplary embodiments described herein.
The flowcharts and/or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various exemplary embodiments of the inventive concept. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
It is understood that although this disclosure relates to cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Wang, Lan Jun, Wu, Hai Shan, Qi, Yao Dong, Xu, Di Dx, Yang, Yi Bj