One or more embodiments described herein are directed towards a technology for analyzing a distributed system in order to determine one or more inconsistencies placing the distributed system in an unstable state. Once the one or more inconsistencies are determined, one or more operations reconciling the inconsistencies are defined in order to stabilize the distributed system.
1. A method for facilitating stabilization of a distributed system, the method comprising:
receiving an indication of a connection between a first node and a second node, each of the first node and the second node storing one or more types of data;
creating a clique, the clique comprising the first node, the second node, and a plurality of other connected nodes;
monitoring the clique for consistency across the one or more types of data located in the plurality of nodes in the clique, wherein the one or more types of data are consistent when one or more conditions defined for each type of data are satisfied, the one or more conditions being pre-set and customized in accordance with each particular type of data;
identifying one or more inconsistencies occurring in the clique by checking the one or more conditions for data consistency over the clique;
specifying one or more reconciling operations, wherein the one or more reconciling operations reconcile the one or more identified inconsistencies;
determining an operation horizon, wherein the operation horizon defines a particular set of nodes performing the one or more operations reconciling the one or more identified inconsistencies; and
stabilizing the clique by initiating performance of the one or more reconciling operations, wherein the stabilizing produces a stable clique, a stable clique being a plurality of nodes, wherein each node is connected to one another and wherein the one or more types of data contained over the clique satisfy the one or more conditions.
2. A method as recited by
3. A method as recited by
4. A method as recited by
5. In a distributed system comprised of a plurality of nodes, one or more computer-readable media having stored thereon computer-executable instructions that, when executed by a computer, direct performance of a method for facilitating stabilization of the distributed system, the method comprising:
receiving an indication of a connection between a first node and a second node, each of the first node and the second node storing one or more types of data;
creating a clique, the clique comprising a set of completely connected nodes including the first node and the second node;
monitoring the clique for consistency across the one or more types of data persisting over the clique, wherein the one or more types of data are consistent when one or more conditions defined for each type of data are satisfied, the one or more conditions being pre-set and customized in accordance with each particular type of data;
identifying one or more inconsistencies found across the one or more types of data located in the clique;
stabilizing the clique by reconciling the one or more inconsistencies found across the one or more types of data located in the clique, wherein the stabilizing comprises:
determining an operation horizon that defines a particular set of nodes for performing one or more operations reconciling the one or more identified inconsistencies; and
performing the one or more operations that reconcile the one or more inconsistencies found across the one or more types of data located in the clique.
6. One or more computer-readable media as recited by
7. One or more computer-readable media as recited by
8. One or more computer-readable media as recited by
9. One or more computer-readable media as recited by
10. One or more computer-readable media as recited by
11. A distributed system comprising:
a processor;
one or more computer-readable media having embodied thereon computer-executable instructions that, when executed by the processor, configure the distributed system to perform acts comprising:
receiving an indication of a connection between a first node and a second node, each of the first node and the second node storing one or more types of data;
creating a clique, the clique comprising the first node, the second node, and a plurality of other nodes, so that all nodes are fully connected;
monitoring the clique for consistency across the one or more types of data located in the clique, wherein the one or more types of data are consistent when one or more conditions defined for each type of data are satisfied, the one or more conditions being pre-set and customized in accordance with each particular type of data;
identifying one or more inconsistencies found across the one or more types of data located in the clique;
determining an operation horizon, wherein the operation horizon defines a particular set of connected nodes for performing one or more operations reconciling the one or more inconsistencies; and
stabilizing the clique by performing the one or more operations reconciling the one or more inconsistencies found across the one or more types of data located in the clique.
12. A distributed system as recited by
13. A distributed system as recited by
Data-centric business applications use distributed systems to create, modify, transfer, share, acquire, store and/or verify data located in different locations, hereinafter referred to as nodes. Examples of such data-centric business applications include on-line stores, patient portals, network transactions, merging databases, etc.
Distributed systems share and transmit information amongst multiple nodes. Generally speaking, a node is a location that stores information within the distributed system. Examples of nodes include a computer on a network, a server on a network, or one of multiple data storage locations on a computer or server.
Maintaining data integrity when utilizing a distributed system in data-centric business applications becomes problematic when data is created, modified, transferred, shared, acquired, stored and/or verified at one or more nodes across the distributed system. For example, a server computer on a network may be configured to maintain a backup copy of a document created on a client computer.
However, if the server computer and the client computer are not connected via the network when the document is modified on the client computer, then the backup copy stored on the server computer is not updated in accordance with the modified version of the original document, because there is no established connection. Data integrity is therefore not maintained across the nodes within the distributed system: the backup copy stored on the server computer is no longer the same as the original document stored on the client computer.
Synchronization is a conventional approach to solving such data integrity problems. Conventional synchronization has provided a way to directly transfer data point-to-point from one node to another within a distributed system. In the example relating to a document backup system explained above, the server computer maintains an exact copy of the original document created and/or modified on the client computer.
Thus, synchronization provides a direct file transfer by comparing data bits and/or copying the data from a first location to a second location so that the same document exists in both locations. This direct file transfer thereby maintains data integrity across the two locations.
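As a rough illustration of the point-to-point approach, the sketch below reduces conventional synchronization to overwriting the server's copy with the client's bytes upon reconnection. It is a minimal sketch only; the store layout and names are illustrative assumptions, not drawn from this description.

```python
# Minimal sketch of conventional point-to-point synchronization:
# on reconnection, the server simply overwrites its backup copy with
# the client's bytes. Stores and names are illustrative assumptions.

def synchronize(client_store: dict, server_store: dict, key: str) -> None:
    """Copy the client's version of `key` to the server, bit for bit."""
    server_store[key] = client_store[key]

client = {"document_x": b"edited contents"}
server = {"document_x": b"original contents"}

synchronize(client, server, "document_x")
assert server["document_x"] == client["document_x"]  # integrity restored
```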
However, mere point-to-point synchronization does not address the higher-level policies necessary to maintain data integrity across a more complex distributed system. For example, when merging two databases containing lists of employee names into a single database, mere file transfer results in several data integrity problems, such as duplicated names. Name duplication within a merged database may then lead to internal processing errors related to employee information. For example, Arnold Johnson might not receive his paycheck because the paycheck was sent to another Arnold Johnson. In this exemplary scenario, the distributed system does not maintain data integrity because of the confusion that results from having duplicate names. Conventional point-to-point synchronization cannot ensure that the distributed system exchanges data so that data integrity is eventually established.
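To make the limitation concrete, here is a hedged sketch of the database-merger scenario: naive copying merges the rows faithfully, yet a higher-level policy (no duplicate employee names) is silently violated. The table layout and function name are hypothetical.

```python
# Sketch: merging two employee tables by naive copying preserves the
# data bit for bit but violates a higher-level "no duplicate names"
# policy. The row layout is an illustrative assumption.

db_a = [{"name": "Arnold Johnson", "dept": "Sales"}]
db_b = [{"name": "Arnold Johnson", "dept": "Finance"}]

merged = db_a + db_b  # plain file transfer: no policy awareness

def duplicate_names(rows):
    """Return the set of names appearing more than once."""
    seen, dupes = set(), set()
    for row in rows:
        if row["name"] in seen:
            dupes.add(row["name"])
        seen.add(row["name"])
    return dupes

print(duplicate_names(merged))  # {'Arnold Johnson'} -> paychecks misrouted
```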
One or more embodiments described herein are directed towards a technology for analyzing a distributed system in order to determine one or more inconsistencies placing the distributed system in an unstable state. Once the one or more inconsistencies are determined, one or more operations reconciling the inconsistencies are defined in order to stabilize the distributed system.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The same numbers are used throughout the drawings to reference like elements and features.
Data-centric business applications use distributed systems to manage particular types of data across a set of nodes. Such data-centric business applications include on-line stores, patient portals, network transactions, merging databases, etc.
A distributed system creates, modifies, transfers, shares, acquires, stores and/or verifies data across one or more nodes. A node is any location where data is stored within a distributed system. Thus, a distributed system may include one or more nodes within a single computer or one or more nodes across multiple computers that are connected via one or more networks. Distributed systems maintain data integrity across a particular set of nodes. For example, in a document management system, a backup copy of a document created on a client computer may be stored on a server computer that is connected to the client computer via one or more networks.
However, nodes in a distributed system are continually connected to and disconnected from one another. For example, two nodes are disconnected from one another when a client logs off the server or when a network connection fails. Conventional approaches have used synchronization to directly copy data from one node to another when one node is reconnected to another node in order to maintain data integrity. In other words, synchronization ensures upon reconnection that the backup copy of the document stored on the server computer is the same as the document created on the client computer.
Synchronization is implemented, when a connection between the client computer and the server computer is reestablished, by transferring the original document on the client computer to the server computer bit by bit. However, synchronization does not account for higher-level policies associated with data-centric applications utilizing distributed systems.
Some examples of higher-level policies that mere synchronization does not resolve include, but are not limited to: informing a system analyst via a notification message when an issue arises that must be corrected manually, locating the user credential information within a distributed system that is necessary to verify transactional data, changing values that fall outside an acceptable range to a default value within the acceptable range, and automatically calculating the primary forms of inconsistency.
Described herein are one or more embodiments that maintain data consistency across a particular set of nodes in a distributed system.
Information located within a distributed system includes one or more types of data. The one or more types of data located within the distributed system are consistent from one node to another when the data does not contradict the conditions defined for each type of data. Contradicting a defined condition for a particular type of data often leads to consequential errors and/or conflicts in the distributed system, and impairs the efficiency and functionality of the distributed system.
Data consistency is different from data integrity, described in relation to synchronization above. Data consistency is defined with one or more conditions. These defined conditions vary from one type of data to another, and they are not limited to a simple file transfer or direct copying, as described in relation to synchronization.
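One way to picture conditions defined per type of data is as named predicates attached to each type, with consistency meaning that every predicate holds. The following is only a sketch of that framing; the type names and ranges are assumptions for illustration.

```python
# Sketch: data consistency as per-type condition predicates rather
# than byte equality. Type names and ranges are assumptions.

conditions = {
    "backup_document": [
        lambda client_copy, server_copy: client_copy == server_copy,
    ],
    "product_price": [
        lambda price: 0.0 < price <= 10_000.0,  # assumed acceptable range
    ],
}

def is_consistent(data_type, *values) -> bool:
    return all(check(*values) for check in conditions[data_type])

print(is_consistent("product_price", 19.99))  # True: condition satisfied
print(is_consistent("product_price", -5.00))  # False: an inconsistency
```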
In light of the various connections and disconnections occurring between nodes in a distributed system, numerous issues may arise in trying to maintain consistency across the data within the distributed system. There is no guarantee that the data located across a particular set of nodes in a distributed system is consistent from one node to another.
For example,
From this example, an inconsistency occurs if the copy of the document X 114 on the server 102 is not the same as the original document X 116 that is created or modified on the client computer 106. More specifically, an inconsistency results when a condition specifying that the exact same copy be stored on both the server and the client computer has been violated.
On the other hand, data consistency is not limited to having the same data in two different locations. In fact, inconsistencies may also result from having the exact same data in two different locations.
For example,
In this scenario, a defined condition may flag that there cannot be two identical employee names in a single database, because identical names lead to errors and confusion when trying to identify and locate a particular one of the employees named "Scott Smith". In this example, an inconsistency results when there are two listings of the exact same data.
Thus, as
Furthermore, these possible inconsistencies may lead to troublesome situations within the distributed system because, in many circumstances, once an inconsistency is detected, a user must manually analyze the distributed system to determine what is causing the error and then must manually correct it, taking away valuable time and resources. Thus, a distributed system that cannot automatically reconcile one or more possible inconsistencies amongst one or more types of data located within a distributed system is not self-stabilizing.
Reconciliation is defined as one or more operations that resolve the one or more inconsistencies. In one embodiment, a user analyzing the distributed system is a system analyst. Thus, depending on the type of inconsistency, reconciliation can be a wide variety of operations defined by the system analyst analyzing the distributed system. Specific examples of reconciliation include: sending a notification or message to a system administrator to flag an error so that the error will be manually corrected, automatically updating pricing information for a product or service that has changed over time, deleting a copy of duplicated data, canceling a credit card transaction, updating credit card information, changing the network location where data is verified, etc. This list is not exhaustive; reconciliation encompasses any defined operations capable of reconciling an inconsistency.
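One plausible way to organize such operations is a registry keyed by the kind of inconsistency, with each entry a callable that the system can run automatically. This is a sketch under that assumption; none of the function names below come from this description.

```python
# Sketch: reconciliation as a registry of operations, one per kind of
# inconsistency. All names here are hypothetical illustrations.

def notify_admin(detail):
    print(f"NOTIFY administrator, manual correction needed: {detail}")

def delete_duplicate(detail):
    print(f"DELETE duplicated data: {detail}")

def update_price(detail):
    print(f"UPDATE stale pricing information: {detail}")

reconciliation = {
    "manual_issue": notify_admin,
    "duplicate_record": delete_duplicate,
    "stale_price": update_price,
}

def reconcile(kind, detail):
    reconciliation[kind](detail)  # run the operation defined for this kind

reconcile("duplicate_record", "employee 'Scott Smith' listed twice")
```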
In the following discussion, an exemplary environment and exemplary procedures are described which are operable to implement a self-stabilizing distributed system. Stability over a distributed system is achieved when:
In an exemplary implementation, one or more defined conditions are associated with a particular type of data located within a distributed system. These conditions provide a framework for determining whether the data located in the distributed system is consistent. In an exemplary embodiment, these conditions are defined by a system analyst for a particular type of data prior to implementing the distributed system. Alternatively, the conditions can be a default set of conditions associated with the particular type of data.
If a defined condition is not satisfied, i.e. the data violates the defined condition, the data is deemed inconsistent. When there are one or more inconsistencies related to a particular type of data, the distributed system is in an unstable state. In order to return the distributed system from an unstable state to a stable state, the system analyst defines one or more operations that automatically reconcile the one or more inconsistencies resulting from the defined conditions that have been violated. These one or more operations are described further below with respect to
Thus in this exemplary embodiment, a distributed system is self-stabilizing because one or more defined operations allow the distributed system to automatically reconcile one or more inconsistencies related to one or more types of data located within the distributed system.
As discussed with respect to
This customization goes a step further when the system analyst defines one or more operations reconciling the one or more inconsistencies. Thus the distributed system is implemented in a customized manner to be self-stabilizing.
In the example of
The descriptions given with respect to
As previously mentioned, a system analyst defines customized conditions for each particular type of data located within a distributed system for a variety of reasons. For example, there may be an ideal range of values that the system analyst would like to use as a predicate for the particular type of data. Another example would be to eliminate confusion when processing a certain type of data, such as making sure the price of a product sold on-line is continually updated from one node to another in the distributed system so that on-line consumers are receiving correct pricing information.
The system analyst also defines one or more customized operations that return the distributed system from an unstable state to a stable state by reconciling the one or more inconsistencies. Although the exemplary systems corresponding to a document management system and a database merger have been described with respect to
The one or more operations are defined to reconcile higher-level policies than those discussed in the document management system. In many situations, inconsistencies cannot be resolved by simply copying data, as is done with the synchronization process previously described. That synchronization process does not allow a system analyst to specify, i.e. customize, a set of conditions for each particular type of data within the distributed system.
By allowing a system analyst to define customized conditions for one or more types of data within a distributed system, and further define one or more customized operations that reconcile any inconsistency resulting from a condition that has not been satisfied, the system is able to account for higher-level policies that simple synchronization in a distributed system will not resolve. These higher-level policies permit the system analyst to understand the complex situations causing inconsistencies.
In order to illustrate a self-stabilizing distributed system, an exemplary implementation relating to a document management system is explained below along with the features and procedures of the embodiment in the discussion of
Exemplary Environment
Each network connection (314, 316, 318, 320 and 322) may assume a wide variety of configurations. For example, each network connection may include an internet connection, a wide area network (WAN) connection, a local area network (LAN) connection, a wireless network connection, a public telephone network connection, a peer-to-peer network connection, or any combination thereof. A wide variety of other instances are also contemplated without departing from the spirit and scope thereof.
In
For example, in
Referring to
Now assume that the original creator of document X is client 310 and along with the creation of document X, client 310 creates a single associated field specifying the creator of the document, the single associated field identifying client 310. Document X and the single associated field are stored at server 304.
Next, when the servers 302 and 304 are reconnected, it is determined that the copy of document X and its associated fields 324 stored on server 302 is not the same as the copy of document X and its associated fields 326 stored on server 304. The copy on server 302 contains the edited document X along with the new additional associated field identifying client 306, the last person to edit the document, while the copy on server 304 contains the original document X and the single associated field identifying the creator, client 310.
Upon this reconnection, the distributed system determines that both defined customized conditions previously set within the distributed system have been violated. Specifically, the copy of document X and any associated fields are no longer the same on both servers 302 and 304 and the creator of document X, client 310, has not been notified via an email of the edit performed to document X by client 306. Thus in this scenario, the distributed system as described would be in an unstable state.
The three interrelated models 400 include: the data model 402, the operations model 404, and the connectivity model 406.
The data model 402 defines one or more types of data 408 located across one or more nodes in a distributed system. As previously mentioned, exemplary distributed systems may include data associated with data-centric business applications, including on-line stores, document management systems and patient portals. Thus, associated with these types of distributed systems are a variety of different types of data located across one or more nodes. Such types of data may include for example, credit card information, copies of documents, login information, etc.
The data model 402 also allows a system analyst to define one or more customized conditions 410 associated with each of the one or more types of data 408 located within the distributed system. The customized conditions 410 are defined for each particular type of data 408. These customized conditions 410 allow the distributed system to determine whether the data located across one or more nodes is consistent. When the particular type of data 408 satisfies the customized conditions 410, the data is consistent. When the particular type of data 408 does not satisfy, i.e. violates, the conditions 410, the data is inconsistent.
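A hedged sketch of how the data model 402 might be rendered in code: each type of data 408 carries its customized conditions 410, and consistency is simply all conditions being satisfied. The class and field names are assumptions, not the patented implementation.

```python
# Sketch of a data model pairing each type of data (408) with its
# customized conditions (410). Names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataType:
    name: str
    conditions: list  # each entry is a callable: value -> bool

    def is_consistent(self, value) -> bool:
        # Consistent when every customized condition is satisfied.
        return all(cond(value) for cond in self.conditions)

price = DataType("product_price", [lambda p: 0 < p <= 10_000])
print(price.is_consistent(25.0))  # True
print(price.is_consistent(-1.0))  # False -> a condition is violated
```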
As previously indicated in the exemplary implementation of a document management system described above with respect to
Thus, the defined conditions 410 ultimately allow the distributed system to determine if each particular type of data 408 is consistent. The conditions are pre-set by a system analyst for each of the one or more types of data 408 located in one or more nodes in the distributed system.
Operations Model
The operations model 404 characterizes one or more data operations 412 that reconcile an inconsistency that results when one or more of the customized conditions 410 for a particular type of data 408 are violated, as defined in the data model 402. A system analyst defines one or more data operations 412 for each inconsistency that results when one or more of the customized conditions 410 in the data model are violated. In order to reconcile a single inconsistency, the operations model can define a single operation or a series of two or more operations performed in sequence.
Thus, the operations model 404 is utilized by system analysts to define reconciliation and account for all possible inconsistencies that result when the data in the distributed system violates the customized conditions 410. By either automatically performing the one or more operations 412 defined in the operations model 404, or knowing that there are one or more operations 412 that will be automatically performed to account for the inconsistency, the distributed system is capable of returning from an unstable to a stable state.
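To illustrate the single-operation-or-sequence idea, the sketch below maps each violated condition to an ordered list of reconciling operations and runs them in order. Everything here, including the violation names, is an illustrative assumption.

```python
# Sketch of an operations model (404): each violated condition maps to
# an ordered sequence of one or more reconciling operations (412).
# Violation names and operations are illustrative assumptions.

operations_model = {
    "backup_mismatch": [
        lambda ctx: ctx.update(server_copy=ctx["client_copy"]),  # re-copy
    ],
    "document_edited": [
        lambda ctx: print(f"email {ctx['creator']}: document was edited"),
        lambda ctx: ctx.update(edit_acknowledged=True),
    ],
}

def run_reconciliation(violation, ctx):
    for op in operations_model[violation]:  # perform the ops in sequence
        op(ctx)

ctx = {"creator": "client_310", "client_copy": b"v2", "server_copy": b"v1"}
run_reconciliation("document_edited", ctx)
run_reconciliation("backup_mismatch", ctx)
assert ctx["server_copy"] == ctx["client_copy"]  # back to a stable state
```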
In the exemplary implementation of the document management system described above with respect to
In order to address the second condition, i.e. that the creator of the document is notified via an email of any edit to the document, a system analyst defines that a simple email is generated within the system and sent to the original creator of document X, client 310, notifying client 310 of the edit made by client 306. This operation reconciles the inconsistency that resulted when client 306 edited document X. If an operation is defined for both inconsistencies that result from client 306 editing document X, then the distributed system is returned from an unstable to a stable state. In this scenario the distributed system is self-stabilizing.
Connectivity Model
The connectivity model 406 provides a classification of the nodes that may exist in the distributed system. For example, in the document management system, nodes are classified as client-type or server-type. The connectivity model 406 also enumerates the one or more types of data 408 that can be contained by a node of a particular classification.
Additionally, the connectivity model 406 specifies whether nodes of classification S (e.g. server-type) and classification T (e.g. client-type) can communicate. For example, nodes N and M in the distributed system can communicate only if node N, classified as a T node, and node M, classified as an S node, are allowed to communicate by the connectivity model 406. Using this information, the connectivity model 406 determines how the information in the data model 402 and the operations model 404 interacts, and ultimately determines whether the distributed system has adequate operations to reconcile the one or more possible inconsistencies.
In order to do this, the connectivity model 406 identifies each operation 412 defined by the operations model 404 for a particular type of data 408 and first defines an operation horizon 414. An operation horizon 414 defines how many nodes in the network need to be connected for the operation to succeed. In other words, the connectivity model 406 determines how much connectivity is necessary to perform a reconciling operation 412 on the particular type of data 408 in order to reconcile the one or more inconsistencies.
Thus, upon implementation of reconciling an inconsistency, or when the distributed system is attempting to self-stabilize, the nodes identified in the operation horizon 414 must be connected in order to bring the distributed system from an unstable to a stable state.
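An operation horizon 414 can be pictured as the set of nodes that must currently be connected before a reconciling operation is attempted. The sketch below makes that check explicit; the connectivity representation and node names are assumptions for illustration.

```python
# Sketch: an operation horizon (414) as the node set that must be
# connected before an operation may run. Representation is assumed.

connected = {("server_302", "server_304"), ("server_304", "client_310")}

def is_connected(a, b):
    return a == b or (a, b) in connected or (b, a) in connected

def horizon_satisfied(horizon):
    """True if every pair of nodes in the horizon is connected."""
    nodes = list(horizon)
    return all(is_connected(a, b) for a in nodes for b in nodes)

email_creator_horizon = {"server_304", "client_310"}
if horizon_satisfied(email_creator_horizon):
    print("horizon connected: the reconciling operation can proceed")
```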
In the exemplary implementation of the document management system described above with respect to
In addition to defining an operation horizon 414 for each type of data, the connectivity model also analyzes and defines one or more cliques 416 for each type of data 408 located in the distributed system. A clique 416 is two or more nodes that are completely connected, and is defined as the smallest set of nodes where a particular type of data 408 should be stable. This set of nodes may also be referred to as a subnet.
A clique is a set of nodes that are pairwise connected. A clique is also called a completely connected subnet, because every node in the clique is connected to every other node. The connectivity model determines how nodes may connect, and therefore determines the cliques that may be observed in the distributed system. These cliques are not enumerated in the connectivity model, but are a direct consequence of the information contained in the connectivity model.
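Because a clique is a completely connected set, testing whether a set of nodes forms a clique reduces to a pairwise-connectivity check, as in this sketch (the edge representation and node names are assumptions).

```python
# Sketch: a clique (416) is a set of nodes that are pairwise connected.
# The adjacency representation is an illustrative assumption.

edges = {
    frozenset({"server_302", "server_304"}),
    frozenset({"server_302", "client_306"}),
    frozenset({"server_304", "client_306"}),
}

def is_clique(nodes) -> bool:
    nodes = list(nodes)
    return all(
        frozenset({a, b}) in edges
        for i, a in enumerate(nodes)
        for b in nodes[i + 1:]
    )

print(is_clique({"server_302", "server_304", "client_306"}))  # True
print(is_clique({"server_302", "client_306", "client_310"}))  # False
```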
As previously mentioned, it is difficult to maintain stability across every node in a distributed system. However, if the distributed system is split into one or more cliques 416, or subnets, then the distributed system is separated into simple parts where stability can be understood. Using the defined cliques 416, stability across a set of nodes can be understood by the system analyst.
This concept allows, among other things, multiple sets of client/server combinations to maintain stability with each other. For example, in
If the nodes within two distinct cliques become completely connected to one another, for example via a network connection, they will seek to maintain stability across the larger set of nodes including a union of a first clique and a second clique. Thus, a set of nodes maintaining stability is always dynamically changing because of connections/disconnections of defined cliques.
As a result of defining one or more operation horizons 414, and one or more cliques 416, the connectivity model 406 can provide a framework that a distributed system uses to maintain stability. This allows the analysis to determine an operation horizon 414, i.e. a set of nodes necessary for an operation 412 to succeed, and identify a clique 416 where stability is defined for a particular set of nodes. Thus, the connectivity model 406 provides a blueprint for how the distributed system can self-stabilize.
The data model 402, the operations model 404 and the connectivity model 406 work together to provide an analysis of every possible scenario that can go wrong within a distributed system. It is noted that different types of data located within the distributed system will lead to different types of defined inconsistencies, and therefore to different operations, defined by the system analyst, that provide reconciliation of the inconsistencies.
At 502, the data model defines one or more types of data located on one or more nodes within the distributed system. The one or more types of data represent information used to implement the distributed system. In other words, the one or more types of data defined will be associated with the type of distributed system implemented. As previously specified, exemplary systems may include data-centric business applications. Thus the one or more types of data would be the information used in a particular business application.
At 504, the data model defines one or more customized conditions associated with the one or more types of data defined at 502. In one embodiment, the customized conditions are defined by a system analyst prior to implementing the distributed system. The conditions provide parameters specifying whether the particular type of data is in a consistent or inconsistent state by comparing the data with the customized conditions. If the data contradicts the condition, then the data is inconsistent, while if the data satisfies the condition, then the data is consistent.
At 506, the operations model defines one or more operations for the one or more types of data that will reconcile any inconsistency that results from a violated condition. The one or more data operations are associated with the type of distributed system implemented, such as the exemplary document management system previously discussed.
At 508, the connectivity model finds an operation horizon for the one or more operations defined at 506. The operation horizon determines the nodes in the distributed system necessary to successfully perform the operation.
At 510, the connectivity model defines a clique for the one or more types of data. The clique defines a subnet of the distributed system where stability is maintained. A clique is a set of nodes completely connected to one another.
At 602, the distributed system receives an indication of a connection. This connection is established between a first node and a second node. In one embodiment, the connection is a reconnection resulting from a failed network connection between the first and second nodes. In another embodiment, the connection is established subsequent to an intentional disconnection between the first and second nodes within the distributed system. In any event, the distributed system determines two nodes, not previously connected, have been connected. In one implementation, the two nodes are part of two separate cliques defined in the connectivity model.
At 604, the distributed system creates a clique from connecting the first and second nodes at 602. In one embodiment, the clique includes all the nodes in a first clique including the first node and all the nodes in a second clique including the second node.
At 606, the distributed system monitors, i.e. tests, whether one or more particular types of data located across the connected nodes in the distributed system are consistent in accordance with one or more pre-set and customized conditions defined by a system analyst for each particular type of data, as specified in the data model.
At 608, the distributed system identifies one or more inconsistencies associated with the one or more types of data monitored at 606. An inconsistency results when one or more customized conditions pre-set by a system analyst for each type of data are violated.
At 610, the distributed system specifies one or more reconciling operations implemented to reconcile the one or more inconsistencies identified at 608. The one or more reconciling operations are customized by the system analyst for each particular type of data. When the one or more operations are performed reconciling each inconsistency for a particular type of data, the type of data is returned from an inconsistent to a consistent state.
At 612, the distributed system stabilizes the clique by performing the one or more reconciling operations reconciling the one or more inconsistencies that result when one or more customized conditions pre-set for one or more types of data are violated.
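Pulling the steps of process 600 together, here is a hedged end-to-end sketch: on a new connection the two cliques are merged, the customized conditions are checked over the merged clique, and the defined operations are run for each violation. The data layout and operation signatures are assumptions, not the patented implementation.

```python
# Sketch of process 600: connect -> create clique (604) -> monitor
# (606) -> identify (608) -> reconcile and stabilize (610/612).
# Data layout and operation signatures are illustrative assumptions.

def stabilize_on_connection(clique_a, clique_b, data, conditions, operations):
    clique = clique_a | clique_b                       # 604: create clique
    inconsistencies = [
        (dtype, node)
        for node in clique                             # 606: monitor
        for dtype, value in data[node].items()
        if not all(cond(value) for cond in conditions[dtype])
    ]                                                  # 608: identify
    for dtype, node in inconsistencies:                # 610/612: reconcile
        for op in operations[dtype]:
            op(node, data)
    return clique

conditions = {"price": [lambda p: p > 0]}              # pre-set condition
operations = {"price": [lambda node, d: d[node].update(price=0.01)]}
data = {"n1": {"price": -5.0}, "n2": {"price": 3.0}}

stabilize_on_connection({"n1"}, {"n2"}, data, conditions, operations)
print(data["n1"]["price"])  # 0.01 -> the clique is returned to stability
```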
As depicted in
An operating system 706 is shown stored in the memory 704 and is executed on the network computer 702. Also stored in the memory are software modules that implement the process illustrated in
The receiver 708 receives an indication of a connection between a first node and a second node. The creator 710 creates a clique. The monitor 712 monitors the clique for consistency across one or more types of data located in the first and second nodes. The identifier 714 identifies one or more inconsistencies found across the one or more types of data located in the first and second nodes. The reconciliation specifier 716 specifies one or more reconciling operations reconciling the one or more inconsistencies. The stabilizer 718 stabilizes the clique by initiating performance of the one or more operations specified by the reconciliation specifier 716. Finally, the operation horizon determiner 720 defines a particular set of nodes that must be connected to perform the one or more operations.
Also stored in the memory of the network computer are the three interrelated models, i.e. the data model, the operations model and the connectivity model, as explained above with respect to
Although the one or more embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the one or more embodiments defined in the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are described as exemplary forms of implementing the claimed embodiments.
Inventors: Wolfram Schulte; Ethan K. Jackson