Systems and methods involve prioritizing information based at least in part on test results for tests. A computing device may administer one or more tests and/or may receive test results for one or more tests. Multiple executions of one or more tests may be administered over a period of time. A device administering a test may evaluate the functionality of at least a portion of an application programming interface (API) or at least a portion of a user interface. Test results may be analyzed to determine a failure pattern and/or pass rate for one or more tests. Test results may be analyzed to determine an error signature and/or error signature frequency for one or more test results. A report can be generated that prioritizes information based at least in part on the tests, test results, and/or any determined information.
8. A computer-implemented method for categorizing test results, comprising:
under the control of one or more computer systems configured with executable instructions,
receiving a plurality of test results for each of a plurality of tests, each test result comprising an indicator indicating whether the test was successful or unsuccessful;
determining, for each of at least a portion of the plurality of tests, a failure pattern being a sequence of indicators for successive test results for the test, a pass rate based at least in part on a proportion of the test results having indicators indicating that an execution of the test was successful, and an error frequency based at least in part on a frequency with which an error type occurs, the error type determined based at least in part on the failure pattern; and
generating a report, wherein the generated report is prioritized first by the determined failure patterns, then by the determined pass rates within each failure pattern, and then by the determined error frequencies within each pass rate.
20. One or more non-transitory computer-readable storage media having collectively stored thereon instructions that, when executed by one or more processors of a computer system, cause the computer system to at least:
receive a plurality of test results for each of a plurality of tests, each test result comprising an indicator indicating whether the test was successful or unsuccessful, each test result of at least a subset of test results having an indicator that indicates that the test was unsuccessful further comprising an error message;
determine, for each of at least a portion of the plurality of tests, a failure pattern, the failure pattern selected from a plurality of predetermined failure pattern categories by comparing a sequence of indicators for the test to the predetermined failure pattern categories;
determine, for each of at least a portion of the plurality of test results having an indicator indicating that the test was unsuccessful, an error signature for the test result based at least in part on the error message;
determine, for at least a portion of the determined error signatures, an error signature frequency based at least in part on a proportion of the test results for the test associated with the error signature; and
generate a report, wherein the generated report is prioritized first by the determined failure pattern, then by a pass rate calculated as a proportion of indicators indicating a successful test to total indicators for each test, and then by the determined error signature frequencies within each pass rate.
14. A computer system for generating prioritized reports, comprising:
one or more processors; and
memory, including executable instructions that, when executed by the one or more processors, cause the computer system to at least:
receive a plurality of test results for each of a plurality of tests, each test result comprising an indicator indicating whether the test was successful or unsuccessful, each test result of at least a subset of test results having an indicator that indicates that the test was unsuccessful further comprising an error message;
determine, for each of at least a subset of the plurality of tests, a categorization based at least in part on a pattern of indicators for that test;
determine, for each of at least a portion of the test results having an indicator indicating that the test was unsuccessful, an error signature for the test result, the error signature based at least in part on the error message;
determine, for each of at least a portion of the determined error signatures, an error signature frequency based at least in part on a proportion of the determined error signatures having the same error signature; and
generate a report, the report providing an indication of a priority for at least a portion of the report, based at least in part on the categorization of each test, wherein the generated report is prioritized first by the determined categorization, then by a pass rate calculated as a proportion of successful indicators in each pattern of indicators, and then by the determined error signature frequency.
1. A computer-implemented method for prioritizing tests based on test results, comprising:
under the control of one or more computer systems configured with executable instructions,
receiving a plurality of test results for each of a plurality of tests, each test result comprising an indicator indicating whether the test was successful or unsuccessful, each test result having an indicator that indicates that the test was unsuccessful further comprising an error message;
determining, for each test in the plurality of tests, a failure pattern selected from a plurality of predetermined failure pattern categories, wherein the failure pattern is selected based at least in part on a pattern of indicators for successive test results of the test, the indicators indicating whether the test was successful or unsuccessful;
determining, for each test in the plurality of tests, a pass rate based at least in part on a proportion of the test results having indicators indicating that the test was successful;
determining, for each test result having the indicator indicating that the test was unsuccessful, an error signature for the test result based at least in part on the error message;
determining, for each different error signature associated with each test, an error signature frequency based at least in part on a proportion of the test results for the test associated with the error signature; and
generating a report, the report prioritizing the tests based at least in part on the determined failure patterns, pass rates, and error signature frequencies, wherein the generated report is prioritized first by the determined failure patterns, then by the determined pass rates within each failure pattern, and then by the determined error signature frequencies within each pass rate.
2. The computer-implemented method of
3. The computer-implemented method of
4. The computer-implemented method of
5. The computer-implemented method of
6. The computer-implemented method of
ordering at least a portion of the information in the report based at least in part on the determined failure patterns, pass rates, or error signature frequencies.
7. The computer-implemented method of
determining, for each test in the plurality of tests, a maximum error signature frequency for the test, wherein ordering the portion of the information in the report is based at least in part on the determined maximum error signature frequencies.
9. The computer-implemented method of
determining, for each of at least a portion of the plurality of test results, an error signature, the error signature based at least in part on at least a portion of an error message associated with the test; and
wherein the generated report is based at least in part on the determined error signatures.
10. The computer-implemented method of
the method further comprises determining, for each of at least a portion of the determined error signatures, an error signature frequency, wherein the error signature frequency is based at least in part on a proportion of the error signatures; and
the generated report is based at least in part on the determined error signature frequencies.
11. The computer-implemented method of
the method further comprises determining, for each of at least a subset of the plurality of tests, a maximum error signature frequency, wherein the maximum error signature frequency is based at least in part on the error signature frequency; and
the generated report is based at least in part on the determined maximum error signature frequencies.
12. The computer-implemented method of
13. The computer-implemented method of
sending the generated report to at least one electronic address.
15. The computer system of
determine, for each of at least a portion of the plurality of tests, a failure pattern based at least in part on indicators for successive test results for the test, the failure pattern selected from a plurality of predetermined failure pattern categories; and
wherein the generated report is based at least in part on the determined failure patterns.
16. The computer system of
determine, for each of at least a portion of the plurality of tests, a failure rate based at least in part on the number of the plurality of test results associated with the test; and
wherein the generated report is based at least in part on the determined failure rates.
17. The computer system of
18. The computer system of
determining, for each of at least a portion of the test results, a priority based at least in part on determined error signatures;
determining an order for the portion of the test results, the order indicating the priority of the test results; and
generating the report based at least in part on the determined order for the portion of the test results.
19. The computer system of
determining at least a portion of the report having a high importance; and
visually indicating the high importance of the portion of the report.
21. The one or more non-transitory computer-readable storage media of
send at least a portion of the generated report to at least one electronic address.
22. The one or more non-transitory computer-readable storage media of
obtain historical information associated with at least one error signature; and
based at least in part on the obtained historical information, determine a potential solution associated with the at least one error signature, wherein the generated report includes the potential solution.
23. The one or more non-transitory computer-readable storage media of
24. The one or more non-transitory computer-readable storage media of
receive a request for the report;
in response to receiving the request for the report, dynamically generate the report; and
display the report, wherein at least a portion of the report is ordered based at least in part on the determined error signatures.
As the complexity of computer systems increases, the time required to debug applications associated with computer systems often increases as well. For example, modern organizations often operate computer networks that implement numerous services. A single transaction with the computer network can involve many of these services, and each service may itself utilize numerous devices in the network. For example, a web-based application may contain dependencies on numerous servers, databases, scripts, and the like. The time required to continually test and verify that components of a system are operational often increases as the number of dependencies in the application increases. Tests may be performed that verify whether a portion or all of a system, such as a user interface or an application programming interface, is operational. Such tests, however, are often difficult to implement for a number of reasons, such as inconsistent results, temporary glitches, and the like. Accordingly, testing and diagnosis can require that valuable resources be expended to evaluate potential problems. Further, conventional testing techniques may result in a less-than-ideal allocation of resources.
In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiment being described.
Techniques described and suggested herein prioritize various tests based at least in part on test results for the tests. In a particular illustrative embodiment, numerous tests are each performed multiple times over a period of time. For example, a test that verifies whether all or part of a particular application programming interface (API) is operational may be performed on a daily basis. As another example, a test that verifies whether all or part of a particular user interface is operational may be performed every other day. Tests may be performed at various times including, but not limited to, hourly, daily, weekly, bi-weekly, monthly, on demand, or at other times.
The test results from some or all of these tests may be recorded. For example, test results for tests may be stored in one or more databases. Test results can include information such as a timestamp indicating when the test was performed, an indicator indicating whether the test was successful, and an error message if the test was unsuccessful. Numerous additional embodiments are disclosed herein and variations of embodiments explicitly disclosed herein are considered as being within the scope of the present disclosure.
The test results for one or more tests taken over a period of time can be analyzed. For example, five tests may be performed on an hourly basis and the results for each of these tests may be saved in a database. The test results from one or more of these tests may be analyzed. In one illustrative embodiment, the test results from three of the tests are analyzed over a two week period of time. In another illustrative embodiment, the test results from all five tests are analyzed over a monthly period of time. The number of tests selected to be analyzed may be automatically or manually determined. Likewise, the period of time in which test results for the selected tests should be analyzed may be automatically or manually determined. For example, the period of time in which test results are analyzed may be an hour, day, week, every other week, month, year, some combination thereof, or another period of time. The period of time may be dynamically selected. For example, in one embodiment, a particular number of test results for each test may be determined.
The tests and test results selected or determined to be analyzed can be analyzed in any number of ways. A pattern for a test can be determined based at least in part on a pattern of the successive indicators of the test results for a particular test. For example, if ten test results for a particular test are determined to be analyzed, then the test results for this test may be organized from the oldest test result to the newest test result. In other words, the test results for this test may be analyzed in order from the test result associated with the oldest timestamp to the test result associated with the most current time stamp. In one embodiment, the indicator for each test result indicating whether a test was successful or unsuccessful is analyzed. Thus, if the indicators for the ten test results for the test are “1111111111”, where “1” indicates that the test was successful, then it may be determined that a pattern associated with the test results for that test is “always passes”. Similarly, if the indicators for the ten test results for the test are “0000000000”, where “0” indicates that the test was unsuccessful, then it may be determined that a pattern associated with the test results for that test is “always fails”.
Any number of patterns may be dynamically determined or selected from a pre-defined list of patterns. For example, referring back to the previous example, if the ten test results for the test are “1111100000”, then it may be determined that a pattern associated with the test results for the test is “appears broken”. As another example, if the ten test results for the test are “0000011111” or “1110011111”, then it may be determined that a pattern associated with the test results for the test is “appears fixed.” There may be one or more pattern categories that indicate that a pattern cannot be determined. For example, if the ten test results for the test are “1010101010” or “1100110101” then a particular pattern may be unable to be determined and therefore the test may be assigned a pattern such as “flaky” or “inconsistent.”
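By way of a non-limiting illustration, the following Python sketch shows one possible way such a pattern category could be selected from successive indicators; the rule that the three most recent indicators decide between "appears broken" and "appears fixed" is an assumption chosen for this example rather than a requirement of any embodiment.

    def classify_failure_pattern(indicators):
        # Classify a string of successive indicators, oldest first, where "1"
        # marks a successful execution and "0" an unsuccessful one.
        if all(ch == "1" for ch in indicators):
            return "always passes"
        if all(ch == "0" for ch in indicators):
            return "always fails"
        # A run of recent failures after earlier passes suggests a break;
        # a run of recent passes after earlier failures suggests a fix.
        if indicators.endswith("000"):
            return "appears broken"
        if indicators.endswith("111"):
            return "appears fixed"
        return "flaky"

    print(classify_failure_pattern("1111111111"))  # always passes
    print(classify_failure_pattern("0000000000"))  # always fails
    print(classify_failure_pattern("1111100000"))  # appears broken
    print(classify_failure_pattern("1110011111"))  # appears fixed
    print(classify_failure_pattern("1010101010"))  # flaky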
A pass rate for a test may be determined based at least in part on the test results for a particular test that are analyzed. The pass rate may reflect how often the test was successful. For example, in one embodiment, a pass rate for a test is calculated by dividing the number of test results that have indicators indicating that the test was successful by the number of test results for the test. Thus, in an embodiment, if the ten test results for a test to be analyzed are “1010101011”, then the pass rate would be calculated as 6/10 or 0.60 or 60%. Numerous additional embodiments are disclosed herein and variations of embodiments explicitly disclosed herein are considered as being within the scope of the present disclosure.
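A minimal sketch of this calculation, assuming the indicators for a test are collected into a string of "1" and "0" characters as in the examples above:

    def pass_rate(indicators):
        # Proportion of indicators marking a successful execution ("1")
        # out of all indicators for the test.
        return indicators.count("1") / len(indicators)

    print(pass_rate("1010101011"))  # 0.6, i.e. the 60% pass rate in the example above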
If a test result has an indicator indicating that the test was unsuccessful and if the test result is associated with an error message, then an error signature for the test result may be determined. An error signature can be a unique or substantially unique signature based on the error message associated with the test result. For example, the error signature for a test result may be determined by taking a hash of the error message, such as a secure hash algorithm (SHA) hash (e.g., a SHA-1 hash). In embodiments, each test result having the same error message will have the same error signature.
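For example, an error signature based on a SHA-1 hash of the error message could be computed as in the following sketch; the function name is illustrative only.

    import hashlib

    def error_signature(error_message):
        # Hash the text of the error message so that identical error
        # messages always map to the same signature.
        return hashlib.sha1(error_message.encode("utf-8")).hexdigest()

    signature = error_signature("Error establishing database connection")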
An error signature frequency may be determined based at least in part on the test results that are analyzed. The error signature frequency can reflect how often a particular error signature occurred within a particular test or throughout the test results. In one embodiment, an error signature frequency is based at least in part on a number of same error signatures and a total number of error signatures. For example, ten test results for a first test and five test results for a second test may be analyzed. In this example, four of the ten test results for the first test and one of the five test results for the second test may be associated with an error signature of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8”, which is the SHA1 hash of an “Error establishing database connection” error message associated with each of these test results. In addition, in this embodiment, five of the ten test results for the first test are associated with a second error signature and one of the ten test results for the first test indicates that the test was successful. Furthermore, in this embodiment, three of the five test results for the second test are associated with a third error signature and one of the five test results for the second test indicates that the test was successful. Using this example, in one embodiment, the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” for the first test may be determined to be 44.4% (i.e. four test results for the first test having the error signature out of nine test results for the first test indicating failure) and the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” for the second test may be determined to be 25% (i.e. one test result for the second test having the error signature out of four test results for the second test indicating failure). In another embodiment, the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” may be determined to be 38.46% across the first and second tests (i.e. five test results having the error signature out of thirteen total test results for the first and second tests indicating failure).
As another example, in an embodiment, ten test results for a first test and five test results for a second test may be analyzed. In this example, four of the ten test results for the first test and one of the five test results for the second test may be associated with an error signature of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8”, which is the SHA1 hash of an “Error establishing database connection” error message associated with each of these test results. Using this example, in one embodiment, the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” for the first test may be determined to be 40% (i.e. four test results for the first test having the error signature out of ten test results for the first test) and the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” for the second test may be determined to be 20% (i.e. one test result for the second test having the error signature out of five test results for the second test). In another embodiment, the error signature frequency of “c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8” may be determined to be 33.33% (i.e. five test results having the error signature out of fifteen total test results).
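One possible implementation of the per-test frequency calculation from the preceding example is sketched below in Python; the helper name and the placeholder signature "other-signature" are assumptions for purposes of illustration.

    from collections import Counter

    def error_signature_frequencies(signatures, total_results):
        # Proportion of a test's results (out of total_results) that carry
        # each error signature; signatures holds one entry per failing result.
        counts = Counter(signatures)
        return {sig: count / total_results for sig, count in counts.items()}

    # Hypothetical data mirroring the example above: four of the ten results
    # for the first test carry the database-connection signature.
    db_sig = "c8dc3c55176e1119f8b91dcb411f2ab048b3a2d8"
    first_test = error_signature_frequencies([db_sig] * 4 + ["other-signature"] * 5, 10)
    print(first_test[db_sig])  # 0.4, i.e. the 40% frequency for the first test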
One or more prioritized reports may be generated. A prioritized report may be based at least in part on any determined failure patterns, pass rates, error signatures, and/or error signature frequencies. For example, a test having a failure pattern of “always fails” may have red text or have a red background so as to visually distinguish the test from other tests. Thus, in embodiments, red text or a red background associated with a test may indicate a high priority for the test. As another example, if a test has a failure pattern of “appears broken” then the text or the background color associated with this test may be yellow. Thus, in embodiments, yellow text or a yellow background associated with a test may indicate a medium priority for the test. The tests and/or test results may be ordered based on a priority schema. In one embodiment, tests are ordered first by failure pattern from highest priority to lowest priority and then by pass rate from lowest pass rate to highest pass rate. Numerous additional embodiments are disclosed herein and variations of embodiments explicitly disclosed herein are considered as being within the scope of the present disclosure.
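A minimal sketch of such an ordering, assuming each failure pattern category is mapped to a numeric priority rank; the specific ranks and the test name "Login UI" are illustrative assumptions ("System API" appears elsewhere in this description as an example test name).

    # Hypothetical priority ranks for the failure pattern categories; a lower
    # rank indicates a higher priority in the generated report.
    PATTERN_PRIORITY = {"always fails": 0, "appears broken": 1, "flaky": 2,
                        "appears fixed": 3, "always passes": 4}

    def prioritize(tests):
        # Order tests first by failure pattern priority, then by ascending pass rate.
        return sorted(tests, key=lambda t: (PATTERN_PRIORITY[t["failure_pattern"]],
                                            t["pass_rate"]))

    rows = prioritize([
        {"name": "System API", "failure_pattern": "appears broken", "pass_rate": 0.6},
        {"name": "Login UI", "failure_pattern": "always fails", "pass_rate": 0.0},
    ])
    # "Login UI" is listed first because "always fails" carries the highest priority.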
One or more of the generated reports may be sent or otherwise made available. For example, an email comprising one or more generated reports may be sent to an appropriate email address. As another example, one or more generated reports may be made available, such as through a graphical user interface. In one embodiment, a user can dynamically generate a prioritized report and view the report on a display associated with the user.
Referring now to
The computing device 110 shown in
In embodiments, the computing device 110 comprises a computer-readable medium such as a random access memory (RAM) coupled to a processor that executes computer-executable program instructions and/or accesses information stored in memory. For example, computing device 110 may comprise a computer-readable medium that has program code stored thereon for executing one or more tests, as described herein. In one embodiment, computing device 110 comprises a computer-readable medium that has program code stored thereon for storing the test results for one or more tests to memory or to a data store, such as data store 120. A computer-readable medium may comprise, but is not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions. Other examples comprise, but are not limited to, a floppy disk, CD-ROM, DVD, magnetic disk, memory chip, ROM, RAM, SRAM, DRAM, CAM, DDR, flash memory such as NAND flash or NOR flash, an ASIC, a configured processor, optical storage, magnetic tape or other magnetic storage, or any other medium from which a computer processor can read instructions. In one embodiment, the computing device 110 may comprise a single type of computer-readable medium such as random access memory (RAM). In other embodiments, the computing device 110 may comprise two or more types of computer-readable medium such as random access memory (RAM), a disk drive, and cache. The computing device 110 may be in communication with one or more external computer-readable mediums such as an external hard disk drive or an external DVD drive.
In embodiments, the computing device 110 comprises a processor which executes computer-executable program instructions and/or accesses information stored in memory. For example, a processor in computing device 110 may execute program code stored in memory for executing one or more tests, such as on a periodic basis or in response to a request to perform the one or more tests, store test results to data store 120, analyze test results, and/or generate a prioritized report. The instructions may comprise processor-specific instructions generated by a compiler and/or an interpreter from code written in any suitable computer-programming language including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, Ruby, and JavaScript. In an embodiment, the computing device 110 comprises a single processor. In other embodiments, the computing device 110 comprises two or more processors.
The computing device 110 shown in
In embodiments, the computing device 110 may comprise or be in communication with a number of external or internal devices such as a mouse, a CD-ROM, DVD, a keyboard, a display, audio speakers, one or more microphones, or any other input or output devices. A display may use any suitable technology including, but not limited to, LCD, LED, CRT, and the like.
The data store 120 shown in
The network 130 shown in
One or more connections to network 130 may be provided through an Internet Service Provider (ISP). An ISP can be any organization that provides a customer with access to the internet. An ISP may connect customers to the internet using various types of connections or technologies including, but not limited to, copper wires, dial-up, digital subscriber line (DSL), asymmetric digital subscriber line (ADSL), wireless technologies, fiber optics, or integrated services digital network (ISDN). In embodiments, an ISP may be associated with various devices such as gateways, routers, switches, repeaters, or other devices.
The server 140 shown in
In embodiments, the server 140 comprises a computer-readable medium such as a random access memory (RAM) coupled to a processor that executes computer-executable program instructions and/or accesses information stored in memory. A computer-readable medium may comprise, but is not limited to, an electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions. Other examples comprise, but are not limited to, a floppy disk, CD-ROM, DVD, magnetic disk, memory chip, ROM, RAM, SRAM, DRAM, CAM, DDR, flash memory such as NAND flash or NOR flash, an ASIC, a configured processor, optical storage, magnetic tape or other magnetic storage, or any other medium from which a computer processor can read instructions. In one embodiment, the server 140 may comprise a single type of computer-readable medium such as random access memory (RAM). In other embodiments, the server 140 may comprise two or more types of computer-readable medium such as random access memory (RAM), a disk drive, and cache. The server 140 may be in communication with one or more external computer-readable mediums such as an external hard disk drive or an external DVD drive.
In embodiments, the server 140 comprises a processor which executes computer-executable program instructions and/or accesses information stored in memory. The instructions may comprise processor-specific instructions generated by a compiler and/or an interpreter from code written in any suitable computer-programming language including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, Ruby, and JavaScript. In an embodiment, the server 140 comprises a single processor. In other embodiments, the server 140 comprises two or more processors.
The server 140 shown in
In embodiments, the server 140 may comprise or be in communication with a number of external or internal devices such as a mouse, a CD-ROM, DVD, a keyboard, a display, audio speakers, one or more microphones, or any other input or output devices. A display may use any suitable technology including, but not limited to, LCD, LED, CRT, and the like.
Referring now to
The process 200 shown in
In embodiments, one or more of the tests for which test results are received is designed to verify whether all or a portion of an environment is operational. For example, a test may be designed to verify whether all or part of a particular application programming interface (API) is operational. For instance, a test may be designed to verify whether an external API is operational by sending a request to the API and then verifying that one or more appropriate actions were performed by the API. In one embodiment, a device administering a test may attempt to create a task through an API and then verify that the task was appropriately created. In other embodiments, a device administering a test may verify that statistics associated with or provided by an API are accurate.
A test can be designed to verify whether all or part of a particular user interface is operational. For example, administering a test may involve a device simulating a user logging into a particular website or application, such as a web-based application, remotely managed application, or other application. In this embodiment, the device administering the test may attempt to browse to a particular web page, enter a username and a password in the appropriate fields of a form on the web page, and submit the form. The device administering the test may then verify whether it was able to log into the account associated with the submitted username and password. In other embodiments, a test is designed to diagnose one or more potential problems with an application programming interface (API), user interface (UI), and/or another component of one or more applications. In some embodiments, a test can be designed to diagnose whether one or more components in a computing system, such as a web-based application or remotely managed application, is functioning properly. In some embodiments, one computing device administers one or more tests. In another embodiment, two or more computing devices administer a single test or multiple tests. Numerous additional embodiments are disclosed herein and variations of embodiments explicitly disclosed herein are considered as being within the scope of the present disclosure.
A test result can include various types of information. For example, a test result may be associated with a particular execution or run of a test. In this embodiment, the test result may comprise a test name or other identifier that associates the test result with the test to which it corresponds. For example, if a test entitled “System API” is executed, then information that identifies the test result as corresponding to the “System API” test may be included in the test result. In embodiments, a test result is associated with an individual test. In other embodiments, a test result may be associated with one or more executions of an individual test and/or one or more executions of a plurality of tests.
A test result may contain information that corresponds to the date and/or time at which an execution of the test was performed and/or completed. For example, a test result can be associated with a timestamp indicating when the test was performed. In embodiments, a test result contains an indicator indicating whether the test was successful or unsuccessful. For example, if a test is designed to verify that a user can log into an account by entering a username and password, then a test result for an execution of the test may include an indicator indicating whether the test was able to log into the account. Such an indicator may include any type of information that can be used to determine whether the test was successful in logging into the account. In embodiments, an indicator for a test result associated with an execution of a test can include a “1”, “true”, “successful”, or another appropriate indicator if the test passed or a “0”, “false”, “unsuccessful”, or another appropriate indicator if the test failed.
In embodiments, if an execution of a test is unsuccessful, then a test result associated with that execution of the test can include one or more error messages. An error message may provide an indication of a potential problem or a potential solution, or both. For example, in one illustrative embodiment, an error message provides “Unable to Connect to User Database”, which indicates that there may be a problem with the network connection to the User Database or there may be a problem with a server managing the User Database. In some embodiments, an error message may provide a code segment that could not successfully be executed, a line number associated with a code segment that could not successfully be executed, or other information that indicates a point within the test that could not be completed. For example, a particular test may perform four functions that verify the availability of a portion of a web-based application. In this embodiment, if an execution of the test indicates that function one and function two were successfully completed, but that the test was unable to successfully complete function three, then the error message may indicate that the test failed at function three or may provide a name or description, or both, of the function that was attempted but failed. In other embodiments, an error message can include any type of information that may be usable to identify a potential problem associated with a test.
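One way such a test result record could be represented is sketched below; the field names are illustrative assumptions, and the example values are taken from the test name and error message discussed above.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TestResult:
        # One execution of one test, carrying the fields described above.
        test_name: str                        # e.g. "System API"
        timestamp: datetime                   # when the execution was performed
        passed: bool                          # indicator: successful or unsuccessful
        error_message: Optional[str] = None   # present only when the execution failed

    result = TestResult(test_name="System API",
                        timestamp=datetime.now(),
                        passed=False,
                        error_message="Unable to Connect to User Database")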
In an embodiment, one or more tests may be designed such that error messages associated with results of an execution of one or more of the tests are coordinated. For example, two different tests may have a same dependency. As one example, a first test may be dependent upon a connection with a products database. A second, different test may also be dependent upon a connection with the products database. In this embodiment, if an execution of the first test fails because of a connection error with the products database, then an error message, such as “Unable to Establish a Connection to the Product Database”, may be included in the test result for that execution of the first test. Similarly, if an execution of the second test fails because of a connection error with the products database, then an error message, such as “Unable to Establish a Connection to the Product Database”, may be included in the test result for that execution of the second test. Thus, in embodiments, error messages associated with test results across two or more tests may alone or in combination provide an indication of a common or related problem.
Referring back to
At least a portion of the plurality of test results may be analyzed in any number of ways. At least a portion of the plurality of test results can be analyzed by determining a failure pattern for one or more tests associated with at least a portion of the plurality of test results. A failure pattern for a test can be determined based at least in part on a pattern of successive test results for the test. In one embodiment, each test result for a test is sorted by a timestamp associated with the test. For example, the test results may be sorted from the oldest timestamp to the newest timestamp. In some embodiments, each test result may be associated with an indicator indicating whether the test was successful or unsuccessful. In this embodiment, the indicators associated with the test results in the sorted order may be analyzed.
In some embodiments, indicators associated with successive test results may be analyzed to dynamically determine one or more patterns, such as a failure pattern for the test. In other embodiments, indicators associated with successive test results may be analyzed to determine one or more patterns, such as a failure pattern, selected from a plurality of predetermined categories. Predetermined categories can include categories such as “always fails”, “appears broken”, “appears fixed”, “always passes”, or one or more other categories. In some embodiments, if each of the indicators associated with the test results for one or more tests indicates that the test was successful, then an “always passes” pattern is determined. For example, if successive indicators for the test results associated with a test are “1111111”, where “1” indicates that the test was successful, then it may be determined that a pattern for the test is “always passes”. Likewise, if each of the indicators associated with the test results for one or more tests indicates that the test was unsuccessful, then an “always fails” pattern may be determined. For example, if successive indicators for the test results associated with a test are “00000”, where “0” indicates that the test was unsuccessful, then it may be determined that a pattern for the test is “always fails”.
In an embodiment, if successive indicators for test results associated with one or more tests indicate that executions of one or more of the tests were at first unsuccessful but are now successful, then an “appears fixed” pattern is determined. For example, if successive indicators for the test results associated with a test are “0011111111111111111”, where “0” indicates that an execution of the test was unsuccessful and “1” indicates that an execution of the test was successful, then an “appears fixed” pattern may be determined for the test. As another example, if the test results associated with a test are “11110001111”, then an “appears fixed” pattern may be determined according to an embodiment. In one embodiment, if a predetermined number of indicators associated with the latest test results for a test each indicate that an execution of the test was successful and if at least one indicator associated with a test result for the test indicates that an execution of the test was unsuccessful, then an “appears fixed” pattern may be determined for the test. Thus, if indicators associated with test results for a test are “0000000000111” then an “appears fixed” pattern may be determined because the indicators associated with the latest three test results for the test each indicate that the execution of the test was successful. In other embodiments, an “appears fixed” pattern may be determined based at least in part on a proportion of test results having an indicator indicating that a test was successful and a total number of test results for the test. For example, the percentage of indicators indicating that an execution of a test was successful may be compared to a threshold percentage to determine a pattern for the test.
In an embodiment, if successive indicators for test results associated with one or more tests indicate that executions of one or more of the tests were at first successful but are now unsuccessful, then an “appears broken” pattern is determined. For example, if successive indicators for the test results associated with a test are “11111111000”, where “0” indicates that an execution of the test was unsuccessful and “1” indicates that an execution of the test was successful, then an “appears broken” pattern may be determined for the test. In one embodiment, if a predetermined number of indicators associated with the latest test results for a test each indicate that an execution of the test was unsuccessful and if at least one indicator associated with a test result for the test indicates that an execution of the test was successful, then an “appears broken” pattern may be determined for the test. In other embodiments, an “appears broken” pattern may be determined based at least in part on a proportion of test results having an indicator indicating that a test was unsuccessful and a total number of test results for the test. For example, the percentage of indicators indicating that an execution of a test was unsuccessful may be compared to a threshold percentage to determine a pattern for the test.
In some embodiments, if a pattern cannot be determined or if a pattern does not match another predefined pattern category, then a “flaky,” “inconsistent,” or “undetermined” pattern category may be selected for the test.
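A sketch of one such variant classification, assuming a "latest three results" rule together with a threshold comparison for otherwise-mixed histories; the specific values of latest_n and threshold are assumptions for this example only.

    def classify_with_threshold(indicators, latest_n=3, threshold=0.8):
        # Variant classification that looks at the latest_n most recent indicators
        # and, for mixed histories, compares the overall success percentage to a
        # threshold.
        if "0" not in indicators:
            return "always passes"
        if "1" not in indicators:
            return "always fails"
        latest = indicators[-latest_n:]
        if latest == "1" * latest_n:
            return "appears fixed"
        if latest == "0" * latest_n:
            return "appears broken"
        success = indicators.count("1") / len(indicators)
        return "appears fixed" if success >= threshold else "inconsistent"

    print(classify_with_threshold("0000000000111"))  # appears fixed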
At least a portion of the plurality of test results can be analyzed by determining a pass rate for one or more tests associated with at least a portion of the plurality of test results. For example, a pass rate for a test may be determined based at least in part on the test results for the test having indicators indicating that an execution of the test was successful and the number of test results for the test. In one embodiment, a pass rate is a proportion of the test results for a test having indicators indicating that an execution of the test was successful. In an embodiment, a pass rate is the number of successful executions of a test divided by the total number of executions of the test. For example, if the indicators associated with test results for a test are “11010”, then a pass rate for the test may be determined to be 3/5 or 60 percent (i.e. three successful indicators divided by five test results).
At least a portion of the plurality of test results may be analyzed by determining an error signature for one or more of the plurality of test results. For example, in one embodiment, an error signature is determined for each of the plurality of test results having an indicator indicating that the test was unsuccessful and having an error message. In an embodiment, each unique error message may be associated with a unique error signature. In another embodiment, each different error message can be associated with a substantially unique error signature. An error signature may be determined based at least in part on a hash of an error message associated with a test result for a test. Various suitable hash functions can include, but are not limited to, MD4, MD5, SHA-1, SHA-2, or another hashing function.
At least a portion of the plurality of test results may be analyzed by determining an error signature frequency for one or more test results and/or tests based at least in part on determined error signatures for test results. An error signature frequency may be determined for a test based at least in part on how often an error signature occurs for the test results of the test. For example, if there are ten test results indicating failure associated with a particular test and four of the test results are associated with an identical error signature, then the error signature frequency for the test may be determined to be 40 percent (i.e. four test results associated with the error signature divided by the number of test results for the test indicating failure). An error signature frequency may be determined based at least in part on error signatures for test results associated with one or more tests. For example, there may be ten test results for a first test. In this example, four of the test results indicate failure associated with a first error signature, three of the test results indicate failure associated with a second error signature, and three of the test results indicate that the test was successful. Furthermore, in this example, there may be five test results for a second test and three of the test results may be associated with the first error signature, one of the test results may be associated with a third error signature, and one of the test results may indicate that the test was successful. In this embodiment, an error signature frequency for the first error signature may be determined to be 63.6 percent (i.e. seven test results associated with the first error signature divided by eleven total test results indicating failure). As another example, in this embodiment, an error signature frequency for the second error signature can be determined to be 27.3 percent (i.e. three test results associated with the second error signature divided by eleven total test results indicating failure) and an error signature frequency for the third error signature can be determined to be 9.1 percent (i.e. one test result associated with the third error signature divided by eleven total test results indicating failure).
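A brief sketch of the cross-test variant described above, in which the denominator is the number of failing results across the analyzed tests; the placeholder signature names are assumptions for the example.

    from collections import Counter

    def frequencies_among_failures(failing_signatures):
        # Proportion of all failing test results (across one or more tests)
        # associated with each error signature.
        counts = Counter(failing_signatures)
        total = len(failing_signatures)
        return {sig: count / total for sig, count in counts.items()}

    # Eleven failing results across the two tests in the example above.
    freqs = frequencies_among_failures(["sig-1"] * 7 + ["sig-2"] * 3 + ["sig-3"])
    print(freqs)  # sig-1: ~0.636, sig-2: ~0.273, sig-3: ~0.091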
Referring now to
In the process 300 shown in
In the process 300 shown in
In the process 300 shown in
In the process 300 shown in
In another embodiment, an error signature frequency is based on a number of the same error signatures for a test and the total number of test results. For example, an error signature frequency associated with the error signature “5c824112b8fcbe348516f150729645ab” for the first test may be determined to be 2/4 or 50 percent because two of the four test results for the first test are associated with the error signature “5c824112b8fcbe348516f150729645ab”. Likewise, an error signature frequency associated with the error signature “ab996e811626bec3a94c94f660905ae1” for the first test may be determined to be 1/4 or 25 percent because one of the four test results for the first test is associated with the error signature “ab996e811626bec3a94c94f660905ae1”.
Referring now to
In the process 400 shown in
In the process 400 shown in
In the process 400 shown in
In the process 400 shown in
Referring back to
A prioritized report based at least in part on analyzed test results may be generated in any number of ways so long as at least a portion of the generated report provides an indication of a priority for at least a portion of the report. For example, a generated report may provide an indication of a priority for one or more test results. A generated report can provide an indication of a priority for one or more tests. In an embodiment, a generated report provides an indication of a priority for one or more pass rates associated with one or more tests. In another embodiment, a generated report provides an indication of a priority for one or more error signatures or error messages, or both. In some embodiments, a generated report can provide an indication of a priority for one or more error signature frequencies.
A generated prioritized report may be in any number of electronic formats. For example, a prioritized report may be a text file, an email, a spreadsheet, a word processing document, a presentation, web page, HTML file, other document, or any other suitable format. In some embodiments, one or more prioritized reports may be dynamically generated. For example, a prioritized report may be generated that lists only the highest priority tests based at least in part on the analyzed test results. In one embodiment, analyzed information is stored in one or more databases. In such an embodiment, tests, test results, and/or analyzed information may be accessible in real-time or on demand. For example, a user interface may be provided that enables a user to generate a prioritized report based on one or more parameters selected by the user such as a time period, priority level, test name, test results, failure pattern, error message, error signature, error signature frequencies, other parameters, etc.
A prioritized report may provide an indication of priority in various ways. Priority may be indicated based on a color of the text associated with a portion of a generated report. For example, in one embodiment, a portion of a report that has red text indicates a high priority. Priority may be indicated based on a color of a background associated with a portion of a generated report. For example, if a portion of a report has a yellow background, then a medium priority may be indicated according to one embodiment. Priority may be indicated based on an ordering of information contained in at least a portion of a generated report. For example, in an embodiment, tests having a failure pattern “always fails” are listed first because they have a higher priority than other tests, such as tests having a failure pattern of “always passes”. In another embodiment, tests having a pass rate classified as low (or a failure rate classified as high) are listed first in a report because these tests have a higher priority than other tests, such as tests having a 100 percent pass rate.
In one embodiment, a prioritized report provides an indication of priority by ordering information in the prioritized report. For example, in an embodiment, error messages are ordered based on a maximum error signature frequency associated with one or more tests and/or an error message. In this embodiment, if a first maximum error signature frequency is 65 percent and a second maximum error signature frequency is 35 percent for a test, then the error message associated with the first maximum error signature frequency may be listed first and then the error message associated with the second maximum error signature frequency may be listed in the prioritized report. In another embodiment, a particular number of error signatures for a particular test are used. Thus, if a test is associated with three error signature frequencies, then in one embodiment only the largest error signature frequency is included in the prioritized report for that test. In another embodiment, each error signature frequency associated with a test above a threshold error signature frequency can be included in the prioritized report.
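A minimal sketch of ordering a test's error messages by their associated error signature frequencies, highest first; the frequency values mirror the 65 percent and 35 percent example above, and the error message texts are reused from earlier examples in this description.

    def order_error_messages(entries):
        # List a test's error messages from the highest associated error
        # signature frequency to the lowest.
        return sorted(entries, key=lambda e: e["frequency"], reverse=True)

    entries = [
        {"message": "Unable to Connect to User Database", "frequency": 0.35},
        {"message": "Unable to Establish a Connection to the Product Database",
         "frequency": 0.65},
    ]
    for entry in order_error_messages(entries):
        print("{:.0%}  {}".format(entry["frequency"], entry["message"]))
    # 65%  Unable to Establish a Connection to the Product Database
    # 35%  Unable to Connect to User Database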
In embodiments, priority may be based on a threshold value. For example, in one report, tests associated with a pass rate of less than 90 percent may be considered “high priority” and tests associated with a pass rate between 90 percent and 95 percent may be considered “medium priority”. In an embodiment, priority is based on error signature frequency. For example, if the error signature frequency is above a threshold value, then a prioritized report can indicate that an error message is considered “high priority”. In other embodiments, priority may be indicated by a font type, font size, or other information that provides a user an indication of priority. In some embodiments, a legend or other guide may be provided in a generated report that a user can view to determine priority.
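The threshold-based classification described above might be implemented as in the following sketch; treating everything above 95 percent as "low priority" is an assumption added to complete the example.

    def priority_for_pass_rate(pass_rate):
        # Map a pass rate to a priority label using the thresholds in the
        # example above.
        if pass_rate < 0.90:
            return "high priority"
        if pass_rate <= 0.95:
            return "medium priority"
        return "low priority"

    print(priority_for_pass_rate(0.85))  # high priority
    print(priority_for_pass_rate(0.93))  # medium priority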
In some embodiments, priority may be based on, or a priority report may otherwise include, information received from additional data sources. For example, historical information for one or more tests, one or more test results, one or more error messages, one or more error signatures, and/or one or more potential solutions may be included in a priority report. Such information may be available from any number of data sources, such as one or more databases. In one embodiment, historical information associated with one or more tests may be included in a prioritized report. In an embodiment, diagnostic information that is determined or selected based at least in part on analyzed information may be included in a prioritized report. For example, if historical data or other information indicates a potential solution to a particular error message, then the potential solution and/or additional information about the potential solution may be included in a prioritized report. Numerous variations are within the scope of this disclosure.
Referring now to
In the embodiment shown in
Next, in process 500, the tests in each failure pattern are organized by their pass rates 520. For example, each of the tests in the highest priority failure pattern may be arranged in ascending order. Thus, in this embodiment, the test associated with the lowest pass rate that has the highest priority failure pattern may be listed first. Next, the test having the second lowest pass rate that also has the highest priority failure pattern may be listed. Thus, the tests in one or more failure patterns may be arranged by their pass rates.
In one embodiment, a bucketing approach may be used to reduce noise in the sorting order. A bucketing approach may be accomplished in numerous ways. In one embodiment, one or more of the pass rates are rounded to a particular decimal place. Thus, if a pass rate is 73.89 percent, then the pass rate may be rounded up to 73.9. In another embodiment, the pass rate may be rounded down to 73.8. In some embodiments, the pass rate may be rounded to the nearest percent. Thus, if a pass rate is 34.84 percent, then the pass rate may be rounded up to 35 percent. In another embodiment, the pass rate may be rounded down to 34 percent. In one embodiment, one or more pass rates are placed into one or more categories that can be used to arrange pass rates. For example, in one embodiment, pass rates can be placed into categories that are in 5 percent increments. In another embodiment, there may be ten pass rate categories where each category represents a ten percent range of pass rates. Thus, one category can represent pass rates from 0-10 percent, a second category can represent pass rates from 10-20 percent, a third category can represent pass rates from 20-30 percent, etc. Numerous other embodiments are within the scope of this disclosure.
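A short sketch of the rounding and bucketing variants described above; the helper name and the choice to return the bucket as a lower/upper percentage pair are assumptions.

    def bucket_pass_rate(pass_rate_percent, bucket_size=10):
        # Place a pass rate (expressed in percent) into a bucket_size-percent-wide
        # category, returning the lower and upper bounds of that category.
        lower = int(pass_rate_percent // bucket_size) * bucket_size
        return (lower, min(lower + bucket_size, 100))

    print(round(73.89, 1))             # 73.9, the round-to-one-decimal-place variant
    print(bucket_pass_rate(34.84))     # (30, 40), i.e. the 30-40 percent category
    print(bucket_pass_rate(73.89, 5))  # (70, 75), with five percent increments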
Then, in process 500, under each test a list of the error messages may be included in the prioritized report 530. The error messages may be ordered based on an error signature frequency associated with the test and/or error message. For example, if a first error signature frequency is 75 percent and a second error signature frequency is 20 percent for a test, then the error message associated with the first error signature frequency may be listed first and then the error message associated with the second error signature frequency may be listed in the prioritized report. In one embodiment, the error messages are ordered based on a maximum error signature frequency associated with one or more tests and/or an error message. For example, in this embodiment, if a first maximum error signature frequency is 70 percent and a second maximum error signature frequency is 25 percent for a test, then the error message associated with the first maximum error signature frequency may be listed first and then the error message associated with the second maximum error signature frequency may be listed in the prioritized report. In another embodiment, a particular number of error signatures for a particular test are used. Thus, if a test is associated with three error signature frequencies, then in one embodiment only the largest error signature frequency is included in the prioritized report for that test. In another embodiment, each error signature frequency associated with a test above a threshold error signature frequency can be included in the prioritized report.
Referring now to
In the embodiment shown in
Next, in process 600, tests are listed under each error message based on their failure pattern 620. Thus, referring to the example above, if a test result for a first test is associated with the second error message and the failure pattern for the first test indicated that the test is always failing, then the first test may be listed before a second test that is associated with another test result having the second error message where the second test's failure pattern indicates that the test appears fixed.
Next, in process 600, tests having the same failure pattern are ordered based at least in part on their pass rate 630. Thus, in one embodiment, ten tests may have test results associated with a first error message. In this embodiment, seven of the tests may be associated with an “always fails” failure pattern and three of the tests may be associated with an “appears broken” failure pattern. In this embodiment, the seven tests having the “always fails” failure pattern may be organized from the lowest pass rate to the highest pass rate and the three tests having the “appears broken” failure pattern may be arranged from the lowest pass rate to the highest pass rate.
Referring now to
In the embodiment shown in
Referring now to
In the embodiment shown in
In the embodiment shown in
Referring back to
One or more generated reports may be displayed on any number of devices such as a tablet, mobile phone, desktop, laptop, or another suitable computing device. One or more generated reports may be sent via any suitable communication networks. For example, one or more generated reports may be sent via email, fax, text message, or another suitable notification.
The illustrative environment includes at least one application server 908 and a data store 910. It should be understood that there can be several application servers, layers, or other elements, processes, or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing, and retrieving data, which may include any combination and number of data servers, databases, data storage devices, and data storage media, in any standard, distributed, or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store, and is able to generate content such as text, graphics, audio, and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML, or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 902 and the application server 908, can be handled by the Web server. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
The data store 910 can include several separate data tables, databases, or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 912 and user information 916, which can be used to serve content for the production side. The data store also is shown to include a mechanism for storing log data 914, which can be used for reporting, analysis, or other such purposes. It should be understood that there can be many other aspects that may need to be stored in the data store, such as page image information and access rights information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 910. The data store 910 is operable, through logic associated therewith, to receive instructions from the application server 908 and obtain, update, or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user, and can access the catalog detail information to obtain information about items of that type. The information then can be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 902. Information for a particular item of interest can be viewed in a dedicated page or window of the browser.
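A purely illustrative sketch of the request flow just described is given below; the store interfaces (user_store, catalog_store) and field names are hypothetical and not part of the disclosed system.

```python
# Hypothetical sketch of the search-request flow; interfaces and names are assumptions.

def handle_search_request(user_id, item_type, user_store, catalog_store):
    """Verify the requesting user against stored user information, then look up
    catalog items of the requested type and return data for a results listing."""
    user = user_store.get(user_id)
    if user is None:
        raise PermissionError("unknown user")
    items = catalog_store.find_by_type(item_type)
    return {"user": user_id, "results": items}
```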
Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server, and typically will include a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available, and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in the figures.
The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and other devices capable of communicating via a network.
Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS, and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, and any combination thereof.
In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM®.
The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.
Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed.
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.
Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Preferred embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.