Provided are a system and a method capable of automatically testing a website or application for an error in a user interface without human intervention. As an example, a system for testing an error in a user interface of an application or website is disclosed that includes a testable action recognizer that obtains a screenshot of a screen of the application or website and manages a layout and a test action based on user interface (UI) configuration and text recognition information from the screenshot, and a test action generator that receives the layout, selects a test scenario corresponding to the layout, and performs a test action according to the test scenario, and in which the testable action recognizer manages whether or not a test has progressed for each screen layout according to the test scenario.
10. A method for testing an error in a user interface of an application or website, the method comprising:
fetching, by a test action generator, a test scenario for an application or website;
recognizing, by a testable action recognizer, a layout through a screenshot of the application or website;
selecting, by the test action generator, a test action based on the test scenario and the layout;
generating, by the test action generator, test data;
performing, by the test action generator, the test action on a screen of the application or website;
managing, by a test action manager, whether or not the test is progressed through a table;
storing, by a test operation manager, the corresponding layout, test operation, and intent in the table together;
preventing, by the test operation manager, a duplicate test even when there is a change in design or a value of the screen based on derived test operation information; and
recognizing, by the testable action recognizer, a layout by combining a user interface (ui) configuration and text recognition information of the screen and hierarchical structure information provided by an operating system,
wherein the testable action recognizer includes a ui element detector, the method further comprising:
learning and storing, by the ui element detector, a model parameter recognizing a ui element object through machine learning; and
detecting, by the ui element detector, the ui element object based on the model parameter.
1. A system for testing an error in a user interface (ui) of an application or website, the system comprising:
a testable action recognizer configured to:
obtain a screenshot of a screen of the application or website,
recognize a layout based on ui configuration and text recognition information from the screenshot, and
convert the layout into a test action; and
a test action generator configured to:
receive the test action,
select a test scenario corresponding to an intent of the test action, and
perform the test action according to the test scenario,
wherein the testable action recognizer is further configured to:
manage whether or not a test is progressed for each screen layout according to the test scenario,
responsive to detecting a first screen has been tested, record the first screen as tested in a table to prevent repetitive tests from being performed, wherein the table further records contents of the test action performed on the first screen, and
responsive to detecting a second screen has not been tested, record the second screen in the table for future testing,
recognize the layout by combining the ui configuration and text recognition information of the screen and hierarchical structure information provided by an operating system,
wherein the testable action recognizer includes a ui element detector configured to:
learn and store a model parameter recognizing a ui element object through machine learning, and detect the ui element object based on the model parameter,
wherein the system further comprises a test operation manager configured to:
store the corresponding layout, test operation, and intent in the table together; and
prevent a duplicate test even when there is a change in design or a value of the screen based on derived test operation information.
2. The system of
3. The system of
4. The system of
5. The system of
6. The system of
7. The system of
8. The system of
9. The system of
test a first intent on the first screen to reach a third screen via a first path; and
responsive to detecting a second path between the first screen and the third screen, test the first intent on the first screen to reach the third screen via the second path.
11. The method of
separating and storing, by the intent/object detector, a text obtained through a user interface (ui) configuration or text recognition information into an intent and an object.
12. The method of
13. The method of
fetching, by the intent/object matcher, information of a virtual person for which data in a matching form is set from the test persona repository according to the text obtained through the ui configuration or text recognition information.
14. The method of
storing, by the test persona repository, at least one virtual person having email, mobile phone number, and credit card information available for a real test.
15. The method of
verifying a result of the user interface test action by accessing an external service using test persona account information, and detecting an error.
16. The method of
testing a first intent on a first screen to reach a third screen via a first path; and
responsive to detecting a second path between the first screen and the third screen, testing the first intent on the first screen to reach the third screen via the second path.
This application claims priority to Korean Patent Application No. 10-2020-0184995 filed on Dec. 28, 2020, the disclosure of which is hereby incorporated in its entirety by reference herein.
Embodiments relate to a system and method capable of automatically testing a website or an application to find an error in a user interface without human intervention.
Before launching a new service or updating a new function, information technology (IT) companies that provide a service platform test functions in the platform to ensure all the functions work correctly.
Previously, a test was done by people with a development version of a platform according to various scenarios.
As competition among IT companies intensifies, the types of functions within a platform have diversified, while the time allowed for a launch or an update has needed to be shortened. Thus, testing a platform has come to require significantly more manpower, time, and cost.
Accordingly, IT companies have outsourced manual testing of their platforms or developed separate test programs to conduct automated testing.
However, human testing has a problem in that a deviation can occur in accuracy and time according to the skill level of the tester. In addition, as the function and application environment of a platform become complex, the cost increases exponentially.
In a method of developing a test program, the test program needs to be modified every time a change occurs in the platform, so the development period can increase. There are also many cases where a non-error is reported as an error due to a wrong implementation of the test program. Accordingly, such a method is not practical and can be problematic.
An aspect of the present invention provides a system and a method capable of automatically testing an application or a website to find an error in a user interface without human intervention.
According to at least one embodiment, a system for user interface autonomous testing, which finds an error in a user interface of an application or a website, includes four components: a testable action recognizer, a test action generator, an error recognizer, and an external service integrator. The testable action recognizer obtains a screenshot of an application or a website, recognizes a layout from the screenshot based on user interface (UI) configuration and text recognition information, and converts the layout into a test action. The test action generator receives the test action, selects a test scenario corresponding to the test action, and performs the test action according to the test scenario. The error recognizer recognizes an error by checking a test result. The external service integrator verifies whether a test action for an external system is accurately reflected in the external system.
In the system for user interface autonomous testing, the testable action recognizer may include a test action manager. In the layout, UI element objects such as buttons and text input boxes (EditText) may be grouped into a common part (Header, Footer), which is a repetitive part in an application or a website, and a variable part (Body). The test action manager may store a table listing test actions with test status information indicating whether or not the test is in progress.
In the system for user interface autonomous testing, the test action manager may store a layout of a corresponding screen and an available test action in the table.
In the system for user interface autonomous testing, the testable action recognizer may recognize the layout by combining UI element objects, text recognition information and hierarchical structure information (UI Hierarchy) of UI elements on a screen sent from an operating system.
In the system for user interface autonomous testing, the testable action recognizer may further include a UI element detector that learns in advance, through machine learning, model parameters for recognizing the UI element objects, stores the model parameters, and detects the UI element objects based on the learned parameters.
In the system for the UI autonomous testing, the test action generator may further include an intent/entity detector that chooses intents and entities on the screen based on the UI element objects or text information and stores the chosen intents and entities.
In the system for the UI autonomous testing, the test action generator may further include a test persona repository which stores information of a virtual person set in advance. In the system for the UI autonomous testing, the test action generator may further include an intent/entity matcher that fetches data of a matching type from the test persona repository according to a context acquired from the UI element objects or text recognition information.
In the system for the UI autonomous testing, the test persona repository may store at least one virtual person with an email, a mobile phone number, and credit card information available for a real test.
In the system for the UI autonomous testing, the error recognizer may detect whether or not an error has occurred in a system log or a message on the screen.
In the system for the UI autonomous testing, the external service integrator may detect an error by verifying correct performance of functions, such as sending a mail or entering a social media comment, which perform an action in the system but whose result is reflected in an external system.
According to another embodiment, a method for user interface autonomous testing in a user interface (UI) of an application or a website includes the following steps. First, a test action generator fetches a test scenario previously set on an intent basis according to a classification of the application or the website. Second, a testable action recognizer recognizes a layout from a screenshot of the application or the website. Third, the test action generator detects a list of testable actions based on layout information and selects a test action based on the test scenario and the list of testable actions. Fourth, the test action generator generates test data. Finally, the test action generator performs the test action on the screen of the application or the website.
In the method for the UI autonomous testing, the testable action recognizer may include a test action manager, and the layout is grouped into a common part (Header, Footer), which is a repetitive part in an application or a website, and a variable part (Body). The test action manager may store a table listing test actions with test status information indicating whether or not the test is in progress.
In the method for the UI autonomous testing, the test action manager may store a layout of a corresponding screen and an available test action in the table.
In the method for the UI autonomous testing, the testable action recognizer may recognize the layout by combining UI element objects (Elements or Widgets), text recognition information of the screen and hierarchical structure information (UI Hierarchy) of UI elements on a screen sent from an operating system.
In the method for the UI autonomous testing, the testable action recognizer may further include a UI element detector that learns in advance, through machine learning, model parameters for recognizing the UI element objects, stores the model parameters, and detects the UI element objects based on the learned parameters.
In the method for the UI autonomous testing, the test action generator may further include an intent/entity detector that chooses intents and entities on the screen based on the UI element objects or text information.
In the method for the UI autonomous testing, the test action generator may further include a test persona repository which stores information of a virtual person set in advance.
In the method for UI autonomous testing, the test action generator may further include an intent/entity matcher that fetches data of a matching type from the test persona repository according to a context acquired from the UI element objects or text recognition information.
In the method for the UI autonomous testing, the test persona repository may store at least one virtual person with an email, a mobile phone number, and credit card information available for a real test.
The method for the UI autonomous testing may further include an external service integrator that accesses an external system to check whether the test action has actually occurred when the test action affects the external system, such as sending an email.
The system and method for UI autonomous testing according to the present invention can recognize the UI element objects and text that need to be tested on a screenshot image, recognize a layout of the screen by combining them with UI layer information provided by the operating system, and make a list of available test actions. In addition, a test is performed by recognizing the intent on the screen in real time and choosing a matching test from the test scenarios based on intents set in advance according to the classification of the application, such as shopping, news, messenger, or website. When specific data such as text or numbers is required to be entered, the main functions of the application or the website can be automatically tested without human intervention by allowing the specific data to be entered automatically using preset test persona information. In addition, even if an appropriate scenario is not set, the intent to be tested and the object information to be entered can be recognized on the screen and information of the matching test persona can be entered. Thus, this search test is more effective than a normal random test. Since the screen is managed as a test action list (table) abstracted into intents and entities, slightly different screens with small design changes or content changes are recognized as the same screen, so duplicate tests are prevented, and thus more functions can be tested in the same time period than with a random test.
The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain principles of the present disclosure. In the drawings:
Hereinafter, preferred embodiments will be described in detail with reference to the accompanying drawings.
The present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the present invention to those skilled in the art.
Also, in the drawing figures, the dimensions of layers and regions may be exaggerated for clarity of illustration. Like reference numerals refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. In this specification, it will also be understood that when a member A is referred to as being connected to a member B, the member A can be directly connected to the member B or indirectly connected to the member B with a member C therebetween. The terms used herein are for illustrative purposes of the present invention only and should not be construed to limit the meaning or the scope of the present invention.
As used in this specification, a singular form may, unless definitely indicating a particular case in terms of the context, include a plural form. Also, the expressions “comprise” and/or “comprising” used in this specification neither define the mentioned shapes, numbers, steps, operations, members, elements, and/or groups of these, nor exclude the presence or addition of one or more other different shapes, numbers, steps, operations, members, elements, and/or groups of these, or addition of these. The term “and/or” used herein includes any and all combinations of one or more of the associated listed items.
As used herein, terms such as “first,” “second,” etc. are used to describe various members, components, regions, layers, and/or portions. However, it is obvious that the members, components, regions, layers, and/or portions should not be defined by these terms. The terms do not mean a particular order, up and down, or superiority, and are used only for distinguishing one member, component, region, layer, or portion from another member, component, region, layer, or portion. Thus, a first member, component, region, layer, or portion which will be described may also refer to a second member, component, region, layer, or portion, without departing from the teaching of the present invention.
Spatially relative terms, such as “below”, “beneath”, “lower”, “above”, “upper” and the like, are used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. These spatially relative terms are intended for easy comprehension of the present invention according to various process states or usage states of the present invention, and thus, the present invention is not limited thereto. For example, if an element illustrated in the drawings is turned over, the element described as “beneath” or “below” may change into “above” or “upper”. Thus, the term “below” may encompass both “above” and “below”.
Preferred embodiments of the present invention will be described in detail with reference to the drawings to the extent that a person of ordinary skill in the art can easily implement the present invention.
Hereinafter, a system for user interface autonomous testing 10 according to an embodiment of the present invention will be described.
First, referring to
The testable action recognizer 11 can determine testable actions for an application or a website to be tested. To this end, the testable action recognizer 11 determines an available test action on the current screen by combining user interface (UI) element objects, text, and UI layer information for the current screen. Specifically, the testable action recognizer 11 can include a UI element detector 111, a text recognizer 112, a UI hierarchy information adapter 113, a UI layout recognizer 114, and a test action manager 115.
The UI element detector 111 can perform object detection on a screenshot of a currently displayed screen to recognize UI elements to be tested. In particular, the UI element detector 111 can perform machine learning, particularly deep learning, and its accuracy can increase with learning. To this end, the UI element detector 111 can calculate and store model parameters for object detection by learning previously collected screenshot images of an application or a website, and recognize types (classes) and positions (bounding boxes) of the UI element objects on a screen under test based on the learned model parameters.
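As an illustration of how such a detector could be implemented, the following sketch loads previously learned model parameters into an off-the-shelf object detection model and returns the class and bounding box of each detected UI element. The Faster R-CNN backbone, label map, and weight file path are assumptions made for this example, not the patented implementation.

```python
# Minimal sketch of a UI element detector; model choice, label map, and
# weight path are illustrative assumptions.
from dataclasses import dataclass

import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Hypothetical label map for UI element classes.
UI_CLASSES = {1: "Button", 2: "EditText", 3: "Radio", 4: "Checkbox", 5: "Image"}

@dataclass
class UIElement:
    cls: str      # type (class) of the UI element
    box: tuple    # bounding box (x1, y1, x2, y2)
    score: float  # detection confidence

class UIElementDetector:
    def __init__(self, weights_path: str = "ui_detector.pt"):
        # Load previously learned model parameters (assumed Faster R-CNN backbone).
        self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
            weights=None, num_classes=len(UI_CLASSES) + 1
        )
        self.model.load_state_dict(torch.load(weights_path, map_location="cpu"))
        self.model.eval()

    def detect(self, screenshot_path: str, threshold: float = 0.5):
        image = to_tensor(Image.open(screenshot_path).convert("RGB"))
        with torch.no_grad():
            prediction = self.model([image])[0]
        elements = []
        for label, box, score in zip(
            prediction["labels"], prediction["boxes"], prediction["scores"]
        ):
            if score >= threshold:
                elements.append(UIElement(UI_CLASSES.get(int(label), "Unknown"),
                                          tuple(box.tolist()), float(score)))
        return elements
```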
The text recognizer 112 can recognize and extract text from a screen shot of the current screen. In particular, the text recognizer 112 can use an optical character recognition (OCR) technique for text extraction. Through the OCR, the text recognizer 112 can extract all texts on the current screen, and as will be described later, the testable action recognizer 11 can use information of this text to infer values and types of data to be input to a corresponding area.
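A minimal sketch of such a text recognizer is shown below, assuming the Tesseract OCR engine via the pytesseract library; the library choice and the confidence threshold are illustrative assumptions rather than part of the described system.

```python
# Sketch of OCR-based text extraction with bounding boxes (assumed pytesseract).
import pytesseract
from PIL import Image

def recognize_text(screenshot_path: str, min_confidence: float = 60.0):
    """Return recognized text fragments with their bounding boxes."""
    image = Image.open(screenshot_path)
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    fragments = []
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) >= min_confidence:
            box = (data["left"][i], data["top"][i],
                   data["left"][i] + data["width"][i],
                   data["top"][i] + data["height"][i])
            fragments.append({"text": word, "box": box})
    return fragments
```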
The UI hierarchy information adapter 113 can extract layout information of a corresponding screen from UI layer information provided by an operating system. This UI layer information is provided by the operating system by collecting identification (ID) values or text values, which are given to UI elements by an application or website developer in order to control each UI element while programming the UI, and descriptive data entered for the visually impaired; it comes in a different format depending on the operating system (e.g., Android®, iOS®) and development platform (Web, App). In the present invention, the UI hierarchy information adapter 113 converts UI layer information, which comes in a different format for each development platform, into an internally standardized format. Through this standardization, the system for UI autonomous testing according to the present invention can be applied to a third operating system or development platform only by adding an adapter, without any changes to the remaining components. However, this information was entered by the developer of the application or the website, and the collection of the information is performed by the operating system in the background, so information omissions or inconsistencies often occur. For example, if the developer has not set descriptive data or text for an image, blank data will come in, and if the time when the screenshot was taken and the time when the operating system provided the UI layer information are different, the two pieces of information will be inconsistent.
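The following sketch illustrates one way such an adapter could normalize an Android UI Automator dump into an internal node format; the internal schema is an assumption made for this example, and a similar adapter would be written for each other platform.

```python
# Sketch of a hierarchy adapter: Android UI Automator XML -> standardized nodes.
import xml.etree.ElementTree as ET

def parse_bounds(bounds: str):
    """Parse UI Automator bounds like '[0,0][1080,192]' into (x1, y1, x2, y2)."""
    left, right = bounds.strip("[]").split("][")
    x1, y1 = map(int, left.split(","))
    x2, y2 = map(int, right.split(","))
    return (x1, y1, x2, y2)

def android_hierarchy_to_nodes(ui_dump_xml: str):
    """Flatten an Android UI Automator dump into standardized node dicts."""
    nodes = []
    for node in ET.fromstring(ui_dump_xml).iter("node"):
        nodes.append({
            "id": node.get("resource-id", ""),
            # Fall back to the accessibility description when text is empty.
            "text": node.get("text", "") or node.get("content-desc", ""),
            "class": node.get("class", ""),
            "bounds": parse_bounds(node.get("bounds", "[0,0][0,0]")),
        })
    return nodes
```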
The UI layout recognizer 114 can determine a layout of the current screen by combining the UI element objects detected by the UI element detector 111, the text extracted by the text recognizer 112, and the hierarchy information produced by the UI hierarchy information adapter 113. In addition, the UI layout recognizer 114 suppresses duplicate tests of a common part by classifying the layout of the corresponding screen into a common part (Header, Footer) area, shared by screens of the same application or website, and a variable part (Body). Furthermore, if a single value in the variable part consists of multiple UI element objects, for example a birthday entered as a month, a day, and a year, the three date values are combined into one group so that the birthday input test action is guaranteed to be performed on the three UI element objects collectively.
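A simplified sketch of the common/variable split is shown below; the signature matching and the position heuristic are illustrative assumptions, not the actual recognition logic.

```python
# Sketch: split detected UI elements into Header, Body, and Footer regions.
def recognize_layout(elements, screen_height, common_signatures):
    """`common_signatures` is assumed to contain (class, box) pairs already
    seen on other screens of the same application, so repeated elements can
    be treated as the common part."""
    layout = {"Header": [], "Body": [], "Footer": []}
    for element in elements:
        _, _, _, y2 = element["box"]
        signature = (element["class"], element["box"])
        if signature in common_signatures:
            # Repeated across screens: classify as Header or Footer by position.
            region = "Header" if y2 < screen_height * 0.15 else "Footer"
        else:
            region = "Body"
        layout[region].append(element)
    return layout
```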
Based on the type (Class), text (Value), and location (Bounding Box) information of each UI element included in the layout input from the UI layout recognizer 114, the test action manager 115 can organize test actions into a list in the form of a table, and record and manage what kind of test data is generated and performed by a test action generator 13 to be described later. The test action manager abstracts and manages the screen based on the test actions, thereby preventing duplicate tests of the same test action on the screen. For example, in the case of an initial screen of a shopping mall, as products are changed, the text values of UI elements change each time the initial screen is accessed. Thus, the same screen of a shopping mall can be recognized as a different screen, and a test of selecting (e.g., clicking) a product at the same location can be repeated infinitely. After configuring the test action table for each screen, the test action manager compares the previously recorded test action table with the current test action table in consideration of the type of each UI element, to prevent duplicate test action tables for the same screen from being created and managed. For example, when UI elements of the same type are arranged in the form of a list or a grid, it is determined whether the screens are the same by comparing only the arrangement form, excluding the text information of each UI element. Due to the test action manager 115, the system for user interface autonomous testing 10 according to an embodiment of the present invention can perform a test without omitting or repeating test actions.
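The following sketch illustrates the screen abstraction idea: a test action table is reduced to a signature built from element types and rounded positions, ignoring volatile text, so that slightly changed screens map to the same table. The signature scheme and data structures are assumptions for illustration only.

```python
# Sketch of screen abstraction and duplicate-test prevention.
def screen_signature(test_actions):
    """Abstract a test action table into a comparable screen signature."""
    ordered = sorted(test_actions, key=lambda a: (a["box"][1], a["box"][0]))
    # Keep only element type and coarse position; drop volatile text values.
    return tuple((a["class"], round(a["box"][0], -1), round(a["box"][1], -1))
                 for a in ordered)

class TestActionManager:
    def __init__(self):
        self.tables = {}  # signature -> {"actions": [...], "tested": set of indices}

    def register_screen(self, test_actions):
        """Create a table for a new screen, or reuse it for an equivalent one."""
        signature = screen_signature(test_actions)
        if signature not in self.tables:
            self.tables[signature] = {"actions": test_actions, "tested": set()}
        return signature

    def next_untested(self, signature):
        table = self.tables[signature]
        for index, action in enumerate(table["actions"]):
            if index not in table["tested"]:
                return index, action
        return None

    def mark_tested(self, signature, index):
        self.tables[signature]["tested"].add(index)
```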
The issue detector 12 can detect an error or a performance problem by monitoring the log history, CPU, memory, and network status of a device being tested. For example, when an error indication appears on a screen or a log of an application kill is detected, the issue detector 12 reports information of the screen where the error occurred, the input data used at that time, an error log, and a message.
The test action generator 13 receives the test action list of the corresponding screen from the testable action recognizer 11 described above and generates input data necessary for testing the corresponding screen in an appropriate format. To this end, the test action generator 13 can include an intent/entity recognizer 131, an intent/entity matcher 132, a test persona repository 133, and a test scenario repository 134.
First, the intent/entity recognizer 131 can change recognized texts on a screen into intents and entities. Intent and entity are technical terms generally used when modeling a conversation system such as a chatbot. In general, an intent is a unit of conversation in which words are exchanged, or the intention of a speaker conveyed through the conversation, and an entity is a specific requirement within the intent. For example, in the sentence “Please select a credit card type”, the intent is “Please select” and the entity is “Credit card type”. The intent/entity recognizer 131 can be based on natural language processing (NLP) technology. For example, when there is an option selection UI element (Radio) with ‘card A’, ‘card B’, and ‘card C’ on a screen and the text in the nearest position is ‘select payment method’, the intent/entity recognizer 131 can recognize ‘select credit card’ as the intent and ‘card A’, ‘card B’, and ‘card C’ as the entities.
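A toy sketch of the payment-method example above is given below; a real implementation would rely on an NLP model, so the keyword rule here is only an illustrative stand-in.

```python
# Sketch: derive an intent from the nearest text label and entities from options.
def recognize_intent_entities(label_text: str, option_texts: list):
    """e.g. label 'Select payment method' with options ['card A', 'card B', 'card C']."""
    intent = label_text
    lowered = label_text.lower()
    if "select" in lowered:
        # Keep the object of the selection as part of the intent name.
        intent = ("select " + lowered.replace("select", "").strip()).strip()
    return {"intent": intent, "entities": list(option_texts)}

print(recognize_intent_entities("Select payment method", ["card A", "card B", "card C"]))
# -> {'intent': 'select payment method', 'entities': ['card A', 'card B', 'card C']}
```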
The intent/entity matcher 132 can convert test actions of the corresponding screen into intents and entities and store them in the test action table. By using the intent/entity matcher 132, a screen can be defined based on an abstracted intent instead of a specific terminology used by each application or website. Through this, it is possible to perform testing while recognizing intents and determining whether the defined test scenario and the current test screen match on the intent basis in real time. Thus, it is possible to test applications or websites of the same classification with generalized test scenarios designed based on intents without creating test scenarios for each application or website.
On the other hand, the test action generator 13 can select the intent with the highest priority based on the test action table managed on the intent basis by the intent/entity matcher 132 and choose an action for executing the selected intent. For example, in a scenario for purchasing goods through account transfer, if there are two intents on a single test screen, ‘select a credit card’ and ‘input bank account number’, the test action generator 13 selects the higher-priority ‘input bank account number’ intent and generates data related to this intent.
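A small sketch of this priority rule, under the assumption that each scenario carries an ordered list of intents, could look as follows; the scenario name and the priority ordering are illustrative.

```python
# Sketch of priority-based intent selection for the active scenario.
SCENARIO_PRIORITY = {
    # Ordered from highest to lowest priority for the account-transfer scenario.
    "purchase_by_account_transfer": ["input bank account number", "select a credit card"],
}

def select_intent(scenario: str, intents_on_screen: list):
    """Pick the highest-priority intent of the active scenario found on screen."""
    for intent in SCENARIO_PRIORITY.get(scenario, []):
        if intent in intents_on_screen:
            return intent
    return intents_on_screen[0] if intents_on_screen else None

# With two intents on screen, the bank account input wins for this scenario.
print(select_intent("purchase_by_account_transfer",
                    ["select a credit card", "input bank account number"]))
```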
The test persona repository 133 can set and store information of a virtual person having the same conditions as an ordinary person when the system for UI autonomous testing 10 according to an embodiment of the present invention operates. For example, the test persona repository 133 can virtually generate a 28-year-old person named ‘John Doe’, and set information such as a job, an email, an address, a mobile phone number, credit card information, a chat app, and social media in advance. In particular, an email and a mobile phone number can be used to authenticate a person in a specific application or website, and credit card information is frequently required in a test for purchasing a paid item or a test for online shopping. Therefore, in order to prevent errors due to input data during the test, it is preferable for the test persona repository 133 to set the email, mobile phone number, and/or credit card information to be actually usable when generating and storing information of a virtual person in advance. Most modern applications or websites do not operate alone but are used in conjunction with various external systems. Therefore, in order to test this connection, an external system account should be available, and the result of a test action should be checked by accessing that system. The test persona of the present invention differs from existing methods in that it has both data for testing the corresponding system and information and settings for testing such external system connections.
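As an illustration, a persona record in the repository could be modeled as in the sketch below; the field names and the ‘John Doe’ values are placeholders, not real account data.

```python
# Sketch of a test persona record; all values are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class TestPersona:
    name: str
    age: int
    job: str = ""
    email: str = ""
    mobile_phone: str = ""
    credit_card: dict = field(default_factory=dict)
    social_accounts: dict = field(default_factory=dict)

PERSONAS = [
    TestPersona(
        name="John Doe", age=28, job="office worker",
        email="john.doe@example.com", mobile_phone="010-0000-0000",
        credit_card={"type": "card A", "number": "0000-0000-0000-0000"},
        social_accounts={"facebook": "john.doe.test"},
    )
]

def find_persona_field(field_name: str):
    """Fetch the matching field of the first persona, e.g. 'email'."""
    return getattr(PERSONAS[0], field_name, None)
```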
The test scenario repository 134 can store a scenario with which a test proceeds, that is, a test path, through the system for user interface autonomous testing 10 according to an embodiment of the present invention. The test scenario can be set according to the classification of the application or website. For example, for all online shopping applications, a test scenario can be created with a series of actions (intents) to search for a product from the main home screen, select it, enter the personal information and delivery address, and make a payment via credit card. The classification of the application or website to be tested can be obtained by scraping the site information if the application has already been posted on the app store, or the developer can directly designate it before the start of the test, and it can also be inferred from intent information recognized on the initial screen.
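A generalized, intent-based scenario for the shopping classification could be stored as in the following sketch; the intent names and the dictionary layout are assumptions for illustration.

```python
# Sketch of a scenario repository keyed by application classification.
TEST_SCENARIOS = {
    # One generalized, intent-based scenario per application classification.
    "shopping": [
        "search product",
        "select product",
        "add cart",
        "input delivery address",
        "input personal information",
        "pay by credit card",
    ],
}

def load_scenarios(classification: str):
    """Fetch the generalized intent sequence for an application classification."""
    return TEST_SCENARIOS.get(classification, [])

print(load_scenarios("shopping"))
```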
The external service integrator 14 makes it possible to verify whether an operation was normally performed in an external system when a test action of the system for user interface autonomous testing 10 according to the embodiment of the present invention affects the external system. For example, when there is a button called ‘Post’ for social media (e.g., Facebook®) on the screen, after this test action is performed, it can be checked whether there is a new posting on the initial screen of the social media account owned by the test persona, and an error can be reported when it is not posted.
As described above, the system for UI autonomous testing 10 according to an embodiment of the present invention can grasp the layout and test action by combining UI configuration, text recognition, and UI layer information based on a screenshot of the application or website under test. According to the classification of the application or website, the test is conducted according to the pre-defined, intent-based test scenario, and, when data such as text rather than simple clicks or touches needs to be input, the data can be input using information of the test persona. In addition, each test path and whether or not to proceed with the test can be organized into an abstracted test action table from which the design or content information of the screen has been removed, so that the test can be performed accurately and quickly while minimizing duplicate tests.
Hereinafter, a test method using the system for UI autonomous testing 10 according to an embodiment of the present invention will be described step by step.
First, in the device setup (S1) step, a test environment of the test device can be configured for the operation of the system for user interface autonomous testing 10 according to an embodiment of the present invention. Specifically, actions such as selecting an operating system version of the test device and installing an application that can conflict with the application under test, or that must be installed first, can be performed in advance. For example, if a banking app is to be tested, an antivirus (vaccine) app, which is an essential prerequisite app, is installed first, and the Facebook app is installed and logged in with an account held by the test persona to prepare for normal Facebook-linked testing in the banking app.
In the application installation (S2) step, the application to be tested can be installed on the test device. In this step, an installation file prepared before application distribution can be installed on the test device by receiving the installation file from the developer of the company requesting the test. It is also possible to download and install the application, as general users would, by accessing an app store from a smartphone or a tablet. In the case of a website, an installation file such as an ActiveX control is used to install the programs and data files, such as public certificates, necessary to run the website.
In the loading a generalized test scenario (S3) step, among the test scenarios stored in the test scenario repository 134, one suitable for the application or website can be selected and loaded. As described above, when the application is already posted on the app store, a test scenario suitable for its classification can be easily selected. If necessary, it is possible to set an appropriate test scenario by receiving the details of the application from the developer, and it is also possible to fetch the scenario after inferring the classification from the intent of the initial screen after running the application.
In the application screenshot recognition step (S4), starting from the home screen initially displayed when each application or website is executed, a screen shot of the screen can be acquired and recognized step by step according to the test scenario. In this step, the screenshot can be transferred to the testable action recognizer 11 together with UI layer information of the corresponding screen.
The operation selection (S5) step will be described with reference to
In the test data generation (S6) step, the test action generator 13 can fetch information suitable for the test action from the test persona repository 133. As described above, due to the nature of the application or website, there are cases in which a specific operating system, an actually working email, a mobile phone number, or credit card information can be requested. Accordingly, the test action generator 13 can generate input data according to the test scenario by fetching the corresponding information from the repository.
In the test action execution (S7) step, the test action generator 13 can transfer operations such as inputting data or clicking on the screen to the operating system and have them performed.
Finally, in the test action result verification (S8) step, before or after the test action is performed, a message or system log of the application or website is verified to detect whether an error has occurred, and a report to be transferred to the developer is stored. When the result of the test operation needs to be checked in an external system, such as when sending an email or entering a social media comment, it can be verified by accessing the external system through the external service integrator 14 as described above.
On the other hand,
In this case, as illustrated in
Using this, the test action generator 13 can query and fetch information on a card that can be actually input as illustrated in
Hereinafter, the action of the test action manager in the system for user interface autonomous testing according to an embodiment of the present invention will be described in detail.
In this way, the test action manager 115 can organize the test progress in the form of a table as illustrated in
Meanwhile,
Meanwhile,
Meanwhile, if there is no scenario being tested among the untested intents, it can be checked whether there is an untested scenario starting with the corresponding intent (S74). If there is such an untested scenario (Yes), the scenario that starts with that intent is started. For example, if an intent such as “Add Cart” is found on the screen while the payment scenario has not yet been executed, “Add Cart” is selected as the next test action and it is recorded that the payment scenario test has started. If there is no appropriate scenario (No), an intent can instead be selected based on its location on the screen.
On the other hand, if there is no untested intent on the screen, it is checked whether there is another test path that can be performed with the intents on the screen (S75), and if there is another test path (Yes), the corresponding intent can be selected again. On the other hand, if there is no other test path (No), it is checked whether the current screen is the home screen (S76), and if it is not the home screen (No), the process is restarted and returns to the home screen to proceed with the test. If it is the home screen (Yes), the test for all sub-paths has been completed, and the test ends.
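Condensing the selection flow above (including S74 through S76) into code, one possible sketch is shown below; the argument names and data structures are assumptions made for this example.

```python
# Sketch of the next-test-action decision flow (S74-S76).
def choose_next_action(screen_intents, tested_intents, running_scenario,
                       untested_scenarios, has_other_path, is_home_screen):
    untested = [i for i in screen_intents if i not in tested_intents]
    # Prefer an untested intent that belongs to the scenario currently running.
    if running_scenario:
        for intent in untested:
            if intent in running_scenario:
                return ("test", intent)
    # S74: otherwise look for an untested scenario starting with an intent on this screen.
    for scenario in untested_scenarios:
        if scenario and scenario[0] in untested:
            return ("start_scenario", scenario[0])
    # No matching scenario: fall back to any untested intent in screen-position order.
    if untested:
        return ("test", untested[0])
    # S75: all intents tested here; try another test path that uses these intents.
    if has_other_path:
        return ("retest_other_path", screen_intents[0] if screen_intents else None)
    # S76: no other path; restart toward the home screen, or end if already there.
    if not is_home_screen:
        return ("go_home", None)
    return ("end", None)
```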
Hereinafter, an example of data stored in a test persona repository and a method of efficiently expanding persona data will be described, in the system for user interface autonomous testing according to an embodiment of the present invention.
As illustrated in
As illustrated in
As illustrated in
Hereinafter, the repository table of the intent/entity recognizer will be described as an example.
As illustrated in
Hereinafter, a process in which the persona information is fetched by the intent/entity matcher will be described.
As illustrated in
Hereinafter, an exemplary test scenario management method will be described.
Referring to
Hereinafter, an implementation example of a user interface system will be described according to an embodiment of the present invention.
As illustrated in
Here, the logic and all the data referenced by the logic can be stored in the repository, but the logic required for the operation can be reloaded into the memory. In addition, operations such as object detection, OCR, and object recognition can be processed by the GPU, but in some cases, the CPU can be responsible for most logic including the operations.
Meanwhile, as illustrated in
As illustrated in
In addition, in implementation of the cloud form, each agent that is directly connected to a device through a device connector and performs a test is called a Testbot. The Testbot can play a role of installing the application, collecting screen shots and accessibility information of the application, sending them to an AI Testbot engine, and then executing the test operation determined by the AI Testbot engine. In addition, when the Testbot controller receives a test request from the user, the Testbot controller can allocate the Testbot and monitor the test execution.
In addition, in the cloud form, devices are shared by multiple users, and thus the same model is usually distributed across multiple servers, and, when a user request comes, the device farm manager can select the least loaded Testbot controller to which the device is connected and assign the test task. In this case, the user can select a model and request a test regardless of where the device he or she is using is actually connected.
On the other hand, the AI Testbot Engine can perform all operations to determine the next test action and detect errors by receiving application screenshot information and log information sent by each Testbot.
What has been described above is only one embodiment for implementing the user interface autonomous testing system and method according to the present invention, and the present invention is not limited to the embodiment described above. The technical spirit of the present invention includes all ranges of technologies that may be variously modified by a person of ordinary skill in the art to which the present invention pertains, without departing from the gist of the present invention as claimed in the following claims.