A stub can be loaded into a first browser environment of a browser application on a client machine, with the stub being loaded from a domain. The stub can execute to load an online application test into the first browser environment. Additionally, the test can be conducted on an online application by executing the test in the first browser environment. For example, the test may be conducted from a second browser environment of the browser application on the client machine. Conducting the test can include loading one or more digital pages from the application into the second browser environment.
1. A computer-implemented method, comprising:
loading a stub into a first browser environment of a browser application on a client machine, the stub being loaded from a domain that is remote from the client machine;
executing the stub in the first browser environment to load an online application test into the first browser environment; and
conducting the test on an online application from the domain by executing the test in the first browser environment, the executing of the test in the first browser environment producing one or more actions in a second browser environment of the browser on the client machine, the one or more actions comprising loading one or more digital pages from the application into the second browser environment, the second browser environment being an inline frame in the first browser environment.
18. A computer system comprising:
at least one processor; and
memory comprising instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform acts comprising:
loading a stub into a first browser environment of a browser application on a client machine, the stub being loaded from a domain that is remote from the client machine;
executing the stub in the first browser environment to load an online application test into the first browser environment; and
conducting the test on an online application from the domain by executing the test in the first browser environment, the executing of the test in the first browser environment producing one or more actions in a second browser environment of the browser on the client machine, the one or more actions comprising loading one or more digital pages from the application into the second browser environment, the second browser environment being an inline frame in the first browser environment.
17. A computer system comprising:
at least one processor; and
memory comprising instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform acts comprising:
loading a stub into a first browser environment of a browser application on a client machine, the stub being loaded from a domain;
executing the stub in the first browser environment to load an online application test into the first browser environment;
launching a second browser environment, the second browser environment being a browser environment selected from a group consisting of a different browser window from the first browser environment and an inline frame in the first browser environment; and
conducting the test on an online application from the domain using the second browser environment of the browser on the client machine, conducting the test comprising:
loading one or more digital pages from the application from the domain and into the second browser environment; and
an abstraction layer, which is between the test and the second browser environment, relaying information between the test and the second browser environment, the abstraction layer being configured to interact with the test in the same manner whether the second browser environment is a browser window that is different from a browser window that hosts the first browser environment or whether the second browser environment is an inline frame in the first browser environment.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
19. The system of
20. The system of
Online applications such as Web applications can be tested using an online computer application testing automation software framework or application. Because Web applications are primarily accessed through browser applications (browsers), such testing applications can interface with and run tests on browsers.
Some testing applications have utilized scripting languages, such as JavaScript, by injecting scripts into the application being tested (i.e., application under test). Specifically, such scripts are typically injected into the Web pages that are to be served by the application under test.
Other testing applications have conducted tests by using a user interface layer to send messages to a browser. Such testing applications can feed user interface events to the browser (e.g., an instruction to click on a particular button in a Web page). Such testing applications have a different specialized adapter for each different browser type (Internet Explorer® browser for the Windows® operating system, Internet Explorer® browser for the Windows® Phone operating system, Firefox® browser for the Windows® operating system, Firefox® browser for the Android™ operating system, etc.) and for different versions of the same browser type.
Online application testing systems have orchestrated the automatic performance of tests of an online application on multiple computing machines in response to a single user input command. Testing systems have also monitored and reported performance statistics, such as page loading times with and without use of a cache, etc.
The tools and techniques discussed herein relate to computer testing of online applications. The tools and techniques can include using a stub in one browser environment to test an online application, such as where the online application is tested in a different browser environment. A stub is a software component that is configured to receive and conduct online application tests by interacting with products in browser environments, such as interacting with one or more Web pages being tested. For example, the Web page(s) may be one or more digital pages received from an online application that is being tested. The stub may also be configured to interact with a harness.
A harness is a software component that is configured to host a stub and to interact with the stub. These components may also be configured to perform additional functions. For example, a harness may be configured to present a user interface to receive user input to govern a test and/or to report test results. A browser environment is a computer environment provided by a browser. Different browser environments or separate browser environments refer to environments that have some degree of separation from each other. For example, different browser environments may be two different windows (e.g., two totally separate windows, or two tabs in a tabbed browser) of a browser running in a computing machine. As another alternative, different browser environments may include a browser window and an inline frame within that window, an inline frame and another inline frame within that inline frame, etc.
In one embodiment, the tools and techniques can include loading a stub into a first browser environment of a browser application on a client machine, with the stub being loaded from a domain, which can be remote from the client machine. The stub can execute to load an online application test into the first browser environment. Additionally, the test can execute in the first browser environment to conduct the test on an online application. For example, the test may be executed in the first browser environment to test the online application from a second browser environment of the browser on the client machine. Performing the test can include loading one or more digital pages from the application into the second browser environment.
As used herein, a domain is a set of computing resources in a computer network such as a global computer network (e.g., the Internet), where the set of computing resources corresponds to a domain name registered with a domain name service for the computer network. Accordingly, the same domain corresponds to the same registered domain name, and a different domain corresponds to a different registered domain name. For example, the computing resources referenced by bing.com/news (e.g., with the uniform resource locator http://www.bing.com/news) and bing.com/maps are considered to be in the same domain, but computing resources referenced by bing.com and uspto.gov are considered to be in different domains.
This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Similarly, the invention is not limited to implementations that address the particular techniques, tools, environments, disadvantages, or advantages discussed in the Background, the Detailed Description, or the attached drawings.
Embodiments described herein are directed to techniques and tools for improved testing of online applications in browser environments (such as Web browser environments within a Web browser). Such improvements may result from the use of various techniques and tools separately or in combination.
Such techniques and tools may include executing tests using a stub that can be loaded from the same domain as a domain where the online application being tested is located. In one implementation, the stub can be loaded into a harness, such as in an inline frame in the harness. The stub can be a simple page that can load and run a test. The use of the stub from the domain of the application under test can allow a test to execute in the same domain as the application under test (because the stub that conducts the test is loaded from the same domain as the pages from the application under test). Accordingly, the test can run using scripting between the stub and pages loaded from the application under test. For example, such scripting may include scripting to load a page from the application under test, scripting to select a user interface element on the page, etc. This scripting can be performed without running afoul of cross-site-scripting (XSS) limitations that are imposed by many browsers because the stub and the page being tested are loaded from the same domain.
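As an illustration of the same-domain scripting described above, the following minimal sketch shows a stub-style script loading a product page into an inline frame from the same domain and then scripting against it. The page path, element id, and helper name are hypothetical, not taken from the framework described here.

```javascript
// Minimal sketch (hypothetical names and paths): a stub-style script loads a
// product page into an inline frame served from the same domain, then scripts
// against that page directly, which the browser allows for same-origin content.
function loadProductPage(url, onReady) {
  var frame = document.createElement('iframe');
  frame.src = url;                         // same domain as the stub page
  frame.onload = function () {
    onReady(frame.contentWindow.document); // same-origin access is permitted
  };
  document.body.appendChild(frame);
}

// Usage: load a product page and click a button on it.
loadProductPage('/app/home.aspx', function (productDoc) {
  var button = productDoc.getElementById('submitButton'); // hypothetical element id
  if (button) {
    button.click();
  }
});
```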
The stub, the harness, the tests, and other components discussed below can be defined at least in part using a scripting language that can be executed by multiple different types of browsers. For example, the scripting language can be a widely used scripting language such as JavaScript.
Additionally, the stub can be included in code that is shipped for an application (e.g., an application that is already operating on a global computer network). Accordingly, the stub can be available to be loaded in a client machine to perform tests on the application even after the application is shipped. This can even be done with tests that are authored after the application has been shipped because the tests can be loaded from domains other than the domain for the application being tested.
Because tests run in the context of the stub, which in some examples may be in a different browser environment from the product code (e.g., pages loaded from the application under test), the test code can be isolated from the product code. This can allow tests to be run without injecting test code into the product code; such injection can alter the state of the product code and lead to false positives (i.e., false indications of problems with the application under test). In some scenarios, a second browser environment may not be used. For example, for unit tests, a file may be loaded directly in the stub's browser environment, and application programming interfaces of the online application being tested can be called in that file. In other scenarios, the second browser environment may be used to provide benefits, such as allowing the product code to run in its own isolated browser environment.
Tests can be hosted in a domain separate from the domain for the application under test and the stub, although the tests could be hosted in that domain. Indeed, tests can be hosted from any available server on any available domain. Accordingly, the tests need not be shipped with the application under test or included in the product under test. Additionally, the tests can run from any domain, including customer domains.
The stub and the harness can collect and measure page load time performance when navigating to page URLs during a test. Such load times can include PLT1 (page load time without caching), PLT2 (page load time when the page has been cached), and/or PLT3 (load time for retrieving data for an already-loaded page, such as for updating the data).
The stub or the harness can have a built-in task queue processor so that tests can define ‘tasks’ that execute one after the other. This can simplify the programming model for the tests by avoiding the need to use and keep track of numerous callback events in a test. Each task can have an optional ‘waiter’ that determines when the task has finished. This can improve efficiency by avoiding the need for hard-coded ‘sleeps’ and by helping to speed up test execution.
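The following is a minimal, self-contained sketch of the task-and-waiter idea described above; the runner function and the task shape are illustrative inventions, not the framework's actual task processor API.

```javascript
// A minimal sketch (not the framework's actual API) of tasks with optional
// 'waiters': each task runs its action, then the runner polls the waiter until
// it reports completion, so no hard-coded sleeps are needed.
function runTasks(tasks, done) {
  var index = 0;
  function next() {
    if (index >= tasks.length) { return done(); }
    var task = tasks[index++];
    task.action();
    if (!task.waiter) { return next(); }
    var poll = setInterval(function () {
      if (task.waiter()) {      // the waiter decides when the task has finished
        clearInterval(poll);
        next();
      }
    }, 50);
  }
  next();
}

// Hypothetical usage: navigate, wait for the page to finish loading, then log.
runTasks([
  { action: function () { window.location.hash = '#home'; },
    waiter: function () { return document.readyState === 'complete'; } },
  { action: function () { console.log('page ready; continue the test'); } }
], function () { console.log('all tasks finished'); });
```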
For browsers that only allow one window to be active at a time (e.g., many mobile device browsers), tests can be run by loading the product pages into an inline frame inside the inline frame that hosts the stub. This technique can facilitate tests using such browsers, while still allowing for page transitions without losing state because the product code from the application under test is running in the inline frame that is separate from the inline frame of the stub where the code for the test is being executed.
The tools and techniques can include a client service running outside the browser but on the same machine as the browser. This client service may perform various operations to facilitate running the tests. For example, the client service can host all or part of the files needed by the harness and the stub for running tests, so that a special server is not needed to host the harness and the stub. As another example, the client service can perform tasks that cannot be performed within at least some browsers (e.g., drag and dropping to and from locations outside the browser, file uploads, dismissing dialogs, clearing the browser cache, taking screen shots, etc.). In some situations, some types of browsers may be able to perform some actions themselves, while other types of browsers may be unable to do so. In such instances, the client service may perform a task when testing with some types of browsers and not others. For example, some types of browsers may be able to clear the browser cache from within the browser, but other types of browsers may not allow this to be done from within the browser. As another example of a use of the client service, the client service can launch and close a browser to run tests automatically on a client machine (such as when using the orchestration service discussed below). As yet another example, the client service may request cookies from an authentication provider and pass those cookies to the browser to log in as user(s). As yet another example, the client service may listen for both HTTP and HTTPS requests to match the protocol of the online application under test.
The tools and techniques may also include a deployable service for orchestrating automatic test execution on multiple machines, which may be performed in response to a single user input. For example, different types of devices and browsers can register with the service to execute tests. When user input is provided to request that a set of tests be run on multiple machines, the orchestration service can automatically respond by instructing multiple client machines that are registered with the service to perform the requested test. This can involve a test being executed in parallel on multiple machines, including different types of devices and/or different types of browsers. These tests can be run in browsers on machines that are local and/or remote to the orchestration service. For example, some such tests may be performed on client machines in the cloud that are managed by a third party that is a separate entity from an entity that is managing the orchestration service. If browsers become unresponsive while running a test, the orchestration service can remotely restart such browsers.
The tools and techniques may include storing information about previously populated data for future use (e.g., for use in future tests), such as in non-volatile memory or possibly in volatile memory. Accordingly, such data may be re-used for future tests instead of re-populating the data for each test. Such re-use of data can improve efficiency when running multiple tests that can use the same data.
The harness and the stub may provide a debugging mode. For example, user input may be provided in the harness to indicate that debugging mode is to be used for a test. The debugging mode can include one or more changes from regular test operations to improve efficiency in debugging. For example, the debugging mode may force exceptions to be thrown and break when exceptions occur, rather than catching exceptions and continuing execution through exceptions. As another example, the debugging mode may include breaking into the debugger automatically after the test is loaded into the stub, which can allow a user to set break points in the test.
The tools and techniques may also include utilizing configuration files to communicate settings for a test. For example, a configuration file may be communicated to the harness to customize settings for a test (e.g., test selection, location of the application under test, harness settings, etc.).
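For example, a configuration communicated to a harness might resemble the following sketch; every field name and URL here is hypothetical and only illustrates the kinds of settings described above.

```javascript
// Hypothetical example of a configuration object a harness might read to
// customize a run; the field names and URLs are illustrative, not an actual schema.
var testRunConfig = {
  productUrl: 'https://product.example.com/app',    // location of the application under test
  testSource: 'https://tests.example.com/suites/',  // domain hosting the test files
  tests: ['login.test.js', 'createItem.test.js'],   // which tests to run
  iterations: 3,                                     // run each selected test this many times
  debugMode: false,                                  // break into the debugger when true
  pageLoadMeasurements: ['PLT1', 'PLT2']             // performance measurements to collect
};
```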
The tools and techniques may include running a single test in response to a single user input gesture, or responding to a single user input gesture by running a single test multiple times, or running multiple tests multiple times in response to a single user input gesture (e.g., a single click, etc.).
Accordingly, one or more substantial benefits can be realized from the tools and techniques described herein, such as the technical benefits discussed above.
The subject matter defined in the appended claims is not necessarily limited to the benefits described herein. A particular implementation of the invention may provide all, some, or none of the benefits described herein. Although operations for the various techniques are described herein in a particular, sequential order for the sake of presentation, it should be understood that this manner of description encompasses rearrangements in the order of operations, unless a particular ordering is required. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, flowcharts may not show the various ways in which particular techniques can be used in conjunction with other techniques.
Techniques described herein may be used with one or more of the systems described herein and/or with one or more other systems. For example, the various procedures described herein may be implemented with hardware or software, or a combination of both. For example, the processor, memory, storage, output device(s), input device(s), and/or communication connections discussed below with reference to
I. Exemplary Computing Environment
The computing environment (100) is not intended to suggest any limitation as to scope of use or functionality of the invention, as the present invention may be implemented in diverse general-purpose or special-purpose computing environments.
With reference to
Although the various blocks of
A computing environment (100) may have additional features. In
The storage (140) may be removable or non-removable, and may include computer-readable storage media such as flash drives, magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100). The storage (140) stores instructions for the software (180).
The input device(s) (150) may be one or more of various different input devices. For example, the input device(s) (150) may include a user device such as a mouse, keyboard, trackball, etc. The input device(s) (150) may implement one or more natural user interface techniques, such as speech recognition, touch and stylus recognition, recognition of gestures in contact with the input device(s) (150) and adjacent to the input device(s) (150), recognition of air gestures, head and eye tracking, voice and speech recognition, sensing user brain activity (e.g., using EEG and related methods), and machine intelligence (e.g., using machine intelligence to understand user intentions and goals). As other examples, the input device(s) (150) may include a scanning device; a network adapter; a CD/DVD reader; or another device that provides input to the computing environment (100). The output device(s) (160) may be a display, printer, speaker, CD/DVD-writer, network adapter, or another device that provides output from the computing environment (100). The input device(s) (150) and output device(s) (160) may be incorporated in a single system or device, such as a touch screen or a virtual reality system.
The communication connection(s) (170) enable communication over a communication medium to another computing entity. Additionally, functionality of the components of the computing environment (100) may be implemented in a single computing machine or in multiple computing machines that are able to communicate over communication connections. Thus, the computing environment (100) may operate in a networked environment using logical connections to one or more remote computing devices, such as a handheld computing device, a personal computer, a server, a router, a network PC, a peer device or another common network node. The communication medium conveys information such as data or computer-executable instructions or requests in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The tools and techniques can be described in the general context of computer-readable media, which may be storage media or communication media. Computer-readable storage media are any available storage media that can be accessed within a computing environment, but the term computer-readable storage media does not refer to propagated signals per se. By way of example, and not limitation, with the computing environment (100), computer-readable storage media include memory (120), storage (140), and combinations of the above.
The tools and techniques can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment. In a distributed computing environment, program modules may be located in both local and remote computer storage media.
For the sake of presentation, the detailed description uses terms like “determine,” “choose,” “adjust,” and “operate” to describe computer operations in a computing environment. These and other similar terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being, unless performance of an act by a human being (such as a “user”) is explicitly noted. The actual computer operations corresponding to these terms vary depending on the implementation.
II. System for Online Application Testing Across Browser Environments
A. Overall System Architecture Example
The server resources can also include a harness source domain (240), which can provide harnesses to the client machines (210). Additionally, the server resources can include a test source domain (250), which can provide tests to the client machines (210). The server resources can also include an orchestration service (260), which can receive requests from the client machines (210) to perform tests on multiple browsers and multiple machines. The orchestration service (260) can also respond to such requests by orchestrating such performance of tests on multiple browsers and/or computing machines.
Alternatively, the functions of one or more of the server resources (230, 240, 250, and/or 260) may be combined in a single domain. For example, harnesses and tests may be provided from the same domain. Similarly, the orchestration service (260) may be provided from a domain that also provides harnesses and/or tests. Indeed, the harness source, test source and/or orchestration service may even be in the same domain as the application under test in some scenarios.
B. Example of Client System Architecture
Referring now to
The browser (305) can host multiple browser environments, which can be at least partially isolated from each other. For example, the browser (305) may be configured to simultaneously have multiple open browsing windows, multiple open browsing tabs, inline frames within one or more open browsing windows, etc.
The browser (305) can open a harness browser environment (320). For example, a user may provide user input to the browser (305) to open a browser window, and may provide user input in that browser window to navigate to a harness source, such as by providing a uniform resource locator for a location of the harness source, prompting the browser to send a request such as an HTTP “get” request. The harness source can respond by providing the harness (330) to the browser (305), which can load the harness (330) in the harness browser environment (320). For example, the harness (330) can be a Web page, such as an ASPX or HTML page that includes JavaScript for handling test execution as discussed herein. The harness (330) can allow several settings to be input, such as a product location, a test location, an identification of tests to run, a username to use, etc. The harness (330) can provide a user interface (400) (an example of which is illustrated in
The harness (330) can also host a stub browser environment (340). For example, the stub browser environment (340) may be an inline frame in the harness (330) (e.g., an inline frame represented by the box in
The stub (350) can be a simple Web page, such as an ASPX or HTML page, that allows messages to be sent back and forth between the harness and the stub, and that allows the stub to perform other actions, such as loading core and task libraries (352), which can be loaded from a different domain than the domain from which the stub was loaded. The core and task libraries (352) can include code for various tasks and components for executing tests, such as an abstraction layer (354) and a task processor (356). Accordingly, the core and task libraries (352) can include instructions, such as in the form of JavaScript files, which can control execution of tests (360) from within the stub browser environment (340), including loading the file (such as a file including JavaScript) for the test (360), initializing the task processor (356), starting the test (360), and collecting logs from the test execution. The core and task libraries (352) can also include a common library that can execute basic tasks on any browser that supports the scripting language used for the components of the libraries (352) (such as JavaScript), such as finding an element on a Web page, clicking an element on a Web page, navigating to pages, waiting for pages to render, etc. The core and task libraries (352) are shown outside the browser (305) because they can be loaded as external file(s) in some implementations. However, all or part of the code from the core and task libraries (352) can be loaded into one or more environments in the browser (305) to facilitate execution of such code in some situations.
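As a rough sketch of how a stub page might load such library files from a different domain than its own, the following injects script elements dynamically (browsers allow loading script files cross-domain via script tags, unlike scripting against another domain's page); the URLs and file names are hypothetical.

```javascript
// Minimal sketch of a stub page pulling in core and task library files from a
// different domain by injecting script elements; URLs and file names are hypothetical.
function loadLibrary(url, onLoaded) {
  var script = document.createElement('script');
  script.src = url;           // cross-domain script loading is permitted via script tags
  script.onload = onLoaded;
  document.head.appendChild(script);
}

loadLibrary('https://testfx.example.com/core.js', function () {
  loadLibrary('https://testfx.example.com/tasks.js', function () {
    console.log('core and task libraries ready; the stub can now load a test');
  });
});
```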
The harness (330) can send the stub (350) a location of the test (360) to execute. The harness (330) can also send the stub (350) settings that are used while executing the test. The stub (350) can send messages to the harness (330), such as results from test execution. For example, such results may be sent to the harness (330) in a scripting language, such as in the form of a JSON blob. The harness (330) and the stub (350) may be loaded from different domains. However, cross-domain communication may still be conducted between the harness (330) and the stub (350) in such a scenario, such as by using the known postMessage method.
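The following sketch illustrates the kind of postMessage exchange described above between a harness page and a stub hosted in an inline frame from another domain; the origins, frame id, and message fields are hypothetical.

```javascript
// Sketch of cross-domain messaging between a harness and a stub in an inline
// frame using the standard postMessage method; origins, the frame id, and the
// message fields are hypothetical.

// In the harness page: send the stub the test location and settings.
var stubFrame = document.getElementById('stubFrame'); // iframe hosting the stub
if (stubFrame) {
  stubFrame.contentWindow.postMessage(
    JSON.stringify({ testUrl: 'https://tests.example.com/login.test.js', user: 'testUser1' }),
    'https://product.example.com'   // the stub's origin (the product domain)
  );
}

// In the stub page: receive the message, run the test, and post results back.
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://harness.example.com') { return; } // accept only the harness
  var settings = JSON.parse(event.data);
  // ... load and execute the test identified by settings.testUrl ...
  event.source.postMessage(JSON.stringify({ status: 'passed', logs: [] }), event.origin);
});
```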
A running test (360) can make calls to the abstraction layer (354) to launch and communicate with a product browser environment (370), which can host a product (380) (e.g., Web pages retrieved from the online application under test). The abstraction layer (354) can determine whether to launch the product browser environment (370) as a separate window from the stub browser environment (340) (as illustrated in
This feature of optionally launching the product browser environment (370) as an inline frame in the stub browser environment (340) can get around issues with some browsers, such as immersive browsers. As can be seen from the discussion herein, the stub browser environment (340) and the product browser environment (370) are typically active at the same time while tests are being performed. However, for immersive browsers, a new tabbed browser is opened instead of a new window, since there is no concept of a window for immersive browsers. This can pose a problem because many mobile devices will suspend any executing script, such as JavaScript, on non-active tabbed browsers to save battery life. Because the test code is executing on the harness page (in the stub browser environment (340) within the harness browser environment (320)), this suspension can effectively hang test execution.
By creating the product browser environment (370) as an inline frame within the stub browser environment (340) (which can itself be an inline frame within the harness browser environment (320)) instead of opening a window, a new window need not be opened to navigate to the product (380). Instead, the browser (305) can be instructed to navigate to the product (380) directly with an inline frame on the harness page (this configuration is not illustrated in the figures). This can still allow for maintaining state during page transitions in the product (380) in immersive browsers on mobile devices, without causing the device to suspend the test execution (which can, for example, be JavaScript executing in the stub browser environment (340)).
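A minimal sketch of the abstraction-layer idea follows: one factory returns the same navigate/getDocument interface to the test whether the product is hosted in a separate window or in an inline frame. The function and property names are hypothetical and greatly simplified relative to the abstraction layer (354) described above.

```javascript
// Minimal sketch (hypothetical, simplified) of an abstraction layer that presents
// one interface to the test while hosting the product either in a separate window
// or in an inline frame (e.g., for immersive/mobile browsers that suspend script
// in non-active tabs).
function createProductEnvironment(useInlineFrame) {
  if (useInlineFrame) {
    var frame = document.createElement('iframe');
    document.body.appendChild(frame);
    return {
      navigate: function (url) { frame.src = url; },
      getDocument: function () { return frame.contentWindow.document; }
    };
  }
  var win = null;
  return {
    navigate: function (url) { win = window.open(url, 'productWindow'); },
    getDocument: function () { return win.document; }
  };
}

// The test interacts with the same interface either way.
var product = createProductEnvironment(/* useInlineFrame: */ true);
product.navigate('/app/home.aspx');
```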
Some examples of features that may be used with online application testing across browser environments will now be discussed in more detail. These features include test execution, the task processor, the client service, performance monitoring, and automated test orchestration.
C. Test Execution
Referring to
Next, a configuration file can be loaded. The configuration file can include available tests that can be run. A list of such tests may be displayed on the harness user interface (not shown in
After the test (360) finishes executing, the test (360) can post a message back to the harness (330) with the logs from the run. If all tests (360) have finished executing, the tests (360) can post a message back to the harness (330) that the test run is finished. The harness (330) can parse the logs after each individual test is run and either log the results to the harness user interface (400) or post the logs back to the orchestration service (260) if running in unattended mode. The interactions of the executing test (360) with the product (380) and the harness (330) can be facilitated by the stub (350) and/or the core and task libraries (352).
D. The Task Processor
As discussed above, the core and task libraries (352) can include a core file (which can be a JavaScript file) that contains the task processor (356), which can simplify writing of the tests (360). Some scripting languages that may be used for writing tests, such as JavaScript, execute to produce a single-threaded process in which asynchronous operations are handled with timers and callback methods. Accordingly, writing code in such scripting languages can involve using and keeping track of numerous timers and callback methods. Such code can be difficult and cumbersome to write. Instead of having tests written in such a manner, the task processor (356) can execute tasks sequentially so that the test author does not need to worry about timers and callback methods.
Referring now to
Referring now to
As a general example, if a function F( ) adds tasks A and B, there can be a single queue containing A and B, and A can be the first task that is executed. When A is executed, A can get its own queue and the task processor (356) can wait to execute B until all the tasks in A's queue have completed. If while executing task A, task C is added to the queue, then C can be the next task executed. The task processor (356) can wait to execute B until A has finished executing and all of the tasks that A has added while it was executing have finished. In this way order can be maintained, and tasks and subtasks can be executed in a proper order.
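The ordering behavior in this example can be sketched with a small queue-stack processor like the following synchronous toy (no waiters, illustrative names only); running it logs A, C, B, matching the order described above.

```javascript
// Rough sketch of a task processor that keeps a stack of task queues: when a
// running task adds tasks, they go into a new queue that is finished before the
// parent queue resumes (so A's subtask C runs before B). Names are illustrative.
function TaskProcessor() {
  this.stack = [[]];                       // stack of queues; the top is the active queue
}
TaskProcessor.prototype.add = function (task) {
  this.stack[this.stack.length - 1].push(task);
};
TaskProcessor.prototype.run = function () {
  while (this.stack.length > 0) {
    var queue = this.stack[this.stack.length - 1];
    if (queue.length === 0) { this.stack.pop(); continue; }
    var task = queue.shift();
    this.stack.push([]);                   // tasks added by 'task' land in this new queue
    task(this);
  }
};

// Example: F adds A and B; A adds C while running, so the execution order is A, C, B.
var processor = new TaskProcessor();
processor.add(function A(p) { console.log('A'); p.add(function C() { console.log('C'); }); });
processor.add(function B() { console.log('B'); });
processor.run();   // logs: A, C, B
```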
Referring still to
Upon completion of the fourth queue (640), the fourth queue (640) can be removed from the task queue stack (600), and the task processor (356) can resume at a specified point in the third queue (630) (the task processor (356) can maintain a pointer to a current execution point in each queue and can continue from that point when returning to that queue). Upon completion of the third queue (630), the third queue (630) can be removed from the task queue stack (600), and the task processor (356) can resume at the specified point in the second queue (620) (at the next task, the TEST TEARDOWN task (624)). Upon completion of the second queue (620), the second queue (620) can be removed from the task queue stack (600), and the task processor (356) can resume at the specified point in the first queue (610) (at the next task, the UNLOAD TEST task (615)). The task processor (356) can then complete the remaining tasks in the first queue (610).
E. The Client Service
Referring back to
For example, the client service (310) can host the harness (330), as well as the stub browser environment (such as a stub inline frame). Additionally, the client service (310) can serve up files, such as the files for the core and task libraries (352) and/or the test(s) (360), to the harness (330) and the stub (350). These files may be located on the client machine (300), but they can be hosted on a server remote from the client machine in some scenarios. This function of the client service (310) of handling the files on the client machine can produce some benefits. For example, the files for the online application testing need not be deployed with the application being tested, except that the stub (350) can be located in the same domain as the application being tested so that it can be loaded from that domain. Also, the files for testing can be copied onto a client machine (300) to execute tests against a product server that has a stub (350) deployed. This can be done without installation or dependency on other tools, besides the browser (305). A test may be executed by a user with little expertise. Additionally, authoring tests and debugging tests can be performed relatively easily because the files can be quickly copied to a client machine (300) and changes made to the files for test(s) (360) on the client machine (300) can be immediately available in the browser (305) because requests can be served up by the client service (310).
Another function of the client service (310) can be to perform tasks that are not allowed from within the browser (305). Because the harness (330) and stub (350) can execute completely within the browser (305), the browser (305) may have restrictions on what actions can and cannot be performed. For instance, from within the browser (305) itself, components typically cannot drag and drop a file from outside the browser (e.g., from a desktop or operating system folder), capture a screenshot, access the file system, dismiss dialogs, etc. When a test (360) is to perform one of these actions, it can send an HTTP request to the client service (310) to perform the requested action (e.g., take a screenshot). The test (360) can wait for the client service (310) to do the action and to respond that the action has been completed. The test (360) can then continue its execution. Thus, the client service (310) can allow a test (360) to perform an action that the test (360) normally would not be able to perform if the test was only executing within the browser (305).
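For instance, a test might ask the local client service over HTTP to capture a screenshot and continue only after the service responds, along the lines of the following sketch; the port, path, and query parameter are hypothetical.

```javascript
// Sketch of a test asking a local client service (running outside the browser)
// to perform an action the browser itself cannot, such as taking a screenshot.
// The port, path, and parameter names are hypothetical.
function requestScreenshot(name, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://localhost:9000/screenshot?name=' + encodeURIComponent(name));
  xhr.onload = function () {
    if (xhr.status === 200) {
      onDone();                 // the service reports that the screenshot has been captured
    }
  };
  xhr.send();
}

// The test waits for the service to finish before continuing.
requestScreenshot('after-login-step', function () {
  console.log('screenshot captured; continue test execution');
});
```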
A third function of the client service (310) can be to take a screenshot whenever the test (360) requests it (as discussed above), or when the test (360) reports a failure. For example, the stub (350) and harness (330) can automatically make a call to the client service (310) when a failure is encountered. The client service (310) may also take a video of the user interface of the product browser environment (370) as the test (360) is executing. Both of these can be used in debugging failures.
The client service (310) may also launch browsers (305) and run tests (360) automatically. This automated running may be useful where the client service (310) plugs into an existing harness so that tests (360) can be run automatically at a specified interval (e.g., every single day) or whenever a new build is produced. Also, the client service (310) can launch a browser (305) to automatically register with an orchestration service (260) (see
Moreover, the client service (310) can be used to request authentication cookies from an external service. Many products require users to log in, and this feature can be useful for testing as users with different levels of permission. The test (360) can request the client service (310) to make a call to get an authentication cookie from the authentication provider. The client service (310) can then send the cookie to the test (360) within the browser (305). The test (360) can then use the cookie to impersonate another user so that multi-user scenarios can be tested. If there is no client service (310) available on the local client machine (300), this can also be done from an instance of the client service (310) running on another machine because the request for a cookie can be made without access to the local client machine (300).
For some platforms (e.g., mobile platforms) where it is too expensive or unnecessary to write a client service (310) to run on that platform, the client service (310) can be hosted on another machine. The client machine (300) can then navigate to this service to access the harness (330) and the tests (360), rather than having the client service (310) provide the harness (330) and the tests (360). In this scenario, the client machine (300) will not be able to make calls to the service to perform local actions (such as taking a screen shot or drag & drop of a file). However, other functions of the client service (310) may be performed.
F. Performance Monitoring
Performance tests can be written by specifying the uniform resource locator or set of uniform resource locators to be measured and configuring the types of measurements to take, such as by using the harness settings page and/or a configuration file. These settings can instruct the system to measure one or more of the types of page load times (PLT1, PLT2, and/or PLT3) and can run a specified number of iterations. The harness (330) and the stub (350) can start the test (360) and navigate to the specified page uniform resource locator(s) for the specified number of times, possibly clearing the cache between runs, depending upon which page load time measurements were specified. After a run has completed, the stub (350) and harness (330) can perform computations on the results and can output statistics, such as the average, minimum, maximum, and standard deviation of the results. These statistics can be compared to previous runs in the harness user interface (400), or in an external service.
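The kinds of statistics mentioned above could be computed over the collected load-time samples roughly as follows; the sample values are illustrative only.

```javascript
// Sketch of the statistics a harness might compute over a set of measured page
// load times (values in milliseconds are illustrative).
function summarize(loadTimes) {
  var n = loadTimes.length;
  var sum = loadTimes.reduce(function (a, b) { return a + b; }, 0);
  var mean = sum / n;
  var variance = loadTimes.reduce(function (a, t) { return a + (t - mean) * (t - mean); }, 0) / n;
  return {
    average: mean,
    minimum: Math.min.apply(null, loadTimes),
    maximum: Math.max.apply(null, loadTimes),
    standardDeviation: Math.sqrt(variance)
  };
}

console.log(summarize([820, 760, 905, 790, 845])); // e.g., PLT1 samples over five iterations
```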
G. Test Orchestration
As noted above, an orchestration service (260) can facilitate distributing and running tests (360) across browsers that are registered with the orchestration service (260). Different types of browsers on different types of client machines can register with the orchestration service (260). For example, a client machine (300) may be prompted to register with the orchestration service (260) by selecting a setting (not shown) on the harness user interface (400) to run in unattended mode. The browser (305) on the client machine (300) can respond by pinging the orchestration service (260) periodically to request a job to run. If the orchestration service (260) does not have any jobs to run, the browser (305) can keep pinging the orchestration service (260) until the browser (305) gets a job from the orchestration service (260). If the orchestration service (260) has a job to run, the orchestration service (260) can send the browser (305) a list of one or more tests (360) to run and the settings (such as the product uniform resource locator) to use during the run. The browser (305) can then execute the set of test(s) (360) and report the results back to the orchestration service (260). The browser (305) can then start the loop again to request a job to run from the orchestration service (260).
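On the client side, the polling loop described above might look roughly like the following sketch; the service URL, payload shape, and helper functions are hypothetical placeholders rather than the actual orchestration protocol.

```javascript
// Sketch of the unattended-mode loop: ping the orchestration service for a job,
// run it, report results, then ask for the next job. URLs, payload shape, and
// the two helper functions are hypothetical placeholders.
function runTests(job, done) { /* execute job.tests via the harness and stub */ done([]); }
function reportResults(results, done) { /* post results back to the service */ done(); }

function pollForJob() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://orchestration.example.com/nextJob');
  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      var job = JSON.parse(xhr.responseText); // e.g., { tests: [...], productUrl: '...' }
      runTests(job, function (results) {
        reportResults(results, pollForJob);   // after reporting, start the loop again
      });
    } else {
      setTimeout(pollForJob, 30000);          // no job available; ping again later
    }
  };
  xhr.onerror = function () { setTimeout(pollForJob, 30000); };
  xhr.send();
}
```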
An orchestration service job can be started from a harness user interface (400) on a client machine (300). For example, the “ORCHEST. TESTS” tab in
Since any client machine (210) that can communicate with the orchestration service (260) through a browser (e.g., any Internet-connected Web browser) can be registered with the orchestration service (260), the registered client machines (210) can include machines that are hosted in the cloud or hosted by a third party. If development is done inside of a private domain and testing is to be done on a cloud device, the orchestration service (260) may automatically open a temporary tunnel between the cloud device and the machine that is hosting the product code so that the cloud device has access to the product machine through a domain firewall. Because all settings can be passed as part of the uniform resource locator, remote client machines can be registered with the orchestration service (260) by simply navigating to the uniform resource locator for the orchestration service (260), with some additional query parameters.
The orchestration service (260) can maintain a list of client machines that are registered with it. The orchestration service (260) may also maintain a list of the last time that each registered client machine sent a request for a job. If the machine has not requested a job for longer than a determined period of time, and the machine has an instance of the client service (310) running on it, the orchestration service (260) may send a request to the client service (310) on the machine to automatically restart the browser (305). For example, this may happen if a job hung the browser (305). If the browser (305) still does not restart, the orchestration service (260) may respond by providing a notification of the problem, such as by sending an email message.
III. Techniques for Online Application Testing Across Browser Environments
Several techniques for online application testing across browser environments will now be discussed. Each of these techniques can be performed in a computing environment. For example, each technique may be performed in a computer system that includes at least one processor and memory including instructions stored thereon that when executed by at least one processor cause at least one processor to perform the technique (memory stores instructions (e.g., object code), and when processor(s) execute(s) those instructions, processor(s) perform(s) the technique). Similarly, one or more computer-readable storage media may have computer-executable instructions embodied thereon that, when executed by at least one processor, cause at least one processor to perform the technique. The techniques discussed below may be performed at least in part by hardware logic.
Referring to
In one example, the second browser environment can be in a different browser window from the first browser environment. In another example, the second browser environment may be in an inline frame in the first browser environment. In some examples, a second browser environment may not be used. Instead, a test file (e.g., a JavaScript file) may be loaded directly into the first browser environment, and application programming interfaces of the online application being tested can be called in that file.
The technique of
The stub may collect results from the test and return those results. Additionally, the technique of
The stub and the test, and possibly also the harness, may each be defined at least in part by the same scripting language, such as JavaScript.
A stack of multiple task queues may be maintained. Each of the task queues can include at least one task specified in the test, and at least one of the task queues can include multiple tasks specified in the test. The task queues in the stack may be performed in a last-in-first-out order. Also, maintaining the stack may include adding an additional queue to the stack in response to a running task in an existing queue in the stack calling for the addition of the additional queue. The additional queue can be run before continuing with running the existing queue. One or more tasks within at least one of the task queues can be performed in an order specified in the test. Also, one or more tasks within at least one of the task queues can be performed in an order specified in a general task library.
The test can specify one or more tasks that can include at least one task that includes an action and a waiter. The waiter can wait for occurrence of a specified event that is different from the action, but can be an effect from performance of the action.
The technique of
The technique of
Referring now to
Referring now to
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Mandal, Aditi, Gittelman, Arye, Nation, Zachary A., Strick, John W., Shah, Ajey P., Silverstein, Michael B., Jia, Yubo, Spitsyn, Alexander S., Bogazliyanlioglu, Emre