A system for automating the life cycle of a software application is provided. The software application utilizes computing resources distributed over a network. A representative system includes creating logic operable to create a task list which describes how at least one stage in the application life cycle is to be performed, and processing logic responsive to the creating logic, operable to process the task list to perform at least one stage in the application life cycle. The processing logic is integrated with a development environment, and the development environment is used to develop the software application.
10. A method for automating a life cycle of a software application, where the software application utilizes a plurality of computing resources distributed over a network, the method comprising:
creating, by at least one computer processor, a file describing a plurality of stages of the life cycle, wherein the plurality of stages comprises a development stage, a packaging stage, a distribution stage, an installation stage, an execution stage, a collection stage, and an uninstall stage;
creating, by the at least one computer processor, a task list in the file which describes how the plurality of stages in the life cycle of the software application is to be performed, wherein the plurality of stages is assigned to be distributively performed by a plurality of grid nodes in a computational grid;
managing, by the at least one computer processor, processing of the task list by a process engine to perform the plurality of stages in the life cycle by the plurality of grid nodes in the computational grid;
as part of the installation stage, installing the software application to a target node, wherein the software application is executed in accordance with the task list; and
as part of the uninstall stage, uninstalling the software application from the target node, wherein results of execution of the software application are transferred from the target node to the at least one computer processor.
1. A method for automating a life cycle of a grid node application, where the grid node application utilizes a plurality of computing resources distributed over a network, the method comprising:
creating, by at least one computer processor, a file describing a plurality of stages of the life cycle, wherein the plurality of stages comprises a development stage, a packaging stage, a distribution stage, an installation stage, an execution stage, a collection stage, and an uninstall stage;
creating, by the at least one computer processor, a task list in the file which describes how the plurality of stages of the life cycle of the grid node application is to be performed, wherein the plurality of stages is assigned to be distributively performed by a plurality of grid nodes in a computational grid;
managing, by the at least one computer processor, processing of the task list by a process engine to perform the plurality of stages of the life cycle by the plurality of grid nodes in the computational grid, the process engine being integrated with a development environment, and the development environment being a grid node application development environment;
as part of the installation stage, installing the grid node application to a target node, wherein the grid node application is executed in accordance with the task list; and
as part of the uninstall stage, uninstalling the grid node application from the target node, wherein results of execution of the grid node application are transferred from the target node to the at least one computer processor.
2. The method of
3. The method of
4. The method of
7. The method of
8. The method of
verifying that a precondition is satisfied before performing a task of the task list, wherein the precondition is associated with the task in the task list and describes requirements of a system on which the grid node application executes.
9. The method of
obtaining a description of available resources for at least a portion of the plurality of computing resources; and
verifying that a precondition is satisfied before performing a task, wherein the precondition is associated with the task in the task list and describes system requirements of the grid node application.
11. The method of
This application is a continuation of U.S. Ser. No. 10/608,942, now U.S. Pat. No. 7,437,706, which was filed on Jun. 27, 2003, and published as U.S. Patent Publication No. 2004/0268293, which is incorporated herein by reference in its entirety.
The present invention relates generally to distributed computing software, and more particularly to tools used to develop and build distributed computing software.
For many years, scientists, academics and engineers have used computers to solve complex problems. Computers are used, in many different disciplines, for tasks such as modeling, simulation and forecasting. For example, the scientific community has used such computers to sequence genes, analyze astronomical data and analyze weather forecast data. Because these tasks are computationally complex and/or involve huge amounts of data, high-performance computers are generally used. However, because access to such high-performance computers is very limited, many interesting problems are never investigated.
A relatively new approach to complex computing relies on the aggregate computing power of networks of computers instead of individual high-performance computers. These networks are known as “computational grids,” or simply “grids,” while the computers on the grid are called “grid nodes.” The infrastructure of these computational grids is designed to provide consistent, dependable, pervasive and inexpensive access to computing resources, which in the aggregate provide high-performance computing capabilities.
To take advantage of computational grids, a computing task is decomposed so that it runs in parallel on multiple grid nodes. Some computing tasks are suited for data decomposition, where the same application executes on many grid nodes in parallel using different data. Others are suited for task decomposition, where different applications execute on many grid nodes in parallel using the same data. Other forms of decomposition are also possible, as are combinations of multiple forms.
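As an illustration of data decomposition only, the following Java sketch splits an input data set into slices and submits the same (hypothetical) application to every node with a different slice. The partition and submitJob helpers are placeholders invented for this example; they are not part of the patent or of any grid toolkit.

```java
// Illustrative only: data decomposition, where the same application is
// submitted to every node with a different slice of the input data.
// partition(...) and submitJob(...) are placeholders, not part of any toolkit.
import java.util.ArrayList;
import java.util.List;

public class DataDecompositionSketch {

    static List<List<String>> partition(List<String> records, int nodeCount) {
        List<List<String>> slices = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++) {
            slices.add(new ArrayList<>());
        }
        for (int i = 0; i < records.size(); i++) {
            slices.get(i % nodeCount).add(records.get(i)); // round-robin split
        }
        return slices;
    }

    static void submitJob(int node, String application, List<String> slice) {
        // Placeholder: a real grid client would submit the application and its
        // slice of the data to the given node here.
        System.out.printf("node %d: run %s on %d records%n", node, application, slice.size());
    }

    public static void main(String[] args) {
        List<String> records = List.of("r1", "r2", "r3", "r4", "r5");
        List<List<String>> slices = partition(records, 3);
        for (int node = 0; node < slices.size(); node++) {
            submitJob(node, "analyze-data", slices.get(node)); // same app, different data
        }
    }
}
```

Task decomposition would instead submit different applications with the same data to the various nodes.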
Grid computing began in the academic and scientific community. The tools first used to develop applications for the grid were therefore those familiar to academics and scientists, and were typically based on the Unix operating system and the C programming language. These software developers are comfortable with the “bare bones” development environment provided by Unix, with features such as command line interpreters, shell scripts, etc.
Grid computing is now beginning to spread to the business community. Software developers in the business community typically use a different set of tools and a different development environment. In order to make grid computing more accessible to the wider business community, there is a need for systems and methods that address these and/or other perceived shortcomings of the prior art.
One embodiment, among others, of the present invention provides systems and methods for automating the life cycle of a software application. The software application utilizes computing resources distributed over a network. A representative system includes creating logic operable to create a task list which describes how at least one stage in the application life cycle is to be performed, and processing logic responsive to the creating logic, operable to process the task list to perform at least one stage in the application life cycle. The processing logic is integrated with a development environment, and the development environment is used to develop the software application.
One method, among others, includes: creating a task list which describes how at least one stage in the life cycle is to be performed; and processing the task list by a process engine to perform at least one stage in the life cycle. The process engine is integrated with a development environment, and the development environment is used to develop the software application.
To execute a grid node application 103 on a grid node 101, nodes which can provide appropriate computing resources must first be identified, and the grid node application 103 must be submitted to the identified node(s) as a job for execution. Rather than using a centralized resource manager and/or job submission manager, the grid 100 as it exists today uses a decentralized approach, where each grid node 101 provides: grid services 104 which support resource discovery, job submission, and other functionality; and a grid client 105 which uses the grid services 104 provided by other grid nodes 101. This can best be illustrated by an example.
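The division of labor between grid services 104 and grid client 105 can be restated as a pair of interfaces. The Java sketch below is illustrative only; the interface and type names are assumptions made for this example and are not drawn from the patent or from any particular grid toolkit.

```java
// Illustrative sketch only: every node exposes grid services (resource
// discovery, job submission, status) and also runs a grid client that calls
// the services of other nodes. All names here are assumptions.
import java.util.List;

interface GridServices {
    /** Resource discovery: describe the computing resources this node offers. */
    ResourceDescription describeResources();

    /** Job submission: accept an application for execution on this node. */
    JobHandle submitJob(JobRequest request);

    /** Report the status of a previously submitted job. */
    JobStatus queryStatus(JobHandle handle);
}

interface GridClient {
    /** Ask other nodes on the grid for their resource descriptions. */
    List<ResourceDescription> discoverResources();

    /** Submit a job to a remote node by calling that node's grid services. */
    JobHandle submit(String nodeAddress, JobRequest request);
}

// Opaque placeholder types, defined only so the sketch compiles.
record ResourceDescription(String nodeAddress, int cpus, long freeDiskBytes) {}
record JobRequest(String applicationName, List<String> arguments) {}
record JobHandle(String id) {}
enum JobStatus { PENDING, RUNNING, COMPLETED, FAILED }
```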
In
In
In the packaging stage 202, the grid node application 103 is packaged so that all files required to run the application are bundled together. For example, an application written in Java may require several different applets and classes in order to run, and these components can be aggregated into a single archive file. Packaging is often used because it simplifies transfer of the application to another system, but the packaging stage 202 is optional.
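As a concrete, purely illustrative example of the packaging stage 202 for a Java grid node application, the sketch below aggregates a few compiled class files into a single archive using the standard java.util.jar API. The file and class names are placeholders, not files from the patent.

```java
// One way the packaging stage might be carried out for a Java application:
// aggregate the required class files into a single jar archive.
// The paths and the Main-Class name below are placeholders.
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class PackageStage {
    public static void main(String[] args) throws Exception {
        List<Path> required = List.of(
                Path.of("build/GridNodeApp.class"),   // placeholder class files
                Path.of("build/WorkerApplet.class"));

        Manifest manifest = new Manifest();
        manifest.getMainAttributes().putValue("Manifest-Version", "1.0");
        manifest.getMainAttributes().putValue("Main-Class", "GridNodeApp");

        try (OutputStream out = Files.newOutputStream(Path.of("grid-node-app.jar"));
             JarOutputStream jar = new JarOutputStream(out, manifest)) {
            for (Path file : required) {
                jar.putNextEntry(new JarEntry(file.getFileName().toString()));
                jar.write(Files.readAllBytes(file)); // copy the class file into the archive
                jar.closeEntry();
            }
        }
    }
}
```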
In the distribution stage 203, grid node application 103 is distributed to the target nodes where it will run. Distribution stage 203 may involve the transfer of a single file (particularly if packaging is used) or it may involve transferring each required file individually. If additional setup is required after the files are distributed, for example configuring the run-time environment on the target nodes, that work is performed in the installation stage 204.
At the execution stage 205, grid node application 103 executes on the target nodes. At the collection stage 206, the results from the execution of the grid node application 103 on one or more target nodes are collected. At the uninstall stage 207, the grid node application 103 is uninstalled, removing it from the target nodes.
Life cycle 200 does not always progress from one stage to another in a strict sequence. Instead, it is common for a subset of stages in the life cycle 200 to be repeated. For example, while the grid node application 103 is still under development, the development stage 201 and packaging stage 202 may be repeated many times while the developer debugs the grid node application 103; the remaining stages are not required because the developer executes the grid node application 103 locally rather than on the grid 100. As another example, when development is complete, the distribution, execution and collection stages may be repeated many times as the user runs the grid node application 103 on several different sets of target nodes.
In the example shown in
The installation stage 204 and the execution stage 205 both use Grid-Run 304 to execute a specific program on the target nodes. Typically, the installation stage 204 will execute shell commands or utility programs in order to create directories, move files, uncompress files, etc. The execution stage 205 executes the grid node application 103.
The collection stage 206 is performed using Grid-Status 305, which is a grid client 105 that checks the status of a grid node application 103 which was executed on a target node. The collection stage 206 also uses Grid-Copy 303 to copy output files produced by the execution of the grid node application 103 on the target nodes. The uninstall stage 207 is performed using Grid-Run 304, typically executing shell commands and utilities to delete files from the target nodes and restore environment variables and paths.
As discussed above, in the prior art approach described by
In one embodiment, process engine 401 does not perform the tasks itself, but relies on individual task subsystems 405 to perform each type of task. In
Input file 402 may allow one stage to be specified as dependent on another stage. In
In
Process engine 401 decides to perform (at 407) the task list associated with execution stage 205. The Grid-Run task subsystem 405a executes a grid node application 103 on one or more target nodes. The particular grid node application 103 and target nodes which are passed as parameters to grid services 104 could be specified directly in input file 402, or in another file referenced by input file 402.
At 408, process engine 401 decides to perform the task list associated with the collection stage 206. The Grid-Check-Status task subsystem 405b checks the status of a grid node application 103 on one or more target nodes. The Grid-Copy task subsystem 405c copies files from one or more target nodes to the node which submitted the grid node application 103 for execution. Processing by process engine 401 is then complete, as the last task in the collection stage 206 has been performed.
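The delegation pattern described above, in which the process engine 401 looks up a task subsystem 405 by task type and hands it the task's parameters, can be sketched in Java as follows. The class and method names are assumptions made for this illustration; only the overall structure (engine, stages, task lists, pluggable subsystems) comes from the description above.

```java
// Sketch of the dispatch pattern: the process engine does not perform tasks
// itself; it looks up a task subsystem by task type and delegates to it.
// All names here are illustrative, not taken from the patent.
import java.util.List;
import java.util.Map;

interface TaskSubsystem {
    void perform(Map<String, String> parameters); // e.g. Grid-Run, Grid-Copy, Grid-Check-Status
}

record Task(String type, Map<String, String> parameters) {}
record Stage(String name, List<Task> taskList) {}

class ProcessEngine {
    private final Map<String, TaskSubsystem> subsystems;

    ProcessEngine(Map<String, TaskSubsystem> subsystems) {
        this.subsystems = subsystems;
    }

    /** Process the stages parsed from the input file, one task list at a time. */
    void run(List<Stage> stages) {
        for (Stage stage : stages) {
            for (Task task : stage.taskList()) {
                TaskSubsystem subsystem = subsystems.get(task.type());
                if (subsystem == null) {
                    throw new IllegalStateException("No subsystem for task type " + task.type());
                }
                subsystem.perform(task.parameters()); // delegate to Grid-Run, Grid-Copy, etc.
            }
        }
    }
}
```

An engine for the walkthrough above would be constructed with subsystems registered under task types corresponding to Grid-Run, Grid-Copy and Grid-Check-Status.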
In one embodiment, the process engine 401 verifies that preconditions are satisfied before performing a specific task in task list 404. Preconditions are specified at the task level, as parameters in input file 402. For example, if the distribution stage 203 includes a task which copies a file to a target grid node 101, preconditions for that task may include: the source file exists; the target grid node 101 exists; there is enough disk space on the target grid node 101; the user has permission to copy the source file and to create and write the file on the target; and so on.
Preconditions also allow process engine 401 to take into account the specific requirements of a particular grid node application 103 when performing a task. For example, a grid node application 103 may run only on an Intel® platform with a Linux® operating system, so a precondition for the Grid-Run task in this case is that the target grid node 101 meets the specific requirements of the grid node application 103. In one embodiment, the process engine 401 tests preconditions iteratively on all available grid nodes 101 within an organization. To determine whether or not preconditions are satisfied for a particular grid node 101, process engine 401 uses grid services 104 to obtain meta-information about the resources available on grid node 101, and compares this with the requirements of the grid node application 103.
While preconditions are defined at the task level, the resolution of preconditions is handled at the stage level and job level. That is, the section 403 of input file 402 which defines a stage also specifies how failure of a precondition is handled. For example, a precondition for the collection stage 206 (which collects the results produced by an executed job) would be that the submitted job has successfully completed execution. If this precondition is not met, then the submitted job is aborted and/or resubmitted.
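A minimal sketch of this precondition machinery, assuming hypothetical Precondition and FailurePolicy types (the patent does not prescribe these names), might look like the following: task-level preconditions are evaluated against each node's resource metadata, and a stage-level policy decides whether a failed job is aborted or resubmitted.

```java
// Hedged sketch of task-level preconditions with stage-level failure handling.
// The types, field names and policy values are assumptions for illustration.
import java.util.List;
import java.util.Map;

interface Precondition {
    /** Compare the application's requirements against a node's resource metadata. */
    boolean isSatisfied(Map<String, String> nodeResources);
}

enum FailurePolicy { ABORT, RESUBMIT }

class PreconditionChecker {

    /** Return the first available node whose resources satisfy every precondition. */
    static String selectNode(Map<String, Map<String, String>> nodesToResources,
                             List<Precondition> preconditions) {
        for (var entry : nodesToResources.entrySet()) {
            boolean ok = preconditions.stream().allMatch(p -> p.isSatisfied(entry.getValue()));
            if (ok) {
                return entry.getKey();
            }
        }
        return null; // no suitable node; the stage-level policy decides what happens next
    }

    static void handleFailure(FailurePolicy policy, Runnable resubmit) {
        switch (policy) {
            case ABORT -> System.err.println("Stage precondition failed; aborting job");
            case RESUBMIT -> resubmit.run();
        }
    }
}

// Example precondition: the target node must run Linux and have enough free disk space.
class LinuxWithDiskSpace implements Precondition {
    public boolean isSatisfied(Map<String, String> nodeResources) {
        return "linux".equalsIgnoreCase(nodeResources.get("os"))
                && Long.parseLong(nodeResources.getOrDefault("freeDiskBytes", "0")) > 100_000_000L;
    }
}
```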
In this embodiment, the user interface for integrated development environment 501 consists of one window which is split into several panes: the project pane 502; the structure pane 503; the content pane 504; and the message pane 505. The content pane 504 allows editing of source files. The structure pane 503 shows in a hierarchical form the structure of the file that is currently displayed in content pane 504. The message pane 505 displays messages which result from various operations such as building, compiling, debugging and testing.
The life cycle stages described by input file 402 are displayed in project pane 502 as nodes 506a-d. In
In one embodiment, the integrated development environment 501 is Borland JBuilder®, the process engine 401 is Apache Ant™, and the input file 402 is an XML build file. In this embodiment, stages correspond to Ant™ targets, and the tasks are implemented as Java classes which extend the base classes in Ant™.
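To illustrate that arrangement, the sketch below shows what a custom task for the execution stage might look like. Extending org.apache.tools.ant.Task and overriding execute() is the standard way custom Ant tasks are written; everything else here (the GridRunTask name, its attributes, and the body of execute()) is an assumption made for this example rather than code from the patent.

```java
// Hypothetical custom Ant task for the execution stage. Only the use of
// org.apache.tools.ant.Task, BuildException, log() and setter-based attribute
// injection reflects the real Ant API; the rest is illustrative.
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class GridRunTask extends Task {

    private String application; // set from the application="..." attribute in the build file
    private String targetNode;  // set from the targetnode="..." attribute in the build file

    public void setApplication(String application) {
        this.application = application;
    }

    public void setTargetNode(String targetNode) {
        this.targetNode = targetNode;
    }

    @Override
    public void execute() throws BuildException {
        if (application == null || targetNode == null) {
            throw new BuildException("Both 'application' and 'targetnode' are required");
        }
        log("Submitting " + application + " to grid node " + targetNode);
        // A real implementation would call the grid client API here to submit the job.
    }
}
```

Such a class would be registered in the XML build file with a taskdef declaration and invoked from a target corresponding to the execution stage 205.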
The process engine 401 invokes grid services 104 through whatever application programming interface (API) is provided by the toolkit 601. Grid client 105 presents an interface, API 602. Each toolkit 601 uses the grid client API 602 to programmatically call grid client 105. Grid client 105 uses grid protocols 603 to communicate (over data network 102) with the grid services 104 residing on another grid node 101.
The peripherals 703 may include input devices, for example but not limited to, a keyboard, mouse, scanner, microphone, etc. Furthermore, the peripherals 703 may also include output devices, for example but not limited to, a printer, display, etc. Finally, the peripherals 703 may further include devices that communicate both inputs and outputs, for instance but not limited to, a modulator/demodulator (modem; for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc.
The processor 701 is a hardware device for executing software, particularly that stored in memory 702. The processor 701 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the grid node 101, a semiconductor based microprocessor (in the form of a microchip or chip set), a microprocessor, or generally any device for executing software instructions.
The memory 702 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory 702 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 702 can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor 701.
The software in memory 702 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. In the example of
The system for automating the life cycle of a grid node application 400 is a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When implemented as a source program, the program needs to be translated via a compiler, assembler, interpreter, or the like, which may or may not be included within memory 702, so as to operate properly in connection with the operating system 706.
If the grid node 101 is a PC, workstation, or the like, the software in the memory 702 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of essential software routines that initialize and test hardware at startup, start the operating system 706, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the grid node 101 is activated.
When the grid node 101 is in operation, the processor 701 is configured to execute software stored within the memory 702, to communicate data to and from the memory 702, and to generally control operations of the grid node 101 pursuant to the software. The system for automating the life cycle of a grid node application 400 and the operating system 706, in whole or in part, but typically the latter, are read by the processor 701, perhaps buffered within the processor 701, and then executed.
When the system for automating the life cycle of a grid node application 400 is implemented in software, it should be noted that the system for automating the life cycle of a grid node application 400 can be stored on any computer readable medium for use by or in connection with any computer related system or method. In the context of this document, a “computer-readable medium” can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. A nonexhaustive list of examples of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), and a portable compact disc read-only memory (CDROM). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
In an alternative embodiment, where the system for automating the life cycle of a grid node application 400 is implemented in hardware, the system for automating the life cycle of a grid node application 400 can be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit(s) (ASIC) having appropriate combinatorial logic gates, a programmable gate array(s) (PGA), a field programmable gate array(s) (FPGA), etc.
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiments discussed, however, were chosen and described to illustrate the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly and legally entitled.