A modular compilation system that utilizes a fully integrated hierarchical representation as a common intermediate representation to compile source code programs written in one or more procedural programming languages into an executable object code file. The structure of the integrated common intermediate representation supports machine-independent optimizations, as well as machine-dependent optimizations, and also supports source-level debugging of the executable object code file. The integrated hierarchical representation (IHR) is language independent and is shared by all of the components of the software development system, including the debugger.

Patent: 5,175,856
Priority: Jun 11, 1990
Filed: Aug 23, 1990
Issued: Dec 29, 1992
Expiry: Jun 11, 2010
16. In a computer processor, a method for performing source level debugging of an optimized object code file to be executed on a target computer processing system wherein the optimized object code file represents a source code file comprised of one or more program modules written in one or more high level programming languages, the method comprising the steps of:
(a) for each program module, generating a HiForm (hf) machine and language independent representation of the program module, the hf representation to be stored as part of an integrated hierarchical representation file (the IHR file) for the source code file in the form of a set of electronic data files to be stored in the computer processor;
(b) generating context information and debug information for the source code file using the hf representation of the program module and storing the context information and debug information as part of the IHR file;
(c) generating a loform (LF) machine dependent, hierarchical representation for the hf representation of each program module and storing the LF representation as part of the IHR file;
(d) generating an object code file based upon the LF representation of the source code file; and
(e) using the IHR file to debug the object code file as it is executed on the target computer processing system.
7. In a computer system having a data storage device for storing computer data, wherein data for an integrated hierarchical representation file (IHR file) of a computer program executed on said computer system is stored in a data storage device and implemented on said computer system according to a data structure, said computer system comprising:
one or more computer processors for compiling, optimizing and debugging a source code file for the computer program to produce an object code file to be executed on a target computer processing system, said source code file including a series of high level programming language statements defining operations executed by said target computer processing system;
said data structure for storing data in the data storage device for the IHR file comprises:
scope node means for storing one or more items of context information which defines one or more scopes within the source code file;
for each scope within the source code file, a plurality of symbol node means, each symbol node means for storing all items of information about a unique named object included in the high level programming language statements of the source code file;
for each scope within the source code file, a plurality of statement node means, each statement node means for storing one or more machine independent, common intermediate statements generated by the computer processor for each of the high level programming language statements included in the source code file; the statement node means being operably linked to one another in a lexical order;
for each statement node means:
one or more expression node means, each expression node means for storing an operator or operand that together with the other expression node means define the operation indicated by the associated statement node means;
literal node means for storing a value for each constant operated on by the operation indicated by the associated statement node means;
type node means for storing a data type for each named object operated on by the operation indicated by the associated statement node means; and
loform node means for storing one or more machine dependent instructions generated for the target computer processing system for each expression node means;
wherein said computer system accesses said data structure to retrieve said information for compiling, optimizing and debugging said source code file for the computer program to produce said object code file executed on said target computer processing system.
1. In a computer processor, a computer implemented method for compiling and optimizing a source code file written in one or more high level programming languages to produce an object code file and an integrated hierarchical representation file (the IHR file) for the source code file associated with the object code, both the object code file and the IHR file to be stored as electronic data files in a memory device, the object code file to be executed on a target computer processing system, the method comprising the steps of:
(a) compiling the source code file by performing the steps of:
(a1) for each high level programming language, generating a HiForm (hf) machine independent representation of the high level programming language, the hf representation to be stored as part of the IHR file;
(a2) combining the hf representation for each high level programming language into a single hf representation;
(a3) storing the single hf representation as part of the IHR file;
(a4) generating context information and debug information for the source code file using the single hf representation and the source code file; and
(a5) storing the context information and debug information as part of the IHR file;
(b) optimizing the hf representation by performing the steps of:
(b1) in response to a first user-selected option, optimizing the single hf representation to reorganize the single hf representation and generate hf optimization information; and
(b2) storing the hf optimization information as part of the IHR file;
(c) translating the hf representation into a loform (LF) representation for the target computer processing system by performing the steps of:
(c1) generating a loform (LF) machine dependent, hierarchical representation for the hf representation for the target computer processing system; and
(c2) storing the LF representation as part of the IHR file;
(d) optimizing the LF representation by performing the steps of:
(d1) in response to a second user-selected option, optimizing the LF representation to reorganize the LF representation and generate LF optimization information; and
(d2) storing the LF optimization information as part of the IHR file; and
(e) generating the object code file by performing the steps of:
(e1) generating the object code file based upon the LF representation, including generating object code information;
(e2) storing the object code file in the memory device;
(e3) storing the object code information as part of the IHR file; and
(e4) storing the IHR file in the memory device,
such that the information stored in the IHR file is available during execution of the object code file on the target computer processing system.
9. In a computer processor, a computer implemented method for compiling and optimizing a source code file comprising one or more program modules written in one or more programming languages to produce an object code file to be executed on a target computer processing system and an integrated hierarchical representation file (the IHR file) for the source code file associated with the object code, both the object code file and the IHR file to be stored as data files in a memory device, the method comprising the steps of:
(a) compiling each program module by performing for each program module the steps of:
(a1) generating a HiForm (hf) machine and language independent representation of the program module, the hf representation to be stored as part of the IHR file;
(a2) storing the hf representation as part of the IHR file;
(a3) generating context information and debug information for the source code file using the hf representation of the program module; and
(a4) storing the context information and debug information as part of the IHR file;
(b) optimizing the hf representation for each program module by performing the steps of:
(b1) in response to a first user-selected option, optimizing the hf representation of the program module to reorganize the hf representation and generate hf optimization information; and
(b2) storing hf optimization information as part of the IHR file;
(c) translating the hf representation into a loform (LF) representation for the target computer processing system for each program module by performing the steps of:
(c1) generating a loform (LF) machine dependent, hierarchical representation for the hf representation of each program module for the target computer processing system; and
(c2) storing the LF representation as part of the IHR file;
(d) optimizing the LF representation by performing the steps of:
(d1) in response to a second user-selected option, optimizing the LF representation of one or more of the program modules to reorganize the LF representation and generate LF optimization information; and
(d2) storing the LF optimization information as part of the IHR file for the source code file; and
(e) generating the object code file of the source code file based upon the LF representation of all of the program modules by performing the steps of:
(e1) generating the object code file based upon the LF representations of all of the program modules, including generating the object code information;
(e2) storing the object code information as part of the IHR file; and
(e3) storing the object code file and the IHR file in the memory device,
such that the information stored in the IHR file is available during execution of the object code file on the target computer processing system.
17. A computer implemented method for electronically organizing and storing data for an integrated hierarchical representation file (the IHR file) of a computer program for use with a compilation system having one or more storage mechanisms for storing data and having one or more computer processors for compiling, optimizing and debugging a source code file for the computer program to produce an object code file to be executed on a target computer processing system, the source code file being stored in the storage mechanism for the compilation system and including a series of high level programming language statements defining operations to be executed by the target computer processing system, the method comprising:
(a) identifying, organizing and storing as part of the IHR file one or more items of context information which defines one or more scopes within the source code file; each such item of context information designated as a scope node;
(b) for each scope within the source code file, identifying, organizing and storing as part of the IHR file all items of information about each unique named object included in the high level programming language statements of the source code file, each such item of information designated as a symbol node within each scope node;
(c) for each scope within the source code file, identifying, organizing and storing as part of the IHR file one or more machine independent, common intermediate statements generated by the computer processor for each of the high level programming language statements included in the source code file, each such statement designated as a statement node, the statement nodes being operably linked to one another in a lexical order;
(d) for each statement node:
(d1) identifying, organizing and storing as part of the IHR file an operator or operand that together with the other operators or operands define the operation indicated by the associated statement node, each such operator or operand designated as an expression node;
(d2) identifying, organizing and storing as part of the IHR file a value for each constant operated on by the operation indicated by the associated statement node, each such value designated as a literal node;
(d3) identifying, organizing and storing as part of the IHR file a data type for each named object operated on by the operation indicated by the associated statement node, each such data type designated as a type node; and
(e) storing as part of the IHR file one or more machine dependent instructions generated for the target computer processing system for each expression node during the compilation process, each such instruction designated as a loform node,
such that the information stored in the IHR file is available during execution of the object code file on the target computer processing system.
2. The method of claim 1 further comprising the steps of:
(f) debugging the object code file by performing the steps of:
(f1) executing the object code file on the target computer processing system;
(f2) accessing the IHR file; and
(f3) analyzing the information in the stored IHR file to debug the object code file.
3. The method of claim 2 wherein the programming languages include one or more high level languages and an assembly language.
4. The method of claim 3 wherein the target computer processing system is a highly parallel multiprocessor system.
5. The method of claim 4 wherein the computer processor for performing steps (a)-(f) and the target computer processing system are comprised of a plurality of computer processors which are networked together.
6. The method of claim 5 wherein the step of debugging the object code file during execution by using the stored IHR file is accomplished in a distributed manner such that a user on one computer processor can debug a source code file executing on another computer processor.
8. The data structure for storing data in the data storage device of claim 7 further comprising:
for each scope within the source code file, block node means for storing the relationships of one or more basic blocks of statement node means in the source code file;
for each basic block, DUDe node means for storing any data dependences between named objects used within the basic block which are defined as variables in the source code file; and
for each scope within the source code file, loop node means for storing the relationships of any loops identified by the statement node means in the source code file,
such that the information contained in the block node means, the DUDe node means and the loop node means is generated by the computer processor as a result of optimizing the statement node means.
10. The method of claim 9 wherein the step (e1) includes the steps of:
(e11) generating a module object code file for each of the program modules; and
(e12) linking all of the module object code files into a single object code file.
11. The method of claim 10 wherein both step (c) and step (f2) include the step of inlining procedure calls.
12. The method of claim 9 wherein the IHR file is stored as a graphical structure and steps (c) and (e) include the step of performing graphical reductions on the IHR file.
13. The method of claim 9 wherein the programming languages include one or more high level languages and an assembly language.
14. The method of claim 9 wherein the target computer processing system is a highly parallel multiprocessor system.
15. The method of claim 14 wherein the computer processor for performing steps (a)-(g) and the target computer processing system are comprised of a plurality of computer processors which are networked together.
18. The method of claim 17 further comprising the steps of using the computer processor to perform an optimization on the source code file, the optimization including the steps of:
(f) for each scope within the source code file, identifying, organizing and storing as part of the IHR file the relationships of one or more basic blocks of statement nodes in the source code file; such relationships designated as block nodes;
(g) for each basic block, identifying, organizing and storing as part of the IHR file any data dependences between named objects used within the basic block which are defined as variables in the source code file; such data dependences designated as DUDe nodes; and
(h) for each scope within the source code file, identifying, organizing and storing as part of the IHR file the relationships of any loops identified by the statement nodes in the source code file, such relationships designated as loop nodes.

This application is a continuation-in-part of an application filed in the United States Patent and Trademark Office on Jun. 11, 1990, entitled INTEGRATED SOFTWARE ARCHITECTURE FOR A HIGHLY PARALLEL MULTIPROCESSOR SYSTEM, Ser. No. 07/537,466, which is assigned to the assignee of the present invention and which is hereby incorporated by reference in the present application.

The present invention relates generally to software development of computer programs, and, more specifically, to optimizing compilers for generating object code files from source code programs and debuggers for debugging the object code files as they are executed by a computer processing system, particularly a high-performance multiprocessor system.

Presently, software development focuses on two separate tasks in the process of generating a computer program: the compiling of the program into an executable object code file to be run on a computer processing system and the debugging of that executable file as it is being executed by the computer processing system. In general, a separate version of a compiler is created for each programming language and each computer processing system. Similarly, a separate debugger is created to debug the executable object code on each computer processing system. As a result of the independent creation of present compilers and debuggers, most prior art software development systems are a collection of separate tools where each of the tools knows little or nothing about the other tools in the development system.

The design and construction of compilers is well known in the art, e.g., Aho, Sethi and Ullman, Compilers: Principles, Techniques and Tools, Addison-Wesley (1986); and Waite et al., Compiler Construction, Springer-Verlag (1984). Compilers convert a given computer source language, such as FORTRAN, into code executable by a given computer processing system (i.e., the target machine). Compilation of a computer source language is accomplished through a series of transformations. First, the strings of symbols that comprise the source code are lexically analyzed to ascertain the atomic units or words for translation. Then, the strings of symbols are syntactically analyzed to ascertain the grammatical relations among the words. Typically, the output is expressed in the form of a parse tree which is transformed into an intermediate language representation of the source code. Most compilers do not generate a parse tree explicitly, but form the intermediate code as the syntactic analysis takes place. Optimization is then applied to the intermediate code, after which the target machine-executable or object code is generated. Examples of optimizing compilers for present high performance computer processing systems include the compilers for the Hitachi S-810 supercomputer (e.g., U.S. Pat. Nos. 4,773,007, 4,807,126, 4,821,181, 4,833,606, 4,843,545 and 4,853,872), the compilers for the Cray-1 supercomputer (e.g., Cray Research Publication number SR-0018) and the compilers for the IBM mainframe computers (e.g., U.S. Pat. Nos. 4,782,444, 4,791,558 and 4,802,091).

The design and construction of debuggers is also well known in the art. Debuggers assist programmers in creating executable code by identifying errors in the execution of the object code file and helping to trace the source of the error as manifested in the executable object code file back to the source code program. Most debuggers are particular to a computer processing system because of the inherent relationship between the hardware features of a computer processing system and the execution of object code files on that computer processing system. While the debugging process may be relatively straightforward for a given programming language executing on a given computer processing system, the challenge for present debuggers is to provide effective identification of errors in executable code produced by an optimizing compiler that is, for example, part of a software development system for a high-performance computer processing system. The difficulties of debugging executable code produced by an optimizing compiler are further compounded when the compiler produces code capable of executing on more than one processor in a multiprocessor system.

Optimizations are frequently performed for programs to be executed on a high-performance computer processing system, including multiprocessor systems. The objectives of the optimizing portion of a compilation system are to (a) increase the execution speed of the program, (b) reduce the size of the executable code, and (c) minimize processing costs through efficient resource allocation. Optimizations that are frequently employed in optimizing compilers can be divided into two classes, which are commonly known as "local" and "global" optimizations. Local optimizations are those that are based on an analysis of a relatively small region of the program, such as a "basic block", or perhaps only two adjacent machine instructions. Global optimizations are those that are based on an analysis of more than a single basic block. Examples are "code motion" (moving code out of loops) and "global common subexpression elimination." Although many types of local and global optimizations are presently used in compilation systems, all of these optimizations affect the execution of the program in ways that are not obvious from the organization and structure of the source code program and, consequently, increase the problems associated with effectively debugging the program. These problems are further compounded in multiprocessor systems where more than one processor may be executing portions of the executable code file for a given program.

Generally, compilers for different programming languages use different intermediate representations during the compilation process, while debuggers use yet another intermediate representation for the debugging process. Because the debugger has no knowledge of the intermediate representations used by the various compilers, the debugger has no way of relating the optimized executable code back to the original source code and, as a result, the debugging of optimized code is very difficult. Also, for compilers that use different intermediate representations, inter-language inlining is impossible. Because most prior art assemblers do not use a common intermediate representation, assembly language programs must use different debuggers from those used for high level language programs. In addition, little optimization of assembly language programs has been attempted in the past. This is partly because of an assumption that an assembly language program is written exactly the way the programmer wanted it to be written and partly because of the cost of developing an optimizer specifically for assembly language programs.

More recent software development systems such as the Ada Programming Support Environment (APSE) for the Ada programming language use a common intermediate representation (CIR) shared by many of the components in the compilation system in an effort to solve some of the problems mentioned above. Unfortunately, the common intermediate representation, known as DIANA, is specific only to the Ada programming language. Thus, mixing of languages at the intermediate level in the compilation system is impossible. Additionally, DIANA is not in itself capable of representing the transformations performed by optimizers on the source program. For this reason, debugging an optimized program in the Ada environment is difficult. For example, the Ada debugger does not know where to find the value of a variable if the compiler decides to keep that variable in a register, rather than in a memory location. Also, DIANA does not represent machine level instructions, so use of DIANA for assisting in the optimization of assembly language programs is impossible.

Another recent compiler system (U.S. Pat. No. 4,667,290) defines multiple front ends for different programming languages that produce the same common intermediate representation. While this approach solves some of the problems presented by earlier software development systems, several problems still remain. First, the sequential nature of the CIR produced by this prior art software development system fails to represent transformations performed by an optimizer on the source program. Second, the debugger is not closely integrated with the development system. Because of this, the debugger cannot know the kinds of transformations performed by the compiler, hence the debugging of optimized code is difficult. Third, because the assembler in this prior art software development system produces relocatable object code rather than some form of a common intermediate representation, the compiler cannot be used to optimize the assembly language program. Thus, only primitive optimizations such as peephole optimizations can be performed on a machine dependent level, that is to say on the level of code that can only run on a specific target machine. Fourth, because the debugger in this prior art software development system is designed to operate on the CIR generated by the compiler, it is unsuitable for the source-level debugging of assembly language programs. In other prior art systems, this problem is solved by providing primitive debuggers for assembly language programs; however, this requires users to learn two different debuggers, one for high level language debugging and another for assembly language debugging.

Even if a unified and integrated intermediate representation for compilers, assemblers and debuggers were available, the present methods and systems do not represent the information in a form that is most suited for optimization. The various types of common intermediate representations utilized in the prior art software development systems are essentially simple linear representations of information concerning only the actual programming statements in the source code. The common intermediate representations of prior art software development systems have no mechanism for preserving important context and optimization information about the compiled program. Most importantly, the actual structure of present common intermediate representations does not allow for efficient optimizations because the structure of the representation does not expose many of the relationships among the components of the source code program.

Although present software development systems can produce efficient and effective executable object code files for a given source code program, there is no completely integrated software development system that allows for common representation of all types of information about the source code and optimized object code program. Consequently, there is a need for an integrated software development system that allows for a common intermediate representation to be effectively utilized by all components of the software development system and that is capable of representing additional information about the program for purposes of optimization and debugging, particularly in a high performance multiprocessor environment. In addition, there is a continuing need to provide better methods and structures for representing this common intermediate representation that are more suitable for performing a variety of optimization techniques during software development.

The present invention is a modular compilation system that utilizes a fully integrated hierarchical representation (hereinafter referred to as IHR) as a common intermediate representation to compile source code programs written in one or more procedural programming languages into an executable object code file. The structure of the integrated common intermediate representation supports machine-independent optimizations, as well as machine-dependent optimizations, and also supports source-level debugging of the executable object code file. The integrated hierarchical representation (IHR) is language independent and is shared by all of the components of the software development system, including the debugger. For purposes of the present invention, "language independent" means that the IHR can represent a plurality of high-level, procedural computer programming languages as well as specific assembly languages. Because all of the components share the same IHR, any component of the system can understand information about a computer program produced by any other component of the software development system.

Unlike prior art common intermediate representations that only relate to the individual program statements, the IHR of the present invention is capable of representing additional information about the computer program for purposes of optimization and debugging. For example, the compilation system of the current invention includes information about optimizations that have been performed on the IHR. This allows the debugger to provide the user with a much more accurate picture of what the actual optimized code looks like. As an example of how this additional information is used, consider the situation where the compiler has elected to keep a variable in a register over a particular region of a program. The optimizer will note this information in the IHR so the debugger can find the value of a variable, even when that variable is stored in a register. Without this additional information, the user would have no way of knowing the true value of the variable during execution of that particular region of the program. Another example of the type of additional information stored in the IHR of the present invention is information relating to instruction scheduling. As a result of instruction scheduling optimizations, the compiler may move or reorder instructions related to a source language statement to improve execution speed. The information about the movement of instructions is efficiently represented in the structure of the IHR so that the debugger knows where to set breakpoints in the executable object code file corresponding to the original source language statements.
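
The following is a minimal sketch in C of the kind of register-residence record an optimizer could attach to the IHR so a debugger can locate a variable kept in a register over a region of the program. The patent does not give a concrete data layout; the struct fields, function name, and the use of statement numbers to bound the region are all illustrative assumptions.

```c
#include <string.h>

/* Hypothetical record noting that a named object lives in a register
 * over a region of the program (bounded here by statement numbers). */
struct reg_range {
    const char *symbol;     /* name of the variable                    */
    int         reg;        /* register holding its value              */
    int         first_stmt; /* first statement of the region           */
    int         last_stmt;  /* last statement of the region            */
};

/* Return the register holding `name` at statement `stmt`, or -1 when the
 * variable resides in its home memory location at that point. */
int find_register(const struct reg_range *map, int n, const char *name, int stmt)
{
    for (int i = 0; i < n; i++)
        if (stmt >= map[i].first_stmt && stmt <= map[i].last_stmt &&
            strcmp(map[i].symbol, name) == 0)
            return map[i].reg;
    return -1;
}
```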

The IHR of the present invention also enables a new, more efficient means for procedure inlining. Procedure inlining is the process whereby the compiler replaces a procedure call with the actual body of code of the called procedure so that called procedure code executes as part of the program code. When procedure calls are not inlined into program code, the calls are made while the program code executes and can incur significant overhead. The overhead of a procedure call is the cost in time of making a jump to a different section of code and returning from that section of code and the saving and restoring of registers around these jumps. Inlining saves the overhead of procedure calls being made during the execution of a program. The inlining method of the current invention uses the IHR in a way that presents two significant advantages over inlining methods of the prior art. First, the present invention performs inlining using the IHR to remove restrictions between languages so as to enable inlining between different languages. Second, the present invention performs procedural inlining at two distinct points in the compilation/optimization process so as to ensure full optimization of the program and to inline any procedures called across files.

Another improvement over prior art common intermediate representations is the fact that the present invention allows for both high-level and assembly languages to be represented using the same IHR. As a result, many optimizations performed by the compiler on high level languages can also be performed on assembly language programs. For example, instruction scheduling is not performed by prior art assemblers, but can be performed by the current invention. Source level debugging of assembly language programs is made possible using the same debugger as the high level language debugger. Because the debugger understands the IHR and both the compiler and the assembler produce the same IHR, the debugging of both assembly language programs and high-level language programs can be accomplished using a single debugger.

The IHR of the current invention is a hierarchical graphical data structure representation of all types of information relating to a source code program. In its broadest form, the IHR is comprised of a HiForm (HF) representation that is independent of procedural programming language and the target machine and a LoForm (LF) representation that is target machine-dependent. Using a hierarchical graphical data structure, the IHR represents the different levels and types of information about the source code program as a related set of nodes, each node graphically representing certain information about the program and having a certain type of relationship with other nodes in the IHR. Generally, three different types of information about the program may be represented in the node structure of the IHR: context information, executable information and optimization information. Because of the various interrelationships among all of these kinds of information, each kind of information may be represented as all or part of one or more nodes. The graphical representation by the IHR of all three types of information as an integrated set of nodes provides an efficient and effective mechanism for implementing a variety of optimization techniques during compilation, particularly optimization techniques that are based upon a graphical analysis of the various relationships within a program (e.g., control flow, dependence analysis, etc.).
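
As an illustration of the node-based organization described above, the sketch below lists the kinds of nodes discussed in this specification and a generic hierarchical linkage between them. It is written in C; the enumeration names, the two-way classification comments, and the particular link fields are assumptions for exposition, not a layout taken from the patent.

```c
/* Kinds of IHR nodes, grouped by the type of information they carry. */
enum ihr_node_kind {
    IHR_SCOPE,      /* context: a lexical scope in the program            */
    IHR_SYMBOL,     /* context: a named object visible in a scope         */
    IHR_PRAGMA,     /* context: compiler directives for a scope           */
    IHR_STATEMENT,  /* execution: one machine-independent HF statement    */
    IHR_EXPRESSION, /* execution: operator/operand within a statement     */
    IHR_LITERAL,    /* execution: compile-time constant value             */
    IHR_TYPE,       /* execution: data type of a named object             */
    IHR_LOFORM,     /* execution: machine-dependent instruction (LF)      */
    IHR_BLOCK,      /* optimization: basic block                          */
    IHR_DUDE,       /* optimization: definition or use of a variable      */
    IHR_DARC,       /* optimization: possible dependence between DUDes    */
    IHR_LOOP        /* optimization: loop in the structure graph          */
};

/* Generic hierarchical linkage: each node knows its kind, its owner one
 * level up, its children one level down, and its next sibling. */
struct ihr_node {
    enum ihr_node_kind kind;
    struct ihr_node   *parent;
    struct ihr_node   *children;
    struct ihr_node   *next;
};
```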

At the highest level, a program module is represented as a collection of static program scope nodes. Scope nodes comprise the HF representation of the scope of a program. Each unique scope in a source language program corresponds to one HF scope node. Each scope node contains information about the type of scope, the associated symbol node, the symbol table for the symbols defined in this scope, and the connections to other scope nodes (both sibling scopes and parent and child scopes) which can be used to create a scope graph.

For each scope node there is a related set of symbol nodes and a pragma node. Symbol nodes are the HiForm (HF) representation of named objects. There are symbol nodes for constants, variables, formal parameters, structure components, labels, macros, static data blocks, etc. The symbol nodes for a scope define all of the symbols that may be used within that scope. At this level, the symbol nodes for the variables and statements generally correspond with the actual program variables used in the source code program. The pragma node represents the compiler directives for that scope node and any additional information that may be supplied with those directives (e.g., optimization controls, listing controls, etc.).
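
A possible C rendering of the scope level just described is sketched below: a scope node holding its symbol table, its pragma list, and the parent/child/sibling connections that form the scope graph. The field names and the integer scope-kind encoding are assumptions for illustration.

```c
struct symbol;   /* named object: constant, variable, label, macro, ... */
struct pragma;   /* compiler directive and any options supplied with it */

/* One scope node per unique scope in the source language program. */
struct scope {
    int            scope_kind;   /* e.g., module, procedure, block      */
    struct symbol *self;         /* the symbol node naming this scope   */
    struct symbol *symbols;      /* symbol table: objects defined here  */
    struct pragma *pragmas;      /* directives attached to this scope   */
    struct scope  *parent;       /* enclosing scope                     */
    struct scope  *children;     /* first nested scope                  */
    struct scope  *sibling;      /* next scope at the same level        */
};
```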

Associated with each scope node is a series of statement nodes. Statement nodes comprise the HiForm (HF) representation of statements. Each unstructured source language statement corresponds to one HF statement node, while structured source language statements may correspond to several HF statement nodes. Statement nodes are linked together in lexical order. This facilitates the representation of a basic block by enabling identification of the first and last statement nodes which the basic block contains.
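
The lexical chaining of statement nodes and the resulting representation of a basic block can be sketched as follows; the struct and field names are illustrative assumptions.

```c
struct expression;                 /* expression tree for the statement    */

/* Statement nodes are linked to one another in lexical order. */
struct statement {
    struct expression *expr;       /* operation performed by the statement */
    struct statement  *prev;       /* previous statement in lexical order  */
    struct statement  *next;       /* next statement in lexical order      */
};

/* A basic block is identified simply by its first and last statement
 * nodes in the lexical chain. */
struct basic_block {
    struct statement *first;
    struct statement *last;
};
```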

Each statement node will have an associated series of expression nodes. Expression nodes are the representations of expressions in HF. The expressions represent the series of operators (e.g., load, add, multiply) and operands (i.e., variables and constants) that together make up the associated statement. Conceptually, an expression is a representation of some value within the program that is the result of an operation set forth by the associated statement.

Associated with the expression nodes are the literal nodes, type nodes and LF nodes. Literal nodes are the HF representation of objects or operands with compile-time constant values. Type nodes are the HF representation of the data type of an object (e.g., integer, floating point, double precision). At the lowest level associated with each expression, the LF nodes are a representation of the machine instructions generated for the expression. Depending upon the kind of expression, one or more LF nodes will be related to an expression node in a hierarchical tree relationship.
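
The expression level of the hierarchy, with its attached literal, type and LF nodes, could be sketched in C as below. The operator set, the two-operand limit, and all names are assumptions made for the sake of a compact example.

```c
/* Example operator set; the real HF operator set is much larger. */
enum op { OP_LOAD, OP_ADD, OP_MULTIPLY, OP_CONSTANT, OP_VARIABLE };

struct type_node { int basic_type; };   /* integer, float, double, ...      */
struct literal   { double value;    };   /* compile-time constant value      */
struct lf_node   {                       /* one machine instruction (LF)     */
    int             opcode;
    struct lf_node *next;
};

/* An expression node: an operator or operand that, with its children,
 * defines the operation indicated by the owning statement node. */
struct expression {
    enum op            op;
    struct expression *operands[2];   /* child expressions, if any          */
    struct type_node  *type;          /* data type of the result            */
    struct literal    *lit;           /* set when op == OP_CONSTANT         */
    struct lf_node    *lf;            /* LF instructions for this expression*/
};
```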

While the actual information related to each node may include one or more of the three general kinds of information about the program (context, execution or optimization), generally context information is represented in the scope nodes and symbol nodes and execution information is represented in the statement and expression nodes. Optimization information is information that is generated by the compilation system from an analysis of the context and execution information and is usually stored in three additional node structures: block nodes, DUDe and DARC nodes, and loop nodes.

Definition-use information relates a variable's definition to all the uses of the variable that are affected by that definition. Use-definition information relates a variable's use to all the definitions of the variable that affect that use. Definition-definition information relates a variable definition with all definitions of the variable that are made obsolete by that definition. The present invention incorporates definition-use, use-definition and definition-definition information for single and multiple word variables, equivalenced variables, pointers and procedure calls (including all potential side effects) into a single representation (DUDe nodes and DARC nodes) that is an integral part of the dependence analysis done for vectorization and multithreading. Each DUDe node represents a use or a definition of a variable. Two DUDe nodes are connected with a DARC node when a possible data dependence exists.
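
A compact C sketch of the def-use representation just described follows: each DUDe node records one definition or use of a variable, and a DARC node joins two DUDe nodes when a possible data dependence exists between them. The enumerators and field names are assumptions for illustration.

```c
struct symbol;      /* the variable involved            */
struct statement;   /* the statement where it occurs    */

enum dude_kind { DUDE_DEFINITION, DUDE_USE };

/* One DUDe node per definition or use of a variable. */
struct dude {
    enum dude_kind    kind;
    struct symbol    *var;
    struct statement *stmt;
};

/* Flavors of dependence information carried in the single representation:
 * definition-use, use-definition, and definition-definition. */
enum darc_kind { DARC_DEF_USE, DARC_USE_DEF, DARC_DEF_DEF };

/* A DARC node connects two DUDe nodes when a possible dependence exists. */
struct darc {
    enum darc_kind kind;
    struct dude   *from;
    struct dude   *to;
};
```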

Another structure which aids optimization is the structure graph. This portion of the IHR represents the static nesting of loops in a high-level language program. The structure graph comprises loop nodes and block nodes. A loop node represents a loop in a high-level language (e.g., DO loop in FORTRAN, for loop in C). A block node represents basic blocks in a high-level language program. The structure graph is connected in a manner which shows the hierarchy of loops and basic blocks in a program.
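
The structure graph could be rendered as linked loop nodes and block nodes along the lines sketched below; the link fields are assumptions chosen to show the nesting hierarchy.

```c
struct loop_node;

/* A block node represents a basic block and records the innermost loop
 * that contains it. */
struct block_node {
    struct loop_node  *enclosing;   /* innermost containing loop, or NULL   */
    struct block_node *next;        /* next basic block at this level       */
};

/* A loop node represents one source-language loop; parent/inner/sibling
 * links express the static nesting of loops. */
struct loop_node {
    struct block_node *blocks;      /* basic blocks belonging to this loop  */
    struct loop_node  *parent;      /* enclosing loop, if any               */
    struct loop_node  *inner;       /* first loop nested inside this one    */
    struct loop_node  *sibling;     /* next loop at the same nesting level  */
};
```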

The present invention is especially adapted for use with a multiprocessor computer processing system, and, in particular, a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory. It will be recognized, however, that the current invention is equally effective on computer systems ranging from a single scalar processor to highly parallel, multiprocessor systems with pipelined vector processing mechanisms.

In the preferred embodiment, the integrated modular compilation system of the present invention is comprised of a set of integrated program development tools including a program manager, a compiler, a user interface, and a distributed debugger. The program manager controls the development environment for source code files representing a software program. The compiler is responsible for compiling the source code file to create an object code file comprised of multiple threads capable of parallel execution. An executable code file is then derived from the object code file. The user interface presents a common visual representation of the status, control and execution options available for monitoring and controlling the execution of the executable code file on the multiprocessor system. The distributed debugger, utilizing IHR, provides debugging information and control of the execution of the executable code file on the multiprocessor system.

An assembler for generating object code from an assembly source code program may automatically perform some optimization of the assembly language program. The assembler generates LF which is translated by the binary generator into object code (machine instructions). The assembler also generates HF for an assembly language program that provides information useful in debugging assembly programs because of the integration between the HF representation of a program and the distributed debugger of the present invention.

The distributed debugger is capable of debugging optimized multithreaded object code for the preferred multiprocessor system. It can also debug distributed programs across an entire computer network, including the multiprocessor system and one or more remote systems networked together with the multiprocessor system. It will be recognized that the optimized parallel object code produced by the compiler will be substantially different than the non-optimized single processor code that a user would normally expect as a result of the compilation of his or her source code. In order to accomplish debugging in this type of environment, the distributed debugger maps the source code file to the optimized parallel object code file of the software program, and vice versa, using the information contained in the IHR of the present invention.

A primary objective of the present invention is to provide an integrated hierarchical representation that is capable of supporting language-independent and machine-independent optimizations, as well as machine-dependent optimizations, and source-level debugging of an optimized executable object code file.

Another primary objective of the present invention is to provide an integrated hierarchical representation that is shared by all of the components of an integrated modular software development system, including the compiler, the debugger, the assembler, the optimizer, and the code generator.

A further primary objective of the present invention is to provide an integrated hierarchical representation that uses a graphical hierarchical data structure to represent context, execution and optimization information associated with a source code program and provides an efficient and effective mechanism for implementing a variety of optimization techniques during software development, particularly optimization techniques that are based upon an analysis of the various relationships within a program.

Still another objective of the present invention is to provide a method for optimizing assembly language programs by using an integrated hierarchical representation and a common optimizer shared by an assembler and compiler.

A still further objective of the present invention is to provide a method for source-level debugging of assembly language programs by using a language-independent integrated hierarchical representation.

An additional objective of the present invention is to provide an integrated modular compilation system especially adapted for generating and debugging source code for a highly parallel multiprocessor system.

Another objective of the present invention is to provide an integrated modular compilation system that can accomplish inter-language inlining and inlining at more than one point during the compilation process.

These and other objectives of the present invention will become apparent with reference to the drawings, the detailed description of the preferred embodiment and the appended claims.

FIGS. 1a and 1b are an overall block diagram of the components of the software development system of the present invention.

FIG. 2a, comprised of FIGS. 2a-1 and 2a-2, and FIG. 2b are overall block diagrams showing the structure of the integrated hierarchical representation (IHR) of the present invention.

FIG. 3 is a block diagram showing the relationship between the expression nodes and the LF nodes in the present invention.

FIGS. 4a and 4b are a flow diagram of the present invention showing the method of translating a source program into the IHR that allows for debugging of highly optimized code, inter-language inlining, and optimization and source-level debugging of assembly language programs.

FIGS. 5a and 5b show the structure of the debugger register mapping providing information for debugging of optimized code and the method of determining the register containing a variable in optimized code.

FIG. 6 shows the structure of a loop structure graph containing optimization information.

FIG. 7 is a block diagram of the preferred embodiment of a single multiprocessor cluster system for executing the software architecture of the present invention.

FIGS. 8a and 8b are a block diagram of a four cluster implementation of the multiprocessor cluster system shown in FIG. 7.

Software development systems, including compilers, assemblers and debuggers are well known in the prior art. A working understanding of the concepts and terminology involved in such development systems is assumed for purposes of describing the preferred embodiment of the present invention. For an excellent discussion of the present state of the art of compilation systems, reference is made to Aho, Sethi and Ullman, Compilers: Principles, Techniques and Tools, Addison-Wesley (1986).

Although it will be understood that the present invention is capable of operating on any number of computer processing systems, the preferred embodiment of a computer processing system for executing the present invention is a highly parallel multiprocessor cluster system comprising multiple tightly-coupled processors sharing a common memory.

Referring now to FIG. 7, a single multiprocessor cluster of the preferred embodiment of the multiprocessor cluster system for executing the present invention is shown having a plurality of high-speed processors 10 sharing a large set of shared resources 12 (e.g., main memory 14, global registers 16, and interrupt mechanisms 18). In this preferred embodiment, the processors 10 are capable of both vector and scalar parallel processing and are connected to the shared resources 12 through an arbitration node means 20. The processors 10 are also connected through the arbitration node means 20 and a plurality of external interface ports 22 and input/output concentrators (IOC) 24 to a variety of external data sources 26. The external data sources 26 may include a secondary memory system (SMS) 28 linked to the input/output concentrator means 24 via one or more high speed channels 30. The external data sources 26 may also include a variety of other peripheral devices and interfaces 32 linked to the input/output concentrator via one or more standard channels 34. The peripheral devices and interfaces 32 may include disk storage systems, tape storage systems, terminals and workstations, printers, and communication networks.

Referring now to FIGS. 8a and 8b, a block diagram of a four cluster version of the multiprocessor system is shown. Each of the clusters 40a, 40b, 40c and 40d physically has its own set of processors 10, shared resources 12, and external interface ports 22 (not shown) that are associated with that cluster. The clusters 40a, 40b, 40c and 40d are interconnected through a remote cluster adapter means (not shown) that is an integral part of each arbitration node means 20 as explained in greater detail in the parent application. Although the clusters 40a, 40b, 40c and 40d are physically separated, the logical organization of the clusters and the physical interconnection through the remote cluster adapter means enables the desired symmetrical access to all of the shared resources 12. The multiprocessor cluster system of the preferred embodiment creates a computer processing environment in which parallelism is favored. Some of the mechanisms in the multiprocessor cluster system which aid the present invention in coordinating and synchronizing the parallel resources of such a multiprocessor system include, without limitation: the distributed input/output subsystem, including the signaling mechanism, the fast interrupt mechanism, and the global registers and the atomic operations such as TAS, FAA, FCA and SWAP that operate on the global registers; the mark instructions, the loadf instruction, the accounting registers and watchpoint addresses; and the various mechanisms that support the pipelined operation of the processors 10, including the instruction cache and the separate issue and initiation of vector instructions. Together, and individually, these mechanisms support the symmetric access to shared resources and the multi-level pipeline operation of the preferred multiprocessor system. For a more detailed description of the preferred embodiment of the multiprocessor cluster system for executing the present invention, reference is made to the previously identified parent application, entitled INTEGRATED SOFTWARE ARCHITECTURE FOR A HIGHLY PARALLEL MULTIPROCESSOR SYSTEM.

Referring now to FIGS. 1a and 1b, the various modular components of the present invention will be described. The compiler 200 includes one or more front-end modules that interface the integrated hierarchical representation (IHR) of the present invention with a variety of available programming languages. The preferred embodiment of the compiler 200 provides a Fortran front-end 201 and a C front-end 202. The front ends 201 and 202 generate a representation of the source code in a high-level integrated hierarchical representation referred to as HiForm (HF). The HF representation is used by the optimizer 203, the code generator 204, the LoForm optimizer 205 and the binary generator 206. A low-level integrated hierarchical representation referred to as LoForm (LF) is generated by the code generator 204, as well as by the assembler 210. The LF representation is used by the LoForm optimizer 205 and the binary generator 206. The preferred embodiments of the HF and LF components of the IHR and the relationships between the IHR and the various components of the software development system are described in further detail hereinafter.

The objective of the front-ends 201 and 202 is to produce a representation of the source code for a software program in the first stage (HF) of the integrated hierarchical representation (IHR) of the source code program. The front ends 201 and 202 parse the source code into HiForm. Parsing determines the syntactic correctness of the source code and translates the source into an intermediate representation. Because the C and Fortran front-ends 201 and 202 share the optimizer 203 and code generator 204, the programmer may easily mix different programming languages in the same application. Compiler front-ends for additional languages can be added to the compiler 200 and will share the optimizer 203 and code generator 204 with existing front-ends.

In the preferred embodiment, the C compiler front-end 201 is based on the ANSI X3.159-1989 C language standard. Extensions to the C compiler front-end 201 provide the same functions to which System V programmers are accustomed in other C compilers. Additional extensions, in the form of compiler directives, benefit CPU-intensive or large engineering/scientific applications. The C compiler front-end 201 performs macro processing, saving the definitions of macros for debugging as part of the IHR as described in greater detail hereinafter. The Fortran compiler front-end 202 is based on ANSI Fortran 77 and contains several extensions for source compatibility with other vendors' Fortran compilers. All extensions can be used in a program unless there is a conflict in the extensions provided by two different vendors. Again, the extensions are saved as part of the IHR as described in greater detail hereinafter. Although the preferred embodiment of the IHR is presented in terms of HF and LF representations capable of supporting both C and Fortran programming languages, it will be appreciated that additions and changes to the IHR may be necessary to support programming features of other programming languages, e.g., Ada, Pascal.

The optimizer 203 performs standard scalar optimizations, and detects sections of code that can be vectorized or automatically threaded and performs those optimizations. The optimizer 203 attempts to improve the HF code so that faster-running object code will result by performing several language and machine-independent optimizations. The optimizer 203 performs aggressive optimizations, which include automatic threading of source code, automatic vectorization of source code, interprocedural analysis for better optimizations, and automatic in-lining of procedures as discussed in greater detail hereinafter. The optimizer 203 also performs advanced dependence analysis to identify every opportunity for using the vector capabilities of the preferred multiprocessor system. The same dependence analysis is used to do multithreading, which makes it possible to concurrently apply multiple processors to a single program. The optimizer also applies a wide range of scalar optimizations to use the scalar hardware in the most efficient manner. Scalar loop optimizations, such as strength reduction, induction variable elimination, and invariant expression hoisting are performed on loops that cannot be vectorized or automatically multithreaded. Global optimizations are performed over an entire procedure. They include: propagation of constants, elimination of unreached code, elimination of common subexpressions, and conversion of hand-coded IF loops to structured loops. In-lining of procedures automatically pulls small, frequently used procedures inline to eliminate procedure call overhead. For a more detailed discussion of the various optimizations that may be invoked by the preferred embodiment of the present invention, reference is made to the previously identified parent application entitled INTEGRATED SOFTWARE ARCHITECTURE FOR A HIGHLY PARALLEL MULTIPROCESSOR SYSTEM.
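
To make two of the scalar loop optimizations named above concrete, the following before/after example shows strength reduction and invariant expression hoisting written out as C source. This is only an illustration of the effect of the transformations; the optimizer 203 performs the equivalent rewriting on the HF representation, not on source text.

```c
/* Before: multiplication by the induction variable, and a loop-invariant
 * expression (s * t) recomputed on every iteration. */
void before(double *a, double s, double t, int n)
{
    for (int i = 0; i < n; i++)
        a[i * 4] = s * t + i;
}

/* After strength reduction and invariant hoisting: the multiply becomes a
 * running sum, and s * t is computed once outside the loop. */
void after(double *a, double s, double t, int n)
{
    double st = s * t;          /* hoisted invariant expression */
    int idx = 0;                /* replaces i * 4               */
    for (int i = 0; i < n; i++) {
        a[idx] = st + i;
        idx += 4;
    }
}
```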

Optimization is a time- and space-intensive process, even when using efficient algorithms. Selected parts of optimization may be turned off to provide some of the benefits without all of the cost. For example, performing vectorization does not require performing scalar global optimization; however, without performing the global transformation, some opportunities for vectorization may be missed. It will also be recognized that there are situations where it is necessary to have quick compilation, and the optimization phase may be skipped altogether. In the preferred embodiment, each of the optimizers, optimizer 203 and LoForm optimizer 205, may be selectively invoked by a user by inserting appropriate command line statements in a source code program to activate or inactivate desired optimizations.

The code generator 204 takes the HF code from the front ends 201 and 202, via the optimizer 203 if utilized, and translates the HF code into a combined HF and LF representation of the source code program. In essence, the code generator 204 moves the representation of the source code from a language and machine-independent high-level representation (HF) to a machine-dependent low-level representation (LF) that is part of the integrated hierarchical representation.

Once the process of translating the intermediate HF code to LF code is completed, machine-dependent optimizations may be performed by the LoForm optimizer 205. These optimizations attempt to make optimum use of registers, such as keeping the most commonly used variables in registers throughout a procedure. Other optimizations that may be performed by the preferred embodiment include: an instruction scheduler that seeks to simultaneously use the multiple functional units of a particular target machine and minimize the time required to complete a collection of instructions; linkage tailoring that minimizes procedure call overhead across files; a post-scheduling process that moves memory loads as early as possible and performs bottom loading of loops; and loop unrolling that duplicates the body of the loop to minimize loop overhead and maximize resource usage.
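
The loop unrolling mentioned above can be illustrated with the following source-level example; the compiler applies the transformation at the LF level, and the unroll factor of 4 is an arbitrary assumption chosen for the sketch.

```c
/* Original loop: one add per iteration plus loop overhead every time. */
void rolled(double *a, const double *b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] + b[i];
}

/* Unrolled by 4: the body is duplicated to reduce per-iteration loop
 * overhead and expose more work to the functional units. */
void unrolled_by_4(double *a, const double *b, int n)
{
    int i = 0;
    for (; i + 3 < n; i += 4) {       /* main unrolled body   */
        a[i]     = a[i]     + b[i];
        a[i + 1] = a[i + 1] + b[i + 1];
        a[i + 2] = a[i + 2] + b[i + 2];
        a[i + 3] = a[i + 3] + b[i + 3];
    }
    for (; i < n; i++)                /* remainder iterations */
        a[i] = a[i] + b[i];
}
```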

As part of the optimizers 203 and 205, the preferred embodiment of the present invention includes a two-stage means for procedure inlining, first inliner 221 and second inliner 222. Procedure inlining is the process whereby the compiler 200 replaces a procedure call with the actual body of code in the called procedure so that called procedure code executes as part of the program code. When procedure calls are not inlined into program code, the calls are made while the program code executes and can incur significant overhead. The overhead of a procedure call is the cost in time of making a jump to a different section of code and returning from that section of code and the saving and restoring of registers around these jumps. Inlining saves the overhead of procedure calls being made during the execution of a program. In the preferred embodiment, intra-file inlining is performed at the HF level by the first inliner 221 in connection with optimizer 203 and inter-file inlining is performed at the LF level by the second inliner 222 in connection with the LoForm optimizer 205.

In the actual inlining process, procedure call sites are located and matching procedure definitions are found in the IHR. If it is determined that it is beneficial and economical to inline, the IHR equivalent of the statements of the procedure definition is copied and added to the IHR at the call site. Copies of all the local variables are also made from the procedure definition and added to the call site. Uses of the formal parameters in the definition are turned into references to temporaries when the copying takes place.
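The following hypothetical C fragment illustrates, at the source level, the transformation just described; the preferred embodiment performs the equivalent operation on the IHR rather than on source text, and the names shown are assumptions for the example only.

static double scale(double v, double f)   /* called procedure */
{
    double r = v * f;
    return r;
}

double caller_not_inlined(double x)
{
    return scale(x, 2.0) + 1.0;           /* incurs call/return overhead */
}

double caller_inlined(double x)
{
    double t_v = x, t_f = 2.0;            /* formal parameters become temporaries */
    double t_r = t_v * t_f;               /* copies of the callee's local and body */
    return t_r + 1.0;                     /* the call is replaced by the result value */
}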

In the preferred embodiment, determination of which call sites are inlined is done automatically so that no user directives are required; however, the user can be explicit about which calls to inline. If a program calls procedures that require large amounts of local storage space, the user may wish to control inlining. Otherwise, the inlining process in the compiler determines which procedure call sites in the program can be inlined. In the method of the present invention, the IHR of the program source code is utilized throughout the inlining process. For example, the inlining process uses the IHR to determine which procedure call sites are inside looping constructs and gives those call sites a high probability of being inlined. Procedure calls within looping constructs are given a higher priority and are allowed to be larger. Routines over a certain number of statement nodes in size are not inlined, since the benefit of inlining large procedures is minimal. The IHR is examined to detect call sites to procedures that are recursive, that require large amounts of local variable storage (stack) space, or that have some characteristic that will prevent the inlined procedure call from behaving exactly as the non-inlined call.
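A minimal sketch of the kind of call-site test described above is given below in C; the thresholds, structure fields, and function name are hypothetical and are not taken from the preferred embodiment.

#define MAX_INLINE_STMTS       50         /* hypothetical size threshold */
#define MAX_INLINE_STMTS_LOOP 200         /* larger threshold for call sites inside loops */

struct call_site {
    int callee_stmt_count;                /* size of the callee in statement nodes */
    int in_loop;                          /* call site is inside a looping construct */
    int callee_recursive;                 /* callee is recursive */
    int callee_large_stack;               /* callee requires large local (stack) storage */
};

static int should_inline(const struct call_site *cs)
{
    if (cs->callee_recursive || cs->callee_large_stack)
        return 0;                         /* inlined call would not behave as the call */
    int limit = cs->in_loop ? MAX_INLINE_STMTS_LOOP : MAX_INLINE_STMTS;
    return cs->callee_stmt_count <= limit;
}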

Inlining within the same compilation unit (a single file) is done at compile time by the first inliner 221. Inlining between compilation units (across files) is done at link time, near the end of the compilation process, by the second inliner 222. Some call sites cannot be inlined until link time, when the complete intermediate database representation of all compilation units, and thus all the information necessary to detect potential inlining problems, is available. In the case where procedures are inlined across different files by the second inliner 222, information is gathered from the IHR into an interprocedural database. Part of the information contained in the interprocedural database is used by the preferred embodiment of the present invention to select which procedures are to be inlined.

The inliners 221 and 222 also work closely with the inter-procedural analysis (IPA) 230. When the compiler 200 is processing a procedure, there may be calls to other procedures. In the traditional software environment, the compiler 200 has no knowledge of the effects of these other (called) procedures. Without such knowledge, the compiler 200 is forced to assume the worst and inhibit many optimizations that are in fact safe. Interprocedural analysis (IPA) is the collection and analysis of procedure information. The results of this analysis allow the compiler 200 to optimize across called procedures. With the use of IPA information, the number of instances where an optimization can be applied is increased. The optimizations that can benefit from IPA include: common subexpression elimination, forward substitution, redundant store elimination, constant folding, constant propagation, dead code elimination, global common subexpression elimination, vectorization and automatic multithreading. In addition, for each procedure in a program, IPA 230 collects a list of defined or used global variables and counts how many times each variable is defined or used. IPA 230 sums the number of defines and uses of the global variables and sorts them into the order of most frequent use. The most frequently used variables can then be allocated to L registers. The registers for a called procedure are offset from those of the calling procedure to reduce the number of register saves and restores in a procedure call.
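The def/use counting and ranking described above might be expressed as in the following sketch; the structure, the comparison function, and the use of the C library qsort are assumptions for illustration, not the code of the preferred embodiment.

#include <stdlib.h>

struct global_count { int sym_index; long defs; long uses; };

static int by_total_use_desc(const void *a, const void *b)
{
    const struct global_count *x = a, *y = b;
    long tx = x->defs + x->uses, ty = y->defs + y->uses;
    return (ty > tx) - (ty < tx);         /* most frequently used variables first */
}

void rank_globals_for_registers(struct global_count *g, size_t n)
{
    /* after sorting, the leading entries are the best register candidates */
    qsort(g, n, sizeof g[0], by_total_use_desc);
}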

The IPA 230 also works closely with the linker 214 in performing interprocedural analysis on source code programs that consist of multiple procedure modules, each of which may be compiled separately and then linked together by the linker 214. There are two types of interprocedural analysis that are well known in the prior art, exhaustive and incremental. For exhaustive analysis, a call graph is formed from information in the object code files and analyzed. This is the "start from scratch" analysis. For incremental analysis, a call graph and analysis are assumed to exist from a previous link of the program, and a small number of modified procedures are replaced in the call graph. This is the "do as little work as possible" analysis. In the traditional System V environment, a programmer can modify a procedure, compile, and link a program without having to recompile any other procedures, since no dependencies exist between procedures. In an IPA environment, dependencies exist between procedures because procedures are basing optimizations upon knowledge of how called procedures behave. Hence, when a called procedure is modified and recompiled, a calling procedure may also need to be recompiled. This problem is solved by recompiling a procedure when any of the procedures it calls has changes in its interprocedural information as determined by the IPA 230.
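The recompilation rule stated above can be sketched as follows; the per-procedure summary hash and the field names are hypothetical, serving only to make the dependency on callee information explicit.

struct proc_info {
    unsigned ipa_hash;                    /* summary of current interprocedural information */
    unsigned prev_ipa_hash;               /* summary recorded at the previous link */
    int n_callees;
    const int *callees;                   /* indices of procedures called by this procedure */
};

int needs_recompile(const struct proc_info *procs, int p)
{
    for (int i = 0; i < procs[p].n_callees; i++) {
        int c = procs[p].callees[i];
        if (procs[c].ipa_hash != procs[c].prev_ipa_hash)
            return 1;                     /* a callee's interprocedural information changed */
    }
    return 0;
}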

The compilation advisor 231 functions as an interface between the programmer and the compiler 200. In effect, the compilation advisor 231 allows the optimizers 203 and 205 to ask a programmer optimization-related questions. The compiler 200 identifies the information that it needs and formulates questions to ask the programmer. The compiler 200 saves these questions so the programmer can address them through the compilation advisor 231. The compilation advisor 231 relays the programmer's answers back to the compiler 200. A second role of the compilation advisor 231 is displaying dependence information so the programmer can attempt to eliminate dependencies. Dependencies among expressions in a program inhibit vectorization and parallelization of parts of the program. Eliminating dependencies enables the compiler 200 to generate more efficient code. When there is no transformation that the compiler can perform to eliminate a dependence, the programmer may be able to change the algorithm to eliminate it.

The debugger 212 is an interactive, symbolic, parallel debugger provided as part of a parallel user environment. The debugger 212 contains the standard features of debuggers that are commonly available. These features enable a programmer to execute a program under the control of the debugger 212, stop it at a designated location in the program, display values of variables, and continue execution of the program. The debugger 212 of the present invention has several unique features. The combination of these innovative capabilities provides the user with functionality not generally found in other debuggers. The debugger 212 has two user interfaces. The first, a line-oriented interface, accepts commands familiar to System V users. The second interface, comprised of windows, is designed to minimize the learning required to use the debugger capabilities.

The software architecture of the present invention maintains the information necessary to display high-level language source, for the segment of the program being debugged, in a number of environments (e.g., Machine A, B and C). The compilation system creates a mapping of the high-level program source code to machine code and vice versa. One of several capabilities of the debugger 212 not found in other debuggers is source-level debugging of optimized code. The optimizations that can be applied and still maintain source-level debugging include dead-code elimination, code migration, code scheduling, vectorization, register assignment and parallelization.

The debugger 212 supports debugging of parallel code. A display of the program's dynamic threading structure aids the user in debugging parallel-processed programs. The user can interrogate individual threads and processes for information, including a thread's current state of synchronization. Other commands display the status of standard synchronization variables such as locks, events, and barriers. The debugger 212 provides additional capabilities. For example, a programmer can set breakpoints on data and communication, as well as code. Macro facilities assign a series of commands to one command. Control statements in the command language allow more flexibility in applying debugger commands. Support for distributed processes enables the programmer to debug codes on different machines simultaneously. Numerous intrinsic functions, including statistical tools, aid the programmer in analyzing program data. Support in the debugger 212 for language-specific expressions allows familiar syntax to be used.

Other optional components of the preferred embodiment of the software development system of the present invention as shown in FIG. 1 include a linker 214 and a disassembler 215. Both the linker 214 and disassembler 215 operate on the object code files (.o files) produced by the compiler 200. The linker 214 may be connected to one or more libraries 216 that contain pre-compiled object code files of existing procedures or routines that may be linked together or accessed by a source code program.

Having described the various components of the preferred embodiment of the software development system, the structure for the preferred embodiment of the integrated hierarchical representation of a computer program will now be described. The first stage of the integrated hierarchical representation (IHR) is called HiForm (HF). HF is a language-independent, intermediate representation of a high level language program. It is produced by the compiler front ends 201 and 202 and the assembler 210. All front ends of the compiler produce the same HF, and HF representations of different programming languages can be merged with each other by inlining without recompilation of source code.

HF is made up of nodes representing semantic elements of the high level program. These nodes contain references to certain broad classes of information about semantic items and data objects by way of indices into tables. The use of table-references instead of pointers for this purpose allows the present invention to take advantage of a highly pipelined vector computer architecture. It also allows greater flexibility in moving or relocating the executable object code file for a program in main memory or in storing the executable object code file in secondary memory.
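A sketch of the table-index style of reference described above is shown below; the node layouts and type names are assumptions, intended only to contrast index-based references with machine pointers.

typedef int hf_index;                     /* index into one of the HF tables, not a pointer */

struct expr_node { int ex_op; hf_index ex_data_type; hf_index operand[2]; };
struct type_node { int ty_op; long ty_size; };

struct hf_tables {
    struct expr_node *expr_table;         /* contiguous tables; position independent, */
    struct type_node *type_table;         /* so the IHR can be moved or stored without fix-ups */
};

static struct type_node *expr_type(struct hf_tables *t, hf_index e)
{
    /* dereferencing is an indexed load, which also suits a pipelined vector machine */
    return &t->type_table[t->expr_table[e].ex_data_type];
}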

The HF representation of programs produced by the compiler front-ends 201 and 202 and the assembler 210 is used by the optimizers 203 and 205, inliners 221 and 222, assembler 210, code generator 204, and debugger 212, as well as a graphical compiler development utility 213. The nodal, table-oriented structure of HF efficiently stores the information needed for debugging: source to binary mappings, register usage, and other information needed to debug highly optimized code.

The second portion of the IHR, called LoForm (LF), is a graph-structured representation of the machine instructions generated for a program. This graph structure allows several machine-dependent optimizations to be performed at the machine code level, while at the same time retaining a relationship to the HF representation of the program for the purpose of maintaining an integrated hierarchical representation.

Referring now to FIGS. 2a-1, 2a-2 and 2b and Tables I-X, the hierarchical structure of the IHR of the present invention will be described. The highest level of information contained in the IHR is scope nodes 100 that represent the static nesting of procedures in a program module. Scope nodes 100 contain information about symbols for that scope 101, scopes nested within that scope 102, and the next scope at the level of the current scope 103. Also represented in the scope node is the first high-level language statement 104 contained in the scope. The structure of a scope node as a defined data structure in C is shown in Table I. Although the structure of the IHR of the preferred embodiment is presented in terms of specific data structures, it will be recognized that many variations on the particular information contained in the data structure and the manner in which that information can be represented in the data structure could be accomplished and still be within the scope of the present invention.
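For illustration, the scope node fields listed in Table I below might be declared as in the following sketch; the field types and the index type are assumptions rather than the actual declarations of the preferred embodiment.

typedef int node_index;                   /* index of another node in the IHR tables */

enum scope_kind { UNDEF_SCOPE, MODULE_SCOPE, PROCEDURE_SCOPE,
                  BLOCK_SCOPE, STRUCTURE_SCOPE, PROTOTYPE_SCOPE };

struct scope_node {
    enum scope_kind scope_kind;           /* class of this scope */
    node_index scope_symbol;              /* symbol node defining this scope */
    node_index scope_symbol_table;        /* symbol table contained by this scope */
    node_index scope_parent;              /* parent scope node */
    node_index scope_sibling;             /* next scope at the same level */
    node_index scope_child;               /* first scope nested within this scope */
    node_index scope_first_stmt;          /* first statement in this scope (hypothetical name) */
};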

Each statement node 104 contains information about the kind of statement represented by the node (i.e. assignment statements, if statements, loop statements, etc.). Statement node 104 also contains information about the next statement 105 in lexical order, and a representation of the expressions contained in the statement. As an aid to the debugger 212 in debugging optimized code, the statement node 104 contains an indication of whether or not the statement has been inlined. Another debugging aid for use in debugging optimized code is an indication of the first and last machine instructions generated from the statement. Using this information, the debugger 212 can accurately set breakpoints, even in the presence of code optimized by instruction scheduling. The structure of a statement node is represented in Table II.

Grouping the statement nodes 104 together are the block nodes 106. The block nodes represent the basic blocks of the program. Basic blocks are groups of statements with one entry and one exit. The block nodes 106 contain much of the information needed by the various optimization phases, including bit vectors representing live variable information. The block nodes 106 also contain a depth-first number that is used to index into bit vectors containing this optimization information. The structure of a block node is represented in Table IV.

Expression nodes 107 form a directed acyclic graph to represent high-level language expressions. They contain the information about the operator and operands 108 of the expressions, as well as the data type 111 of the operator. The structure of an expression node is represented in Table V. In many cases, the operands of expression nodes may be literal nodes representing objects with known values at compile time. The structure of a literal node is represented in Table VII.

The data type of an expression is represented by a type node 111. The type node 111 contains information about the components of a type as well as the size of the type. Other information about types is dependent on the type being represented. For example, a type node 111 for an array contains information about the number and sizes of dimensions of the array. The structure of a type node is represented in Table VI.

Symbol nodes 110 represent named items in a high-level language or assembly language program. They contain information about the kind of symbol (variable, type, label, procedure, etc.), the location in a program where the symbol is defined, and an index of the type node 111 indicating the data type of this symbol. Also contained in the symbol node is information for debugging optimized code--the debug mapping 109. A more detailed description of this feature of the present invention is presented in the description of FIGS. 5a and 5b. The symbol node 110 points to an array of bit vectors 150, each bit vector containing one entry for each block node 106 in the program. This array is indexed by the depth-first numbering of the block nodes 106. The structure of a symbol node is represented in Table VIII.

Referring now to FIG. 3, the structure of the LF portion of the IHR of the present invention will be described. LF nodes 112 are a representation of the machine instructions generated for a program. LF nodes 112 are produced by the code generator and the assembler. The LF nodes 112 contain information about the machine instruction including the opcode, the operands and an indication of the parent expression nodes 107 and 108. The structure of an LF node 112 is represented in Table IX.

One of the components of HF is the Definition Use Dependencies (DUDes). Definition-use information relates a variable's definition to all the uses of the variable that are affected by that definition. Use-definition information relates a variable's use to all the definitions of the variable that affect that use. Definition-definition information relates a variable definition with all definitions of the variable that are made obsolete by that definition. The present invention incorporates definition-use, use-definition and definition-definition information for single and multiple word variables, equivalenced variables, pointers and procedure calls (including all potential side effects) into a single representation (DUDe nodes) that is an integral part of the dependence analysis done for vectorization and multithreading. The structure of a DUDe node is represented in Table X.
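The definition-use relations described above are illustrated by the following hypothetical fragment; the labels D1, D2, U1, and U2 are for exposition only.

void dude_example(void)
{
    int x, y, z;
    x = 1;        /* definition D1 of x                              */
    y = x + 2;    /* use U1 of x: definition-use arc D1 -> U1        */
    x = y * 3;    /* definition D2 of x: D1 -> D2 is definition-
                     definition information, since D2 makes D1 obsolete */
    z = x;        /* use U2 of x: reached only by D2 (use-definition) */
    (void)z;
}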

Referring now to FIGS. 4a and 4b, the method for compiling source languages into binary code providing debugging of highly optimized code, inter-language inlining, and optimization and source-level debugging of assembly language programs using the present invention will be described. Each one of a plurality of language front ends 201 and 202 translates a given high level source language to the language-independent intermediate representation HF 250 using well-known parsing techniques and the other techniques referred to above. HF 250 is then optionally fed into a language independent optimizer 203. The language independent optimizer 203 performs many machine-independent optimizations upon HF 250 and augments HF 250 with information necessary for debugging optimized programs, thereby producing optimized HF 251.

At this point, the optimized HF 251 is fed into the code generator 204. The code generator 204 then augments the optimized HF 251 with LF, a machine-dependent representation of the instructions generated for a program, thus producing optimized HF with LF 252. Next, optimized HF with LF 252 is fed through another optimization phase 205 which performs machine-dependent optimizations. Optimized HF with LF 252 is further augmented to reflect transformations performed by the LF optimizer 205, producing optimized HF and LF 253. The optimized HF and LF 253 is then fed through the binary generator 206 to produce a relocatable object file 254.

A distinct component, the assembler 211, translates assembly language programs 272 into a combined HF and LF representation 255. The HF and LF 255 produced by the assembler may then be optionally fed through the LF optimizer 205. The LF optimizer 205 performs optimizations on the assembly language program not performed by prior art systems, producing HF and optimized LF 256. Next, the binary generator 206 reads the HF and optimized LF and produces the relocatable object file 257 containing object code as well as HF and optimized LF.

Another distinct component of the system, the debugger 212, then reads the relocatable object code files 254 and 257 containing optimized HF and LF. Using the information contained in the object code files 254 and 257, the debugger is able to debug both high-level language programs and assembly language programs in the presence of optimizations.

Referring now to FIGS. 5a and 5b, an example of how the additional information represented in the IHR is used to assist in debugging optimized code will be described. A parallel array 151 contains the register number holding the symbol in the current block. The debugger uses this parallel array 151, which is part of the IHR, to find the register holding a variable in a block of optimized code by performing the following steps (a sketch in C follows the steps):

1. The current block node 106 is determined using binary to source mappings.

2. The symbol node 110 for the desired variable is determined.

3. The debugger looks through the array of bit vectors 150, checking if the bit indicated by the depth-first number of the current block node 106 is set in any of the bit vectors 150. When a set bit is found, the corresponding entry in the alt-- sto-- offset array 151 is examined. This entry specifies the register number containing the variable for the current block 106.
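A minimal sketch of the lookup just described is given below in C; the bit vector layout, structure and field names, and return convention are assumptions made for the example.

#define NO_REGISTER (-1)

struct sym_node_debug_map {
    unsigned char **live_bv;              /* array of bit vectors 150, one bit per block node */
    int *alt_sto_offset;                  /* parallel array 151 of register numbers */
    int n_entries;
};

static int bit_is_set(const unsigned char *bv, int bit)
{
    return (bv[bit >> 3] >> (bit & 7)) & 1;
}

/* Returns the register holding the variable in the block with the given
   depth-first number, or NO_REGISTER if no bit vector has that bit set. */
int register_for_symbol(const struct sym_node_debug_map *map, int block_dfn)
{
    for (int i = 0; i < map->n_entries; i++)
        if (bit_is_set(map->live_bv[i], block_dfn))
            return map->alt_sto_offset[i];
    return NO_REGISTER;
}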

Referring now to FIG. 6, the structure of the preferred embodiment of the loop structure graph will be explained. The loop structure graph consists of loop nodes and block nodes and represents the static nesting of loops in a high level language program. The root 600 of the structure graph is a pseudo loop surrounding the entire program; children of this loop node are the blocks and loops comprising the program. Block node 602 represents the basic block of the loop prologue for the outer loop. Loop node 603 represents the outer loop itself, while block node 604 represents the epilogue block for the outer loop. Nested within the outer loop 603 is block node 605, the prologue block for the inner loop, as well as loop node 606 and block node 607. Loop node 606 represents the inner loop, while block node 607 represents the epilogue of the inner loop. Contained within the inner loop 606 is the body of the loop comprising block nodes 608 and 609. The structure of the loop node is represented in Table III.
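The static nesting shown in FIG. 6 might be represented as in the following sketch; the node kinds and fields are assumptions used only to make the child/sibling organization of the structure graph concrete.

struct sg_node {
    enum { SG_LOOP, SG_BLOCK } kind;
    int child;                            /* index of first block or loop contained in a loop node */
    int sibling;                          /* index of next node at the same nesting level */
};

/* root 600 (pseudo loop): prologue block 602 -> outer loop 603 -> epilogue block 604
   outer loop 603:         prologue block 605 -> inner loop 606 -> epilogue block 607
   inner loop 606:         body blocks 608 and 609                                    */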

TABLE I
______________________________________
Scope Node Field:
Scope Node Description:
______________________________________
scope-- kind:
The class of this scope.
One of the following:
UNDEF-- SCOPE
MODULE-- SCOPE
PROCEDURE-- SCOPE
BLOCK-- SCOPE
STRUCTURE-- SCOPE
PROTOTYPE-- SCOPE
scope-- symbol:
The symbol defining this scope
(a symbol node).
scope-- symbol-- table:
The symbol table contained by this scope.
scope-- parent:
The parent of this scope
(another scope node).
scope-- sibling:
The sibling of this scope
(another scope node).
scope-- child:
The child of this scope
(another scope node).
______________________________________
The following applies only to MODULE-- SCOPE:
scope-- macro-- symbol-- table:
The macro symbol table contained
by this scope.
The following applies only to MODULE-- SCOPE and
PROCEDURE-- SCOPE:
scope-- first-- block:
This is the first block symbol node
in this scope.
The following applies only to PROCEDURE-- SCOPE:
scope-- first-- stack-- sym:
First symbol node allocated from
the stack and in this procedure.
This may or may not be entered
into a scope symbol table.
scope-- last-- stack-- sym:
Last symbol node allocated from
the stack and in this procedure.
These may or may not be entered into a scope symbol table.
scope-- duc-- table:
AUX-- NODE of the global
def/use count table.
Built for IPA; contains a list of global variable symbol nodes and
counts of the number of definitions and uses for each
PROCEDURE-- SCOPE.
______________________________________
TABLE II
______________________________________
Statement
Node Field: Statement Node Description:
______________________________________
st-- op
Statement Operator.
One of the following:
SOPERROR Error statement
SOPCASE CASE statement
SOPCASECOMP CASE component
pseudo statement
SOPCYCLE CYCLE(F90) or
CONTINUE(C)
statement
SOPDEAD DEAD pseudo
statement
SOPENDCASE ENDCASE pseudo
statement
SOPENDGROUP End of SOPGROUP
SOPENDIF ENDIF statement
SOPENDLOOP ENDLOOP pseudo
statement
SOPENTRY ENTRY statement
SOPEXIT EXIT statement
SOPEXPRESSION Expression statement
SOPGOTO GOTO statement
SOPGROUP Group of statements
SOPIF IF statement
SOPIFCOMP IF component
pseudo statement
SOPLABEL LABEL pseudo
statement
SOPLOOP LOOP pseudo
statement
SOPNULL Null statement
SOPPRAGMA Pragma pseudo
statement
SOPRETURN RETURN statement
SOPREGISTER Register spill/load
pseudo statement.
st-- sfa
Source file address
st-- previous
Node for previous statement in
lexical order.
st-- next
Node for next statement in lexical order.
st-- block
Block node for basic block containing
this statement.
st-- lf-- start
First LoForm node associated
with this statement. Set by backend.
st-- lf-- end
Last LoForm node associated
with this statement. Set by backend.
st-- inlined
Bit set => This statement has been
inlined to its current location.
st-- inlining-- root
If st-- inlined-- ( ) is
TRUE then this is the statement that
caused the statement to be inlined.
st-- well-- structured
Bit set => This structured
statement (SOPCASE, SOPIF, SOPLOOP)
is well-formed, i.e. there are no branches
into it from outside or between its
components. Exit branches may exist,
however. Set by global flow analysis.
st-- has-- call
Bit set => This statement has
an EOPCALL under it somewhere.
______________________________________
TABLE III
______________________________________
Loop Node Field:
Loop Node Description:
______________________________________
stmt Statement node corresponding to this loop.
prologue Block node which is the prologue
of this loop.
head Block node which is the header of this loop.
tail Block node which is the tail of this loop
(from which there is a backedge
to the head.)
epilogue Block node which is the epilogue of
this loop.
child Child node in Structure Graph; first block
node in body of loop (always the
loop's header).
sibling Sibling node in Structure graph
(always a BLOCK node).
guard Logical expression controlling execution of
this loop (may be relative to some enclosing
condition).
clear-- bv
Aux node of bit vector representing
symbols which are clear of (re)
definition in this loop.
exit-- bv
Aux node for bit vector representing block
nodes from which this loop is exited.
nest Loop nest consisting of all loops which
enclose this loop (i.e. this loop is NOT part
of this nest).
iv-- table
Aux node for table of IV's in this loop.
iv-- count
Number of induction variables in this loop.
ps-- table
Aux node for table of promoted
scalars in loop.
ps-- count
Number of promoted scalars in this loop.
invar-- stmt
First invariant statement hoisted to
prologue block.
completed Bit set => The body of this loop
has been completed; used during
SG construction.
vectorizable
Bit set => Loop is vectorizable.
part-- vectorizable
Bit set => Loop is partially vectorizable.
taskable Bit set => Loop is taskable.
innermost Bit set => Loop is innermost in its nest.
has-- unknown-- call
Bit-- set => Loop contains call to an
unknown procedure.
has-- exit-- branch
Bit-- set => Loop contains an exit branch.
has-- back-- branch
Bit-- set => Loop contains
a backward branch.
has-- many-- entries
Bit-- set => Loop is entered
at more than one place.
has-- perfect-- subnest
Bit-- set => Loop has perfect subnest
beneath it. Temporary used by vector
analysis to point to a list of hazards.
______________________________________
TABLE IV
______________________________________
Block Node Field:
Block Node Description:
______________________________________
first-- stmt
First statement node in basic block.
last-- stmt Last statement
node in basic block.
pred-- list
Control flow predecessor list. This is
represented by an HF node list.
pred-- count
Number of control flow predecessors.
succ-- list
Control flow successor list. This is
represented by an HF node list.
succ-- count
Number of control flow successors.
fall-- through Target block of the "fall
through" edge from this block (only if this
block is terminated by a conditional branch)
previous Node for previous basic block
in lexical order.
next Node for next basic block in lexical order.
sibling Sibling node in Structure graph.
guard Logical expression controlling execution
of this loop (may be relative
to some enclosing condition).
loop-- mark
Corresponding SOPLOOP statement if this
block is a loop header; or loop
node corresponding to a loop header
or tail block. Set and used during
flow graph and structure graph
construction.
loop-- tail
Corresponding loop tail block if this
block is a loop header.
loop-- head
Corresponding loop header block if this
block is a loop tail.
nest Loop nest in which this block resides.
parent Parent loop node in the Structure
Graph (loop that immediately encloses
this block).
exit-- level
Number of loops exited from this block.
connected Bit set => This block is connected
to the control flow graph.
visited Bit set => This block has been
visited. Used by algorithms which traverse
the flow graph recursively. Must be cleared
before reuse.
has-- call
Bit set => This block contains at least
one procedure call.
has-- unknown-- call
Bit set => This block contains a call
to a procedure about which nothing
is known.
has-- loop-- stmt
Bit set => This block contains an
SOPLOOP statement.
is-- loop-- head
Bit set => This is the "header"
(entry) block of a loop. All other blocks
in the loop are dominated by this block.
is-- loop-- tail
Bit set => This is the "tail"
block of a loop. It contains the (last)
iteration test and branch to the loop's
header block.
is-- loop-- prologue
Bit set => This is the prologue block of
a loop. Its only successor is the header
of that loop.
is-- loop-- epilogue
Bit set => This is the epilogue
block of a loop. All normal exits from
the loop should go through this block.
is-- loop-- exit
Bit set => Control flows from this
block out of the loop or loops which contain
it AND this is not a loop tail block.
I.e. there is a premature loop exit
from this block.
has-- ind-- refs
Bit set => This block has indirect
references in it. This is used by the
scheduler to call for additional analysis.
dfn Number giving position in depth-first
ordering.
ordinal The ordinal value for a lexical ordering of
blocks in the compilation unit; this works
hand in hand with st-- ordinal-- ( ).
in Aux node of bit vector representing blocks
that reach this block in the flow graph.
out Aux node of bit vector representing
blocks reached from this block in the
flow graph.
dom Aux node of bit vector representing
blocks which dominate this block in the
flow graph.
gen-- def
Aux node of bit vector representing
definitions which are generated by
this block.
kill-- def
Aux node of bit vector representing
definitions which are killed by this block.
in-- def
Aux node of bit vector representing
definitions which reach the
beginning of this block.
out-- def
Aux node of bit vector representing
definitions which reach the end of this block.
x-- def
Aux node of bit vector representing
exposed definitions (first definitions
of variables) in this block.
x-- use
Aux node of bit vector representing
exposed uses in this block.
clear Aux node of bit vector representing
the clear vector for this block, (a bit is
set for symbols NOT defined in this block).
clear-- record
Aux node of bit vector representing
the clear vector for structured components
in this block, (a bit is set for records NOT
defined in this block).
alt-- sto
Aux node of bit vector representing
(register) colors used in this block.
first-- dude
First in a linked list of DUDe nodes
associated with this block.
pd-- pred
Predecessor of this node in the post-
dominator tree.
pd-- succ-- list
Successors of this node in the post-
dominator These are represented by an
HF node list.
cd-- in-- list
HF node list of nodes upon which
this node is control dependent.
cd-- out-- list
HF node list of nodes that are control
dependent upon this node.
______________________________________
TABLE V
______________________________________
Expression
Node Field: Expression Node Description:
______________________________________
ex-- op
One of the following expression operators:
EOPERROR, error expression
EOPADD, add
EOPADDR, address of
EOPAND, logical conjunction
EOPASSIGN, assignment
EOPCALL, procedure (function)
call
EOPCONCAT, string concatenation
EOPCONVERT, convert type of
operand
EOPDIV, divide
EOPENTRY, entry point
EOPEQ, equal
EOPEQV, logical equivalence
EOPEXP, exponent
EOPGE, greater than or equal
EOPGROUP, algebraic grouping
EOPGT, greater than
EOPJMP, unconditional jump
EOPJMPA, assigned jump
EOPJMPF, jump if false
EOPJMPT, jump if true
EOPJMPX, jump indexed
EOPLE, less than or equal
EOPLINK, placeholder
EOPLIT, literal
EOPLT, less than
EOPLVAL, l-value
EOPMOD, modulo
EOPMULT, multiply
EOPNE, not equal
EOPNOT, logical negation
EOPNULL, null (used as a
placeholder)
EOPOR, logical inclusive
disjunction
EOPRANGE, range of values of
discrete type
EOPREM, remainder
EOPRENAME, type rename
EOPRETURN, return
EOPROL, rotate left
EOPROR, rotate right
EOPRVAL, r-value
EOPSELECT, record/union member
selection
EOPSEQ, sequence of integral
values
EOPSHL, shift left
EOPSHR, shift right
EOPSUB, subtract
EOPSUBSCRIPT,
subscript
EOPSUBSTR, substring
EOPUADD, unary add
EOPUSUB, unary subtract
EOPXOR, logical exclusive
disjunction
ex-- data-- type
Type node for data type of this expression.
ex-- parent
Statement node for first statement
referencing this expression.
ex-- src-- offset
Source file byte offset of the first character
in this expression.
ex-- indegree
In-degree of this node.
ex-- value-class
Value class of expression. Set and
used in common subexpression evaluation.
ex-- addr-- kind
Kind of address (BIT-- ADDR,
BYTE-- ADDR, WORD-- ADDR or
UNDEF-- ADDR) represented by an
address expression.
ex-- rank
Rank of this expression (0 => scalar,
1 => vector, . . .).
ex-- lf
Root node of LoForm (LF) generated
for this node. Set by the code generator.
ex-- lf-- generated
Bit set => LF has been generated for
this node.
ex-- fortran-- assign
Bit set => This assignment expression
represents a Fortran ASSIGN statement.
ex-- call-- inline
Bit set => This EOPCALL should
be inlined.
ex-- has-- alt-- returns
Bit set => This EOPCALL has
alternate returns (Fortran).
ex-- visited
Bit set => This EXPR-- NODE has
been visited in some
sort of temporary depth first search
or traversal. It must be cleared after
the temporary use by the code using it.
______________________________________
TABLE VI
______________________________________
Type Node Field:
Type Node Description:
______________________________________
ty-- op
Root "operator" of this data type.
One of the following:
TOPUNDEF
TOPERROR
TOPNULL
TOPVOID
TOPUNTYPED
TOPBYTE
TOPSBYTE
TOPUBYTE
TOPMBYTE
TOPSHORT-- INT
TOPINT
TOPLONG-- INT
TOPSHORT-- UINT
TOPUINT
TOPLONG-- UINT
TOPSHORT-- FLOAT
TOPFLOAT
TOPLONG-- FLOAT
TOPCOMPLEX
TOPLONG-- COMPLEX
TOPCHARACTER
TOPSTRING
TOPLOGICAL
TOPENUMERATION
TOPCONSTRAINT
TOPACCESS
TOPADDRESS
TOPARRAY
TOPRECORD
TOPSET
TOPUNION
TOPPROCEDURE
TOPFILE
ty-- size
Exact bit size of an object of this type.
ty-- container-- size
Bit size of the container allocated for a single
object of this type.
ty-- left-- justified
Bit set => Objects of this type are left
justified within their containers.
ty-- is-- volatile
Bit set => Objects of this type are volatile.
ty-- is-- const
Bit set => Objects of this type are read-only.
ty-- basic
Bit set => This is a basic, fixed-size
data type.
ty-- is-- signed
Bit set => Objects of this type are signed.
ty-- entered
Bit set => This node is entered in the
type graph.
ty-- assumed-- size
Bit set => Objects of this type assume
their sizes dynamically.
ty-- visited
Bit set => This TYPE-- NODE has
been visited in some sort of temporary depth
first search or traversal. It must be cleared
after the temporary use by the code using it.
ty-- hash
Hash value computed for this type node.
ty-- next
Link for hash collision resolution.
______________________________________
TABLE VII
______________________________________
Literal Node Field:
Literal Node Description:
______________________________________
lt-- data-- type
A type node for the data type of literal.
lt-- next
Link for hash collision resolution.
lt-- val
Value of literal. This contains a union of
all possible target values for basic types,
a union of two node-- types
(literal nodes) for complex types, or a
string-- type for aggregate types
(like TOPSTRING or TOPARRAY).
lt-- hash
Hash code for this literal.
lt-- entered
Bit on => Literal has been hashed.
______________________________________
TABLE VIII
______________________________________
Symbol
Node Field: Symbol Node Description:
______________________________________
sy-- kind
Class of symbol. One of the following:
UNDEF-- SYM
BLOCK-- SYM
COMPONENT-- SYM
CONSTANT-- SYM
ENUMERATION-- SYM
ERROR-- SYM
LABEL-- SYM
MACRO-- SYM
MODULE-- SYM
PARAMETER-- SYM
POINTEE-- SYM
PROCEDURE-- SYM
TAG-- SYM
TYPE-- SYM
VARIABLE-- SYM
sy-- name
String containing name of symbol.
sy-- data-- type
Data type node for symbol.
sy-- next
Next symbol in list of symbol nodes used
for hash collision resolution. Set
when a symbol is entered in a symbol table.
sy-- sfa
Source file address of first occurrence
of symbol.
sy-- hash
Hash code for name, filled in by
syt-- enter( ) or
syt-- lookup-- or-- enter( )
sy-- declared
Flag indicating symbol has been
explicitly declared.
sy-- c-- defined
Flag indicating symbol has a defining
declaration (used in C)
sy-- type-- declared
Flag indicating symbol's data type has
been explicitly declared.
sy-- redeclared
Flag indicating that symbol has been
declared more than once.
sy-- used
Flag indicating that variable or
parameter is used (read).
sy-- defined
Flag indicating that variable or parameter
is defined (written).
sy-- aliased
Flag indicating that symbol has an alias.
Set by optimizer.
sy-- internal
Flag indicating that symbol is declared
internally by the compiler.
sy-- saved
Flag indicating that symbol appears
in a Fortran SAVE statement.
sy-- assigned-- label
Flag indicating that variable appears
in a Fortran ASSIGN statement.
sy-- target-- goto
Flag indicating that symbol was the
target of a GOTO statement.
sy-- use-- register
Flag indicating C "register" variable
or parameter.
sy-- associated
Flag indicating that this symbol has had
its address taken.
sy-- visited
Bit set => This SYM-- NODE
has been visited in some sort
of temporary depth first search or
traversal. It must be cleared after the
temporary use by the code using it.
sy-- bfd-- used
Flag indicating that the bfd has seen
a use of this symbol.
sy-- bfd-- defined
Flag indicating that the bfd has seen a
definition of this symbol.
sy-- temp
Reusable section that must be
cleared by whoever uses it.
sy-- placeholder
Bit set => This SYM-- NODE placeholder
for the SYM-- NODE contained
in sy-- placeholder-- sym-- ( ).
sy-- bv-- index
The bit vector index of this symbol into the
clear and exposed use bit vectors
(blk-- clear-- and Blk-- x-- use--)
set
during dependence graph construction and
used for live variable analysis.
sy-- label-- depth-- id
The compound statement depth id of
the compound statement containing
this label (C only)
______________________________________
TABLE IX
______________________________________
LoForm Node Field:
LoForm Node Description:
______________________________________
bin-- address
Byte offset of this instruction from the
beginning of the procedure.
hf HF expression node that generated this
LF node.
offset Offset to be added into a base by binary
table generator.
word The offset in words to the beginning of
the lf node.
parcel The offset in parcels from the base of the
above word
op The machine opcode for this instruction.
opnd1 Operand 1 of this LF node.
opnd2 Operand 2 of this LF node.
opnd3 Operand 3 of this LF node.
opnd4 Operand 4 of this LF node.
next Unscheduled order of LF nodes.
rdg Node in scheduler dependence graph
corresponding to this lf node.
use-- count
Use count for register allocation.
______________________________________
TABLE X
______________________________________
DUDe Node Field:
DUDe Node Description:
______________________________________
dude-- parent
The parent expression node associated
with this DUDe Node.
dude-- next
The next DUDe node.
dude-- previous
The previous DUDe node.
dude-- block
The Block node which contains this
DUDe node.
dude-- sym
The symbol node pointed to by this
DUDe node.
dude-- in
The set of Dependence Arcs that are
pointing into this DUDe Node.
dude-- out
The set of Dependence Arcs that are
pointing out of this DUDe Node.
dude-- ptr-- ind-- lvl
The indirection level of the reference
if it is a pointer.
dude-- bv-- index
The Bit Vector Index of the
definition (LVAL) corresponding to
this DUDe node.
dude-- kind
An enumeration identifying the type of
reference, USE, DEF, USE/DEF,
INDIRECT USE, INDIRECT DEF,
or CALL. An indirect use
is something like "i = *p" which
is a USE of "p" and an
INDIRECT USE of "*p". Indirect
Def follows along the same lines,
and CALL is: for "fred(a,b,c)",
separate nodes are created for "a",
"b", and "c", and a CALL node is
created for "fred"; the CALL node
represents all global variables as
defined and is only needed when IPA
is not used.
dude-- visited
A bit telling whether or not this
DUDe was visited. Used only during
alias analysis of local dependence
graph construction.
dude-- upward-- x-- use
A bit telling whether or not this DUDe
is upward exposed.
dude-- downward-- x-- use
A bit telling whether or not this DUDe
is a downward exposed
use; it is not set for defs.
dude-- ty-- qualifier
The symbol associated with this DUDe
is a basic or an access type.
ARRAY-- TYPE- The symbol asso-
ciated with this DUDe is an array.
STRUCTURE-- TYPE- The symbol
associated with this DUDe is a
structure.
OTHER-- TYPE - The symbol asso-
ciated with this DUDe is some other
(strange) type.
dude-- aff-- by-- call
A bit indicating whether or not this
DUDe should be treated as global
with respect to CALLs, in other words
a CALL can affect it.
dude-- is-- subscripted
A bit indicating that the DUDe
has a subscript operator associated with
it; different than the ty-- qualifier being
an array because "C" can have a
subscripted pointer.
dude-- multiple-- alias
A bit indicating a multiple alias
associated with the DUDe node.
dude-- parameter
A bit indicating that the DUDe is
attached to an actual parameter.
dude-- vector-- mask
Bit mask indicating the loops in which
this reference's address varies (i.e.
potentially forms a vector of values).
dude-- pointer
A bit telling whether or not this symbol
associated with this DUDe is a pointer.
dude-- ptr-- alias
A bit when set means this DUDe can
be an alias for an unresolved
pointer reference.
dude-- bound
If ptr-- alias is set, then this is the bound
class for the DUDe node. Weakly -
This DUDe is a weakly bound type.
(e.q. A "C" integer pointer can point
to an integer or unsigned integer).
Strongly - The DUDe is a strongly
bound type. (e.q. A "C" structure
pointer can only point to things that
are the same structure)
Unbound - This DUDe is unbound type.
(e.q. a "C" void pointer can point to
anything it desires)
dude-- alias-- type
If ptr-- alias is set, then this is the
specific type of thing that it points to, if
it is weakly bound it will be a class, if it
is strongly bound it will be the node
index of the type node that describes it,
and if it is unbound it will be zero.
______________________________________

Although the description of the preferred embodiment has been presented, it is contemplated that various changes could be made without deviating from the spirit of the present invention. Accordingly, it is intended that the scope of the present invention be dictated by the appended claims rather than by the description of the preferred embodiment.

Rasbold, James C., Van Dyke, Don A., Cox, David M., Strout, II, Robert E., Masamitsu, Jon A., O'Gara, Linda J., O'Hair, Kelly T., Seberger, David A., Cramer, Timothy J., Chandramouli, Ashok

Patent Priority Assignee Title
10067750, Feb 23 2015 Qualcomm Incorporated Whole-program optimization using data from previous compilation runs
10073759, Sep 29 2016 International Business Machines Corporation Identification and handling of nested breakpoints during debug session
10282179, Dec 09 2010 Microsoft Technology Licensing, LLC Nested communication operator
10289391, Mar 31 2014 International Business Machines Corporation Optimization of software symbol tables
10423391, Dec 22 2010 Microsoft Technology Licensing, LLC Agile communication operator
10545749, Aug 20 2014 Samsung Electronics Co., Ltd. System for cloud computing using web components
10620916, Nov 19 2010 Microsoft Technology Licensing, LLC Read-only communication operator
10628287, Sep 29 2016 International Business Machines Corporation Identification and handling of nested breakpoints during debug session
10732958, Mar 01 2016 YANMAR POWER TECHNOLOGY CO , LTD Terminal device and software rewriting program
10782943, Oct 29 2009 STENSAL, INC. Encoding data and metadata for run-time checking of computer code and data serialization
11232056, Dec 28 2016 Intel Corporation System and method for vector communication
11244090, Jun 01 2016 The MathWorks, Inc. Systems and methods for extracting adjustable attributes of model components
11321236, Jan 08 2018 Microsoft Technology Licensing, LLC.; Microsoft Technology Licensing LLC; Microsoft Technology Licensing, LLC Reduced instructions to generate global variable addresses
11669613, May 29 2020 Method for analyzing and verifying software for safety and security
11714619, Dec 17 2020 HUAWEI TECHNOLOGIES CO , LTD Method and apparatus for retaining optimal width vector operations in arbitrary/flexible vector width architecture
5274818, Feb 03 1992 RTPC CORPORATION; TM PATENTS, L P System and method for compiling a fine-grained array based source program onto a course-grained hardware
5325533, Jun 28 1993 Apple Inc Engineering system for modeling computer programs
5355492, Nov 05 1991 RTPC CORPORATION; TM PATENTS, L P System for compiling parallel communications instructions including their embedded data transfer information
5355494, Dec 12 1991 RTPC CORPORATION; TM PATENTS, L P Compiler for performing incremental live variable analysis for data-parallel programs
5367683, Jun 26 1992 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Smart recompilation of performing matchup/difference after code generation
5371747, Jun 05 1992 Hewlett-Packard Company Debugger program which includes correlation of computer program source code with optimized object code
5375239, Feb 17 1992 International Business Machines Corporation Use of build status indicators in connection with building of complex computer programs from source code parts
5375242, Sep 29 1993 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Compiler architecture for cross-module optimization
5410701, Jan 29 1992 Devonrue Ltd.; DEVONRUE LTD System and method for analyzing programmed equations
5446899, Jun 26 1992 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Hint generation in smart recompilation
5446900, Jul 24 1992 Mentor Graphics Corporation Method and apparatus for statement level debugging of a computer program
5448737, Mar 17 1992 International Business Machines Corporation System and method for optimizing computer code using a compact data flow representation
5471636, Jun 06 1994 Software development system and method of using same
5485615, Jun 10 1992 Telefonaktiebolaget L M Ericsson System and method of interactively developing desired computer programs by using plurality of tools within a process described in graphical language
5485616, Oct 12 1993 International Business Machines Corporation; IBM Corporation Using program call graphs to determine the maximum fixed point solution of interprocedural bidirectional data flow problems in a compiler
5493675, Feb 27 1991 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Compiler back end calling predetermined front end routines that use effect and dependency indicators to provide information to the compiler to determine the validity of an optimization
5495561, Jun 21 1993 Apple Inc Operating system with object-oriented printing interface
5519866, Jun 28 1993 Apple Inc Method and apparatus of incrementally linking components of a modeled computer program
5535392, Apr 01 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Using hint generation to cause portions of object files to remain the same
5535394, Oct 20 1993 International Business Machines Corporation Methods for distinguishing dynamically allocated storage along different call paths and for providing a compact representation of alias analysis
5539907, Mar 01 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P System for monitoring computer system performance
5555419, Jan 06 1993 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Correlation system
5577253, Feb 27 1991 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Analyzing inductive expressions in a multilanguage optimizing compiler
5581696, May 09 1995 Parasoft Corporation Method using a computer for automatically instrumenting a computer program for dynamic debugging
5581769, Dec 29 1993 International Business Machines Corporation Multipurpose program object linkage protocol for upward compatibility among different compilers
5590330, Dec 13 1994 International Business Machines Corporation Method and system for providing a testing facility in a program development tool
5604908, Feb 17 1992 International Business Machines Corportion Computer program product for using build status indicators in connection with building of complex computer programs from source code parts
5613117, Feb 27 1991 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Optimizing compiler using templates corresponding to portions of an intermediate language graph to determine an order of evaluation and to allocate lifetimes to temporary names for variables
5623696, May 30 1995 International Business Machines Corporation System for formatting a request into a packet which can be read by plurality of operating systems for providing a driver for a storage device
5625822, Jun 26 1992 COMPAQ INFORMATION TECHNOLOGIES GROUP, L P Using sorting to do matchup in smart recompilation
5642514, May 24 1995 International Business Machines Corporation Method and system for constructing compact executable files by eliminating redundant debugging strings
5659753, Feb 27 1991 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Interface for symbol table construction in a multilanguage optimizing compiler
5701489, Jun 06 1995 International Business Machines Corporation System for partial in-line expansion of procedure calls during program compilation
5740443, Aug 14 1995 International Business Machines Corporation Call-site specific selective automatic inlining
5758160, Jun 28 1993 Apple Inc Method and apparatus for building a software program using dependencies derived from software component interfaces
5761515, Mar 14 1996 International Business Machines Corporation Branch on cache hit/miss for compiler-assisted miss delay tolerance
5764987, Mar 23 1993 Apple Inc Relocatable file format and method and apparatus for creating and loading same
5764989, Feb 29 1996 Open Invention Network, LLC Interactive software development system
5778212, Jun 03 1996 RPX Corporation Interprocedural analysis user interface
5812430, Jun 02 1997 Microsoft Technology Licensing, LLC Componentized digital signal processing
5812854, Mar 18 1996 International Business Machines Corporation Mechanism for integrating user-defined instructions with compiler-generated instructions and for optimizing the integrated instruction stream
5812855, Jun 03 1996 RPX Corporation System and method for constaint propagation cloning for unknown edges in IPA
5815719, May 07 1996 Sun Microsystems, Inc. Method and apparatus for easy insertion of assembler code for optimization
5822592, Jun 28 1994 Qwest Communications International Inc Method and system for determining source code location
5826256, Oct 22 1991 THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT Apparatus and methods for source code discovery
5835699, Mar 25 1992 NEC Corporation Breakpoint setting/deleting system using a symbolic debugger in a digital data processing system
5836014, Feb 27 1991 Hewlett Packard Enterprise Development LP Method of constructing a constant-folding mechanism in a multilanguage optimizing compiler
5845126, Dec 06 1995 International Business Machines Corporation Method of, system for, and computer program product for providing inlined nested array constructors using normalized counters
5848274, Feb 29 1996 Open Invention Network, LLC Incremental byte code compilation system
5850554, Dec 29 1995 Intel Corporation Compiler tool set for efficiently generating and easily managing multiple program versions of different types
5857105, Feb 05 1997 Hewlett Packard Enterprise Development LP Compiler for reducing number of indirect calls in an executable code
5870588, Oct 23 1996 INTERUNIVERSITAIR MICRO-ELEKTRONICA CENTRUM IMEC VZW Design environment and a design method for hardware/software co-design
5901317, Mar 25 1996 Oracle America, Inc Method and system for register allocation using multiple interference graphs
5923881, Apr 12 1996 International Business Machines Corporation Data structure display system
5926638, Jan 17 1996 NEC Electronics Corporation Program debugging system for debugging a program having graphical user interface
5930509, Jan 29 1996 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method and apparatus for performing binary translation
5940617, Sep 17 1996 Kabushiki Kaisha Toshiba Debugger for controlling execution of software installed in object to be controlled on the basis of state transition model, debugging method thereof, record medium thereof, and method for correlating function specifications and code addresses
5966539, Mar 01 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Link time optimization with translation to intermediate program and following optimization techniques including program analysis code motion live variable set generation order analysis, dead code elimination and load invariant analysis
5987249, Aug 27 1996 JPMORGAN CHASE BANK, N A , AS SUCCESSOR AGENT IR code instrumentation
5999737, Mar 01 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Link time optimization via dead code elimination, code motion, code partitioning, code grouping, loop analysis with code motion, loop invariant analysis and active variable to register analysis
6006033, Aug 15 1994 International Business Machines Corporation Method and system for reordering the instructions of a computer program to optimize its execution
6016398, Apr 01 1997 U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT Method for using static single assignment to color out artificial register dependencies
6064819, Dec 08 1993 IMEC Control flow and memory management optimization
6067413, Jun 13 1996 Open Invention Network, LLC Data representation for mixed-language program development
6081801, Jun 30 1997 International Business Machines Corporation Shared nothing parallel execution of procedural constructs in SQL
6085029, May 09 1995 Parasoft Corporation Method using a computer for automatically instrumenting a computer program for dynamic debugging
6131185, Dec 13 1994 International Business Machines Corporation Method and system for visually debugging on object in an object oriented system
6131188, Dec 22 1995 Sun Microsystems, Inc. System and method for reducing the occurrence of window use overflow
6135650, Dec 22 1995 Oracle America, Inc Method and system for wrapper routine optimization
6186677, Aug 27 1996 JPMORGAN CHASE BANK, N A , AS SUCCESSOR AGENT Byte code instrumentation
6195746, Jan 31 1997 International Business Machines Corporation Dynamically typed register architecture
6219830, Mar 23 1993 Apple Computer, Inc. Relocatable object code format and method for loading same into a computer system
6226787, Jan 25 1999 Hewlett Packard Enterprise Development LP Visualization method and system for dynamically displaying operations of a program
6247174, Jan 02 1998 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Optimization of source code with embedded machine instructions
6266805, Jul 25 1997 British Telecommunications plc Visualization in a modular software system
6292938, Dec 02 1998 International Business Machines Corporation Retargeting optimized code by matching tree patterns in directed acyclic graphs
6311324, Jun 07 1995 Intel Corporation Software profiler which has the ability to display performance data on a computer screen
6311326, Jan 04 1999 EMC IP HOLDING COMPANY LLC Online debugging and tracing system and method
6314557, Dec 14 1998 Infineon Technologies Development Center Tel Aviv LTD Hybrid computer programming environment
6415433, Dec 23 1998 Cray Inc Method and system for identifying locations to move portions of the computer program
6446258, Nov 03 1998 Intle Corporation Interactive instruction scheduling and block ordering
6502237, Jan 29 1996 Hewlett Packard Enterprise Development LP Method and apparatus for performing binary translation method and apparatus for performing binary translation
6513154, Oct 21 1996 HANGER SOLUTIONS, LLC System and method for testing of computer programs in programming effort
6526567, Mar 24 1999 International Business Machines Corporation System and method for optimizing a device driver by incorporating debugging and tracing
6526568, Jan 28 1999 Fujitsu Limited Method for generating test data for listing program, listing program generation apparatus, and storage medium storing program for generating test data
6535903, Jan 29 1996 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method and apparatus for maintaining translated routine stack in a binary translation environment
6550056, Jul 19 1999 Renesas Electronics Corporation Source level debugger for debugging source programs
6564372, Feb 17 1999 Elbrus International Limited Critical path optimization-unzipping
6665866, May 28 1999 Microsoft Technology Licensing, LLC Extensible compiler utilizing a plurality of question handlers
6671873, Dec 13 2000 Intel Corporation Problematic binary isolator
6728954, Jun 30 2000 Freescale Semiconductor, Inc Method for allowing execution management of optimized code
6748583, Dec 27 2000 International Business Machines Corporation Monitoring execution of an hierarchical visual program such as for debugging a message flow
6795963, Nov 12 1999 International Business Machines Corporation Method and system for optimizing systems with enhanced debugging information
6802056, Jun 30 1999 Microsoft Technology Licensing, LLC Translation and transformation of heterogeneous programs
6820253, Sep 27 2000 Intel Corporation Method and system for interprocedural analysis with separate compilation
6918110, Apr 11 2001 VALTRUS INNOVATIONS LIMITED Dynamic instrumentation of an executable program by means of causing a breakpoint at the entry point of a function and providing instrumentation code
6973457, May 10 2002 ORACLE INTERNATIONAL CORPORATION OIC Method and system for scrollable cursors
6983456, Oct 31 2002 FG SRC LLC Process for converting programs in high-level programming languages to a unified executable for hybrid computing platforms
6983458, Jun 29 1999 Kabushiki Kaisha Toshiba System for optimizing data type definition in program language processing, method and computer readable recording medium therefor
7000227, Sep 29 2000 Intel Corporation Iterative optimizing compiler
7024255, May 18 2001 Roy-G-Biv Corporation Event driven motion systems
7024666, Jan 28 2002 Roy-G-Biv Corporation Motion control systems and methods
7031798, Feb 09 2001 Roy-G-Biv Corporation Event management systems and methods for the distribution of motion control commands
7036112, Aug 16 2002 Oracle America, Inc Multi-mode specification-driven disassembler
7089331, May 29 1998 ORACLE INTERNATIONAL CORPORATION OIC Method and mechanism for reducing client-side memory footprint of transmitted data
7100156, Sep 27 2000 International Business Machines Corporation Interprocedural dead store elimination
7103590, Aug 24 2001 ORACLE INTERNATIONAL CORPORATION OIC Method and system for pipelined database table functions
7133820, Mar 15 2000 Synopsys, Inc Method and apparatus for debugging programs in a distributed environment
7137107, Apr 29 2003 Roy-G-Biv Corporation Motion control systems and methods
7139843, May 30 1995 AUTOMATION MIDDLEWARE SOLUTIONS, INC System and methods for generating and communicating motion data through a distributed network
7174536, Feb 12 2001 IOWA STATE UNIVERSITY RESEARCH FOUNDATION, INC Integrated interactive software visualization environment
7197748, Jun 30 1999 Microsoft Technology Licensing, LLC Translation and transformation of heterogeneous programs
7210118, Jun 21 1991 International Business Machines Corporation Method and apparatus for modifying relocatable object code files and monitoring programs
7210126, Oct 24 2001 International Business Machines Corporation Using identifiers and counters for controlled optimization compilation
7228531, Feb 03 2003 Altera Corporation Methods and apparatus for optimizing a processor core on a programmable chip
7237229, Nov 05 2001 SOCIONEXT INC Debugging aid parallel execution of a plurality of iterations with source lists display corresponding to each iteration
7243349, Aug 18 2003 Lenovo (Singapore) Pte. Ltd. Providing an image of installed software utilizing uninstall code
7266809, Aug 06 2001 MATSUSHITA ELECTRIC INDUSTRIAL CO , LTD Software debugger and software development support system for microcomputer operable to execute conditional execution instruction
7299455, Jun 02 1995 Cisco Technology, Inc Remote monitoring of computer programs
7343600, Aug 18 2003 Lenovo PC International Providing an image of installed software utilizing uninstall code
7389284, Feb 29 2000 ORACLE INTERNATIONAL CORPORATION OIC Method and mechanism for efficient processing of remote-mapped queries
7484205, Dec 12 2002 Microsoft Technology Licensing, LLC Preprocessor-based source code instrumentation
7502662, Feb 09 2001 Roy-G-Biv Corporation Event management systems and methods for motion control systems
7526468, Jan 08 1999 Computer Associates Think, Inc System and method for recursive path analysis of DBMS procedures
7530056, Mar 31 2008 International Business Machines Corporation Method and system for detecting runtime defects in a program by comparing correct and incorrect runs
7610351, May 10 2002 ORACLE INTERNATIONAL CORPORATION OIC Method and mechanism for pipelined prefetching
7624304, Oct 07 2004 Microsoft Technology Licensing, LLC Defect detection for integers
7636697, Jan 29 2007 AILIVE HOLDING CORPORATION; YEN, WEI Method and system for rapid evaluation of logical expressions
7644308, Mar 06 2006 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Hierarchical timestamps
7689979, Aug 02 2005 Adobe Inc Methods and apparatus to improve application launch time
7703085, Oct 31 2002 FG SRC LLC Process for converting programs in high-level programming languages to a unified executable for hybrid computing platforms
7707543, Nov 24 2004 Siemens Healthcare GmbH Architecture for a computer-based development environment with self-contained components and a threading model
7716100, Dec 02 2005 Kuberre Systems, Inc. Methods and systems for computing platform
7739667, Dec 23 1998 Cray Inc. Parallelism performance analysis based on execution trace information
7836434, May 04 2004 Oracle America, Inc Method and system for analyzing array access to a pointer that is referenced as an array
7853645, Oct 07 1997 AUTOMATION MIDDLEWARE SOLUTIONS, INC Remote generation and distribution of command programs for programmable devices
7904194, Feb 09 2001 AUTOMATION MIDDLEWARE SOLUTIONS, INC Event management systems and methods for motion control systems
7917899, Feb 28 2005 Kabushiki Kaisha Toshiba Program development apparatus, method for developing a program, and a computer program product for executing an application for a program development apparatus
7984083, Jan 05 2004 International Business Machines Corporation Garbage collector with eager read barrier
8027349, Sep 25 2003 Roy-G-Biv Corporation Database event driven motion systems
8032605, Oct 27 1999 Roy-G-Biv Corporation Generation and distribution of motion commands over a distributed network
8073557, May 30 1995 AUTOMATION MIDDLEWARE SOLUTIONS, INC Motion control systems
8102869, Sep 25 2003 Roy-G-Biv Corporation Data routing systems and methods
8151260, Aug 18 2003 Lenovo PC International Method for providing an image of software installed on a computer system
8245211, Jun 05 2007 International Business Machines Corporation Method and system for finding problems caused by access to partially uninitialized data storage which is accessed using index expressions
8250553, Mar 26 2007 International Business Machines Corporation Method and data processing system for finding problems caused by access to uninitialized data storage in assembler programs
8271105, May 30 1995 AUTOMATION MIDDLEWARE SOLUTIONS, INC Motion control systems
8407206, May 16 2005 Microsoft Technology Licensing, LLC Storing results related to requests for software development services
8448153, Oct 29 2009 STENSAL, INC Encoding data and metadata for run-time checking of computer code and data serialization
8484630, Dec 23 2008 International Business Machines Corporation Code motion based on live ranges in an optimizing compiler
8549496, Feb 27 2009 Texas Tech University System Method, apparatus and computer program product for automatically generating a computer program using consume, simplify and produce semantics with normalize, transpose and distribute operations
8713684, Feb 24 2012 CA, INC Quantifying the risks of applications for mobile devices
8769503, Jan 08 2007 The MathWorks, Inc. Computation of elementwise expression in parallel
8799871, Jan 08 2007 MATHWORKS, INC , THE Computation of elementwise expression in parallel
8819641, Apr 27 2006 EPIC GAMES, INC. Program state reversing software development tool
8819772, Jun 25 2012 CA, INC In-line filtering of insecure or unwanted mobile device software components or communications
8825805, May 29 1998 National Chung Hsing University Method and mechanism for reducing client-side memory footprint of transmitted data
8839212, Feb 27 2009 Texas Tech University System Method, apparatus and computer program product for automatically generating a computer program using consume, simplify and produce semantics with normalize, transpose and distribute operations
8918881, Feb 24 2012 CA, INC Off-device anti-malware protection for mobile devices
8930909, Jul 13 2007 The MathWorks, Inc. Debugging using presentation layer representations of objects
8930927, Dec 18 2008 International Business Machines Corporation Diagnosing aliasing violations in a partial program view
9063725, Jun 24 2005 Oracle International Corporation Portable management
9075596, Jun 24 2005 Oracle International Corporation Deployment
9141377, Feb 19 2008 International Business Machines Corporation Visualization of code units across disparate systems
9170781, Jul 10 2002 U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT Method and system for generating object code to facilitate predictive memory retrieval
9244938, May 29 1998 Oracle International Corporation Method and mechanism for reducing client-side memory footprint of transmitted data
9262302, Dec 16 2010 International Business Machines Corporation Displaying values of variables in a first thread modified by another thread
9348596, Jun 28 2013 International Business Machines Corporation Forming instruction groups based on decode time instruction optimization
9361108, Jun 28 2013 International Business Machines Corporation Forming instruction groups based on decode time instruction optimization
9372695, Jun 28 2013 GLOBALFOUNDRIES Inc Optimization of instruction groups across group boundaries
9477474, Jun 28 2013 GLOBALFOUNDRIES U S INC Optimization of instruction groups across group boundaries
9542175, Jun 24 2005 Oracle International Corporation Continuous deployment
9582398, Jul 13 2007 The MathWorks, Inc. Debugging using presentation layer representations of objects
9678756, Jun 28 2013 International Business Machines Corporation Forming instruction groups based on decode time instruction optimization
9678757, Jun 28 2013 International Business Machines Corporation Forming instruction groups based on decode time instruction optimization
9753710, Nov 07 2013 Netronome Systems, Inc. Resource allocation with hierarchical scope
9779012, Feb 26 2016 MBIT WIRELESS, INC. Dynamic and global in-system debugger
9830134, Jun 15 2015 Qualcomm Incorporated Generating object code from intermediate code that includes hierarchical sub-routine information
9858079, Jul 10 2002 U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT Method and system for generating object code to facilitate predictive memory retrieval
9971575, Oct 29 2009 STENSAL, INC Encoding data and metadata for run-time checking of computer code and data serialization
RE36422, Jun 13 1996 International Business Machines Corporation Debugging system wherein multiple code views are simultaneously managed
Patent Priority Assignee Title
4667290, Sep 10 1984 501 Philon, Inc. Compilers using a universal intermediate language
4734854, Oct 08 1985 NCR Corporation System for generating software source code components
4905138, Oct 17 1985 WESTINGHOUSE ELECTRIC CO LLC Meta-interpreter
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Aug 23 1990 | Supercomputer Systems Limited Partnership | (assignment on the face of the patent)
Oct 10 1990 | CHANDRAMOULI, ASHOK | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | STROUT, ROBERT E. II | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | MASAMITSU, JON A. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | O'GARA, LINDA J. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | SEBERGER, DAVID A. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | COX, DAVID M. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | O'HAIR, KELLY T. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | RASBOLD, JAMES C. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | CRAMER, TIMOTHY J. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Oct 10 1990 | VAN DYKE, DON A. | SUPERCOMPUTER SYSTEMS LIMITED PARTNERSHIP, A LIMITED PARTNERSHIP OF DELAWARE | ASSIGNMENT OF ASSIGNORS INTEREST | 0054940714 (pdf)
Jun 26 1992 | Supercomputer Systems Limited Partnership | IBM HOLDINGS, INC. | ASSIGNMENT OF ASSIGNORS INTEREST | 0063740532 (pdf)
Aug 10 1993 | Supercomputer Systems Limited Partnership | CRAY RESEARCH, INC. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0066500410 (pdf)
Aug 10 1993 | IBM HOLDINGS, INC. | CRAY RESEARCH, INC. | RELEASE OF SECURITY AGREEMENT | 0066550863 (pdf)
May 24 2000 | CRAY RESEARCH, L.L.C. | Silicon Graphics, Inc. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0109270853 (pdf)
Nov 09 2001 | Silicon Graphics, Inc. | FOOTHILL CAPITAL CORPORATION | SECURITY AGREEMENT | 0124280236 (pdf)
Dec 23 2003 | Silicon Graphics, Inc. | U.S. BANK NATIONAL ASSOCIATION, AS TRUSTEE | SECURITY INTEREST (SEE DOCUMENT FOR DETAILS) | 0148050855 (pdf)
Oct 17 2006 | Silicon Graphics, Inc. | General Electric Capital Corporation | SECURITY INTEREST (SEE DOCUMENT FOR DETAILS) | 0185450777 (pdf)
Sep 26 2007 | General Electric Capital Corporation | MORGAN STANLEY & CO., INCORPORATED | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0199950895 (pdf)
May 08 2009 | SILICON GRAPHICS, INC. ET AL. | SILICON GRAPHICS INTERNATIONAL, CORP. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0277270212 (pdf)
Feb 08 2012 | SGI INTERNATIONAL, INC. | SILICON GRAPHICS INTERNATIONAL, CORP. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0277270212 (pdf)
Date Maintenance Fee Events
Jul 01 1996 | M183: Payment of Maintenance Fee, 4th Year, Large Entity.
Aug 01 1996 | ASPN: Payor Number Assigned.
Jun 28 2000 | M184: Payment of Maintenance Fee, 8th Year, Large Entity.
Jun 29 2004 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Dec 29 1995 | 4 years fee payment window open
Jun 29 1996 | 6 months grace period start (w/ surcharge)
Dec 29 1996 | patent expiry (for year 4)
Dec 29 1998 | 2 years to revive unintentionally abandoned end (for year 4)
Dec 29 1999 | 8 years fee payment window open
Jun 29 2000 | 6 months grace period start (w/ surcharge)
Dec 29 2000 | patent expiry (for year 8)
Dec 29 2002 | 2 years to revive unintentionally abandoned end (for year 8)
Dec 29 2003 | 12 years fee payment window open
Jun 29 2004 | 6 months grace period start (w/ surcharge)
Dec 29 2004 | patent expiry (for year 12)
Dec 29 2006 | 2 years to revive unintentionally abandoned end (for year 12)