A method of optimizing a computer program includes executing a program that includes a hint, defined as a variable in the program and providing storage within the program, and a marker instruction that receives the hint as a parameter. The marker instruction marks a section of the computer program for a subsequent optimization. During the execution of the computer program, and in response to the marker instruction being executed, a hardware engine monitors data accesses associated with execution of instructions in the marked section and stores the data accesses in the storage of the hint. A subsequent execution of the marked section of the computer program is optimized using the data stored in the storage of the hint.
6. A method of optimizing a computer program, the computer program capable of being executed on a processor-based system, the method comprising the steps of:
automatically placing at least one marker instruction and a software variable in the computer program based on results of a profiler,
wherein the software variable provides storage within the program and has a label that indicates the software variable is used to store learned information,
wherein a hardware engine of the system is primarily responsible for managing contents of the software variable,
wherein the at least one marker instruction is represented as corresponding software functions in the program, each being passed the software variable as a software parameter;
executing the computer program along with the at least one marker instruction,
wherein software code locations of the corresponding software functions within the program mark respective sections of the program, wherein the executing of the at least one marker instruction performs:
monitoring of data references caused by execution of instructions in the marked sections; and
storing the data references in the software variable; and
optimizing the computer program based on the stored data references.
1. A method of optimizing a computer program, the computer program executable on a processor-based system, the method comprising the steps of:
executing the computer program, the computer program comprising a hint providing storage within the program, wherein the hint is defined as a software variable in the computer program, where the hint has a label that indicates that the hint is used to store learned information, and a hardware engine of the system is primarily responsible for managing contents of the hint,
and the computer program includes a marker instruction represented as a software function that is passed the hint as a software parameter, wherein a software code location of the software function within the computer program marks a section of the computer program for a subsequent optimization;
during the execution of the computer program, and in response to the marker instruction being executed by the computer program, the hardware engine of the system performing:
monitoring of data references caused by execution of instructions in the marked section; and
storing the data references in the storage of the hint; and
optimizing a subsequent execution of the marked section of the computer program using the data stored in the storage of the hint.
2. The method of
3. The method of
4. The method of
5. The method of
7. The method of
8. The method of
This invention was made with Government support under Contract No. NBCH3039004 awarded by Defense Advanced Research Projects Agency (DARPA). The Government has certain rights in this invention.
1. Field of the Invention
The present invention relates generally to the field of computer system design and programming, and, more particularly, to learning and cache management in software defined contexts.
2. Description of the Related Art
Memory latency in modern computers is not decreasing at a rate commensurate with increasing processor speeds. This results in the computing device idly waiting for the system to fetch data from memory, thereby not fully taking advantage of the faster processor speeds. This problem is sometimes referred to as the memory wall problem.
However, the memory wall problem is not exclusive to memory accesses and may arise in a number of information retrieval scenarios, as contemplated by those skilled in the art. For example, when a line requested by a local processor has been modified by a remote processor, then such line needs to be communicated from the remote processor to the local processor. This communication process may necessitate a significant amount of time. Thus, we refer to the memory wall problem more generally as the “data access” wall problem.
An approach to mitigating data access latency is prefetching. The term "prefetching," as used herein, traditionally refers to retrieving data expected to be used in the future. Prefetching, however, comes at the cost of increased complexity as well as increased off-chip traffic: it increases traffic by fetching data that is not referenced before castout. As used herein, the term "castout" refers to the process by which data is removed from the cache to make room for data that is fetched.
Prefetching may be implemented, for example, using only hardware. With such hardware-only prefetch mechanisms, the details of the prefetching mechanism are largely hidden from an application programmer. However, hardware-only prefetch mechanisms are limited because prefetching is usually forced to be based solely on historical access information (e.g., a directory extension) and/or limited predictions based on the detection of simple patterns such as address strides. The directory extension is described in Franaszek et al., "Victim Prefetching in a Cache Hierarchy," Ser. No. 10/989,997, the disclosure of which is incorporated herein in its entirety.
A brief description of the techniques of stride detection and directory extension will now be provided. Software code frequently processes batches of data by scanning them upward or downward through memory. The resulting data access patterns can be detected and subsequently predicted by simple circuitry that is often included in computing processors. This technique can be very successful in improving the performance of software codes with the above property, but it cannot predict other patterns that may arise that do not have this simple regular structure. An example of a technique that is designed to deal with the latter is the directory extension, which is described next. A victim cache stores lines evicted from a primary cache, and is known to improve the performance of direct-mapped caches significantly. With a directory extension, the concept of the victim cache is further extended such that only the identities of the victims are stored, rather than the actual cache lines. The victims' information is stored page-wise in a cache called the "directory extension" because it identifies which lines are in, for example, level 3 (L3) of the cache hierarchy but not in level 2 (L2) for a set of recently accessed pages. Misses in L2 immediately trigger prefetching from the L3 into the L2 in accordance with the information provided by the directory extension, if an entry for the given page exists.
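As a rough sketch only (the class names, the 64-lines-per-page geometry, and the table organization below are our assumptions, not taken from the referenced application), a directory extension may be modeled as a page-indexed table recording which lines of a recently accessed page reside in L3 but not in L2, so that an L2 miss to such a page can immediately trigger prefetches of the recorded lines:

#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical model of a directory extension: per-page record of the lines
// known to be in L3 but not in L2 (the "victims" of prior L2 castouts).
struct DirectoryExtensionEntry {
    uint64_t victim_lines = 0;   // bit i set => line i of the page was cast out of L2
};

class DirectoryExtension {
public:
    void record_castout(uint64_t page, unsigned line_in_page) {
        table_[page].victim_lines |= (1ULL << line_in_page);   // remember the victim
    }
    // On an L2 miss to the given page, return the lines worth prefetching from L3.
    std::vector<unsigned> candidates_on_l2_miss(uint64_t page) const {
        std::vector<unsigned> lines;
        auto it = table_.find(page);
        if (it == table_.end()) return lines;                   // no entry for this page
        for (unsigned i = 0; i < 64; ++i)
            if (it->second.victim_lines & (1ULL << i)) lines.push_back(i);
        return lines;
    }
private:
    std::unordered_map<uint64_t, DirectoryExtensionEntry> table_;
};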
Alternatively, prefetching may be implemented using software. Software-directed prefetching is based on the concept that the (data and/or instruction) access patterns observed by hardware-only prefetching mechanisms are produced by software. Thus, in principle, appropriate analysis at the compiler level (e.g., online in the setting of continuous optimization of code) or application programmer-provided hints may provide higher quality prefetch instructions than in the hardware-only case. However, a compiler may be successful only for a limited class of software codes. Further, it must be assumed that a programmer will have sufficient time and experience to provide good software hints, which is not always feasible.
In one aspect of the present invention, a system is provided. The system includes at least one processor; a main memory operatively connected to the at least one processor; at least one cache operatively connected to the main memory and the at least one processor; program software to be executed by the at least one processor, the program software including at least one marker; and a marker management engine operatively connected to the at least one processor, the main memory, and the at least one cache; wherein the marker management engine (a) monitors when each of the at least one marker is reached in the execution of the program software, (b) monitors data accesses by the at least one processor to the at least one cache and the main memory, (c) stores at least one of the monitored data accesses in a pre-defined location in the main memory, and (d) optimizes the program software executed by the at least one processor based on the stored data accesses.
In another aspect of the present invention, a method of optimizing a computer program is provided. The computer program is capable of being executed on a processor-based system. The method includes the steps of placing at least one marker in the computer program, wherein each of the at least one marker is associated with at least one instruction in the computer program; executing the computer program; and if the step of executing the computer program involves executing the at least one instruction and if executing the at least one instruction requires at least one external data access, optimizing the computer program to reduce processor stalls associated with the at least one external data access. The optimizing may include monitoring data access patterns in executing the at least one instruction, storing information related to the data access patterns from the step of monitoring, retrieving the information related to the data access patterns when the at least one marker is reached, and executing one of a cache state update operation or a prefetch instruction operation using the data access pattern information retrieved.
In yet another aspect of the present invention, a method of optimizing a computer program is provided. The computer program is capable of being executed on a processor-based system. The method includes the steps of placing at least one marker in the computer program, wherein each of the at least one marker is associated with at least one preferred processor optimization engine state and at least one instruction; executing the computer program; and, if the step of executing the computer program involves executing the at least one instruction, updating a processor optimization engine state to the preferred processor optimization engine state prior to the execution of the at least one instruction.
The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements.
Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
It is to be understood that the systems and methods described herein may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In particular, at least a portion of the present invention is preferably implemented as an application comprising program instructions that are tangibly embodied on one or more program storage devices (e.g., hard disk, magnetic floppy disk, RAM, ROM, CD ROM, etc.) and executable by any device or machine comprising suitable architecture, such as a general purpose digital computer having a processor, memory, and input/output interfaces. It is to be further understood that, because some of the constituent system components and process steps depicted in the accompanying Figures are preferably implemented in software, the connections between system modules (or the logic flow of method steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations of the present invention.
Exemplary embodiments of the present invention described herein address the problem of the data access wall resulting from processor stalls due to the increasing discrepancy between processor speed and the latency of access to data that is not stored in the immediate vicinity of the processor requesting the data. The exemplary embodiments of the present invention combine the benefits of hardware-based and software-based prefetching.
A computing system is comprised of physical assets, which we term “hardware,” and is additionally comprised of software, which is executed by the underlying hardware. The hardware provides generic basic services such as processing, volatile and non-volatile storage, network communication, and the like. The hardware can be customized to perform specific tasks via the software. For the purposes of this invention, hardware is the product of a physical manufacturing process. On the other hand, software includes instructions, data and metadata that may be stored in various media including magnetic storage (e.g., hard disks), optical storage (e.g., compact discs, digital versatile discs), and other types of non-volatile and volatile memory. Software may include device drivers, the hypervisor, the operating system, middleware, applications, and the like. We refer to the aggregate of the hardware and software in which a computing processor is present as a processor-based system.
In an exemplary embodiment of the present invention, we rely on the software to provide intelligence with respect to which aspects of the operation of a computer program necessitate further improvement. We then utilize monitoring services in the hardware to identify particular means of achieving the required optimization. These means include, but are not limited to, prefetch and cache replacement decisions. The software intelligence may be based on off-line considerations at the time of the application writing or compiling, or based on on-line optimization techniques with the aid of performance information extracted from various performance monitors.
The exemplary embodiment described above is based on the observation that although a software programmer or compiler can often identify portions of a computer program with a noticeably poor performance due to memory wall effects, it is nevertheless frequently impractical to engage in a detailed analysis of the associated access patterns so as to craft improved program optimization policies. Exacerbating this problem, these access patterns are often determined only at the program run-time and can differ significantly across different computing systems or even across the same system operating under different conditions. The underlying processing hardware, on the other hand, has a unique vantage point in that it can track and learn about access patterns at the very source of their generation. However, hardware will generally have limited resources and thus, will only be able to store limited learned information. In addition, the hardware does not possess the kind of global program behavior information available at the software layer.
A system benefiting from the present invention achieves, among other things, a significantly improved productivity/performance balance because, for example, greater emphasis can be placed on the functional, rather than the performance, aspects of the software design.
In an exemplary embodiment of the present invention, the software programmer or the compiler interacts with the hardware so as to trigger the appropriate learning procedure by means of suitable programming language semantics during a first execution of the software code. This learned information may then be stored in placeholders in memory defined by these semantics. For subsequent executions of the software code, the learned information is used to execute suitable cache management policies including, but not limited to, prefetching; we refer to these cache management actions as "cache state updates." The learned information may be stored in program variables declared by the application programmer; these declarations may be achieved through special program declarations which employ labels distinct from those used for standard program variables. For example, standard declarations for the C programming language include "char" and "int." In the examples that follow, the label employed in the declaration will be referred to as a "hint," and we will generally refer to the learned information as "hints."
Naturally, a section of code in a program or application (e.g., a "function" in the parlance of computer science) can behave in a number of ways, depending on the global state of the computing system executing the program. Consequently, assigning access patterns statically to a section of the code appears to be unsatisfactory, even if the access patterns are made to be relative to a given offset. Thus, a more flexible mechanism is required to associate hints with the events at which these hints could become useful. To address the above, we introduce the idea of relating a "hint" to a "context." A hint is learned, whereas a context is specified by a programmer or a compiler. We provide, in greater detail below, a number of ways for a programmer or a compiler to relate a context to a hint.
The present invention also provides a general framework upon which buffer management policies can be implemented. Such policies include learned hints and fully-specified software hints. The present invention further provides a means for the hardware and the software to track the success rate of the policies as they are executed. Such information about the success rate of the policies can be used to adjust the implementation of the policies.
We introduce a new software variable that we shall call a hint; equivalently, we shall say that the type of those variables is a hint. A hint behaves like any other variable, and is stored and manipulated with other variables, depending on where the hint is declared within the program. Nevertheless, a variable of hint type has the distinction that the hardware is the main entity responsible for the management of the hint's contents, unlike regular variables, for which the software has such responsibilities. The hardware will store, within the space provided by the hint, information gathered in a first execution of the code that it believes will be useful to improve performance in a future execution of the code associated with the same context.
We present an example in which an object-oriented language, such as C++ or Java, has been augmented with hints. In the following description, we shall use terminology that is common in the computer science literature: for example, a function consists of a well-defined procedure with input and output parameters; an object is a collection of functions, variables and its relations with other objects; and an instance of an object is a specific realization at program run-time of the abstract object notion.
Suppose that one is interested in optimizing the performance of a function “foo” in every instance of an object “apple.” Like any other variables, hints can be declared globally or locally to the object, among other possibilities. The appropriate level for the storage of the hint is local to the object:
class apple {
    hint h;        // this is the hint declaration
    apple( );      // this is the object's constructor (standard in these languages)
    ~apple( );     // this is the object's destructor (also standard)
    void foo( );   // this is the function to be optimized
};
It becomes obvious in the above example that there will be one hint per instance of the object. To instruct the hardware to learn and store in “h,” and to prefetch according to what was learned, we write the following.
void apple::foo( ) {
    prefetch(h);   // tells the hardware to prefetch according to hint h
    learn(h);      // learn new access patterns and store in h
    <CODE TO BE OPTIMIZED>
}
In the above, the prefetches are executed through the prefetch(h) instruction and the learning is executed through the learn(h) instruction. We refer to the prefetch and learn instructions as markers, since at a high level they are used to mark sections of the code for subsequent optimization; we say that the markers are associated with the instructions that comprise the code to be optimized. It is important to note that the markers themselves are instructions. However, for the purpose of clearly distinguishing standard instructions from markers, we shall say that a marker is reached when it is executed. The actual software code locations of the prefetch and learn instructions, together with the location of the declaration of the hint, constitute the context of the hint.
In an alternate exemplary embodiment, we shall discuss alternate ways of specifying a context. The present invention is general enough to allow for a number of actions whenever the prefetched data eventually arrives, including its storage in one or more caches. Moreover, prefetching is not the only possible action resulting from the information previously learned; other actions include updates to the states of the caches in the system, such as promoting or demoting the least recently used (LRU) status of a line within its cache equivalence class, and preventing or causing the eviction of a line from a cache.
Of significant importance here is determining what the hardware monitors and stores when the learn instruction is executed. For example, every data reference caused by the microprocessor may be stored in “h” until the storage provided by “h” is filled. However, this may be inefficient because the microprocessor causes data references at extremely high rates whereas only a small fraction of these result in memory references, with the vast majority resulting in cache hits of various sorts.
A solution is to place a filter that monitors accesses at higher levels of the hierarchy, such as the L1/L2 layer or other suitable locations, and to add mechanisms for identifying good prefetches, such as the notion of feedback in the directory extension technology.
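A minimal sketch of such a filter follows; the structure name, the capacity, and the choice to record only references that miss at the monitored level are our assumptions, not prescribed above:

#include <cstddef>
#include <cstdint>

// Hypothetical filter: record into the hint's storage only those data
// references that miss at the monitored cache level, until the storage fills.
struct HintStorage {
    static const std::size_t kCapacity = 64;
    uint64_t addresses[kCapacity];
    std::size_t count = 0;
};

void on_data_reference(HintStorage& h, uint64_t address, bool hit_in_monitored_cache) {
    if (hit_in_monitored_cache) return;               // filtered out: a hit teaches us little
    if (h.count >= HintStorage::kCapacity) return;    // hint storage already full
    h.addresses[h.count++] = address;                 // remember the missing reference
}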
The above constitutes a general explanation of the functioning and use of one exemplary embodiment of this invention. Alternate embodiments may incorporate changes to microprocessor chips, such as one or more processor cores, each of which may include one or more threads. The learn and prefetch instructions, as well as other instructions, can be part of the instruction set of the core (which can be costly to implement), or may be implemented using memory-mapped registers. An implementation of the latter requires very few changes to current microprocessor designs; here, the new prefetch and learn instructions are eventually translated by the compiler to standard load/store instructions with address targets that lie in a special-purpose address range outside that occupied by memory. Simple circuitry inside the microprocessor associated with this special-purpose address range then reacts to the load or store to implement the learning or prefetching policies described above. This may be accomplished by employing logic circuits and internal storage, which interact with caches, memory controllers, and the memory subsystem.
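The following sketch illustrates the memory-mapped-register alternative; the reserved address range and the register layout are assumptions made for the example. The point is simply that learn(h) and prefetch(h) can lower to ordinary stores whose targets lie in a special-purpose address range watched by the circuitry described above:

#include <cstdint>

// Hypothetical memory-mapped registers in a reserved address range outside
// ordinary memory. A store of a hint's address to one of these locations is
// what the compiler would emit in place of a learn or prefetch instruction.
volatile uint64_t* const LEARN_REG    = reinterpret_cast<volatile uint64_t*>(0xFFFF000000000000ULL);
volatile uint64_t* const PREFETCH_REG = reinterpret_cast<volatile uint64_t*>(0xFFFF000000000008ULL);

inline void learn(void* hint)    { *LEARN_REG    = reinterpret_cast<uint64_t>(hint); }   // "learn(h)"
inline void prefetch(void* hint) { *PREFETCH_REG = reinterpret_cast<uint64_t>(hint); }   // "prefetch(h)"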
As previously discussed, whenever the learn instruction is encountered, the hardware stores relevant information in the space associated with the hint. An important question is when this learning will stop. We have previously mentioned that such learning could stop whenever there is no more space because the hint is filled with information; other possibilities include adaptively deciding what to keep and discard as new information arrives, and hence, in this scenario, the learning never stops. Another possibility is to provide another software instruction which we shall call “stop_learn” that halts all further learning and writing to a hint whose identity is passed as a parameter.
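For example, assuming that stop_learn takes the hint as its parameter in the same manner as learn, the monitored section could be bracketed as follows:

learn(h);
// begin recording access patterns into h
<CODE TO BE OPTIMIZED>
stop_learn(h);
// halt all further learning and writing to h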
This invention further provides a means to coordinate the management of the process of learning and prefetching for several hints simultaneously, which is of importance in processor chips that have many processor cores and/or have cores that have many threads.
The first exemplary embodiment, described in greater detail above, treats hints as standard variables and therefore possesses useful properties. For example, the hints are stored in main memory as regular variables, and thus, very few changes are required in a system to implement them. In the first exemplary embodiment, a context is defined by the location within the software code of the hint declaration.
We introduce a second exemplary embodiment in which a context is defined by aggregating the values of a set of programmer-defined values. In this second exemplary embodiment, hints are not declared explicitly. Rather, hints are created/updated only when the appropriate context is formed. In the following example, assume that “foo” has one parameter p, which changes the access pattern expected from “foo”:
void object::foo( int p) {
    learn_in_context( this , p , &foo );      // context given by parameters
    prefetch_in_context( this , p , &foo );   // context given by parameters
    <CODE TO BE OPTIMIZED>
}
The learn_in_context and prefetch_in_context instructions accept any number of parameters; for example, the parameters in the above case are "this", "p" and "&foo". These parameters are taken by the microprocessor, which produces a single N-bit identity (i.e., a hint ID) for that particular learn/prefetch instruction instance. The precise way in which the parameters are combined to form the hint ID is not of much importance, as long as the hint ID has the properties of a good hash function, such as a low probability of mapping two different contexts to the same N-bit identity.
One possibility is to assign an N-bit hash value to every parameter and then to XOR the results; these hash values themselves can be produced by applying various XOR operations to the digital representations of the parameters. Although it is entirely possible for two different sets of parameters to map to the same hint ID, the likelihood of such an occurrence can be substantially reduced by keeping, for example, N sufficiently large. Optimal values for N can be determined via experimentation.
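A minimal sketch of such a computation is given below, assuming N=64 and treating every parameter simply as a machine word; the particular mixing constants are illustrative only:

#include <cstdint>
#include <initializer_list>

// Hypothetical hint-ID computation: hash each parameter to an N-bit value
// (here N = 64) and XOR the per-parameter hash values together.
inline uint64_t hash_parameter(uint64_t value) {
    value ^= value >> 33;                  // simple bit mixing; any good hash would do
    value *= 0xff51afd7ed558ccdULL;
    value ^= value >> 33;
    return value;
}

inline uint64_t hint_id(std::initializer_list<uint64_t> parameters) {
    uint64_t id = 0;
    for (uint64_t p : parameters)
        id ^= hash_parameter(p);           // XOR of the per-parameter hash values
    return id;
}

// For learn_in_context( this , p , &foo ), the hint ID would be, e.g.:
//   hint_id({ reinterpret_cast<uint64_t>(object_ptr),
//             static_cast<uint64_t>(p),
//             reinterpret_cast<uint64_t>(function_ptr) });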
Once a hint ID has been calculated, the hardware is informed, as before, that it should learn some aspect of the sequence of accesses that are about to be produced. The learned data are then associated with the hint ID and stored in the hint. Hints, in turn, are stored in a set-associative buffer, which we call a "hint buffer," for efficient storage and retrieval. Another hash value of the hint ID may be used to determine the set in which the hint is stored.
Execution of the prefetch_in_context instruction also triggers a search in the hint buffer for the given hint ID. In case of success, prefetches are issued in accordance with the information stored in the hint, either directly or indirectly through the directory extension or some other means.
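The sketch below illustrates one such hint buffer; the geometry (128 sets, 4 ways), the amount of learned storage per entry, and the naive replacement rule are our choices for the example, not requirements. A second hash of the hint ID selects the set, and prefetch_in_context triggers a lookup within that set:

#include <cstddef>
#include <cstdint>

// Hypothetical set-associative hint buffer.
struct HintEntry {
    uint64_t hint_id = 0;
    bool     valid   = false;
    uint64_t learned[16] = {};   // space for the learned access information
};

class HintBuffer {
    static const std::size_t SETS = 128, WAYS = 4;
    HintEntry sets_[SETS][WAYS];

    static std::size_t set_index(uint64_t hint_id) {
        return (hint_id * 0x9e3779b97f4a7c15ULL >> 32) % SETS;   // second hash of the hint ID
    }
public:
    HintEntry* lookup(uint64_t hint_id) {             // used on prefetch_in_context
        HintEntry* set = sets_[set_index(hint_id)];
        for (std::size_t w = 0; w < WAYS; ++w)
            if (set[w].valid && set[w].hint_id == hint_id) return &set[w];
        return nullptr;                               // miss: nothing learned for this context yet
    }
    HintEntry& allocate(uint64_t hint_id) {           // used on learn_in_context
        HintEntry* set = sets_[set_index(hint_id)];
        for (std::size_t w = 0; w < WAYS; ++w)
            if (!set[w].valid) { set[w].valid = true; set[w].hint_id = hint_id; return set[w]; }
        set[0] = HintEntry{}; set[0].valid = true; set[0].hint_id = hint_id; return set[0];   // naive replacement
    }
};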
If two separate pieces of code within the same function need to be optimized, one may write the following:
void object::foo( int p) {
    prefetch_in_context( this , p , &foo, 1 );
    learn_in_context( this , p , &foo, 1 );
    <CODE TO BE OPTIMIZED 1>
    prefetch_in_context( this , p , &foo, 2 );
    learn_in_context( this , p , &foo, 2 );
    <CODE TO BE OPTIMIZED 2>
}
Exemplary System and Method
Referring to
The hard disk 125 includes a computer program 130. Alternatively, the computer program 130 may be present on an external storage unit (not shown), such as a floppy disk or flash drive. The computer program 130 may include one or more markers (not shown). The markers may be manually inserted by a user or automatically through another application.
The system 100 further includes a marker management engine 135. The marker management engine 135 performs a number of functions. The marker management engine (a) monitors when each marker is reached in the execution of the computer program 130, (b) monitors data accesses by the processor 105 to the cache 110 and the main memory 115, (c) stores the monitored data accesses in a pre-defined location in the main memory 115, and (d) optimizes the computer program 130 executed by the processor 105 based on the stored data accesses.
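Expressed as a rough software model (the interface below is hypothetical; the engine itself would be realized in hardware), the four functions (a)-(d) of the marker management engine 135 might be sketched as follows:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical software model of the marker management engine 135.
class MarkerManagementEngine {
public:
    // (a) invoked when execution of the computer program 130 reaches a marker
    void on_marker_reached(uint64_t marker_id) { active_marker_ = marker_id; learning_ = true; }

    // (b) invoked for each data access by the processor 105 to the cache 110 or main memory 115
    void on_data_access(uint64_t address) { if (learning_) accesses_.push_back(address); }

    // (c) store the monitored accesses in a pre-defined location in main memory
    void store(uint64_t* predefined_location, std::size_t capacity) const {
        for (std::size_t i = 0; i < accesses_.size() && i < capacity; ++i)
            predefined_location[i] = accesses_[i];
    }

    // (d) optimize a later execution, e.g. by prefetching the stored addresses
    void optimize(const uint64_t* stored, std::size_t count) const {
        for (std::size_t i = 0; i < count; ++i)
            __builtin_prefetch(reinterpret_cast<const void*>(stored[i]));   // illustrative prefetch only
    }

private:
    uint64_t active_marker_ = 0;
    bool learning_ = false;
    std::vector<uint64_t> accesses_;
};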
It should be noted that the system 100 is only exemplary and the invention is not so limited, as contemplated by those skilled in the art. For example, an alternate system may include multiple processors, multiple caches, and multiple levels for each cache. As another example, the marker management engine 135 may be a part of the memory controller 120.
Referring to
It should be appreciated that step 225 is optional. For example, step 225 may not occur because the current optimizations are determined to be effective.
Referring to
If it is determined (at 320) that the marker does not instruct the system to monitor accesses, then it is determined (at 330) whether the marker instructs the system to prefetch. If it is determined (at 330) that the marker instructs the system to prefetch, then data is prefetched (at 335) and the effectiveness of the prefetches is tracked (at 335). The program software continues execution (at 310); there is no need for the prefetches to finish. The program or application programmer may examine the effectiveness of the prefetches to modify the markers in the software. If it is determined (at 330) that the marker does not instruct the system to prefetch, then we continue (at 340) with other marker options, as contemplated by those skilled in the art.
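A compact sketch of that dispatch, with hypothetical marker kinds standing in for the decisions in the figure:

enum class MarkerKind { Monitor, Prefetch, Other };

// Hypothetical dispatch performed when a marker is reached (cf. steps 320-340).
void handle_marker(MarkerKind kind) {
    if (kind == MarkerKind::Monitor) {
        // monitor and store the data accesses of the marked section
    } else if (kind == MarkerKind::Prefetch) {
        // prefetch according to the stored hint and track the prefetches' effectiveness;
        // execution continues without waiting for the prefetches to finish
    } else {
        // other marker options, as contemplated by those skilled in the art
    }
}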
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Franaszek, Peter A., Tremaine, R. Brett, Montaño, Luis Alfonso Lastras