A cache memory controller and method for dumping the contents of a cache directory and a cache data random access memory (RAM) are described. In order to dump the contents of the cache directory, access to the cache data RAM is disabled by disabling the cache controller. Then, address tags within the cache directory are read sequentially from a reserved register. In order to dump the contents of the cache data RAM, new addresses are allocated to data in the cache data RAM. This is done, for example, by blocking writes to the cache data RAM while enabling read access from the cache data RAM and both read and write access to the cache directory. A reserved block of cacheable memory within, for example, the main system memory, is accessed. When the reserved block of cacheable memory is accessed, address tags for addresses of the reserved block of cacheable memory are written into the cache directory; however, data from the reserved block of cacheable memory is not written into the cache data RAM. Data in the cache data RAM is now accessible using addresses for the reserved block of cacheable memory. In a preferred embodiment, the cache controller includes non-cacheable RAM registers, multiplexers, a sequencer, a cache data RAM controller having logic circuitry for suppressing/gating cache write enable signals, a system controller interface, configuration/diagnostic registers and a cache directory set.

Patent: 5537572
Priority: Mar 31 1992
Filed: Mar 31 1992
Issued: Jul 16 1996
Expiry: Jul 16 2013
Entity: Large
Maintenance fees: all paid
1. A method for dumping the contents of a cache directory and cache data random access memory (ram), the method comprising the steps of:
(a) disabling access to the cache data ram;
(b) performing the following substeps for each address tag within the cache directory:
(b.1) applying an index for an address tag to the cache directory,
(b.2) placing that address tag within a register, and
(b.3) reading the register; and,
(c) allocating new addresses to cache data in the cache data ram.
2. A method as in claim 1 wherein in step (b), each address tag is larger than the register, requiring substep (b.3) to include multiple reads of the register to access each address tag.
3. A method as in claim 1 wherein step (c) includes the following substeps:
(c.1) blocking writes to the cache data ram;
(c.2) enabling read access from the cache data ram and both read and write access to the cache directory;
(c.3) flushing the cache directory;
(c.4) sequentially accessing a reserved block of cacheable memory, the access of the cacheable memory resulting in address tags for addresses of the reserved block of cacheable memory being written into the cache directory, but data from the reserved block of cacheable memory not being written into the cache data ram; and,
(c.5) accessing data in the cache data ram using the addresses for the reserved block of cacheable memory.
4. A method as in claim 3 wherein substep (c.4) is performed by code in non-cacheable memory.
5. A cache data random access memory (ram) controller, coupled to a cache data ram, the cache data ram controller comprising:
cache directory means for storing address tags of data stored in the cache data ram, the cache directory means having an input and an output, wherein in response to an index being placed on the input, the cache directory means produces an address tag on the output;
data ram register means for receiving address tags placed on the output of the cache directory means during a cache directory dump; and,
sequencer means for placing a sequence of indexes upon the input of the cache directory means during a cache directory dump.
6. A cache data ram controller as in claim 5 wherein the data ram register means is not large enough to simultaneously hold all bits of an address tag and wherein the cache data ram controller additionally comprises:
multiplexer means, coupled between the cache directory means and the data ram register means, for selecting bits from the output of the cache directory means to be placed in the data ram register means, wherein the multiplexer means is controlled by the sequencer means.

The present invention concerns a cache controller that allows a cache dump. The cache dump feature is useful in the debugging of cache operations.

In a computer system, the operating speed of the system processor is dependent upon the rate at which data can be exchanged between main memory and the processor. In an attempt to reduce the time required for the exchange of data between the processor and main memory, many computer systems include a cache memory placed between the processor and main memory.

A cache memory is a small, high-speed buffer memory that is used to temporarily store portions of the contents of main memory. In selecting which portions of the contents of main memory to store, a cache controller estimates which data will soon be requested by the processor. The increased access speed of the cache memory generally results in a reduction in the average time necessary for the processor to access data from main memory.

A cache memory consists of many blocks of one or more words of data. Each block has associated with it an address tag. The address tags of data blocks currently residing in the cache memory are stored in a cache directory (also called a tag random access memory (RAM)). Each address tag uniquely identifies a block of data in the main memory. Each time the processor makes a memory reference, a comparison is made between the address tag of the accessed data and the address tags stored in the cache directory. If the desired data is in the cache, the cache provides the data to the processor. If the desired memory block is not in the cache, the block of data containing the requested data is retrieved from the main memory, stored in the cache and supplied to the processor.

In addition to using a cache to retrieve data from main memory, the processor may also write data into the cache. Data may be written to the cache instead of being written directly to the main memory, or, in a write-through cache, data is written to the cache concurrently with the writing of the data to the main memory. When the processor desires to write data to memory, the cache controller checks the cache directory to determine whether the data block into which data is to be written resides in the cache. If the data block exists in the cache, the processor writes the data into the data block in the cache. If the data block into which data is to be written is not in the cache, the data block must either be fetched into the cache or the data written directly into the main memory.
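The tag-compare and hit/miss handling described above can be illustrated with a small software model. The following C sketch is illustrative only: the direct-mapped organization, the line size, the field widths and the function names are assumptions made for the example, not details taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative software model of a direct-mapped cache.  The number of
 * lines, the line size and the function names are assumptions made for
 * the example.                                                        */
#define NUM_LINES   1024u
#define LINE_BYTES  16u

typedef struct {
    bool     valid;
    uint32_t tag;                   /* address tag held in the cache directory */
    uint8_t  data[LINE_BYTES];      /* copy of the block in the cache data RAM */
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Split an address into its tag, directory index and byte offset. */
static void split_addr(uint32_t addr, uint32_t *tag, uint32_t *idx, uint32_t *off)
{
    *off = addr % LINE_BYTES;
    *idx = (addr / LINE_BYTES) % NUM_LINES;
    *tag = addr / (LINE_BYTES * NUM_LINES);
}

/* Read one byte; on a miss, fetch the whole block from main memory. */
uint8_t cache_read(uint32_t addr, const uint8_t *main_mem)
{
    uint32_t tag, idx, off;
    split_addr(addr, &tag, &idx, &off);

    cache_line_t *line = &cache[idx];
    if (!line->valid || line->tag != tag) {            /* cache miss */
        memcpy(line->data, &main_mem[addr - off], LINE_BYTES);
        line->tag   = tag;
        line->valid = true;
    }
    return line->data[off];                            /* supply data to the processor */
}

/* Write one byte using a write-through policy: update the block on a
 * hit and always write the data to main memory.                      */
void cache_write(uint32_t addr, uint8_t value, uint8_t *main_mem)
{
    uint32_t tag, idx, off;
    split_addr(addr, &tag, &idx, &off);

    cache_line_t *line = &cache[idx];
    if (line->valid && line->tag == tag)               /* cache hit   */
        line->data[off] = value;
    main_mem[addr] = value;                            /* write through */
}
```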

In complex cached computer systems, to debug the operation of the cache controller, it is desirable to determine the actual contents of the cache at a point in time. However, the cache is transparent to the computer system, so determining the data that exists in the cache at a particular point in time is very difficult. Specifically, any attempt to access data in the cache is likely to result in reallocation of the cache contents.

In accordance with the preferred embodiment of the present invention, a method is presented for dumping the contents of a cache directory and cache data RAM. In order to dump the cache directory, access to the cache data RAM is disabled. Then the address tags within the cache directory are read sequentially. This is done for each address tag by applying an index for an address tag to the cache directory. In response, the cache directory places the address tag within a register. The address tag is then read from the register.

The dump of the cache directory may be implemented using a sequencer. During a cache directory dump, the sequencer places a sequence of indexes upon the input of the cache directory. As a result, the cache directory places on its output an address tag. The address tag is captured by the register. In the preferred embodiment of the present invention, the register is not large enough to simultaneously hold all bits of an address tag. Therefore, a multiplexer is used to select bits from the output of the cache directory to be placed in the register. The multiplexer is controlled by the sequencer so that each address tag may be obtained using multiple reads of the register.

The dump of the cache data RAM is performed by allocating new addresses to cache data in the cache data RAM. This is done, for example, by blocking writes to the cache data RAM while enabling read access from the cache data RAM and both read and write access to the cache directory. A reserved block of cacheable memory is then accessed. What is meant by cacheable memory is memory locations for which the cache data RAM may be used when accessing data therein. On the other hand, the cache data RAM is not used for accesses for non-cacheable memory. When the reserved block of cacheable memory is accessed, address tags for addresses of the reserved block of cacheable memory are written into the cache directory; however, data from the reserved block of cacheable memory is not written into the cache data RAM. Data in the cache data RAM is now accessible using addresses for the reserved block of cacheable memory.

In the preferred embodiment of the present invention, accessing the reserved block of cacheable memory is performed by programming code in non-cacheable memory. This prevents the execution of the programming code from interfering with the allocation of space in the cache data RAM. The dump of the cache data RAM may be implemented using logic which, in response to a value of a write enable suppression bit in a test status register, prevents a write enable signal generated by write enable logic from reaching the cache data RAM.

FIG. 1 is a simplified block diagram of a cache memory within a complex cached computer system.

FIG. 2 is a block diagram of a cache controller shown in FIG. 1, in accordance with the preferred embodiment of the present invention.

FIG. 3 shows three registers contained within the cache controller shown in FIG. 2, in accordance with the preferred embodiment of the present invention.

FIG. 4 is a flowchart which sets out a method for dumping address tags from a cache directory within the cache controller shown in FIG. 2, in accordance with the preferred embodiment of the present invention.

FIG. 5 is a simplified logic block diagram which sets out simplified logic used to implement the method described in FIG. 4, in accordance with the preferred embodiment of the present invention.

FIG. 6 is a flowchart which sets out a method for dumping data from a data cache shown in FIG. 1, in accordance with the preferred embodiment of the present invention.

FIG. 7 is a simplified logic block diagram which sets out simplified logic used to implement the method described in FIG. 6, in accordance with the preferred embodiment of the present invention.

FIG. 1 shows a simplified block diagram of a computer system. The computer system contains a processor 11, a main system memory 12, a system controller 13, a bus controller 14, a cache controller 15, a cache data random access memory (RAM) 16 and a cache data RAM 17. System controller 13 is responsible for generating control signals for data accesses from main system memory 12. System controller 13 is, for example, a VL82C320A System Controller Interface available from VLSI Technology, Inc., having a business address of 1109 McKay Drive, San Jose, Calif. 95131.

Bus controller 14 is responsible for generating control signals for accessed data that is not located in main system memory 12. Such data may be stored, for example, in read only memory (ROM) or by a peripheral device. Bus controller 14 is, for example, a VL82C331 Bus Controller Interface also available from VLSI Technology, Inc. A memory address (MA) bus 23 is an address bus from system controller 13 to main memory 12. A system address (SA) bus 22 is an address bus portion of the system bus used to address expansion slots and ROM. A system data (SD) bus 21 is the data bus portion of the system bus used for expansion slots and ROM. A transmit data (XD) bus 19 is a buffered version of the lowest byte of data on SD bus 21. XD bus 19 is used to transfer data to or from the internal registers within cache controller 15, system controller 13 or bus controller 14 during input/output cycles.

The computer system includes a processor data bus 20 and a processor address bus 18. In order to access (i.e. read or write) data, processor 11 places an address of a memory location on processor address bus 18. If cache controller 15 determines the contents of the addressed memory location reside in cache data RAM 16 or cache data RAM 17, the cache controller enables the access of the appropriate cache data RAM. In the case of a memory read, an access of the cache data RAM results in cache data RAM 16 or cache data RAM 17 placing data on processor data bus 20. In the case of a memory write, an access of the cache data RAM results in cache controller 15 writing data into cache data RAM 16 or cache data RAM 17. In the case of a write-through cache, data is also written through to main system memory 12.

If cache controller 15 determines the contents of the addressed memory location do not reside in cache data RAM 16 or cache data RAM 17, a cache miss results. The data is accessed from main system memory 12, or some other storage device. If the data accessed from main system memory 12, or some other storage device, is currently stored in memory locations that are cacheable (cacheable memory), the data access will generally result in the data being fetched into one of the cache data RAMs.

FIG. 2 shows a block diagram of cache controller 15. Cache controller 15 is shown to include an XD bus transceiver 31, a processor interface 32, a bus controller interface 33, non-cacheable write-protect area RAMs and comparators 34, a cache data RAM controller 35, a system controller interface 36, configuration/diagnostic registers 37, a cache directory (tag RAM) 38 and a cache directory (tag RAM) 39.

Processor interface 32 monitors signals on processor address bus 18 to determine what action cache controller 15 needs to take. For example, when a memory read access results in a cache hit, cache controller 15 enables a data access of the appropriate data cache to processor data bus 20. In a memory read or write access in which there is a cache miss, cache controller 15 signals system controller 13 of the cache miss.

System controller interface 36 provides for communication of cache controller 15 with system controller 13. Bus controller interface 33 provides for communication of cache controller 15 with bus controller 14. XD data bus transceiver 31 is used to interface with XD bus 19. Non-cacheable write-protect area RAMs and comparators 34 include I/O registers which are not memory mapped. These I/O registers form programmable look-up tables in which the user may define cacheability for various address regions. Non-cacheable RAM, which is accessed without use of cache data RAM 16 or cache data RAM 17, may be defined to reside in main system memory 12, slot bus memory, or elsewhere in the computer system.
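The programmable cacheability look-up described above can be modeled as a table of address ranges that are excluded from caching. The region boundaries and the is_cacheable() helper in the sketch below are hypothetical; the patent does not specify the table format.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the programmable cacheability look-up tables.
 * The region boundaries are examples only; in the controller they are
 * programmed through I/O registers.                                   */
typedef struct {
    uint32_t base;    /* start of a non-cacheable region           */
    uint32_t limit;   /* end of the region, inclusive              */
} nc_region_t;

static const nc_region_t nc_regions[] = {
    { 0x000A0000u, 0x000BFFFFu },    /* e.g. video memory (assumed)  */
    { 0x000F0000u, 0x000FFFFFu },    /* e.g. system ROM (assumed)    */
};

/* Return true if an access to addr may use the cache data RAMs. */
bool is_cacheable(uint32_t addr)
{
    for (size_t i = 0; i < sizeof nc_regions / sizeof nc_regions[0]; i++) {
        if (addr >= nc_regions[i].base && addr <= nc_regions[i].limit)
            return false;            /* access bypasses the cache    */
    }
    return true;
}
```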

FIG. 3 shows three of configuration/diagnostic registers 37. A cache configuration register 51 is shown to include a cache enable bit 61. Cache controller 15 is enabled by setting cache enable bit 61 to a logic 1. Cache controller 15 is disabled by clearing cache enable bit 61 to a logic 0.

A test status register 52 includes a block cache write enable bit 62. A cache write enable signal generated by cache controller 15 is blocked when block cache write enable bit 62 is set to a logic 1. A cache write enable signal generated by cache controller 15 operates normally when block cache write enable bit 62 is cleared to a logic 0. Test status register 52 also includes five bits for a diagnostic opcode 63. When diagnostic opcode 63 has a value of 01000 (base 2), this indicates cache controller 15 is to dump the contents of tag RAM 38. When diagnostic opcode 63 has a value of 10000 (base 2), this indicates cache controller 15 is to dump the contents of tag RAM 39. A tag RAM data (RAMDATA) register 53 is used to receive addresses dumped from tag RAM 38 or tag RAM 39.
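The register layout described above might be captured in software as a small set of constants. The register indexes and bit positions in the following sketch are assumptions; only the bit names and the two diagnostic opcode values come from the text.

```c
/* Configuration/diagnostic register layout as described above.  The
 * register indexes and bit positions are assumptions of this sketch;
 * only the bit names and the two opcode values appear in the text.   */

#define CACHE_CFG_REG            0x51u        /* cache configuration register 51        */
#define   CACHE_ENABLE_BIT       (1u << 0)    /* cache enable bit 61: 1 = enabled       */

#define TEST_STATUS_REG          0x52u        /* test status register 52                */
#define   BLOCK_CACHE_WE_BIT     (1u << 7)    /* block cache write enable bit 62        */
#define   DIAG_OPCODE_MASK       0x1Fu        /* five-bit diagnostic opcode 63          */
#define   DIAG_DUMP_TAG_RAM_38   0x08u        /* opcode 01000 (base 2): dump tag RAM 38 */
#define   DIAG_DUMP_TAG_RAM_39   0x10u        /* opcode 10000 (base 2): dump tag RAM 39 */

#define TAG_RAM_DATA_REG         0x53u        /* tag RAM data (RAMDATA) register 53     */
```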

FIG. 4 sets out a simplified flowchart of a method for dumping address tags from the cache directory. In a step 71, cache enable bit 61 is cleared to disable cache controller 15, and thus disable the cache. In a step 72, diagnostic opcode 63 is set to 01000 (base 2) to indicate cache controller 15 is to dump the contents of tag RAM 38, or is set to 10000 (base 2) to indicate cache controller 15 is to dump the contents of tag RAM 39. In a step 73, data is read from tag RAM data register 53 and written to a location in main system memory 12 to complete the dump.
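A minimal software rendering of these three steps might look as follows. It assumes the hypothetical register constants sketched above and placeholder reg_write()/reg_read() helpers standing in for whatever I/O mechanism a particular system uses to reach the cache controller's registers.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical register constants (see the sketch above) and placeholder
 * accessors for the cache controller's diagnostic registers.           */
#define CACHE_CFG_REG         0x51u
#define   CACHE_ENABLE_BIT    0x01u           /* bit position assumed            */
#define TEST_STATUS_REG       0x52u
#define TAG_RAM_DATA_REG      0x53u
#define DIAG_DUMP_TAG_RAM_38  0x08u           /* diagnostic opcode 01000 (base 2) */

extern void    reg_write(uint8_t reg, uint8_t val);
extern uint8_t reg_read(uint8_t reg);

/* Dump the 1024 20-bit tags of tag RAM 38 through the 8-bit RAMDATA
 * register: 3 reads per tag, 3072 reads in total.                      */
void dump_tag_ram_38(uint8_t out[1024 * 3])
{
    /* Step 71: clear cache enable bit 61 to disable the cache. */
    reg_write(CACHE_CFG_REG, reg_read(CACHE_CFG_REG) & (uint8_t)~CACHE_ENABLE_BIT);

    /* Step 72: set diagnostic opcode 63 to select the tag RAM 38 dump. */
    reg_write(TEST_STATUS_REG, DIAG_DUMP_TAG_RAM_38);

    /* Step 73: each read of RAMDATA returns the next 8-bit slice of a
     * tag; the controller's sequencer advances automatically.          */
    for (size_t i = 0; i < 1024u * 3u; i++)
        out[i] = reg_read(TAG_RAM_DATA_REG);
}
```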

FIG. 5 shows a simplified block diagram of logic used to perform the dump of tag RAM address tags. Within cache controller 15, when an index 74 is placed at an input of tag RAM 38, tag RAM 38 places a tag address on lines 77. During a tag RAM dump, a multiplexer 75 places the output of a sequencer 76 on the input of tag RAM 38. On successive reads, sequencer 76 sequences through a read of all address tags in tag RAM 38 by placing a sequence of indexes upon the input of the cache directory. Tag RAM data (RAMDATA) register 53 captures the tag addresses placed on lines 77. In the preferred embodiment, tag RAM 38 contains 1024 20-bit address tags, while tag RAM data register 53 is an eight-bit register. Therefore, 3072 reads need to be performed to dump the entire contents of tag RAM 38. For each read, a multiplexer 78, controlled by sequencer 76, selects a set of bits from lines 77 to place in tag RAM data register 53.
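Once the 3072 bytes have been captured, the 20-bit tags can be reassembled in software, as in the sketch below. The slice ordering (least-significant byte first) and zero-padding of the unused high bits are assumptions not specified in the description.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_TAGS       1024u
#define READS_PER_TAG  3u        /* ceil(20 bits / 8-bit RAMDATA register) */

/* Reassemble 20-bit address tags from the 3072 bytes read through the
 * 8-bit RAMDATA register.  Byte order is an assumption of this sketch. */
void reassemble_tags(const uint8_t raw[NUM_TAGS * READS_PER_TAG],
                     uint32_t tags[NUM_TAGS])
{
    for (size_t i = 0; i < NUM_TAGS; i++) {
        uint32_t t = (uint32_t)raw[3 * i]
                   | ((uint32_t)raw[3 * i + 1] << 8)
                   | ((uint32_t)raw[3 * i + 2] << 16);
        tags[i] = t & 0xFFFFFu;      /* keep only the 20 tag bits */
    }
}
```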

FIG. 6 sets out a simplified flowchart of a method for dumping the data within cache data RAM 16 and/or cache data RAM 17. This method of dumping data from the cache data RAM 16 and/or cache data RAM 17 may be performed directly after dumping tag RAM address tags, as described above.

In a step 81, cache enable bit 61 is cleared to disable cache controller 15, and thus disable the cache. In a step 82, the cache write enable signal is blocked by setting block cache write enable bit 62 in test status register 52. This blocks the cache write enable outputs of cache controller 15, preventing writes to cache data RAM 16 and cache data RAM 17. In a step 83, cache enable bit 61 is set to enable cache controller 15, and thus enable the cache.

In a step 84, cache directory (tag RAM) 38 and cache directory (tag RAM) 39 are flushed. In a step 85, new addresses are allocated to data within cache data RAM 16 and cache data RAM 17. This is done, for example, by using a reserved contiguous block of cacheable memory within main system memory 12. The size of this reserved contiguous block is equal to the combined size of cache data RAM 16 and cache data RAM 17. The reserved contiguous block is read sequentially. It is important that the programming code which performs the read of data from the reserved contiguous block is stored in non-cacheable RAM as defined by registers within non-cacheable write-protect area RAMs and comparators 34. Further, any stack or variable used by the programming code also should be stored in non-cacheable RAM. This ensures that execution of the programming code does not interfere with the allocation of space in cache data RAM 16 and cache data RAM 17.

In a step 86, the contents of cache data RAM 16 and cache data RAM 17 may now be read (dumped) using the addresses for the reserved contiguous block.
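Steps 85 and 86 might be coded as a simple sweep over the reserved block followed by a second pass that copies the now-remapped cache contents out. The base address, the combined cache size and the buffer handling in the sketch below are assumptions; as noted above, the routine, its stack and the destination buffer would all have to reside in non-cacheable RAM.

```c
#include <stdint.h>
#include <stddef.h>

/* The base address of the reserved cacheable block and the combined
 * size of the two cache data RAMs are system-specific; the values here
 * are assumptions made for the sketch.                                */
#define CACHE_BYTES    (64u * 1024u)
#define RESERVED_BASE  ((volatile uint8_t *)0x00100000u)

/* Steps 85 and 86 of FIG. 6.  This routine, its stack and dump_buf must
 * all reside in non-cacheable RAM so that executing it does not disturb
 * the allocation being forced into the cache directory.               */
void dump_cache_data(uint8_t *dump_buf /* CACHE_BYTES bytes, non-cacheable */)
{
    volatile uint8_t sink;

    /* Step 85: sweep the reserved block.  With cache writes blocked,
     * each read writes a reserved-block tag into the cache directory
     * without overwriting the corresponding line in the data RAMs.    */
    for (size_t i = 0; i < CACHE_BYTES; i++)
        sink = RESERVED_BASE[i];
    (void)sink;

    /* Step 86: the same addresses now hit in the cache, so reading
     * them back returns the preserved contents of the data RAMs.      */
    for (size_t i = 0; i < CACHE_BYTES; i++)
        dump_buf[i] = RESERVED_BASE[i];
}
```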

FIG. 7 shows a simplified block diagram of logic which could be used to block cache write enable signals. Within cache data RAM controller 35 of cache controller 15, write enable logic 91 is used to generate write enable signals for active-low write enable input 95 of cache data RAM 16 and for active-low write enable input 96 of cache data RAM 17. The value of block cache write enable bit 62 in test status register 52 is input to logic circuitry which gates the write enable signals generated by write enable logic 91. For example, an inverter 92, a logic NAND gate 93 and a logic NAND gate 94 are shown as simplified logic circuitry which could be used to gate the write enable signals.
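A boolean model of this gating is shown below. Treating the signals produced by write enable logic 91 as active-high write requests is an assumption of the sketch; only the inverter and the two NAND gates come from the description.

```c
#include <stdbool.h>

/* Boolean model of the FIG. 7 gating: inverter 92 on block cache write
 * enable bit 62 feeds NAND gates 93 and 94, one per cache data RAM.    */
typedef struct {
    bool we16_n;   /* active-low write enable input 95 of cache data RAM 16 */
    bool we17_n;   /* active-low write enable input 96 of cache data RAM 17 */
} we_outputs_t;

we_outputs_t gate_write_enables(bool write_req_16, bool write_req_17,
                                bool block_cache_we_bit /* bit 62 */)
{
    bool not_block = !block_cache_we_bit;        /* inverter 92 */
    we_outputs_t out;
    out.we16_n = !(write_req_16 && not_block);   /* NAND gate 93 */
    out.we17_n = !(write_req_17 && not_block);   /* NAND gate 94 */
    return out;                                  /* both held high (inactive) when bit 62 = 1 */
}
```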

The foregoing discussion discloses and describes merely exemplary methods and embodiments of the present invention. As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Inventors: Murray, Joseph; Michelsen, Jeff M.

Assignments
Mar 11 1992: Murray, Joseph to VLSI Technology, Inc. (assignment of assignors interest; reel/frame 006076/0178)
Mar 16 1992: Michelsen, Jeff M. to VLSI Technology, Inc. (assignment of assignors interest; reel/frame 006076/0178)
Mar 31 1992: VLSI Technology, Inc. (assignment on the face of the patent)
Jul 02 1999: VLSI Technology, Inc. to Philips Semiconductors VLSI Inc. (change of name; reel/frame 022973/0248)
Dec 29 1999: Philips Semiconductors VLSI Inc. to Philips Semiconductors Inc. (change of name; reel/frame 022973/0254)
Jul 15 2009: Philips Semiconductors Inc. to NXP B.V. (assignment of assignors interest; reel/frame 022973/0239)
Sep 26 2011: NXP B.V. to Callahan Cellular L.L.C. (assignment of assignors interest; reel/frame 027265/0798)

