Techniques for maintaining cache coherency comprising storing a set of data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the set of data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor, executing, on the first processor, a first child process of the main process to generate first output data, writing the first output data to a first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
11. A processing system comprising:
a main thread associated with a main cache line that includes a first set of data and a second set of data;
a memory controller configured to generate, in response to a diverge command, a first copy of the main cache line and a second copy of the main cache line;
a first processor configured to:
store the first copy of the main cache line; and
write to the first set of data in the first copy of the main cache line and the first set of data in the main cache line; and
a second processor configured to:
store the second copy of the main cache line;
receive an invalidate request;
in response to the invalidate request, mark a cache line associated with the second set of data of the second copy of the main cache line as a delayed snoop;
write to the second set of data in the second copy of the main cache line and the second set of data in the main cache line; and
invalidate the cache line associated with the second set of data of the second copy of the main cache line, in response to writing the second set of data in the main cache line.
1. A method of a cache memory system comprising:
receiving, on a main cache line, a first data and a second data;
issuing, on a main thread, a fork command;
in response to the fork command, executing, by a first processor and a second processor, a diverge instruction;
in response to the diverge instruction:
caching, by the first processor, a first copy of the main cache line in a first local cache of the first processor; and
caching, by the second processor, a second copy of the main cache line in a second local cache of the second processor;
writing, by the first processor, to the first data in the first copy of the main cache line and to the first data in the main cache line;
in response to writing to the first data of the main cache line, snooping, by the cache memory system;
in response to the snooping, transmitting, by a memory controller, an invalidate request to the second processor to invalidate the second copy of the main cache line;
in response to the invalidate request, marking, by the second processor, a cache line associated with the second data of the second copy of the main cache line as a delayed snoop;
writing, by the second processor, to the second data in the second copy of the main cache line and to the second data in the main cache line; and
in response to the writing to the second data in the main cache line, invalidating, by the second processor, the cache line associated with the second data of the second copy of the main cache line.
2. The method of
the main cache line is a L3 cache controlled by the memory controller; and
the memory controller comprises a multicore shared memory controller.
4. The method of
the diverge instruction configures the cache memory system in a child threading mode.
5. The method of
the first processor processes a first thread;
the second processor processes a second thread;
the first thread includes a first indicator indicating that the first data is assigned to the first thread; and
the second thread includes a second indicator indicating that the second data is assigned to the second thread.
6. The method of
the main cache line is configured to only accept a write through to the first data from the first thread; and
the main cache line is configured to only accept a write through to the second data from the second thread.
7. The method of
processing, by the first processor, the first thread;
upon completion of the first thread, executing, by the first processor, a converge instruction;
processing, by the second processor, the second thread; and
upon completion of the second thread, executing, by the second processor, the converge instruction.
8. The method of
the snooping determines that the main cache line is being accessed by the second processor.
9. The method of
in response to receiving the invalidate request, transmitting, by the second processor, an acknowledgment to the cache memory system.
10. The method of
changing, by the second processor, the delayed snoop to an immediate snoop.
12. The processing system of
the main cache line is a L3 cache controlled by the memory controller; and
the memory controller is a multicore shared memory controller.
13. The processing system of
the first processor is configured to process the first set of data in parallel with the second processor processing the second set of data.
14. The processing system of
the diverge command configures the processing system in a child threading mode.
15. The processing system of
the first processor processes a first thread;
the second processor processes a second thread;
the first thread includes a first indicator indicating that the first set of data is assigned to the first thread; and
the second thread includes a second indicator indicating that the second set of data is assigned to the second thread.
16. The processing system of
the main cache line is configured to only accept a write through to the first set of data in the main cache line from the first thread; and
the main cache line is configured to only accept a write through to the second set of data in the main cache line from the second thread.
17. The processing system of
the first processor is configured to issue a converge instruction subsequent to writing to the first set of data in the main cache line; and
the second processor is configured to issue the converge instruction subsequent to writing to the second set of data in the main cache line.
18. The processing system of
the memory controller is configured to snoop the second processor in response to the first processor writing to the first set of data in the main cache line; and
the memory controller is configured to send the invalidate request to the second processor.
19. The processing system of
the second processor is configured to transmit an acknowledgment to the memory controller in response to receiving the invalidate request.
20. The processing system of
the second processor is configured to change the delayed snoop to an immediate snoop subsequent to writing to the second set of data in the main cache line.
This application is a Continuation of U.S. application Ser. No. 16/601,947 filed Oct. 15, 2019, which claims priority to U.S. Provisional Application No. 62/745,842 filed Oct. 15, 2018, which are hereby incorporated by reference.
In a multi-core coherent system, multiple processor and system components share the same memory resources, such as on-chip and off-chip memories. Memory caches (e.g., caches) typically are an amount of high-speed memory located operationally near (e.g., close to) a processor. A cache is operationally nearer to a processor based on the latency of the cache, that is, how many processor clock cycles it takes the cache to fulfill a memory request. Generally, the cache memory closest to a processor includes a level 1 (L1) cache that is often directly on a die with the processor. Many processors also include a larger level 2 (L2) cache. This L2 cache is generally slower than the L1 cache but may still be on the die with the processor cores. The L2 cache may be a per-processor-core cache or shared across multiple cores. Often, a larger, slower L3 cache, either on die, as a separate component, or on another portion of a system on a chip (SoC), is also available to the processor cores.
Ideally, if all components had the same cache structure and accessed shared resources through cache transactions, all the accesses would be identical throughout the entire system, aligned with the cache block boundaries. But usually, some components have no caches, or different components have different cache block sizes. For a heterogeneous system, accesses to the shared resources can have different attributes, types, and sizes. For example, a central processing unit (CPU) of a system may have different sized or different speed memory caches as compared to a digital signal processor (DSP) of the system. Furthermore, the shared resources may also be in different formats with respect to memory bank structures, access sizes, access latencies, and physical locations on the chip.
To maintain data coherency, a coherent interconnect is usually added between the master components and the shared resources to arbitrate among multiple masters' requests and to guarantee data consistency when data blocks are shared among multiple masters or modified for each resource slave. With various accesses from different components to the same slave, the interconnect usually handles the accesses in a serial fashion to guarantee atomicity and to meet the slave's access requests while maintaining data ordering to ensure data value correctness. In a multi-slave coherent system, data consistency and coherency are generally guaranteed on a per-slave basis. This makes the interconnect an access bottleneck for a multi-core, multi-slave coherence system.
To reduce CPU cache miss stall overhead, cache components can issue cache allocate accesses with the request that the lower level memory hierarchy return the “critical line first” to un-stall the CPU, then the non-critical lines to finish the line fill. In a shared memory system, serving one CPU's “critical line first” request can potentially extend another CPU's stall overhead and reduce the shared memory throughput if the memory access types and sizes are not considered. The problem to solve, therefore, is how to serve memory accesses from multiple system components so as to provide low overall CPU stall overhead while guaranteeing maximum memory throughput.
Due to the increased number of shared components and expanded shareable memory space, supporting data consistency while reducing memory access latency for all cores and maintaining maximum shared memory bandwidth and throughput is a challenge. For example, many processes, such as machine learning or multichannel data or voice processing, utilize a multi-core, multi-processing concept in which multiple processor cores execute a common computation on different data. In systems with a coherence interconnect, the cores may operate on data included in portions of a single cache line. As an example, with a 16-byte cache line, each of four cores may perform a common computation against a different four-byte segment of the cache line, with the first core handling the first four bytes, the second core handling the second four bytes, and so forth. This may be referred to as false sharing. Maintaining cache coherency in a false-sharing scenario is challenging, as writing to a single cache line would typically happen by requesting ownership of the cache line, snooping and evicting the other cores, and then writing to the cache line. This results in each of the four cores having to snoop and evict each of the other three cores, in a serial fashion, whenever that core needs to write back the results of its computation.
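To make the false-sharing pattern concrete, the following C sketch (not taken from the disclosure; the thread function and iteration count are illustrative assumptions, with the 16-byte line and four-byte segments borrowed from the example above) has four threads each incrementing a different four-byte segment of a single cache-line-sized array.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* One 16-byte "cache line" shared by four worker threads (sizes from the example above). */
static _Alignas(16) uint32_t shared_line[4];

static void *worker(void *arg)
{
    uintptr_t idx = (uintptr_t)arg;      /* which four-byte segment this thread owns */
    for (int i = 0; i < 1000000; i++)
        shared_line[idx] += 1;           /* every write lands in the same cache line */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (uintptr_t i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < 4; i++)
        printf("segment %d: %u\n", i, (unsigned)shared_line[i]);
    return 0;
}
```

Under a conventional ownership-based protocol, each increment can force the line to migrate between cores even though no thread ever reads another thread's segment, which is exactly the serial snoop-and-evict overhead the techniques below are intended to avoid.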
This disclosure relates to a method for maintaining cache coherency, the method comprising storing a set of data blocks in a cache line of a main cache memory, the set of data blocks associated with a main process, storing a first local copy of the set of data blocks in a first local cache memory of a first processor, of a set of two or more processors, wherein the first processor is configured to modify data within a first data block of the first local copy without modifying data in other data blocks of the set of data blocks of the first local copy, storing a second local copy of the set of data blocks in a second local cache memory of a second processor, of a set of two or more processors, executing, on the first processor, a first child process of the main process to generate first output data, writing the first output data to the first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
This disclosure also relates to a processing system comprising a main cache memory storing a set of data blocks in a cache line, the set of data blocks associated with a main process, a first processor of two or more processors is configured to store a first local copy of the set of data blocks in a first local cache memory of the first processor, modify data within a first data block of the first local copy without modifying data in other data blocks of the set of data blocks of the first local copy, execute a first child process of the main process to generate first output data, write the first output data to the first data block of the first local copy as a write through, and write the first output data to the first data block of the main cache memory as a part of the write through, a memory controller configured to transmit an invalidate request to a second local cache memory, and a second processor of the two or more processors is configured to store a second local copy of the set of data blocks in the second local cache memory of the second processor, mark the second local copy of the set of data blocks as delayed, and transmit an acknowledgment to the invalidate request.
This disclosure further relates to a non-transitory program storage device comprising instructions stored thereon to cause a third processor associated with a main process to store a set of data blocks in a cache line of a main cache memory, the set of data blocks associated with the main process, a first processor, of a set of two or more processors, to store a first local copy of the set of data blocks in the first local cache memory of the first processor, modify data within a first data block of the first local copy without modifying data in the other data blocks of the set of data blocks of the first local copy, execute a first child process of the main process to generate first output data, write the first output data to the first data block of the first local copy as a write through, and write the first output data to the first data block of the main cache memory as a part of the write through, a memory controller to transmit an invalidate request to a second local cache memory, and a second processor of the two or more processors to store a second local copy of the set of data blocks in the second local cache memory of the second processor, mark the second local copy of the set of data blocks as delayed, and transmit an acknowledgment to the invalidate request.
For a detailed description of various examples, reference will now be made to the accompanying drawings.
Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
High performance computing has taken on even greater importance with the advent of the Internet and cloud computing. To ensure the responsiveness of networks, online processing nodes and storage systems must have extremely robust processing capabilities and exceedingly fast data-throughput rates. Robotics, medical imaging systems, visual inspection systems, electronic test equipment, and high-performance wireless and communication systems, for example, must be able to process an extremely large volume of data with a high degree of precision. A multi-core architecture that embodies an aspect of the present invention will be described herein. In a typical embodiment, a multi-core system is implemented as a single system on chip (SoC). In accordance with embodiments of this disclosure, techniques are provided for parallelizing writing to a common cache line.
The multi-core processing system 100 also includes a multi-core shared memory controller (MSMC) 110, through which are connected one or more external memories 114 and input/output direct memory access clients 116. The MSMC 110 also includes an on-chip internal memory system 112 which is directly managed by the MSMC 110. In certain embodiments, the MSMC 110 helps manage traffic between multiple processor cores, other mastering peripherals or direct memory access (DMA) and allows processor packages 104 to dynamically share the internal and external memories for both program instructions and data. The MSMC internal memory 112 offers flexibility to programmers by allowing portions to be configured as shared level-2 RAM (SL2) or shared level-3 RAM (SL3). External memory 114 may be connected through the MSMC 110, along with the internal shared memory 112, via a memory interface (not shown), rather than to the chip system interconnect as has traditionally been done on embedded processor architectures, providing a fast path for software execution. In this embodiment, external memory may be treated as SL3 memory and therefore cacheable in L1 and L2 (e.g., caches 108).
The MSMC core 202 also includes a data routing unit (DRU) 250, which helps provide integrated address translation and cache prewarming functionality and is coupled to a packet streaming interface link (PSI-L) interface 252, which is a shared messaging interface to a system wide bus supporting DMA control messaging. The DRU includes an integrated DRU memory management unit (MMU) 254.
DMA control messaging may be used by applications to perform memory operations, such as copy or fill operations, in an attempt to reduce the latency needed to access that memory. Additionally, DMA control messaging may be used to offload memory management tasks from a processor. However, traditional DMA controls have been limited to using physical addresses rather than virtual memory addresses. Virtualized memory allows applications to access memory using a set of virtual memory addresses without having any knowledge of the physical memory addresses. An abstraction layer handles translating between the virtual memory addresses and physical addresses. Typically, this abstraction layer is accessed by application software via a supervisor privileged space. For example, an application having a virtual address for a memory location and seeking to send a DMA control message may first make a request into a privileged process, such as an operating system kernel, requesting a translation from the virtual address to a physical address prior to sending the DMA control message. In cases where the memory operation crosses memory pages, the application may have to make separate translation requests for each memory page. Additionally, when a task first starts, memory caches for a processor may be “cold” as no data has yet been accessed from memory and the caches have not yet been filled. The costs of the initial memory fill and abstraction layer translations can bottleneck certain tasks, such as small to medium sized tasks that access large amounts of memory. Improvements to DMA control message operations that prewarm near memory caches before a task needs to access them may help alleviate these bottlenecks.
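As a rough illustration of the per-page translation overhead just described, the following C sketch walks a virtually addressed buffer one page at a time. The helpers `translate_va_to_pa()` (standing in for a privileged kernel call) and `dru_submit_copy()` (standing in for a DMA control message) are hypothetical names, not part of the disclosure, and the sketch assumes the source and destination share the same page offset.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page size for the example */

/* Hypothetical privileged call: virtual-to-physical translation for one address. */
extern uint64_t translate_va_to_pa(const void *va);
/* Hypothetical DMA control message: copy 'len' bytes between physical addresses. */
extern void dru_submit_copy(uint64_t src_pa, uint64_t dst_pa, size_t len);

/* Copy a buffer that may span several pages: each page needs its own
 * translation request before the DMA control message can be issued. */
static void dma_copy(const void *src, void *dst, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t chunk = PAGE_SIZE - (((uintptr_t)src + off) & (PAGE_SIZE - 1));
        if (chunk > len - off)
            chunk = len - off;
        uint64_t src_pa = translate_va_to_pa((const char *)src + off);
        uint64_t dst_pa = translate_va_to_pa((char *)dst + off);
        dru_submit_copy(src_pa, dst_pa, chunk);
        off += chunk;
    }
}
```

Each loop iteration is a round trip into privileged space; an integrated MMU in the DRU, together with cache prewarming, is aimed at removing exactly this kind of per-page overhead.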
The MSMC core 202 includes a plurality of coherent slave interfaces 206A-D. While in the illustrated example the MSMC core 202 includes thirteen coherent slave interfaces 206 (only four are shown for conciseness), other implementations of the MSMC core 202 may include a different number of coherent slave interfaces 206. Each of the coherent slave interfaces 206A-D is configured to connect to one or more corresponding master peripherals. Example master peripherals include a processor, a processor package, a direct memory access device, an input/output device, etc. Each of the coherent slave interfaces 206 is configured to transmit data and instructions between the corresponding master peripheral and the MSMC core 202. For example, the first coherent slave interface 206A may receive a read request from a master peripheral connected to the first coherent slave interface 206A and relay the read request to other components of the MSMC core 202. Further, the first coherent slave interface 206A may transmit a response to the read request from the MSMC core 202 to the master peripheral. In some implementations, the coherent slave interfaces 206 correspond to 512 bit or 256 bit interfaces and support 48 bit physical addressing of memory locations.
In the illustrated example, a thirteenth coherent slave interface 206D is connected to a common bus architecture (CBA) system on chip (SOC) switch 208. The CBA SOC switch 208 may be connected to a plurality of master peripherals and be configured to provide a switched connection between the plurality of master peripherals and the MSMC core 202. While not illustrated, additional ones of the coherent slave interfaces 206 may be connected to a corresponding CBA. Alternatively, in some implementations, none of the coherent slave interfaces 206 is connected to a CBA SOC switch.
In some implementations, one or more of the coherent slave interfaces 206 interfaces with the corresponding master peripheral through an MSMC bridge 210 configured to provide one or more translation services between the master peripheral connected to the MSMC bridge 210 and the MSMC core 202. For example, ARM v7 and v8 devices utilizing the AXI/ACE and/or the Skyros protocols may be connected to the MSMC 200, while the MSMC core 202 may be configured to operate according to a coherence streaming credit-based protocol, such as Multi-core bus architecture (MBA). The MSMC bridge 210 helps convert between the various protocols and provides bus width conversion, clock conversion, voltage conversion, or a combination thereof. In addition or in the alternative to such translation services, the MSMC bridge 210 may provide cache prewarming support via an Accelerator Coherency Port (ACP) interface for accessing a cache memory of a coupled master peripheral, and data error correcting code (ECC) detection and generation. In the illustrated example, the first coherent slave interface 206A is connected to a first MSMC bridge 210A and a tenth of the coherent slave interfaces 206 is connected to a second MSMC bridge 210B. In other examples, more or fewer (e.g., 0) of the coherent slave interfaces 206 are connected to a corresponding MSMC bridge.
The MSMC core logic 202 includes an arbitration and data path manager 204. The arbitration and data path manager 204 includes a data path (e.g., a collection of wires, traces, other conductive elements, etc.) between the coherent slave interfaces 206 and other components of the MSMC core logic 202. The arbitration and data path manager 204 further includes logic configured to establish virtual channels between components of the MSMC 200 over shared physical connections (e.g., the data path). In addition, the arbitration and data path manager 204 is configured to arbitrate access to these virtual channels over the shared physical connections. Using virtual channels over shared physical connections within the MSMC 200 may reduce a number of connections and an amount of wiring used within the MSMC 200 as compared to implementations that rely on a crossbar switch for connectivity between components. In some implementations, the arbitration and data path 204 includes hardware logic configured to perform the arbitration operations described herein. In alternative examples, the arbitration and data path 204 includes a processing device configured to execute instructions (e.g., stored in a memory of the arbitration and data path 204) to perform the arbitration operations described herein. As described further herein, additional components of the MSMC 200 may include arbitration logic (e.g., hardware configured to perform arbitration operations, a processor configured to execute arbitration instructions, or a combination thereof). The arbitration and data path 204 may select an arbitration winner to place on the shared physical connections from among a plurality of requests (e.g., read requests, write requests, snoop requests, etc.) based on a priority level associated with a requestor, based on a fair-share or round robin fairness level, based on a starvation indicator, or a combination thereof.
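The winner-selection policy just described (priority level, fair-share or round-robin rotation, and a starvation indicator) might be sketched roughly as follows in C; the thresholds, structure layout, and function are illustrative assumptions rather than the MSMC's actual arbitration logic.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_REQUESTORS   16
#define STARVATION_LIMIT 32   /* assumed threshold before a requestor is force-promoted */

struct request_state {
    bool     pending;         /* requestor has a read/write/snoop request queued */
    uint8_t  priority;        /* static priority level of the requestor          */
    uint32_t starve_count;    /* arbitration rounds this requestor has lost      */
};

/* Pick an arbitration winner: a starving requestor wins outright, otherwise the
 * highest-priority pending requestor, with round-robin scan order breaking ties. */
static int pick_winner(struct request_state req[NUM_REQUESTORS], int last_winner)
{
    int winner = -1;
    for (int n = 1; n <= NUM_REQUESTORS; n++) {
        int i = (last_winner + n) % NUM_REQUESTORS;     /* round-robin scan order */
        if (!req[i].pending)
            continue;
        if (req[i].starve_count >= STARVATION_LIMIT) {  /* starvation overrides priority */
            winner = i;
            break;
        }
        if (winner < 0 || req[i].priority > req[winner].priority)
            winner = i;
    }
    for (int i = 0; i < NUM_REQUESTORS; i++) {
        if (!req[i].pending)
            continue;
        if (i == winner)
            req[i].starve_count = 0;                    /* winner's wait resets          */
        else
            req[i].starve_count++;                      /* losers accumulate starvation  */
    }
    return winner;                                      /* -1 when nothing is pending    */
}
```

Letting a starvation counter override priority keeps a low-priority requestor from being locked out indefinitely while still favoring high-priority traffic in the common case.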
The arbitration and data path 204 further includes a coherency controller 224. The coherency controller 224 includes a snoop filter 212. The snoop filter 212 is a hardware unit that stores information indicating which (if any) of the master peripherals store data associated with lines of memory of memory devices connected to the MSMC 200. The coherency controller 224 is configured to maintain coherency of shared memory based on the contents of the snoop filter 212.
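One way to picture the snoop filter's bookkeeping is a per-line record of which master peripherals may hold a copy. The bit-vector layout below is an assumption for illustration only, with the count of thirteen masters borrowed from the number of coherent slave interfaces mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_MASTERS        13      /* e.g., one per coherent slave interface */
#define SNOOP_FILTER_LINES 4096    /* assumed number of tracked cache lines  */

struct snoop_entry {
    uint64_t line_addr;            /* memory line this entry tracks                    */
    uint16_t sharer_mask;          /* bit i set => master peripheral i may hold a copy */
    bool     valid;
};

static struct snoop_entry snoop_filter[SNOOP_FILTER_LINES];

/* The coherency controller consults the filter to decide which masters must be
 * snooped when some other master writes to 'line_addr'. */
static uint16_t masters_to_snoop(uint64_t line_addr, int writer)
{
    for (int i = 0; i < SNOOP_FILTER_LINES; i++) {
        if (snoop_filter[i].valid && snoop_filter[i].line_addr == line_addr)
            return snoop_filter[i].sharer_mask & (uint16_t)~(1u << writer);
    }
    return 0;                      /* no sharers recorded: nothing to snoop */
}
```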
The MSMC 200 further includes a MSMC configuration component 214 connected to the arbitration and data path 204. The MSMC configuration component 214 stores various configuration settings associated with the MSMC 200. In some implementations, the MSMC configuration component 214 includes additional arbitration logic (e.g., hardware arbitration logic, a processor configured to execute software arbitration logic, or a combination thereof).
The MSMC 200 further includes a plurality of cache tag banks 216. In the illustrated example, the MSMC 200 includes four cache tag banks 216A-D. In other implementations, the MSMC 200 includes a different number of cache tag banks 216 (e.g., 1 or more). The cache tag banks 216 are connected to the arbitration and data path 204. Each of the cache tag banks 216 is configured to store “tags” indicating memory locations in memory devices connected to the MSMC 200. Each entry in the snoop filter 212 corresponds to one of the tags in the cache tag banks 216. Thus, each entry in the snoop filter indicates whether data associated with a particular memory location is stored in one of the master peripherals.
Each of the cache tag banks 216 is connected to a corresponding RAM bank 218. For example, a first cache tag bank 216A is connected to a first RAM bank 218A, etc. Each entry in the RAM banks 218 is associated with a corresponding entry in the cache tag banks 216 and a corresponding entry in the snoop filter 212. Entries in the RAM banks 218 may be used as an additional cache or as additional memory space based on a setting stored in the MSMC configuration component 214. The cache tag banks 216 and the RAM banks 218 may correspond to RAM modules (e.g., static RAM).
The MSMC 200 further includes an external memory interleave component 220 connected to the cache tag banks 216 and the RAM banks 218. One or more external memory master interfaces 222 are connected to the external memory interleave 220. The external memory interfaces 222 are configured to connect to external memory devices (e.g., DDR devices, direct memory access input/output (DMA/IO) devices, etc.) and to exchange messages between the external memory devices and the MSMC 200. The external memory devices may include, for example, the external memories 114.
In certain cases, the MSMC 200 may be configured to interface, via the MSMC bridge 210, with a master peripheral, such as a compute cluster having multiple processing cores. The MSMC 200 may further be configured to maintain a coherent cache for a process executing on the multiple processing cores.
After a fork command is issued 308 on the main thread, the child threads executing on processor cores 304A-304D may each execute a diverge instruction 310 to place the cache memory system into a child threading mode. The MSMC may read the cache line containing data blocks 306A-306D and provide a copy of the cache line to each of the processor cores 304A-304D. Each processor core 304A-304D caches a copy of at least a portion of data blocks 306A-306D into its own local cache 314A-314D, such as an L1 data cache. The data blocks 306A-306D copied into local caches 314A-314D may be marked as shared, rather than owned. Local caches 314A-314D may be controlled by local cache controllers (not shown) on the respective processor cores 304A-304D. Each child thread includes an indication of which data block of the data blocks 306A-306D the corresponding child thread is assigned to. For example, processor core 304A is assigned to work on data block 312A, which may correspond to bytes 0-3 of the data blocks 306A-306D, processor core 304B is assigned to work on data block 312B corresponding to bytes 4-7 of the data blocks 306A-306D, and so forth.
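A small C sketch of the per-thread assignment just described; the structure and field names are illustrative assumptions, with the 16-byte line and four-byte block sizes taken from the example above.

```c
#include <stdint.h>

#define LINE_BYTES  16   /* main cache line holding data blocks 306A-306D (example size) */
#define BLOCK_BYTES 4    /* each child thread owns one four-byte data block              */

/* Context handed to each child thread after the diverge instruction. */
struct child_thread_ctx {
    int     core_id;                 /* which processor core runs this child thread         */
    int     block_index;             /* indicator of the data block assigned to this thread */
    uint8_t local_copy[LINE_BYTES];  /* local-cache copy of the main cache line (shared)    */
};

/* Byte range a child thread is permitted to modify in its local copy. */
static void assigned_range(const struct child_thread_ctx *ctx, int *first, int *last)
{
    *first = ctx->block_index * BLOCK_BYTES;   /* e.g., block 0 -> bytes 0-3 */
    *last  = *first + BLOCK_BYTES - 1;         /* block 1 -> bytes 4-7, etc. */
}
```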
Each processor core 304A-304D may freely modify its local cache 314A-314D within its assigned data block as required by the child thread process. However, the processor cores 304A-304D may not be permitted to modify the local caches 314A-314D outside of their assigned data blocks.
After the MSMC receives the write through of the data block, such as data block 312D, the MSMC snoops the other processor cores 304A-304C to determine that the main cache line 302 is being accessed by those other processor cores 304A-304C. The MSMC then sends a cache message 502 to the other processor cores 304A-304C.
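The following C sketch shows one way a local cache controller could handle that message under the delayed-snoop scheme described in the claims: acknowledge immediately, defer the actual invalidation until the core's own assigned block has been written through, then change the delayed snoop to an immediate snoop. The state names and helper function are illustrative assumptions.

```c
#include <stdbool.h>

enum snoop_state { SNOOP_NONE, SNOOP_DELAYED, SNOOP_IMMEDIATE };

struct local_line {
    bool             valid;
    bool             own_block_written;  /* this core has written through its assigned block */
    enum snoop_state snoop;
};

/* Hypothetical acknowledgment back to the memory controller / cache memory system. */
extern void send_invalidate_ack(int core_id);

/* Invalidate request arrives while the child thread is still working:
 * mark the line as a delayed snoop instead of invalidating it, and acknowledge. */
static void on_invalidate_request(struct local_line *line, int core_id)
{
    if (!line->own_block_written) {
        line->snoop = SNOOP_DELAYED;     /* keep the local copy usable for now       */
    } else {
        line->valid = false;             /* nothing pending: invalidate immediately  */
        line->snoop = SNOOP_IMMEDIATE;
    }
    send_invalidate_ack(core_id);
}

/* After this core writes its own block through to the main cache line,
 * the deferred invalidation is carried out. */
static void on_own_write_through(struct local_line *line)
{
    line->own_block_written = true;
    if (line->snoop == SNOOP_DELAYED) {
        line->valid = false;
        line->snoop = SNOOP_IMMEDIATE;   /* delayed snoop changes to an immediate snoop */
    }
}
```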
In the example discussed above, processor core 304D was the first to finish and did not invalidate any cache lines of its local cache 314D, as no cache lines were previously marked as a delayed snoop.
In certain cases, the MSMC may be configured to adjust operating modes of caches coupled to the MSMC. For example, the MSMC may be coupled to the L1 cache of a specific processor core, as well as an L2 cache, which may be shared among multiple processor cores. The MSMC may also include or be coupled to an amount of L3 cache. The MSMC may transmit one or more cache configuration messages to coupled caches to set an operating mode of each cache, such as whether the cache is set as write back, write allocate, or write through. As discussed above, for delayed snoop, the L1 cache may be configured as a write through cache. The L2 cache may also be configured as a write through cache to simplify the process and give the MSMC a more direct view of the L1 cache. In certain cases, snooping of the L2 cache may be performed according to a normal snooping technique. The L3 cache may then be configured as a write back cache and used to store values as processing on the child threads proceeds. Completed results may be written to a backing store, such as main memory, as processing of the data blocks is completed on the child threads, for example via a non-blocking channel (e.g., memory transactions that are not dependent upon the completion of another transaction, such as snooping, in order to complete).
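A minimal sketch of what such cache configuration messages might look like from software; the enum values and the `msmc_send_cache_config()` helper are assumed names for illustration, not an actual MSMC interface.

```c
enum cache_level { CACHE_L1, CACHE_L2, CACHE_L3 };
enum cache_mode  { MODE_WRITE_BACK, MODE_WRITE_THROUGH, MODE_WRITE_ALLOCATE };

/* Hypothetical helper: send one cache configuration message to a coupled cache. */
extern void msmc_send_cache_config(int core_id, enum cache_level level, enum cache_mode mode);

/* Configuration matching the delayed-snoop arrangement described in the text:
 * L1 and L2 as write through (giving the MSMC a direct view of L1),
 * L3 as write back to hold results while the child threads proceed. */
static void configure_for_delayed_snoop(int core_id)
{
    msmc_send_cache_config(core_id, CACHE_L1, MODE_WRITE_THROUGH);
    msmc_send_cache_config(core_id, CACHE_L2, MODE_WRITE_THROUGH);
    msmc_send_cache_config(core_id, CACHE_L3, MODE_WRITE_BACK);
}
```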
At block 906, the first processor executes a first child process forked from the main process to generate first output data. For example, a processor executes the set of commands on the data blocks assigned to the processor and generates output data. At block 908, the first output data is written to the first data block of the first local copy as a write through, and at block 910 the first output data is written to the first data block of the main cache memory as a part of the write through. For example, the processor writes the output data to the local cache memory in a write through mode, which causes the output data to also be written to corresponding data blocks of the main cache memory.
At block 912, an invalidate request is transmitted to the second local cache memory. As an example, the memory controller, after receiving the write through to the main cache memory, may transmit a snoop message to the second local cache memory to invalidate the cache line stored in the second local cache. At block 914, the second copy of the set of data blocks is marked as delayed. For example, a memory controller of the second processor may mark one or more of the data blocks as delayed snoop without invalidating the data blocks. At block 916, an acknowledgement to the invalidate request is transmitted. For example, the second processor or the memory controller of the second processor may send an acknowledgement message to the memory controller without invalidating the data blocks.
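Tying blocks 906 through 916 together, the following C sketch lays out the sequence for the two-processor case; every function here is a placeholder standing in for the hardware behavior named in the text, not a real API.

```c
/* Placeholder operations standing in for the behaviors named at blocks 906-916. */
extern void execute_child_process(int core_id, void *block, void *out);   /* block 906 */
extern void write_through_local(int core_id, void *block, const void *out); /* block 908 */
extern void write_through_main(void *main_block, const void *out);        /* block 910 */
extern void send_invalidate(int target_core);                             /* block 912 */
extern void mark_delayed(int target_core);                                /* block 914 */
extern void send_ack(int target_core);                                    /* block 916 */

static void delayed_snoop_sequence(int first_core, int second_core,
                                   void *first_block_local, void *first_block_main)
{
    char out[4];                                        /* first output data (example size)   */
    execute_child_process(first_core, first_block_local, out);  /* 906: run the child process */
    write_through_local(first_core, first_block_local, out);    /* 908: write the local copy  */
    write_through_main(first_block_main, out);                  /* 910: write through to main */
    send_invalidate(second_core);                       /* 912: invalidate request to core 2  */
    mark_delayed(second_core);                          /* 914: marked delayed, not invalidated */
    send_ack(second_core);                              /* 916: acknowledge the request       */
}
```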
In this description, the term “couple” or “couples” means either an indirect or direct wired or wireless connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections. The recitation “based on” means “based at least in part on.” Therefore, if X is based on Y, X may be a function of Y and any number of other factors.
Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. While the specific embodiments described above have been shown by way of example, it will be appreciated that many modifications and other embodiments will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing description and the associated drawings. Accordingly, it is understood that various modifications and embodiments are intended to be included within the scope of the appended claims.
Anderson, Timothy David, Chirca, Kai