A method for managing a memory stack provides mapping a part of the memory stack to a span of fast memory and a part of the memory stack to a span of slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory.

1. A method for managing a memory stack, the method comprising:
mapping a first part of the memory stack to a span of fast memory and a second different part of the memory stack to a span of slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory; and
changing a value of a control bit of a virtual page on the fast memory to an unavailable status upon mapping the first part of the memory stack to the virtual page.
2. The method of
3. The method of
4. The method of
5. The method of
providing a first virtual address space and a second virtual address space;
dividing each of the span of the fast memory, the span of the slow memory, the first virtual address space, and the second virtual address space into a pre-determined number of equal-sized pages;
mapping a first page of the first virtual address space to a first page of the fast memory; and
mapping the bottom part of the memory stack to the first page of the first virtual address space.
6. The method of
7. The method of
if a size of the memory stack is larger than a size of a page of the fast memory:
mapping a second page of the first virtual address space above the first page to the slow memory, and
mapping an incremental part of the memory stack to the second page of the first virtual address space.
8. The method of
providing a memory management unit, the memory management unit having a plurality of bits, each of the plurality of bits identifying mapping of a page of the first virtual address space.
9. The method of
10. A method of optimizing tightly integrated memory (TIM) usage, the method comprising:
mapping a first part of a memory stack to a span of TIM address space;
mapping a second part of the memory stack to a span of non-TIM memory address space; and
changing a value of a control bit of a virtual page on the TIM address space to an unavailable status upon mapping the first part of the memory stack to the virtual page.
11. The method of
12. The method of
dividing the span of TIM address space into a first number of pages;
providing a first number of virtual address spaces;
dividing each of the virtual address spaces into a second number of pages, wherein a size of each of the second number of pages is equal to a size of the first number of pages;
mapping a bottom page of one of the first number of virtual address spaces to a TIM page; and
mapping the initial part of a memory stack to the bottom page of the one of the first number of virtual address spaces.
13. The method of
mapping a page above the bottom page of the one of the first number of virtual address spaces to a slow memory page; and
mapping a part of the memory stack above the initial part of the memory stack to the page above the bottom page.
14. The method of
mapping each page of the virtual address spaces other than a bottom page to a slow memory.
15. The method of
providing a control unit comprising a number of control bits equal to the first number of pages, wherein each control bit represents the availability status of a corresponding TIM page.
16. The method of
determining when a stack mapped to a TIM page corresponding to the one of the control bits is not in use; and
changing the value of the one of the control bits to an available status.
17. One or more non-transitory computer-readable storage media encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising:
mapping a first part of a memory stack to span a fast memory and a second different part of the memory stack to span a slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory; and
changing a value of a control bit of a virtual page on the fast memory to an unavailable status upon mapping the first part of the memory stack to the virtual page.
18. The one or more non-transitory computer-readable storage media of
19. The one or more non-transitory computer-readable storage media of
The present application is a continuation of U.S. application Ser. No. 12/963,933 filed on Dec. 9, 2010, which was issued as U.S. Pat. No. 8,996,842 on Mar. 31, 2015, and titled “Memory Stacks Management,” which is hereby incorporated by reference in its entirety as though fully set forth herein.
Most computing systems employ the concept of a "stack" to hold memory variables associated with one or more active subroutines or process threads (collectively referred to herein as "subroutines"). When a new subroutine is called, a stack related thereto grows in order to provide space for the temporary variables of that subroutine. When execution control is transferred from a first subroutine to a second subroutine, the registers used by the first subroutine are pushed onto the stack as well. Subsequently, after the second subroutine is done executing, the register contents may be restored. As subroutine calls nest within one another, the stack continues to grow, such that the temporary variables associated with the currently active subroutine are at the top of the stack. A system designer needs to ensure that enough memory space is available for a stack to grow to its worst-case size, which is associated with the deepest level of subroutine nesting that may occur in the system. On the other hand, provisioning every stack for its worst-case size may result in inefficient utilization of stack allocation space and may slow the performance of a computing system in cases where the computing system runs out of available stack allocation space.
Implementations described and claimed herein provide for the managing of memory stacks across different physical memories. A method for managing a memory stack provides mapping a part of the memory stack to a span of fast memory and a part of the memory stack to a span of slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory. In an implementation, the fast memory is tightly integrated with a processor. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. These and various other features and advantages will be apparent from a reading of the following detailed description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
A further understanding of the various implementations described herein may be realized by reference to the figures, which are described in the remaining portion of the specification. In the figures, like reference numerals are used throughout several figures to refer to similar components.
Both the motherboard 162 and the HDD 164 are powered by a power supply 168 that converts incoming AC power to DC power, steps down or steps up the incoming voltage, and/or limits the current available to the motherboard 162 and the HDD 164. In one implementation, power for the HDD 164 comes from the power supply 168 through the motherboard 162.
The HDD 164 is equipped with a disc pack 170, which is mounted on a spindle motor (not shown). The disc pack 170 includes one or more individual discs, which rotate in a direction indicated by arrow 172 about a central axis 174. Each disc has an associated disc read/write head slider 176 for communication with the disc surface. The slider 176 is attached to one end of an actuator arm 178 that rotates about a pivot point 179 to position the slider 176 over a desired data track on a disc within the disc pack 170.
The HDD 164 is also equipped with a disc controller 180 that controls operation of the HDD 164. In one implementation, the disc controller 180 resides on a printed circuit board (PCB). The disc controller 180 may include a system-on-a-chip (SOC) 182 that combines some, many, or all functions of the PCB 180 on a single integrated circuit. Alternatively, the functions of the PCB 180 are spread out over a number of integrated circuits within one package (i.e., a system-in-package (SIP)). In an alternate implementation, the disc controller 180 includes controller firmware.
The computing system 160 also has internal memory such as random access memory (RAM) 190 and read only memory (ROM) 192. Furthermore, the motherboard 162 has various registers or other forms of memory. Such memory residing on the motherboard 162 is accessible by one or more processors on the motherboard 162 at a higher speed compared to the speed at which such processors can generally access the RAM 190, the ROM 192, etc. Therefore, such memory residing on the motherboard 162 is referred to as tightly integrated memory (TIM), also sometimes referred to as tightly coupled memory (TCM) or high speed memory. However, in alternate implementations the term TIM may also be used to refer to any other memory module that is accessible by one or more processors at high speed.
One or more of the memory modules, such as the RAM 190, the ROM 192, various TIM resident on the motherboard, and the memory provided by the HDD 164, are used to store one or more computer programs, such as the operating system, etc. Such computer programs use a number of functions, routines, subroutines, and other program structures to store instructions, wherein the instructions are processed using one or more processors of the computer. A subroutine may be called to process a number of instructions in an iterative manner, and a computer program calling a subroutine generally provides a number of parameters to the called subroutine. At any point during execution of a computer program, a number of subroutines may be active and in various stages of processing. Any time a subroutine calls another subroutine, or passes control to another subroutine, the calling subroutine stores the present values of various temporary parameters in a memory until control is passed back from the called subroutine to the calling subroutine.
In one implementation, the SOC 182 uses a stack to hold temporary variables associated with an active subroutine. A stack is a last-in-first-out (LIFO) storage structure where new storage is allocated and de-allocated at one end, called the "top" of the stack. In one implementation, a number of subroutines related to a process thread share a stack. In one implementation, an application that is written on one thread uses one stack; however, an application that is multi-threaded may use multiple stacks. Each time a new subroutine is called, the stack grows to provide enough space for the temporary variables of the new subroutine. Further, because subroutine calls "nest" within one another, the stack continues to grow with more subroutine calls.
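As a minimal sketch of the LIFO behavior described above (the types, names, and frame layout here are illustrative assumptions, not taken from the specification), a stack can be modeled as a region where frames are allocated and released only at the top:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative model of a stack region managed as a LIFO structure:
 * frames for nested subroutine calls are pushed at the top and popped
 * in reverse order as each call returns. */
typedef struct {
    uint8_t *base; /* start of the stack region          */
    size_t   size; /* total bytes available to the stack */
    size_t   top;  /* offset of the next free byte       */
} stack_t;

/* Reserve space for a new frame (temporary variables and saved
 * registers); returns NULL if the region is exhausted. */
void *stack_push(stack_t *s, size_t frame_bytes)
{
    if (s->top + frame_bytes > s->size)
        return NULL;               /* stack would overflow         */
    void *frame = s->base + s->top;
    s->top += frame_bytes;         /* the stack grows on each call */
    return frame;
}

/* Release the most recent frame when its subroutine returns. */
void stack_pop(stack_t *s, size_t frame_bytes)
{
    s->top -= frame_bytes;
}
```

Nested calls simply push frames on top of one another, which is why the deepest nesting level determines the worst-case stack size discussed below.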
In an alternate arrangement, stacks are designed in a memory space so as to grow downwards in a given address space. In such an example, the initial part of the stack is at the top of the address space. An example is a reverse stack 200c illustrated in
Computing systems generally allow for the stack to grow to the "worst-case" size of the stack. The "worst-case" size of the stack is associated with the deepest level of subroutine nesting that occurs in the system. Providing sufficient tightly integrated memory (TIM) or high speed memory, such as data tightly-coupled memory (DTCM), to account for the "worst-case" size of the stack can be cost prohibitive. Further, multi-tasking computing systems have a different stack for each task (or thread) that is active. Providing sufficient high speed memory, such as DTCM (used herein to refer to any high speed memory or tightly integrated memory), to account for the "worst-case" size of each stack that is active can result in inefficient utilization of the DTCM.
Generally, stacks operate at or near an empty condition. However, the nesting level increases significantly in error paths, so stacks get substantially filled in error paths. This is especially true in controller firmware, where expensive DTCM is used to host stacks: when the firmware enters an error path, expensive DTCM is used up storing the parameters that result from the deep nesting. Performance, however, is not crucial in error paths. Thus, providing sufficient high speed memory, such as DTCM, to account for the "worst-case" size of error paths is unnecessary.
Each bit of the MMU 310 is assigned a value of zero (0) or one (1) depending upon whether a corresponding page in the DTCM is to be aliased to the virtual address space region A 306 or to the region B 308. The process of determining the values of each bit is described in further detail below. Note that because there are two virtual address space regions A 306 and B 308, if a separate MMU were to be used for the virtual address space B, the values of the bits in such an MMU would be the complement of their values in the MMU 310. For example, if bit 7 had a value of 1 in the MMU 310, the corresponding bit 7 in the MMU for the virtual address space B would have a value of 0, and vice versa.
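A one-line sketch makes the complement relationship concrete (the 8-bit width and the bit-7 example follow the text; the variable names are hypothetical):

```c
#include <stdint.h>
#include <assert.h>

int main(void)
{
    /* Hypothetical 8-bit MMU word: bit n == 1 aliases DTCM page n to
     * virtual address space region A; bit n == 0 aliases it to B. */
    uint8_t mmu_a = 0x80;         /* example: bit 7 set, page 7 -> A */

    /* A separate MMU for region B would hold the bitwise complement:
     * a page aliased to A is, by definition, not aliased to B. */
    uint8_t mmu_b = (uint8_t)~mmu_a;

    assert((mmu_a & (1u << 7)) != 0); /* page 7 aliased to region A */
    assert((mmu_b & (1u << 7)) == 0); /* ... and therefore not to B */
    return 0;
}
```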
Subsequently, at block 406 the lowest unused virtual page of the virtual address space region A 306 is made addressable to the DTCM0. Thus, in the example disclosed herein, VA0 is made addressable to DTCM0. The mapping of VA0 to DTCM0 is shown in
Subsequently, a block 410 determines if there is additional space required for any existing stack to grow. In the present case, with Stack 1 being open, block 410 determines if Stack 1, based in DTCM0, requires more memory. However, in an alternate situation, block 410 reviews more than one existing stack to see if there is any growth in any of such stacks. Such an additional space requirement is due to calls to new subroutines, functions, etc. Note that the size of DTCM0 may be sufficient to save variables/parameters for function/subroutine calls up to a certain level of nesting. However, if the stack grows larger, it may need more space than just that provided by the DTCM0 page. As discussed above, one example condition where this happens is when a program enters an error loop, in which case it makes multiple calls to the same function/subroutine, causing the stack to grow.
In such a case, a block 412 maps the virtual page above the page which is mapped to the DTCM0, in this case VA1, to a page in the DDBA address space 304. As a result, the values and parameters related to the later called functions/subroutines are mapped to a cheaper/slower memory. Given that, empirically, stacks do not grow beyond a certain size except when a program has entered an error loop, allowing stacks to grow in slower/cheaper memory such as the DDBA memory 304 allows more stacks to be mapped to the expensive DTCM memory 302.
In the current case, for example, suppose that Stack 1 grows to require between two and three pages of memory. In this case, as shown in
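Putting blocks 406, 410, and 412 together for Stack 1, a minimal sketch of the resulting address translation might look as follows; the page size, names, and layout are illustrative assumptions. The stack's bottom page resolves to DTCM, and any pages it grows into resolve to DDBA:

```c
#include <stdint.h>

#define PAGE_SIZE 0x1000u  /* assumed page size, for illustration only */

/* Hypothetical per-stack mapping: the bottom virtual page is backed
 * by a fast DTCM page (block 406); every page the stack grows into
 * afterwards is backed by slower DDBA (block 412). */
typedef struct {
    uintptr_t dtcm_page_base;   /* DTCM page backing virtual page 0 */
    uintptr_t ddba_region_base; /* DDBA pages backing pages 1..N    */
} stack_map_t;

uintptr_t stack_translate(const stack_map_t *m, uintptr_t stack_offset)
{
    uintptr_t page   = stack_offset / PAGE_SIZE;
    uintptr_t offset = stack_offset % PAGE_SIZE;

    if (page == 0)                         /* bottom of the stack */
        return m->dtcm_page_base + offset; /* fast DTCM access    */

    /* Growth beyond the first page (e.g. deep nesting in an error
     * path) lands in cheaper, slower DDBA pages. */
    return m->ddba_region_base + (page - 1) * PAGE_SIZE + offset;
}
```

In the Stack 1 example above, offsets in the second and third pages would resolve to DDBA-backed pages, matching the VA0/VA1/VA2 layout described in the text.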
If during a next iteration, block 402 determines that a second stack, Stack 2, needs to be opened, block 404 will select the lowest unassigned DTCM space. In the present case, such a page is DTCM1. Note that DTCM1 is already mapped to the virtual address space region B 308 at VB1. Therefore, at block 406, Stack 2 will be assigned to DTCM1 and mapped to VB1. In this case, because the MMU bit related to the page DTCM1 is already set at 0, block 408 does not need to change the MMU bit related to DTCM1.
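In this two-region scheme, the MMU bit for a DTCM page effectively selects which virtual address space region the page is reached through. A sketch follows; the convention that a set bit means region A is inferred from the DTCM0/DTCM1 example above and is an assumption, as are the names:

```c
#include <stdint.h>

typedef enum { REGION_A, REGION_B } region_t;

/* Hypothetical region selection: the MMU bit for a DTCM page decides
 * which virtual address space region aliases that page. Following the
 * example in the text (DTCM1 with its bit at 0 is reached through
 * region B at VB1), a set bit is read here as "region A". */
region_t region_for_dtcm_page(uint8_t mmu, unsigned page)
{
    return (mmu & (1u << page)) ? REGION_A : REGION_B;
}
```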
Subsequently, block 410 monitors for growth of both Stack 1 and Stack 2. If, during any iteration, Stack 2 grows, it is allowed to grow further in the virtual address space region B 308. At the same time, if there is growth in Stack 1, it is allowed to continue growing in the virtual address space region A 306. In the present example, suppose that over a number of iterations, Stack 2 grows to occupy more than two but less than three pages' worth of memory.
In this case, Stack 2 takes up three consecutive pages in the virtual address space region B 308, as shown by 512 in
The system disclosed in
Subsequently, Stack 3 is allowed to grow in the virtual address space region A 306, with any subsequent pages assigned to Stack 3 being mapped to DDBA 304. Note that once Stack 3 is initiated at DTCM3 and assigned to initiate at VA3, it would not be possible to allow further contiguous growth in Stack 1. To avoid this problem, in one implementation, the number of stacks supported is limited to two. Alternatively, Stack 1 is assigned non-contiguous pages of the virtual address space region A 306. Thus, if Stack 3 is initiated at DTCM3 and assigned to VA3 and there is a need for Stack 1 to grow, Stack 1 is allowed to grow using the next available page of the virtual address space region A 306. In this case, VA4 may be used for further growth of Stack 1 or Stack 3, whichever needs additional pages. Furthermore, because these are additional pages towards the top of the stack, they would be, when possible, mapped to DDBA 304. However, in certain cases it is possible that the growth of Stack 2 has already caused the next available pages in the virtual address space region A 306 to be mapped to the DTCM 302.
While the above implementation provides two virtual address space regions, in an alternate implementation, more than two virtual address space regions are provided.
Note that the above implementation provides eight virtual address space regions, each virtual address space region corresponding to one pageable space of the DTCM address space 602. In an alternate implementation, any other number of virtual address space regions may also be provided. Specifically, the implementation of the stack management system 600 illustrated in
The stack management system 600 also includes eight virtual address space regions, namely virtual address space region A 606 to virtual address space region H 610 (not all virtual address space regions shown here). Because there are eight DTCM pages and eight virtual address space regions in this implementation, each DTCM page can be mapped to one of the eight virtual address space regions. Specifically, each of the DTCM pages that supports a stack is mapped to a bottom page of one of the eight virtual address space regions 606-610. For example, if at any given time, the first three DTCM pages DTCM0 to DTCM2 are used to support stacks, these three pages are mapped to VA0, VB0, and VC0, respectively.
The remaining pages of the virtual address space regions 606-610 are mapped to specific regions of the DDBA address space 604. For example, VA1 to VA7 are mapped to DDBA1 to DDBA7, whereas VB1 to VB7 are mapped to DDBA65 to DDBA71 (not shown herein).
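The fixed layout described above suggests a simple address computation. In the sketch below, the 64-page DDBA stride per region is inferred from the example numbers (region A's pages 1-7 map to DDBA1-DDBA7 while region B's map to DDBA65-DDBA71); the actual stride and the names are assumptions, not stated in the text:

```c
#include <stdint.h>

#define DDBA_PAGES_PER_REGION 64u /* inferred from VB1 -> DDBA65 */

/* Sketch of the static mapping in the eight-region system: for a
 * virtual address space region r (0 = A, ..., 7 = H), page 0 maps to
 * DTCM page r, and pages 1..7 map into a DDBA slice reserved for
 * that region. Returns the backing page number and reports whether
 * it is a DTCM or a DDBA page via *is_dtcm. */
uint32_t backing_page(uint32_t region, uint32_t vpage, int *is_dtcm)
{
    if (vpage == 0) {      /* bottom page of the region      */
        *is_dtcm = 1;
        return region;     /* DTCM page number equals region */
    }
    *is_dtcm = 0;
    return region * DDBA_PAGES_PER_REGION + vpage; /* DDBA page */
}
```

With this layout, region A's page 1 resolves to DDBA1 and region B's page 7 resolves to DDBA71, matching the example.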
In one example implementation, the allocation of a DTCM page is controlled by an MMU 620. Each bit of the MMU 620 designates whether the corresponding page of the DTCM address space 602 is used for supporting a stack or not. Thus, for example, before the allocation of stacks is initiated, each of the MMU control bits will be assigned a value of 0. In the example discussed above, if the first three DTCM pages DTCM0 to DTCM2 are used to support stacks, the MMU control bits for these three pages will be changed to 1, as shown by 622 in
The stack management system 600 allows each addressable page of the DTCM address space 602 to be used to initiate a new stack, and then allows the stack to grow in one of the eight virtual address space regions 606 to 610. A method of allocating stacks to one of the various virtual address spaces is disclosed in further detail by a flowchart 700 illustrated in
Now referring to
Specifically, a block 702 determines if a new stack needs to be allocated. If a new stack needs to be allocated, a block 704 selects the lowest unused DTCM page to initiate the requested stack. The block 704 also changes the value of an MMU bit related to that particular DTCM page to 1 to indicate that the particular DTCM page is being used to support a stack. Subsequently, a block 706 assigns the new stack to one of the unused virtual address space regions A 606 to H 610. In an implementation of the stack management system 600 wherein the number of addressable pages in the DTCM 602 is the same as the number of virtual address space regions (in this implementation, each is equal to eight), an MMU control unit is not provided for each of the virtual address space regions. Specifically, in such an implementation, each of the virtual address space regions 606-610 will have its lowest addressable page, namely VA0, VB0, . . . VH0, mapped to the DTCM address space 602, whereas each of the higher addressable pages, VA1-VA7, . . . VH1-VH7, is mapped to the DDBA address space 604.
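Blocks 702-704 amount to a find-lowest-clear-bit scan over the MMU word. A minimal sketch, assuming an 8-bit MMU word with one bit per DTCM page (the function name and return convention are hypothetical):

```c
#include <stdint.h>

/* Hypothetical allocation step (blocks 702-704): scan for the lowest
 * clear bit in the MMU word, claim the corresponding DTCM page by
 * setting that bit to 1, and return the page index. Returns -1 when
 * every DTCM page already supports a stack. */
int alloc_dtcm_page(uint8_t *mmu)
{
    for (int page = 0; page < 8; page++) {
        uint8_t mask = (uint8_t)(1u << page);
        if ((*mmu & mask) == 0) { /* page not yet supporting a stack */
            *mmu |= mask;         /* mark the DTCM page as in use    */
            return page;
        }
    }
    return -1;                    /* no free DTCM page               */
}
```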
After assigning one of the virtual address space regions to a stack, a block 708 monitors growth in that stack. Upon detecting growth in a given stack, a block 710 maps subsequent pages of the virtual address space that is mapped to the given stack to the DDBA address space 604. Note that in the present case, because each DTCM page of the DTCM address space 602 is mapped, respectively and as necessary, to the bottom page of one of the virtual address space regions A 606-H 610, there is no need for using an MMU bit in the manner described above in
However, in the system illustrated by
A block 712 determines if any DTCM page that was earlier assigned a stack has become available. If so, a block 714 changes the MMU bit related to that DTCM page to 0. However, if it is determined that no new DTCM pages have become available, no change to any MMU bit is made. Even though in the implementation described herein the program 700 provides the appropriate monitoring of DTCM pages being used for stack allocation, in an alternate implementation, a microprocessor or other unit that is responsible for allocating stacks monitors and changes MMU bits as necessary.
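The counterpart to the allocation sketch above covers blocks 712-714: when the stack hosted on a DTCM page is no longer in use, its MMU bit is cleared so the page becomes available for a future stack (again with hypothetical names):

```c
#include <stdint.h>

/* Hypothetical release step (blocks 712-714): clear the MMU bit for
 * a DTCM page whose stack is no longer in use, making the page
 * available for the next stack allocation. */
void free_dtcm_page(uint8_t *mmu, int page)
{
    *mmu &= (uint8_t)~(1u << page);
}
```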
The implementations described herein may be implemented as logical steps in one or more computer systems. The logical operations of the various implementations described herein are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the method and system described herein. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, blocks, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
In the interest of clarity, not all of the routine functions of the implementations described herein are shown and described. It will be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions are made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that those specific goals will vary from one implementation to another and from one developer to another.
The above specification, examples, and data provide a complete description of the structure and use of example implementations. Because many alternate implementations can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended. Furthermore, structural features of the different implementations may be combined in yet another implementation without departing from the recited claims.
Gaertner, Mark, Heath, Mark Alan