The specification discloses a heap memory management system in which the heap pile is managed as a linked list. Software streams remove and return blocks of heap memory at the top of the list in a last-in/first-out fashion, while a hardware device returns blocks of heap memory to the end, or bottom, of the linked list. In this way, software streams may remove and return blocks of heap memory simultaneously with hardware devices returning blocks of heap memory.
1. A method comprising:
performing heap memory operations on a first end of a linked list of free heap memory of a heap pile by a software stream executed on a processor, the performing by the software stream using an atomic operation; and concurrently
returning a return block of heap memory to the heap pile at a second end of the linked list of free heap memory by a hardware device coupled to the processor, the returning by the hardware device using a non-atomic operation.
33. A method comprising:
performing, by a software stream executed on a processor, heap memory operations on a first end of a linked list of free heap memory of a heap pile, the performing using an atomic operation; and concurrently
returning a return block of heap memory to the heap pile at a second end of the linked list of free heap memory, the returning by a hardware device coupled to the processor using a non-atomic operation, the hardware device selected from the group consisting of a graphics card, a network interface card, an audio device, and a mass storage device.
19. A method of managing a heap memory in a computer system, the method comprising:
allowing a software thread executed on a processor to add and remove blocks of heap memory from a linked list of free blocks of heap memory in a last-in/first-out (LIFO) fashion at a first end of the linked list and using an atomic operation; and
allowing a hardware device that uses blocks of heap memory to add the blocks of heap memory to the linked list of free blocks of heap memory at a second end of the linked list using a non-atomic operation, the hardware device coupled to the processor by way of a communication bus.
12. A method of managing a heap memory comprising:
maintaining unused blocks of heap memory as a linked list, and wherein the unused blocks of the linked list comprise a first block at a beginning of the linked list, a second block pointed to by the first block, and a third block at an end of the linked list;
removing, by a software stream executed on a processor and using an atomic operation, the first block from the linked list, thus making the second block the beginning of the linked list; and
returning a return block, by a hardware device coupled to the processor that used the return block, to the linked list by placing the return block at the end of the linked list with a non-atomic operation.
29. A computer system comprising:
a microprocessor executing a software stream;
a main memory array, a portion of the main memory array allocated to be a heap memory, and wherein unused portions of the heap memory are part of a heap pile, the heap pile further comprising
a plurality of blocks;
each block having a next block field; and
wherein the heap pile is maintained as a linked list, each block's next block field pointing to a next block in the list;
a first bridge logic device coupling the microprocessor to the main memory array;
a hardware device coupled to the heap memory through the first bridge logic device;
wherein the software stream executed on the microprocessor removes blocks of heap memory from a beginning of the heap pile using an atomic operation; and simultaneously
the hardware device returns blocks of heap memory used by the hardware device to an end of the heap pile using a non-atomic operation.
2. The method as defined in
writing a null to a next block field of the return block of heap memory;
writing a block number of the return block of heap memory to a next block field of a last block of heap memory in the linked list;
changing the contents of a bottom register to point to the return block of heap memory; and thereby
making the return block of heap memory a last entry in the linked list.
3. The method as defined in
4. The method as defined in
determining a block number of a primary block of heap memory resident at the first end of the linked list;
writing the block number of the primary block of heap memory to a next block field of the second block; and
writing atomically a block number of the second block to a top register.
5. The method as defined in
6. The method as defined in
7. The method as defined in
8. The method as defined in
determining a block number of the primary block;
reading a next block field of the primary block of memory; and
removing the primary block if the next block field of the primary block does not indicate a null.
9. The method as defined in
10. The method as defined in
11. The method as defined in
13. The method of managing a heap memory as defined in
writing a null to a next block field of the return block;
reading a bottom register, the bottom register identifying the third block;
writing a block number of the return block to a next block field of the third block; and
writing the block number of the return block to the bottom register.
14. The method of managing a heap memory as defined in
reading a top register, the top register identifying the first block;
reading a next block field of the first block, the next block field of the first block identifying the second block; and
writing a block number of the second block to the top register.
15. The method of managing a heap memory as defined in
16. The method of managing a heap memory as defined in
17. The method of managing a heap memory as defined in
reading a top register, the top register identifying the beginning of the linked list;
writing a block number of the block identified by the top register to a next block field of the fourth block; and
writing a block number of the fourth block to the top register.
18. The method of managing a heap memory as defined in
20. The method of managing a heap memory in a computer system as defined in
determining, by the software thread, a block number of a block of heap memory at the first end of the linked list; and
removing the block of heap memory at the first end of the linked list.
21. The method of managing a heap memory in a computer system as defined in
22. The method of managing a heap memory in a computer system as defined in
reading a next block field of the block of heap memory at the first end of the linked list to identify a block number of a next block in the linked list; and
writing the block number of the next block in the linked list to the beginning register.
23. The method of managing a heap memory in a computer system as defined in
determining, by the software thread, a block number of a block of heap memory at the first end of the linked list;
writing the block number of the block of heap memory at the first end of the linked list to a next block field of a return block of heap memory; and
making the return block of heap memory the first end of the linked list.
24. The method of managing a heap memory in a computer system as defined in
25. The method of managing a heap memory in a computer system as defined in
26. The method of managing a heap memory in a computer system as defined in
determining, by the hardware device, a block number of a block of heap memory at the second end of the linked list;
writing, by the hardware device, a block number of a return block of heap memory to a next block field of the block of heap memory at the second end of the linked list; and
making the return block of heap memory the second end of the linked list.
27. The method of managing a heap memory in a computer system as defined in
28. The method of managing a heap memory in a computer system as defined in
30. The computer system as defined in
31. The computer system as defined in
32. The computer system as defined in
1. Field of the Invention
The preferred embodiments of the present invention are directed to run time management of heap memory. More particularly, the preferred embodiments of the present invention are directed to a concurrent non-blocking heap memory management method that allows software to remove and return blocks of memory from the heap simultaneously with a hardware agent returning blocks to the heap.
2. Background of the Invention
In the art of computer programming, a programmer may not know at the time of coding the amount of memory required to perform a particular operation. Rather than statically allocate memory large enough to encompass any situation that may arise, programmers may dynamically allocate, at run time, the memory necessary to perform the desired operation, thus improving the utilization of computer resources.
Memory allocated for use at run time is typically referred to as heap memory. Heap memory is allocated for use by a particular process, which may include multiple threads. This use typically comprises the one or more software threads claiming or removing blocks of the heap memory, using the blocks of heap memory, and then returning the blocks to the unused heap pile for removal and use by other software threads.
An exemplary use of a removed block of heap memory is a buffer for the exchange of command lists and/or data from software threads to hardware devices. That is, a software thread may need to program or pass large amounts of data to a hardware device, and the size of the program or data block may be too large to pass by way of a direct communication message. In such a situation, the related art software threads claim or remove a portion of heap memory (which may include one or more blocks), place the command lists and/or data into the memory locations, and inform the hardware device of the location in main memory of the command lists and/or data locations. Once the hardware completes the necessary tasks or reads the data, the heap memory block or blocks remain removed from the unused heap pile.
In related art computer systems, the method by which blocks of heap memory are returned after a hardware device completes its tasks is by a software thread, either the invoking thread or another software thread, returning the block to the heap pile. More particularly, in related art computer systems, the hardware device invokes an interrupt to the microprocessor, which preempts executing software streams and loads and executes an interrupt service routine. The interrupt service routine identifies the reason for the interrupt, which is the notification that the hardware task has completed and the heap memory block or blocks are no longer needed, and either returns the heap memory block, or invokes other software streams to return the memory block. Thus, a software stream returns the block to the heap memory for further claiming or removal.
Returning heap memory using interrupts could be inefficient. This inefficiency is seen not only in the use of an interrupt from the hardware device to the microprocessor to pass the message that the heap memory block may be returned, but also in preempting other software streams to service the interrupt and return the block.
Thus, what is needed in the art is a way to return blocks of heap memory that does not require assistance of the central processing unit or software streams.
The problems noted above are solved in large part by a run time heap memory management method and related system that allows a hardware device, or an agent for hardware, to return heap memory blocks to the unused heap pile without intervention from the calling software stream, an interrupt service routine, or the like. The preferred implementation is a heap memory management method that works as a modified stack structure. Software preferably removes heap memory blocks and replaces heap memory blocks to the heap pile in a last-in/first-out (LIFO) fashion. A hardware device preferably returns heap memory blocks to the heap pile at the end or bottom of the stack without intervention of the software that removed the block of heap memory.
More particularly, the heap memory management method of the preferred embodiments comprises managing the blocks of the heap memory in a linked list format, with each memory block in the heap pile identifying the next unused block. Thus, removal of a heap memory block by a software stream preferably involves changing the value of a top pointer register, freeing the heap memory block previously listed in the top pointer register for use. Likewise, returning a block of heap memory to the heap pile by software streams preferably involves changing the address of the top pointer, and writing a portion of the heap memory block to be returned to link or point to the next block of memory in the list. While software streams remove and replace blocks of heap memory to the top of the list in a LIFO fashion, preferably hardware returns heap memory blocks to the bottom or end of the list by writing a null in the next block field of the block to be returned, changing the next block field of the last entry to point to the block to be returned, and updating a bottom pointer register to point to the block to be returned.
In the preferred implementation, however, one block of heap memory, with its next block field indicating a null, remains in the list and cannot be removed even if all the remaining heap memory blocks are removed. A hardware device, or an agent for multiple hardware devices, thus always has the capability of placing blocks of heap memory back in the heap pile.
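A minimal C sketch of this layout may help fix the vocabulary used in the remainder of the description. The structure and field names, the NULL_BLOCK marker, and the eight-byte block size are illustrative assumptions for this sketch, not details taken from the specification; the Top and Bottom registers are modeled here as ordinary memory words.

```c
#include <stdatomic.h>
#include <stdint.h>

#define BLOCK_SIZE  8u                 /* 2^N bytes per block (N = 3); assumed */
#define NULL_BLOCK  ((uint32_t)-1)     /* assumed marker for a "null" next block */

struct heap_block {
    uint32_t next_block;               /* block number of the next free block */
    uint8_t  payload[BLOCK_SIZE - sizeof(uint32_t)];
};

struct heap_pile {
    struct heap_block *base;           /* lowest address of the heap memory */
    _Atomic uint32_t   top;            /* "Top register": first free block,
                                          updated atomically by software streams */
    volatile uint32_t  bottom;         /* "Bottom register": last free block,
                                          updated non-atomically by the hardware agent */
};

/* Per the preferred implementation, one block whose next_block indicates a
 * null always remains in the list, so the hardware agent can always append
 * returned blocks at the bottom. */
```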
For a detailed description of the preferred embodiments of the invention, reference will now be made to the accompanying drawings in which:
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function.
In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ”. Also, the term “couple” or “couples” is intended to mean either an indirect or direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
In this specification, and in the claims, the term “heap memory” refers generally to memory that is allocated to a software stream or streams for use during run time. The term “heap pile” refers to blocks of heap memory that have not been removed for use from the linked list of available blocks. Thus, to return a block of heap memory to the heap pile is to return the block of heap memory to the linked list such that it may be removed again at a later time.
Main memory array 26 preferably couples to the host bridge 22 through a memory bus 28. The host bridge 22 preferably includes a memory control unit (not shown) that controls transactions to the main memory 26 by asserting necessary control signals during memory accesses. The main memory 26 functions as the working memory for the CPU 20, and any additional microprocessors coupled to the host bridge 22. Generally, the main memory array 26 comprises a conventional memory device or array of memory devices in which programs, instructions and data are stored. The main memory array 26 may comprise any suitable type of memory such as dynamic random access memory (DRAM) or any of the various types of DRAM devices such as synchronous DRAM (SDRAM), extended data output DRAM (EDO DRAM), or Rambus™ DRAM (RDRAM).
The computer system 100 also preferably comprises a graphics controller or video driver card 30 that couples to the host bridge 22 by way of bus 32, which bus could be an Advanced Graphics Port (AGP), or other suitable bus. Alternatively, the graphics controller may couple to the primary expansion bus 34 or one of the secondary expansion buses, for example, peripheral component interconnect (PCI) bus 40. Graphics controller 30 further couples to a display device 36, which may comprise any suitable electronic display device upon which any image or text can be represented.
The computer system 100 also preferably comprises a second bridge logic device, input/output (I/O) bridge 38, that bridges the primary expansion bus 34 to various secondary buses including a low pin count (LPC) bus 42 and the PCI bus 40. The bridge device 38 may be any suitable bridge device on the market. Although the I/O bridge 38 is shown in
The primary expansion bus 34 may comprise any suitable expansion bus. If the I/O bridge 38 is an ICH 82801AA made by Intel Corporation, then the primary expansion bus may comprise a Hub-link bus, which is a proprietary bus of Intel Corporation. However, computer system 100 is not limited to any particular type of primary expansion bus, and thus other suitable buses may be used, for example, a PCI bus.
The exemplary heap memory in
After the software routine is called and the appropriate parameters passed, preferably all the entries in the heap memory are cleared (set to zeros) (step 62). Thereafter, the next block field 54 of each block is initialized to form the linked list (step 64), which may initially appear similar to the linked lists shown in
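The initialization just described might be sketched as follows, reusing the heap_pile and heap_block definitions from the earlier sketch; the function name and signature are assumptions made for illustration.

```c
#include <string.h>

/* Clear the heap memory (step 62), then chain each block's next block field
 * to form the initial linked list (step 64): block i points to block i + 1,
 * and the last block's next block field indicates a null. */
void heap_init(struct heap_pile *hp, void *mem, uint32_t nblocks)
{
    hp->base = (struct heap_block *)mem;
    memset(mem, 0, (size_t)nblocks * BLOCK_SIZE);

    for (uint32_t i = 0; i < nblocks; i++)
        hp->base[i].next_block = (i + 1 < nblocks) ? i + 1 : NULL_BLOCK;

    atomic_store(&hp->top, 0u);        /* block 0 begins the free list */
    hp->bottom = nblocks - 1;          /* the last block ends the list */
}
```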
After the heap has been allocated and initialized into the linked list structure of the preferred embodiments, software streams are free to remove blocks from the list for use. In broad terms, removal of the blocks from the heap memory by software streams is preferably a last-in/first-out (LIFO) scheme, also known as a stack. Blocks of heap memory are preferably removed from the top or beginning of the list, and they are preferably returned by software streams to the top of the list.
Because access to the heap memory is preferably non-blocking, a software stream attempting to remove the block preferably does not block access by other software streams to parameters such as the Top register. For this reason, it is possible that the value of the Top register may change between the step of reading the block number of the first block on the list (step 74) and writing a new block number to the Top register (step 80). In such a circumstance, the process of removal preferably starts anew at step 72. One having ordinary skill in the art understands the limitations of concurrent non-blocking algorithms and, now understanding the steps involved in the removal process, could account for the contingencies in a software program implementing the steps of
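The retry behavior described above can be expressed as a compare-and-swap loop, again building on the assumed heap_pile structure. This is a simplified sketch of the idea rather than the claimed implementation; among other things, it ignores the ABA hazard that production non-blocking stacks ordinarily guard against.

```c
/* Remove the first block from the top of the free list.  If another software
 * stream changes the Top register between the read and the atomic update,
 * start over, as described in the text. */
uint32_t heap_remove_block(struct heap_pile *hp)
{
    uint32_t first, next;
    do {
        first = atomic_load(&hp->top);        /* block number at the top      */
        next  = hp->base[first].next_block;   /* block that would become top  */
        if (next == NULL_BLOCK)
            return NULL_BLOCK;                /* only the last block remains:
                                                 treat the heap pile as empty */
    } while (!atomic_compare_exchange_weak(&hp->top, &first, next));
    return first;                             /* block number of removed block */
}
```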
Because in the preferred embodiments the blocks are of fixed 2N size, calculating the base address of the removed block is a shift operation of the base memory address of the heap. Consider an exemplary case where the block size is eight bytes, implying a difference in starting addresses between contiguous blocks of three bits. In this exemplary case, determining the address of any removed block involves shifting the block number by three bits, and adding the shifted result to the base address. Preferably, the base memory address has the lowest address value, with the heap memory addresses growing larger toward the end of the heap. Thus, if block 0 is removed, calculating the address of the first memory location of block 0 simply involves shifting 0 (which is still zero) and adding the shifted result to the base memory address. This is consistent with block 0 being the first block in the heap memory. If, however, the removed block of heap memory is block 2 (10 binary), calculating the starting address involves shifting the block number three bits and adding the shifted result to the base memory address. Thus, the shift operation is left shift; however, it is equally valid to have the base address of the heap memory as the largest address, and in this case shifts to determine addresses of the block need to be a right shift (or division operation). The process exemplified in
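The shift arithmetic can be made concrete with a short helper, assuming the lowest-address base layout and the eight-byte blocks of the example; for block 2 it returns base + 16, as described above.

```c
/* Starting address of a removed block: shift the block number left by
 * log2(block size) bits and add the result to the heap's base address. */
void *heap_block_address(const struct heap_pile *hp, uint32_t block_number)
{
    const unsigned shift = 3;                 /* log2(8) for eight-byte blocks */
    return (uint8_t *)hp->base + ((uintptr_t)block_number << shift);
}
```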
Consider now the addition or return of a block of heap memory to the unused heap pile by a software stream where, prior to the return, the linked list is as exemplified in
Summarizing before continuing, software streams remove memory blocks from the heap by taking the first block in the linked list and atomically updating the Top register 56. Likewise, software streams return blocks of heap memory by updating the next block field of the block to be returned to point to the first block of the free list, then atomically writing the Top register 56 to point to the returned block. Thus, as for software removal and return of a block of heap memory, the linked list works in a LIFO fashion. In the preferred embodiments however, a hardware device, or an agent for multiple hardware devices, has the ability to return blocks of heap memory to the heap pile.
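The software return path summarized above might look like the following under the same assumptions; as with removal, this is a retry-on-contention sketch rather than the exact claimed steps.

```c
/* Return a block to the top of the free list: point its next block field at
 * the current first block, then atomically swing the Top register to the
 * returned block, retrying if the Top register changed in the meantime. */
void heap_return_block_software(struct heap_pile *hp, uint32_t block_number)
{
    uint32_t first;
    do {
        first = atomic_load(&hp->top);
        hp->base[block_number].next_block = first;
    } while (!atomic_compare_exchange_weak(&hp->top, &first, block_number));
}
```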
In the preferred embodiment, only a single agent is allowed to return blocks of heap memory in the fashion described. The single agent could be a hardware device, or could be an agent acting on behalf of multiple hardware devices. A non-limiting list of hardware devices that could implement the heap memory management method comprises graphics cards, network interface cards, audio devices, and mass storage devices such as hard drives and compact disc drives.
By allowing software threads or streams to operate on a first end of the linked list, and hardware devices to operate on a second end of the linked list, the return and removal processes may take place simultaneously. By keeping at least one block of heap memory in the linked list even when the list is considered empty, software streams and hardware devices need not access the same registers, thereby avoiding contention. That is, software streams need only access the Top register 56 for both removal and return of blocks, and hardware need only access the Bottom register 58.
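The hardware-agent return path at the other end of the list, sketched under the same assumptions: because only a single agent returns blocks in this fashion, plain non-atomic writes to the Bottom register and to the next block fields suffice.

```c
/* Append a returned block at the bottom of the free list: the returned
 * block's next block field is set to null, the old last block is pointed at
 * the returned block, and the Bottom register is updated to identify it. */
void heap_return_block_hardware(struct heap_pile *hp, uint32_t block_number)
{
    uint32_t last = hp->bottom;                       /* current last block in list   */
    hp->base[block_number].next_block = NULL_BLOCK;   /* returned block ends the list */
    hp->base[last].next_block = block_number;         /* old last now points to it    */
    hp->bottom = block_number;                        /* Bottom identifies the block  */
}
```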
The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.