A method for executing a processing routine that utilizes an external memory is provided. The processing routine requires more than one external memory access. The method comprises the step of distributing the external memory accesses based on a predetermined number of consecutive external memory accesses.
6. A method for executing a processing routine utilizing an internal memory and an external memory, comprising the steps of:
determining an available energy supply;
accessing the external memory only if the available energy supply exceeds a threshold value; and
accessing the internal memory if the available energy supply does not exceed the threshold value.
1. A method for executing a processing routine utilizing an internal memory and an external memory and requiring more than one external memory access, comprising the step of:
distributing the external memory accesses based on a predetermined number of consecutive external memory accesses, wherein distributing the external memory accesses comprises interrupting access to the external memory after a predetermined number of external memory accesses with a predetermined number of internal memory accesses.
11. A processing system (100), comprising:
an internal memory (102) and an external memory (103); and
a processor (101) adapted to execute a processing routine utilizing the internal memory (102) and the external memory (103), wherein the processor (101) is configured to:
distribute external memory accesses based on a predetermined number of consecutive external memory accesses; and
distribute external memory accesses by interrupting access to the external memory (103) after a predetermined number of external memory accesses with a predetermined number of internal memory accesses.
2. The method of
3. The method of
4. The method of
5. The method of
7. The method of
8. The method of
9. The method of
10. The method of
12. The processing system (100) of
13. The processing system (100) of
14. The processing system (100) of
15. The processing system (100) of
The present invention relates to a processing system, and more particularly, to a method for controlling access to an external memory while executing a processing routine.
Processing systems are generally known in the art and are utilized in a variety of devices. Some processing systems have an internal memory which supplies all of the processing system's data and/or instructions. However, internal memories have limited capacities and capabilities and thus, in some situations, the processing system is connected to an external memory, which augments the storage space of the internal memory. Additionally, external memories allow multiple processing systems to access the memory and therefore provide a greater flexibility.
A drawback to the use of an external memory is that accessing an external memory generally requires a greater amount of power than required to access an internal memory and in some situations can take a longer amount of time. Thus, in situations where the processing system is under power and/or time constraints, there is an incentive to minimize the external memory accesses. However, it may not be possible to include all of the data and/or instructions in the internal memory and thus, an external memory is often required.
In prior art processing systems, the external memory is accessed in bursts. In other words, a large block of external memory is accessed substantially consecutively. This results in a spike in the energy demands of the processing system. In environments where the processing system is not under an energy constraint, this type of memory access is adequate. However, where there are power constraints, for example, if the processing system is powered via a two-wire loop, a spike in required power can adversely affect components that share a power source with the processing system as the power delivered to the processing system is limited, in part, by the signaling sent on the two-wire loop. The example of the two-wire loop is merely an example and should not limit the scope of the invention as there are numerous other situations where the power delivered to a processing system is limited. Although prior art processing systems have attempted to reduce the overall power consumption, this is not always an adequate solution because even if the overall power consumption is reduced, temporary spikes in power consumption can still adversely affect the system.
Therefore, the present invention provides a method for executing a processing routine while controlling access to an external memory.
Aspects
According to an aspect of the invention, a method for executing a processing routine utilizing an external memory and requiring more than one external memory access, comprises the step of:
distributing the external memory accesses based on a predetermined number of consecutive external memory accesses.
Preferably, the method further comprises the step of distributing the external memory accesses such that the number of consecutive external memory accesses is minimized.
Preferably, the method further comprises the step of distributing the external memory accesses substantially evenly.
Preferably, the step of distributing the external memory accesses comprises temporarily interrupting access to the external memory for a predetermined amount of time after a predetermined number of consecutive external memory accesses.
Preferably, the processing routine further utilizes an internal memory and wherein the step of distributing external memory accesses comprises interrupting access to the external memory after a predetermined number of external memory accesses with a predetermined number of internal memory accesses.
Preferably, the predetermined number of consecutive external memory accesses is based on an available energy supply.
According to another aspect of the invention, a method for executing a processing routine utilizing an external memory, comprises the steps of:
determining an available energy supply; and
accessing the external memory based on the available energy supply.
Preferably, the method further comprises the step of accessing the external memory only if the available energy supply exceeds a threshold value.
Preferably, the processing routine further utilizes an internal memory and the method further comprises the step of accessing the internal memory if the available energy supply does not exceed a threshold value.
Preferably, the method further comprises the step of distributing the external memory accesses if the available energy supply does not exceed a threshold value.
Preferably, the method further comprises the step of distributing the external memory accesses based on a predetermined number of consecutive external memory accesses.
Preferably, the predetermined number of consecutive external memory accesses is determined at least in part by the available energy supply level.
Preferably, the processing routine further utilizes an internal memory and the method further comprises the step of distributing access to the external memory by interrupting access to the external memory after a predetermined number of consecutive external memory accesses with a predetermined number of internal memory accesses.
According to another aspect of the invention, a processing system comprises:
an external memory; and
a processor adapted to execute a processing routine utilizing the external memory, wherein the processor is configured to distribute external memory accesses based on a predetermined number of consecutive external memory accesses.
Preferably, the processor is further configured to distribute the external memory accesses such that the number of consecutive external memory accesses is minimized.
Preferably, the processor is further configured to substantially evenly distribute the external memory accesses.
Preferably, the processor is further configured to temporarily interrupt access to the external memory for a predetermined amount of time after a predetermined number of consecutive external memory accesses.
Preferably, the processing system further comprises an internal memory, wherein the processor is further configured to distribute external memory accesses by interrupting access to the external memory after a predetermined number of external memory accesses with a predetermined number of internal memory accesses.
Preferably, the predetermined number of consecutive external memory accesses is based on an available energy supply.
The memories 102, 103 can store data, software routines, constant values, and variable values. It should be appreciated that each time the processor 101 reads/writes information from/to the memories 102, 103, the processing system 100 requires an additional amount of energy. While the external memory 103 is required in many embodiments, accessing the external memory 103 requires more energy than accessing the internal memory 102, as illustrated by the energy trace described below.
The trace 231 shows the memory accesses. Each spike in the trace 231 represents an external memory access. It should be appreciated that relatively few consecutive external memory accesses, such as those seen at 232, for example, do not substantially affect the available energy supply. This can be seen by observing the available energy supply substantially directly above the spike at 232. However, as seen in the access at 233, as the number of consecutive external memory accesses increases, the level of the energy supply available to other applications decreases. The limitation on available energy is most significant where access to the external memory 103 is implemented with many consecutive external memory accesses, such as at 234, where the energy required to access the external memory 103 consumes almost the entire available energy reserve. During such access bursts, the remaining components of the processing system 100 are left with almost no energy.
According to an embodiment of the invention, the available energy 230 is determined and access to the external memory 103 is restricted to times when the available energy 230 exceeds a threshold value. This ensures that accessing the external memory 103 will not substantially deplete the energy available to the remaining components of the electronic device or the processing system 100. This method also provides a substantially real-time method for determining when to access the external memory 103. According to an embodiment of the invention, the threshold value may be a predetermined value. According to another embodiment of the invention, the threshold value may depend on the number of external accesses required to execute the processing routine. For example, the threshold value may decrease as the number of external accesses required to execute the processing routine decreases. This is because, as seen in the trace described above, fewer consecutive external memory accesses deplete less of the available energy supply, so a smaller energy reserve is sufficient.
According to an embodiment of the invention, the processor 101 can execute a processing routine that utilizes only the external memory 103. In this situation, the processing system 100 can determine the available energy supply and grant access to the external memory 103 based on the available energy supply. According to one embodiment, access is granted to the external memory 103 only if the available energy supply exceeds a threshold value. When the available energy supply is below or equal to the threshold value, the processing system 100 can temporarily restrict access to the external memory 103 until the available energy once again exceeds the threshold value. It should be understood that some processing routines only utilize the internal memory 102 and therefore the method of restricting access is equally applicable to the internal memory 102; however, in many embodiments, the threshold value for restricting access to the internal memory 102 will be substantially lower than for restricting access to the external memory 103. This is because accessing the internal memory 102 requires less energy than accessing the external memory 103. According to another embodiment, access to the external memory 103 is granted according to one of the distributions described below.
According to another embodiment of the invention, the processor 101 can execute a processing routine that utilizes both the internal memory 102 and the external memory 103. In this embodiment, the processing system 100 can determine the available energy supply and grant access to the external memory 103 based on the available energy supply. According to one embodiment, access to the external memory 103 is only granted if the available energy supply exceeds the threshold value. If on the other hand, the available energy supply does not exceed the threshold value, access to the external memory 103 is restricted, however, access to the internal memory 102 may be granted. Thus, the processor 101 can access the internal memory 102 during the periods where the available energy supply does not exceed the threshold value, and once the energy supply exceeds the threshold value, the processor 101 can again access the external memory 103.
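For illustration only, the threshold-gating behavior described above can be modeled with the following Python sketch; the energy costs, the threshold value, and the function names are assumptions chosen for the example and are not part of the described embodiments.

# Illustrative model of energy-gated memory access (all values are assumed).
EXTERNAL_COST = 5.0   # assumed energy cost of one external memory access
INTERNAL_COST = 1.0   # assumed energy cost of one internal memory access
THRESHOLD = 20.0      # assumed minimum energy reserve for an external access

def access_memory(location, available_energy):
    """Grant an external access only if the reserve exceeds the threshold;
    otherwise defer it so internal work (or other components) can proceed."""
    if location == "external":
        if available_energy > THRESHOLD:
            return "external", available_energy - EXTERNAL_COST
        return "deferred", available_energy   # wait until the reserve recovers
    return "internal", available_energy - INTERNAL_COST

print(access_memory("external", 25.0))  # ('external', 20.0)
print(access_memory("external", 15.0))  # ('deferred', 15.0)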
According to another embodiment of the invention, if the available energy supply 230 does not exceed the threshold value, the processor 101 may distribute the external memory accesses according to one of the methods outlined below. The particular method can be chosen based on the available energy supply 230. It should be understood that “distribution” means that the accesses are spread out or separated by one of the methods described below rather than performed in bursts as in the prior art. The accesses may be separated by periods of time in which the processor 101 ceases all functions, or by periods in which the processor 101 simply stops accessing the particular memory but may access a different memory, for example.
In many situations, the amount of information stored in the external memory 103 is substantially less than the amount of information stored in the internal memory 102. For a given processing routine, the processor 101 may require, for example, 1000 external memory accesses for every 100,000 total memory accesses. It should be understood, however, that the ratio of 1 to 100 is used merely as an example and the actual ratio will vary depending on the particular processing routine. The prior art processing systems naturally group the 1000 external memory accesses together, i.e., burst access, as shown below:
External1
External2
External3
…
External1000
Internal1
Internal2
Internal3
…
Internal99000
Although such a grouping presents no problem with an unlimited power supply, as seen at access 234, such a burst in external memory accesses can substantially deplete the reserve power available for the remaining electronics when the processing system 100 is under a power constraint. One reason for such a grouping in the prior art processing systems is that it requires much less context switching. Thus, the overall bandwidth can be maximized.
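To make the peak-demand point concrete, the following Python sketch compares the worst-case energy drawn over any short window of accesses for a burst schedule versus a distributed one; the per-access energy costs and the window size are assumed values used only for illustration.

# Compare peak energy demand of burst vs. distributed access (assumed costs).
EXT_COST, INT_COST, WINDOW = 5.0, 1.0, 50   # energy per access, accesses per window

def peak_window_energy(schedule):
    """Worst-case energy drawn over any WINDOW consecutive accesses."""
    costs = [EXT_COST if kind == "external" else INT_COST for kind in schedule]
    return max(sum(costs[i:i + WINDOW]) for i in range(len(costs) - WINDOW + 1))

burst = ["external"] * 100 + ["internal"] * 900          # prior-art grouping
distributed = (["external"] + ["internal"] * 9) * 100    # one external per ten accesses

print(peak_window_energy(burst))        # 250.0: every access in the worst window is external
print(peak_window_energy(distributed))  # 70.0: at most five external accesses per window

The same total number of accesses is performed in both cases; only the peak demand changes.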
As can be seen, such a “bursty” method of accessing the external memory 103 can present serious problems with the available energy supply. Thus, such a method is often unfavorable. According to an embodiment of the invention, in order to overcome energy constraint issues, the processor 101 distributes the external memory accesses on a predetermined basis. The distribution may be performed before or after compiling the processing routine. Furthermore, the distribution may be performed manually or substantially automatically, as described below.
For example, with ten consecutive external memory accesses followed by one thousand internal memory accesses, the distribution may appear as follows:
External1
…
External10
Internal1
…
Internal1000
External11
…
External20
Internal1001
…
Internal2000, etc.
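A minimal Python sketch of the interleaved ordering listed above, assuming ten consecutive external accesses are followed by one thousand internal accesses; the generator name and its parameters are illustrative only and are not taken from the embodiments.

def interleave_accesses(total_external, total_internal, n=10, m=1000):
    """Yield accesses so that at most n consecutive external accesses occur
    before m internal accesses are serviced."""
    ext_done, int_done = 0, 0
    while ext_done < total_external or int_done < total_internal:
        for _ in range(min(n, total_external - ext_done)):      # external block
            ext_done += 1
            yield ("external", ext_done)
        for _ in range(min(m, total_internal - int_done)):      # internal block
            int_done += 1
            yield ("internal", int_done)

# Example with the counts used in the text above: 1000 external, 99000 internal.
schedule = list(interleave_accesses(1000, 99000))
print(len(schedule), schedule[:3], schedule[10][0])  # 100000 accesses; block of 10 external, then internal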
According to another embodiment of the invention, the predetermined number of consecutive external memory accesses may be substantially the same regardless of the available energy supply. However, in this embodiment, the predetermined number should be chosen such that the processor 101 can access the predetermined number of external memory locations even at the minimum available energy supply.
For example, with one external memory access followed by ten internal memory accesses, the distribution may appear as follows:
External1
Internal1
…
Internal10
External2
Internal11
…
Internal20
According to one embodiment, the invention outlined above provides a method for executing a processing routine where a portion of the processing routine is stored in an external memory 103 and a portion of the processing routine is stored in an internal memory 102. Therefore, in order to execute the processing routine, the processor 101 is required to access both the internal memory 102 and the external memory 103. The method reduces the peak power consumption of the processor 101 by distributing access to the external memory 103 according to a predetermined number of external memory accesses. Thus, the processor 101 does not access the external memory 103 in large bursts that cause large spikes in power consumption, as in the prior art. According to one embodiment, the external memory 103 remains inaccessible for a predetermined amount of time. According to another embodiment of the invention, the external memory 103 remains inaccessible until the processor 101 performs a predetermined number of internal memory accesses. The predetermined numbers of accesses for the internal and external memory may be the same or may be different. The particular predetermined numbers will depend on the particular situation and the particular number of total accesses required by the processing routine. Once the processor 101 completes the predetermined number of internal memory accesses, the processor 101 can again return to accessing the external memory 103.
As described above, in some embodiments, the predetermined numbers used to distribute access to the external memory 103 are chosen such that the number of consecutive external memory accesses is minimized. According to another embodiment of the invention, the predetermined numbers are substantially constant throughout the processing routine. Therefore, the distribution of access to the external memory 103 is substantially even throughout the processing routine. According to other embodiments, the predetermined numbers may vary and therefore, the number of accesses performed by the processor 101 will change as the processor 101 executes the processing routine. According to another embodiment of the invention, the external memory accesses are interrupted by internal memory accesses according to the approximate ratio of external to internal memory accesses. Thus, for example, if the ratio of internal memory accesses to external memory accesses required is 2:1, access to the external memory will be interrupted after every external access, and will remain interrupted for two internal memory accesses before the processor 101 returns to accessing the external memory.
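The ratio-based interruption described above can be sketched as follows in Python; the function and its rounding of the ratio are assumptions made for the example rather than a prescribed implementation.

def ratio_distributed(external_count, internal_count):
    """Interrupt the external accesses according to the approximate
    internal-to-external ratio (e.g. 2:1 -> one external, then two internal)."""
    ratio = max(1, round(internal_count / max(1, external_count)))
    internal = iter(range(1, internal_count + 1))
    schedule = []
    for e in range(1, external_count + 1):
        schedule.append(("external", e))
        for _ in range(ratio):                      # interrupt with 'ratio' internal accesses
            try:
                schedule.append(("internal", next(internal)))
            except StopIteration:
                break
    schedule.extend(("internal", i) for i in internal)  # any remaining internal accesses
    return schedule

# A 2:1 internal-to-external ratio: each external access is followed by two internal ones.
print(ratio_distributed(3, 6))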
The method of distributing the accesses can be carried out in a variety of manners. The distribution may take place manually, in software, or in hardware. According to one embodiment of the invention, the distribution is performed after the compiling of the processing routine. According to another embodiment of the invention, the distribution is performed before the compiling of the processing routine. According to one embodiment of the invention, the accesses are distributed manually. Manual distribution of the external memory accesses can be achieved via inspection and hand-distribution of the source code. Although this method is suitable for some situations, it is error-prone and time-consuming. Furthermore, it must be performed for every new code change or software release, adding to the time required.
According to another embodiment of the invention, the memory accesses are distributed in software. According to one embodiment, the software may be modified using a post-processing program. This method is briefly shown above, where a ‘jump’ is inserted every ‘n’ instructions.
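One possible form of such a post-processing pass is sketched below in Python; the instruction tags, the ‘jump_internal’ marker, and the program listing are hypothetical and serve only to show where a jump could be inserted after every ‘n’ external-memory instructions.

def insert_jumps(instructions, n):
    """Hypothetical post-processing pass: after every n consecutive instructions
    that touch external memory, insert a marker forcing a switch away from the
    external memory before execution continues."""
    output, run = [], 0
    for instr in instructions:
        output.append(instr)
        if instr.startswith("ext_"):    # assumed tag for external-memory instructions
            run += 1
            if run == n:
                output.append("jump_internal")
                run = 0
        else:
            run = 0                     # a non-external instruction breaks the run
    return output

program = ["ext_load", "ext_load", "ext_load", "ext_load", "int_add", "ext_store"]
print(insert_jumps(program, 2))
# ['ext_load', 'ext_load', 'jump_internal', 'ext_load', 'ext_load', 'jump_internal', 'int_add', 'ext_store']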
According to another embodiment of the invention, a real-time operating system (RTOS) is utilized. According to this embodiment, each task of the processor 101 has a property that defines whether the task resides in the internal memory 102 or the external memory 103. The external tasks are preemptively interrupted after ‘n’ instructions. The external tasks are not allowed to resume execution until ‘m’ internal instructions have been performed. In this context, ‘n’ and ‘m’ are the numbers of locations required to be accessed in the external and internal memories 103, 102, respectively. In addition, the RTOS can be programmed to keep the size of ‘n’ as small as possible. According to another embodiment of the invention, the RTOS can be programmed to keep ‘n’ and ‘m’ relatively constant over time, thereby maintaining a substantially even distribution of external memory accesses.
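The RTOS policy described above can be illustrated with the following toy Python model; no real RTOS API is used, and the instruction counts are arbitrary example values.

def rtos_schedule(external_instr, internal_instr, n, m):
    """Run an 'external' task for at most n instructions at a time, then block it
    until m instructions of 'internal' tasks have executed."""
    trace = []
    while external_instr > 0 or internal_instr > 0:
        burst = min(n, external_instr)          # external task runs, then is preempted
        trace.extend(["E"] * burst)
        external_instr -= burst
        block = min(m, internal_instr)          # external task stays blocked meanwhile
        trace.extend(["I"] * block)
        internal_instr -= block
        if burst == 0 and block == 0:
            break                               # nothing left to run
    return "".join(trace)

print(rtos_schedule(external_instr=6, internal_instr=9, n=2, m=3))  # EEIIIEEIIIEEIII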
According to another embodiment of the invention, the distribution of memory accesses can be implemented in hardware, in which a memory cache is modified to access the external memory in a temporally optimal manner. According to one embodiment, the temporally optimal manner comprises one external memory access for every ‘n’ total accesses. One drawback to this embodiment is that it requires an internal cache memory. However, in embodiments where an internal cache memory is available, this embodiment may be implemented. Other methods generally known in the art may be used to program the processing system and are therefore included within the scope of the present invention.
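As a sketch of the hardware-oriented policy of one external access per ‘n’ total accesses, the following Python class models only the admission decision; the cache itself, the class name, and the access pattern are assumed for illustration.

class ExternalAccessLimiter:
    """Permit at most one external-memory access per n total memory accesses;
    every other request must be served from the internal (cache) memory."""
    def __init__(self, n):
        self.n = n
        self.counter = 0

    def request(self, wants_external):
        self.counter += 1
        allow = wants_external and self.counter >= self.n
        if allow:
            self.counter = 0            # the counting window restarts after an external access
        return "external" if allow else "internal"

limiter = ExternalAccessLimiter(n=4)
print([limiter.request(wants_external=True) for _ in range(8)])
# ['internal', 'internal', 'internal', 'external', 'internal', 'internal', 'internal', 'external']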
The above description provides a method for programming a processing routine that is required to access the external memory 103. In some embodiments, the processing routine utilizes the internal memory 102 as well. The method distributes (interrupts) the external memory accesses based on a predetermined number of consecutive external memory accesses. Although portions of the above description make reference to a particular number of accesses, it should be understood that the numbers are used solely to assist in the understanding of the invention and should not in any way limit the scope of the invention, as the particular number of accesses will vary depending on the processing routine. Furthermore, the present invention should not be limited by the particular ratio of external memory accesses to internal memory accesses, as the ratio may be less than, greater than, or equal to one.
The above description also provides a method for controlling access to an external memory 103 based on an available energy supply. Thus, the method ensures that the energy consumed by the processor 101 while accessing the external memory 103 does not exceed the available energy supply. It should be appreciated that the two methods (controlling access based on available energy and access distribution) may be used together or separately as needed.
The detailed descriptions of the above embodiments are not exhaustive descriptions of all embodiments contemplated by the inventors to be within the scope of the invention. Indeed, persons skilled in the art will recognize that certain elements of the above-described embodiments may variously be combined or eliminated to create further embodiments, and such further embodiments fall within the scope and teachings of the invention. It will also be apparent to those of ordinary skill in the art that the above-described embodiments may be combined in whole or in part to create additional embodiments within the scope and teachings of the invention.
Thus, although specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The teachings provided herein can be applied to other processing systems, and not just to the embodiments described above and shown in the accompanying figures. Accordingly, the scope of the invention should be determined from the following claims.