1. A method of demultiplexing data, the method comprising:
during each of a series of time-units, receiving multiplexed data, wherein the multiplexed data comprises, for each of a plurality of channels, a corresponding quantity of channel data of a corresponding data size; and
during each of the series of time-units, for each of the plurality of channels, storing the corresponding quantity of channel data received during that time-unit in a contiguous region of a memory associated with that channel;
wherein each of the plurality of channels has a corresponding time-unit-number such that, for each of the plurality of channels, the channel data stored in the corresponding region of the memory for that channel is to be processed after a number of time units equal to the time-unit-number for that channel has passed since channel data for that channel was last processed;
characterised in that the method comprises:
determining the locations of the regions of the memory based on the data sizes and the time-unit-numbers corresponding to one or more channels from the plurality of channels such that the step of storing will not store channel data at a location in the memory that is currently storing channel data that has not yet been processed.
2. A method of multiplexing data, the method comprising:
during each of a series of time-units, forming multiplexed data by outputting, for each of a plurality of channels, a corresponding quantity of channel data of a corresponding data size that is being stored in a memory;
wherein each of the plurality of channels has a corresponding time-unit-number, the method comprising, for each of the plurality of channels, storing channel data in a corresponding region of the memory for that channel after a number of time units equal to the time-unit-number for that channel has passed since channel data for that channel was last stored in the memory;
characterised in that the method comprises:
determining the locations of the regions of the memory based on the data sizes and the time-unit-numbers corresponding to one or more channels from the plurality of channels such that the step of storing will not store channel data at a location in the memory that is currently storing channel data that has not yet been output.
20. A non-transitory computer-readable medium comprising a computer program which, when executed by a computer, carries out a method of demultiplexing data, the method comprising:
during each of a series of time-units, receiving multiplexed data, wherein the multiplexed data comprises, for each of a plurality of channels, a corresponding quantity of channel data of a corresponding data size; and
during each of the series of time-units, for each of the plurality of channels, storing the corresponding quantity of channel data received during that time-unit in a contiguous region of a memory associated with that channel;
wherein each of the plurality of channels has a corresponding time-unit-number such that, for each of the plurality of channels, the channel data stored in the corresponding region of the memory for that channel is to be processed after a number of time units equal to the time-unit-number for that channel has passed since channel data for that channel was last processed;
determining the locations of the regions of the memory based on the data sizes and the time-unit-numbers corresponding to one or more channels from the plurality of channels such that the step of storing will not store channel data at a location in the memory that is currently storing channel data that has not yet been processed.
3. A method according to
4. A method according
5. A method according to
6. A method according to
repeating the step of determining to update the location of one or more of the regions of the memory.
7. A method according to
determining whether the location of one or more of the regions of the memory should be updated during the current time-unit.
8. A method according to
9. A method according to
10. A method according to
11. A method according to
12. A method according
13. A method according to
14. A method according to
repeating the step of determining to update the location of one or more of the regions of the memory.
15. A method according to
determining whether the location of one or more of the regions of the memory should be updated during the current time-unit.
16. A method according to
17. A method according to
18. A method according to
19. A method according
The present invention relates to a method of multiplexing data, to a method of demultiplexing data, to an apparatus, to a computer program and to a data carrying medium.
Methods of multiplexing and demultiplexing data are known.
A known demultiplexer for demultiplexing channel data for a plurality of channels is described below. During each of a series of channel-time-units, the demultiplexer receives multiplexed data, for example as a multiplexed data stream. For each of the plurality of channels, a corresponding quantity of channel data for that channel is contained within the received multiplexed data. The demultiplexer then identifies the quantity of channel data for a channel and stores that channel data in a memory.
The demultiplexer divides its memory into a predetermined number of equally sized memory regions, one for each possible channel that the demultiplexer is arranged to handle. The number of channels currently being handled by the demultiplexer may be less than the maximum number of channels that the demultiplexer can handle. However, the demultiplexer divides its memory into a number of equally sized memory regions equal to this maximum number of channels to cater for the situation in which the number of channels within the multiplexed data stream increases to this maximum number of channels. The demultiplexer stores the channel data for a channel in a memory region corresponding to that channel.
For each channel, after a corresponding number of channel-time-units has passed, the channel data stored in the memory for that channel is processed. Once the channel data stored in the region of the memory associated with a channel has been processed, then that memory region is free for re-use, i.e. that memory region can be re-used to store subsequently demultiplexed channel data for that channel or for other channels.
A known multiplexer for multiplexing channel data for a plurality of channels is described below. During each of a series of channel-time-units, the multiplexer outputs multiplexed data, for example as a multiplexed data stream. For each of the plurality of channels, a corresponding quantity of channel data for that channel is contained within the output multiplexed data. The multiplexer stores channel data for each of the channels in a memory. During each of the series of channel-time-units, the multiplexer identifies a quantity of channel data for a channel to output from the channel data being stored in the memory for that channel.
The multiplexer divides the memory into a predetermined number of equally sized memory regions, one for each possible channel that the multiplexer is arranged to handle. The number of channels currently being handled by the multiplexer may be less than the maximum number of channels that the multiplexer can handle. However, the multiplexer divides its memory into a number of equally sized memory regions equal to this maximum number of channels to cater for the situation in which the number of channels that it is to handle increases to this maximum number of channels. The multiplexer stores the channel data for a channel in the memory region corresponding to that channel.
For each channel, after a corresponding number of channel-time-units has passed, the multiplexer stores fresh channel data in the memory for that channel. During the next corresponding number of channel-time-units for that channel, this fresh channel data will be output as one or more quantities of channel data within the output multiplexed data. At the end of the next corresponding number of channel-time-units for that channel, all of the fresh channel data will have been output, so that the memory region for that channel is then free, i.e. the multiplexer can then re-use that memory region to store a new amount of channel data for that channel or for other channels.
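The fixed-partition approach used by these known designs can be sketched as follows (a minimal illustration; the function name and the 1024-byte, 8-channel figures are hypothetical, not taken from the document):

```python
def fixed_partition(memory_size, max_channels):
    """Divide a memory of memory_size bytes into max_channels equally
    sized regions, one per possible channel, as the known multiplexer
    and demultiplexer do. Returns (start_address, region_size) pairs."""
    region_size = memory_size // max_channels
    return [(i * region_size, region_size) for i in range(max_channels)]

# A 1024-byte memory split for at most 8 channels: every channel is
# given 128 bytes, even if fewer than 8 channels are currently active.
regions = fixed_partition(1024, 8)
```

The drawback illustrated here is that each region's size is fixed by the worst case (the maximum channel count), regardless of how many channels are active or how much data each one actually carries.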
According to aspects of the invention, there is provided a method, an apparatus, a computer program and a data carrying medium as described in the appended claims.
Specific embodiments of the invention are set forth in the dependent claims. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
In the description that follows and in the figures, certain embodiments of the invention are described. However, it will be appreciated that the invention is not limited to the embodiments that are described and that some embodiments may not include all of the features that are described below. It will be evident, however, that various modifications and changes may be made herein without departing from the broader scope of the invention as set forth in the appended claims.
The apparatus 100 is arranged to perform multiplexing and/or demultiplexing of data, as will be described in more detail below. The apparatus 100 may be any device for which multiplexing and/or demultiplexing of data is desirable or required. For example, the apparatus 100 may be: a mobile or portable telephone; a personal computer; a personal digital assistant; a node in a communications network; a component part (or element) of one of these devices, etc.
The apparatus 100 is arranged to perform its multiplexing and/or demultiplexing for a plurality of channels, each channel having corresponding channel data. These channels may be of one or more channel types, such as voice channels, video channels and data channels, although it will be appreciated that other types of channels having channel data may also be used. A channel may be considered to be a flow of corresponding channel data from a source to a destination. The plurality of channels may be multiplexed together for transport across a communications network, and then demultiplexed upon arrival at a destination.
The interface 106 may be arranged to receive multiplexed data that comprises the channel data for the plurality of channels. For example, when the apparatus 100 is a mobile telephone, the multiplexed data may be received from a mobile telephone network base-station (not shown in
Additionally, or alternatively, the interface 106 may be arranged to output multiplexed data that comprises channel data for the plurality of channels. For example, when the apparatus 100 is a mobile telephone, the multiplexed data may be transmitted to a mobile telephone network base-station (not shown in
It will be appreciated that the apparatus 100 may be configured to only perform the above data multiplexing (so that it does not perform data demultiplexing), or to only perform the above data demultiplexing (so that it does not perform data multiplexing), or to perform both the above data multiplexing and data demultiplexing.
The memory 104 may be any kind of memory suitable for storing data, such as one or more of a random-access-memory, a solid-state memory, a flash memory, registers, etc.
The processing unit 102 may be any kind of processing unit which, when configured (as described below) is capable of carrying out the multiplexing and/or demultiplexing according to embodiments of the invention. The processing unit 102 may be, for instance, in the form of a field-programmable-gate-array, an application-specific-integrated-circuit (ASIC), a digital-signal-processor, etc., or any combination thereof.
The processing unit 102 may, for instance, be a generic processor that has been configured to carry out a multiplexing or demultiplexing method. The processing unit 102 may, for example, be capable of executing computer program instructions which, when executed, carry out a method according to an embodiment of the invention. Such computer program instructions may be stored on a storage medium (such as a CD-ROM, a DVD-ROM, a Blu-ray disk, a flash-memory device or another memory device) or may be transmitted to the device 100 over a network. In this way, the device 100 may be configured with the computer program instructions to carry out embodiments of the invention. Alternatively, the processing unit 102 may be a dedicated processing unit specifically designed to carry out embodiments of the invention, e.g. implemented as an ASIC or other dedicated circuitry.
The interface 106 may be any interface suitable for transmitting and/or receiving data. Many such interfaces are known and, for the sake of brevity, the interface shall not be described in more detail herein.
In the following, the various channels that the apparatus 100 is currently handling (either for multiplexing or demultiplexing) shall be referred to as channels Ci for i=1, 2, . . . Nc, where Nc is the number of currently active channels. As will be discussed later, the value of Nc may vary over time, i.e. channels may be added (or activated) or removed (or deactivated) at different points in time. However, the apparatus 100 may impose an upper bound, K, on the value of Nc, so that, when multiplexing or when demultiplexing, the apparatus 100 can handle at most K (active) channels at any one time.
As will be described in more detail below, the apparatus 100 operates based on a series of channel-time-units (CTUs), so that certain actions or processing are performed for each of the channels during a CTU. A CTU may be any predetermined period of time, such as 1 ms or 2 ms, although it will be appreciated that the length of a CTU may be set according to the particular data processing requirements and/or the data processing abilities of the apparatus 100. The series of CTUs is thus a contiguous sequence of consecutive regular periods (or durations) of time.
When multiplexing data, during each CTU the interface 106 outputs, for each of the channels, a corresponding quantity (or amount or block) of channel data that is currently stored in the memory 104 to form part of the output multiplexed data. When demultiplexing data, during each CTU the interface 106 receives, for each of the channels, a corresponding quantity (or amount or block) of channel data as part of the received input multiplexed data, which is then stored in the memory 104 during that CTU.
Each of the channels has a corresponding CTU-processing-number, the purpose of which is described in more detail below. The CTU-processing-number for the i-th channel Ci shall be represented by Nbi, for i=1 . . . Nc. The CTU-processing-number Nbi for the i-th channel Ci for multiplexing purposes may be the same as, or may be different from, the CTU-processing-number Nbi for the i-th channel Ci for demultiplexing purposes. However, as multiplexing and demultiplexing are treated separately below, the distinction between the CTU-processing-number Nbi for multiplexing purposes and the CTU-processing-number Nbi for demultiplexing purposes shall not be emphasised herein and shall, instead, simply be clear from the context of the description.
In the examples that follow, it is assumed, for integers i and j with 1 ≤ i, j ≤ Nc, that if Nbi is greater than Nbj, then Nbi is a positive integer multiple of Nbj. As will become evident, this helps align the processing for the plurality of channels during the multiplexing and demultiplexing. However, it will be appreciated that the values of Nbi for i=1 . . . Nc need not necessarily follow this criterion.
The largest value of Nbi shall be referred to as Nbmax. The apparatus 100 may impose an upper bound on Nbmax, so that none of the channels may have a corresponding CTU-processing-number larger than this upper bound threshold. In some embodiments, the values of Nbi may then be constrained so that Nbmax is always an integer multiple of Nbi.
The purpose of the CTU-processing-number Nbi for the i-th channel Ci is as follows. When demultiplexing received input multiplexed data that comprises channel data for the plurality of channels, the processing unit 102 will process (or make use of) the channel data that has been received and stored in the memory 104 for the i-th channel Ci when Nbi CTUs have passed since the processing unit 102 last processed channel data stored in the memory 104 for that i-th channel Ci. Thus, as a quantity of channel data for the i-th channel Ci is received every CTU, then, every Nbi-th CTU, the processing unit 102 performs processing on the most recently received Nbi quantities of channel data for the channel Ci. The processing unit 102 will then be ready to receive a further Nbi quantities of data for the channel Ci during the next Nbi CTUs.
This processing of the channel data could involve presenting the channel data to a user (such as outputting audio data, video data, graphics data, or text data). This processing of the channel data could involve copying the channel data to a different area of the memory 104, or to a different memory, that is not being used for the demultiplexing processing. Additionally, this processing of the channel data could involve data de-compression, data compression, data encryption, data decryption, etc. Furthermore (such as in 3GPP UMTS communications), the processing of the channel data may involve transport channel processing, such as de-interleaving the channel data, de-rate-matching the channel data, error correction decoding (such as Viterbi- or turbo-decoding), etc. It will be appreciated, though, that this processing of the channel data may involve some or all of the above-mentioned processing, as well as other types of processing.
When multiplexing channel data for the plurality of channels to form output multiplexed data, the processing unit 102 will store Nbi quantities of channel data to be output for the i-th channel Ci in the memory 104 when Nbi CTUs have passed since the processing unit 102 last stored channel data in the memory 104 for output for the i-th channel Ci. Thus, every Nbi-th CTU, the processing unit 102 stores Nbi quantities of channel data in the memory 104 for the channel Ci. As a quantity of channel data for the i-th channel Ci is output from the memory 104 every CTU during the multiplexing operation, then, every Nbi-th CTU, the processing unit 102 will have output the Nbi quantities of data that it initially stored in the memory 104, and will be ready to store a further Nbi quantities of data for that channel, ready for subsequent multiplexing.
Thus, the value of Nbi for channel Ci may for example be determined by how often a channel produces data for outputting and/or how much data can be multiplexed per CTU. Similarly, in some demultiplexing embodiments, the value of Nbi for channel Ci may be determined by how often the processing unit 102 must process data for that channel (e.g. how often audio data must be processed and output to maintain a coherent audio output to a user, or how much data is required in order to be able to perform processing such as de-interleaving, de-rate-matching, error correction decoding, encryption or decryption, compression or decompression, etc.) and/or how much data is received per CTU.
The processing for each channel Ci is essentially repeated every Nbi CTUs (albeit on different data). If the index for the current CTU in the series of CTUs is CTU_current (where CTU_current starts at 0 and increases by 1 for each CTU that passes) then the index of the current CTU for the channel Ci in the repeated series of Nbi CTUs, referred to as CTU_Ci, is calculated as CTU_Ci=CTU_current modulo Nbi. Thus CTU_Ci lies in the range from 0 to Nbi−1. A sequence of consecutive CTUs as CTU_Ci runs from 0 to Nbi−1 shall be referred to as a CTU-processing-cycle for the channel Ci.
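The index calculation above can be expressed directly (a minimal sketch; the function and variable names are illustrative, not taken from the document):

```python
def ctu_index(ctu_current, nb_i):
    """Index of the current CTU within channel Ci's repeating
    CTU-processing-cycle: CTU_Ci = CTU_current modulo Nbi,
    so the result always lies in the range 0 to Nbi - 1."""
    return ctu_current % nb_i

# For a channel with Nbi = 4, the per-channel index repeats every
# 4 CTUs: CTUs 0..7 give 0, 1, 2, 3, 0, 1, 2, 3.
cycle = [ctu_index(t, 4) for t in range(8)]
```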
For example, for the channel Ci the processing unit 102 is arranged to store Nbi quantities of channel data in the memory 104 when CTU_Ci is 0, and is then arranged to output a respective one of these quantities of channel data at each CTU as CTU_Ci ranges from 0 to Nbi−1. However, it will be appreciated that, in other embodiments of the invention, the Nbi quantities of channel data may be stored in the memory 104 when CTU_Ci assumes a predetermined value other than 0.
In an example, for the channel Ci, the processing unit 102 is arranged to store a respective quantity of channel data in the memory 104 at each CTU as CTU_Ci ranges from 0 to Nbi−1, and is then arranged to perform its processing on the Nbi quantities of channel data stored in the memory 104 for the channel Ci when CTU_Ci is Nbi−1. However, it will be appreciated that, in other embodiments of the invention, the Nbi quantities of channel data stored in the memory 104 may be processed by the processing unit 102 when CTU_Ci assumes a predetermined value other than Nbi−1.
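With these default trigger points, the processing schedule for a demultiplexing channel can be simulated as follows (an illustrative sketch; the function name is not from the document, and it assumes processing occurs when CTU_Ci equals Nbi − 1):

```python
def process_ctus(nb_i, total_ctus):
    """Simulate a demultiplexing channel with CTU-processing-number
    nb_i: one quantity of channel data is stored every CTU, and the
    accumulated nb_i quantities are processed whenever
    CTU_Ci == nb_i - 1. Returns the CTU indices at which the
    accumulated channel data is processed."""
    return [t for t in range(total_ctus) if t % nb_i == nb_i - 1]

# With Nbi = 4 over 12 CTUs, processing occurs at CTUs 3, 7 and 11,
# i.e. at the end of each CTU-processing-cycle.
schedule = process_ctus(4, 12)
```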
For example, the size of a quantity of data for the i-th channel Ci may be a corresponding value Szi. The value of Szi may remain fixed for the entire duration of time that the channel Ci is active, so that each quantity of data for the channel Ci is of the size Szi. Alternatively, the processing unit 102 may be arranged to update the value of Szi. The processing unit 102 may be constrained to only update the value of Szi when the value of CTU_Ci is 0, so that, during a CTU-processing-cycle for the channel Ci, the value of Szi at each of the CTUs in that CTU-processing-cycle is constant. For some multiplexing embodiments, the value of Szi may be updated during a CTU-processing-cycle before the Nbi quantities of channel data are stored in the memory 104 for that CTU-processing-cycle; for some demultiplexing embodiments, the value of Szi may be updated during a CTU-processing-cycle before the first quantity of channel data is stored in the memory 104 for that CTU-processing-cycle. The description that follows will be described with reference to these embodiments. However, it will be appreciated that the processing unit 102 could be arranged to update Szi for the channel Ci at any stage, and that the equations given later would simply need to be updated to cater for corresponding changes to the value Szi, as opposed to assuming that Szi is fixed for a CTU-processing-cycle for the channel Ci.
Additionally, the processing unit 102 may be arranged to update the value of Nbi for the i-th channel Ci. To help keep the multiplexing and demultiplexing of the plurality of channels synchronised, some embodiments only permit the processing unit 102 to update the value of Nbi for the i-th channel Ci when CTU_current modulo Nbmax equals 0. In some embodiments in which an upper bound B is imposed on Nbmax, this update may only be permitted when CTU_current modulo B equals 0. For some multiplexing embodiments, the value of Nbi may be updated during a CTU-processing-cycle before the Nbi quantities of channel data are stored in the memory 104 for that CTU-processing-cycle; for some demultiplexing embodiments, the value of Nbi may be updated during a CTU-processing-cycle before the first quantity of channel data is stored in the memory 104 for that CTU-processing-cycle. In embodiments in which, additionally, for integers i and j with 1 ≤ i, j ≤ Nc, Nbi being greater than Nbj implies that Nbi is a positive integer multiple of Nbj, limiting the update of the value of Nbi in this way ensures that Nbi is only updated when each of the channels is starting a respective CTU-processing-cycle. The same holds in embodiments in which an upper bound is placed on Nbmax and Nbmax is a positive integer multiple of each Nbi.
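The update restriction can be expressed as a simple predicate (an illustrative sketch, assuming the CTU_current modulo Nbmax condition described above; the function name is hypothetical):

```python
def may_update(ctu_current, nb_max):
    """Updates to a channel's CTU-processing-number (and to the
    active-channel count Nc) are only permitted at CTUs where every
    channel is starting a new CTU-processing-cycle, i.e. when
    CTU_current modulo Nbmax equals 0."""
    return ctu_current % nb_max == 0

# With Nbmax = 8, updates are permitted at CTUs 0, 8, 16, ... only.
allowed = [t for t in range(20) if may_update(t, 8)]
```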
Additionally, the processing unit 102 may be arranged to add (or activate) or remove (or deactivate) a channel (i.e. update the value of Nc). To help keep the multiplexing and demultiplexing of the plurality of channels synchronised, some embodiments only permit the processing unit 102 to update the value of Nc when CTU_current modulo Nbmax equals 0. In some embodiments in which an upper bound B is imposed on Nbmax, this update may only be permitted when CTU_current modulo B equals 0. In embodiments in which, for integers i and j with 1 ≤ i, j ≤ Nc, Nbi being greater than Nbj implies that Nbi is a positive integer multiple of Nbj, limiting the update of the value of Nc in this way ensures that Nc is only updated when each of the channels is starting a respective CTU-processing-cycle. The same holds in embodiments in which an upper bound is placed on Nbmax and Nbmax is a positive integer multiple of each Nbi.
The processing unit 102 may be arranged to determine, for each of the channels, a respective region of the memory 104 to associate with that channel. The processing unit 102 may have allocated (or reserved) an area of the memory 104 for the multiplexing or demultiplexing processing, in which case the memory regions determined for the channels are regions of the allocated area of the memory 104. A memory region is a contiguous area of the memory 104 (i) for storing channel data to be multiplexed (for multiplexing embodiments) or (ii) for storing channel data that has been demultiplexed (for demultiplexing embodiments).
To do this, the processing unit 102 does not simply divide the memory 104 into Nc, or even K, memory regions of the same predetermined size, one for each of the channels, as per the above-described known methods for multiplexing and demultiplexing. Instead, when the processing unit 102 associates regions of the memory 104 with the channels, it determines the locations for those regions within the memory 104 based on some or all of the CTU-processing-numbers Nbi and some or all of the size values Szi for the channels. In other words, the processing unit 102 determines the start addresses in the memory 104 for respective contiguous memory regions that are to be associated with respective ones of the plurality of channels, and these start addresses are determined based on some or all of the CTU-processing-numbers Nbi and some or all of the size values Szi for the channels. The size of the contiguous memory region for the channel Ci is then determined by the size value Szi and the CTU-processing number Nbi. In particular, if Szi is constant during a CTU-processing-cycle, then the size of the memory region is Szi×Nbi.
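One way to realise such a layout is to pack the per-channel regions back to back, each of size Szi × Nbi (a simplified sketch; the packing order and the names are illustrative assumptions, not the claimed determination step in full):

```python
def layout_regions(channels):
    """Assign each channel Ci a contiguous memory region of size
    Szi * Nbi, packing the regions back to back from address 0.
    channels: list of (sz_i, nb_i) pairs, indexed by channel.
    Channels are placed in order of decreasing Nbi, so that the
    longest-lived regions come first. Returns a dict mapping each
    channel index to (start_address, region_size)."""
    order = sorted(range(len(channels)),
                   key=lambda i: channels[i][1], reverse=True)
    regions, addr = {}, 0
    for i in order:
        sz, nb = channels[i]
        regions[i] = (addr, sz * nb)
        addr += sz * nb
    return regions

# Three channels: C0 with Sz=10, Nb=2; C1 with Sz=5, Nb=4;
# C2 with Sz=8, Nb=1. The regions together occupy a contiguous
# 48-byte span, with no fixed equal-size partitioning.
regs = layout_regions([(10, 2), (5, 4), (8, 1)])
```

Unlike the fixed-partition scheme of the known designs, the total footprint here is exactly the sum of the per-channel requirements, which is what allows the fragmentation and overflow improvements described below.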
For multiplexing embodiments, the processing unit 102 determines the locations of the memory regions in such a way that the processing unit 102 will not store channel data for a channel at a location in the memory 104 that is currently storing channel data that has not yet been output by the processing unit 102 as part of the output multiplexed data. For demultiplexing embodiments, the processing unit 102 determines the locations of the memory regions in such a way that the processing unit 102 will not store received channel data for a channel at a location in the memory 104 that is currently storing received channel data that has not yet been processed by the processing unit 102. This is explained in more detail below.
The processing unit 102 is arranged to update the locations of the memory regions associated with the plurality of channels. Such updating is desirable as it can take account of: changes to the values of some or all of the CTU-processing-numbers Nbi; changes to the values of some or all of the size values Szi; and changes to the number of active channels, Nc. In this way, the processing unit 102 ensures that channel data for one of the channels is never stored in the memory 104 by overwriting channel data that is being stored for another channel and that has yet to be used (either output via multiplexing or processed for demultiplexing).
The processing performed by the processing unit 102 to determine and update the memory regions associated with the channels also helps reduce the fragmentation of the memory 104. As will be illustrated in more detail below, the memory regions associated with the plurality of channels may together be contiguous in the memory 104. Thus, the most efficient usage of the memory 104 is achieved. This arrangement then allows for one or more of: an increase in the maximum number K of channels that can be processed; an increase in the size values Szi; an increase in the CTU-processing-numbers Nbi; a decrease in the probability of overflowing the memory 104 with channel data; and decreasing the size of the memory 104 (or the size of the area of memory allocated for the multiplexing or demultiplexing) for a given predetermined probability of overflowing the memory 104 (or allocated memory area).
The memory region associated with the i-th channel Ci remains constant during the CTU-processing-cycle for the channel Ci. Thus, the larger the value of a CTU-processing-number Nbi for a channel Ci, the longer the memory region associated with that channel Ci will remain the same before being updated. Hence, some embodiments process the channels in order of decreasing CTU-processing-number for the channels, so that channels whose memory region locations are the same for longer are processed first, and channels whose memory region locations can be changed more often are processed later, so that such memory regions can be more easily fitted in the memory 104 around other memory regions.
At a step S202, the processing for the next CTU begins. Thus, the value of CTU_current is incremented by 1 (to reflect the fact that a new CTU is about to begin). Additionally, updating CTU_current means that the value of the index CTU_Ci of the current CTU for the channel Ci in the CTU-processing-cycle for the channel Ci is also updated according to the formula given above.
Additionally, as mentioned above, at the beginning of a CTU: one or more size values Szi may be updated; one or more CTU-processing-numbers Nbi may be updated; one or more channels may be added (or become active); and one or more channels may be removed (or become deactivated). As discussed above, there may be restrictions on the CTUs during which such updates can occur.
At a step S204, the processing unit 102 determines the next active channel to be considered during this CTU, i.e. which channel is the next active channel that the processing unit 102 will inspect to determine whether or not to update the location of its associated memory region. As mentioned above, in some embodiments, the channels are processed in order of decreasing CTU-processing-number, so that at the step S204, the processing unit 102 selects a channel out of the channels that have not been considered yet during this CTU that has the highest value of Nbi. However, in another embodiment of the invention, the processing unit 102 may simply select the channels in their currently indexed order, i.e. in the order C1, C2, . . . .
At a step S206, the processing unit 102 determines whether it is now time to update the location of the region in the memory 104 associated with the channel identified at the step S204. This will be described in more detail later with respect to a number of example embodiments. If it is now time to update the location of the region in the memory 104 associated with that channel, then processing continues at a step S208; otherwise, processing continues at a step S210.
At the step S208, the processing unit 102 determines a new start address for the region in the memory 104 associated with the identified channel. Thus, the processing unit 102 updates the location of the region in the memory 104 associated with the identified channel. It will be appreciated that the new start address may actually be the same as the current start address for that region in the memory 104, so that the update of the start address does not change the start address. Processing then continues at the step S210.
At the step S210, it is determined whether all of the active channels have now been considered during the processing 200 for the current CTU, i.e. whether the processing unit 102 has inspected each of the current channels during this CTU to determine whether or not to update its corresponding memory region. If so, then processing continues at a step S212; otherwise, processing returns to the step S204.
At the step S212, the processing unit 102 performs the relevant multiplexing or demultiplexing processing on the channels. As described above, for multiplexing data, the processing at the step S212 involves, for each of the channels: (i) potentially storing a fresh amount of channel data in the memory region associated with that channel; and (ii) outputting a quantity of channel data from the memory region associated with that channel. Additionally, as described above, for demultiplexing data, the processing at the step S212 involves, for each of the channels: (i) storing a quantity of channel data in the memory region associated with that channel; and (ii) potentially processing all of the channel data stored in the memory region associated with that channel.
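The per-CTU loop of steps S202-S210 can be sketched as follows. This is a minimal, hypothetical illustration: the `Channel` class and the naive end-to-end packing policy used at step S208 are assumptions for the sketch, not the patent's actual address equations.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    size: int       # Szi: quantity of channel data per CTU
    nb: int         # Nbi: CTU-processing-number
    start: int = 0  # Si: start address of the channel's memory region

def process_ctu(channels, ctu_current):
    """Run steps S202-S210 for one CTU and return the updated CTU counter."""
    ctu_current += 1  # S202: a new CTU begins
    next_free = 0
    # S204: consider channels in order of decreasing Nbi, so that regions
    # which stay fixed longest are placed first.
    for ch in sorted(channels, key=lambda c: c.nb, reverse=True):
        # S206: a region may only move when its processing cycle restarts
        if ctu_current % ch.nb == 0:
            ch.start = next_free  # S208: naive packing policy (assumption)
        next_free = max(next_free, ch.start + ch.size * ch.nb)
    return ctu_current  # S212 (the multiplex/demultiplex step) is omitted

chans = [Channel(size=4, nb=1), Channel(size=2, nb=3)]
t = 0
for _ in range(3):
    t = process_ctu(chans, t)
```

In this sketch, the channel with Nbi=3 keeps its region fixed for its full three-CTU cycle, while the channel with Nbi=1 may be repositioned every CTU.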
Six channels (C1, . . . , C6) are processed.
The embodiments shown in
For ease of notation, the i-th channel determined at the step S204 will be referred to as channel Cp(i), so that the channels undergo the processing of the step S206 (and potentially the step S208 too) in the order Cp(1), Cp(2), . . . . Thus, for CTUs 0-7 in
It will be appreciated that, (i) when demultiplexing channel data from received multiplexed data, the channel data need not be present in the multiplexed data in the same order as the order in which the channels are to be processed, i.e. the channel data within the multiplexed data need not be in the order Cp(1), Cp(2) . . . ; and (ii) for multiplexing channel data, the channel data need not be present in the output multiplexed data in the same order as the order in which the channels are to be processed, i.e. the channel data within the output multiplexed data need not be in the order Cp(1), Cp(2), . . . .
It will be appreciated that other configurations of channels, with different numbers of channels, different size values Szi, different CTU-processing-numbers Nbi, etc. could be implemented and realized in practice.
In
In the descriptions that follow, for each of the channels Ci (for i=1 . . . Nc), the starting address for the contiguous region of the memory 104 to associate with the channel Ci shall be designated as Si. The memory region associated with channel Ci thus starts at the starting address Si and is a contiguous block of the memory 104 of size Szi×Nbi.
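The region layout just defined can be expressed as a trivial helper: the region for channel Ci occupies the half-open byte range [Si, Si + Szi × Nbi). The function name is illustrative only.

```python
def region(si, szi, nbi):
    """Return (start, end_exclusive) of the contiguous region of size
    Szi * Nbi starting at address Si for channel Ci."""
    return si, si + szi * nbi
```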
In
A first multiplexing embodiment of the invention is schematically illustrated in
A second multiplexing embodiment of the invention is schematically illustrated in
A third multiplexing embodiment of the invention is schematically illustrated in
A fourth multiplexing embodiment of the invention is schematically illustrated in
In
A first demultiplexing embodiment of the invention is schematically illustrated in
A second demultiplexing embodiment of the invention is schematically illustrated in
A third demultiplexing embodiment of the invention is schematically illustrated in
A fourth demultiplexing embodiment of the invention is schematically illustrated in
It should be noted that the memory regions for the channels Ci may overlap. For example: (i) in
However, as can be seen, the area overlap between two such memory regions is arranged such that only one channel requires that area of overlap at a time. When demultiplexing data, the channel data stored in an area of overlap for a first channel is processed before channel data is stored in that area at a later stage for another channel. When multiplexing data, the channel data stored in an area of overlap for a first channel is output as multiplexed data before channel data is stored in that area at a later stage for another channel.
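The safety condition described above can be sketched as a simple predicate: two regions may overlap in address space only if they never hold unprocessed (or not-yet-output) data during the same CTU. The "live CTU" sets passed in are hypothetical inputs for illustration.

```python
def ranges_overlap(a, b):
    """True if half-open byte ranges a=(start, end) and b=(start, end) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def overlap_is_safe(range_a, live_a, range_b, live_b):
    """An overlap is acceptable when the regions are disjoint in space,
    or when their sets of CTUs with unprocessed data are disjoint in time."""
    return not ranges_overlap(range_a, range_b) or not (live_a & live_b)
```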
In other embodiments of the invention, for example those shown in
It will be appreciated that the processing unit 102 may determine the location of a memory region for a channel in a number of other ways such that (i) when demultiplexing, the processing unit 102 will not store channel data at a location in the memory 104 that is currently storing channel data that has not yet been processed and (ii) when multiplexing, the processing unit 102 will not store channel data at a location in the memory 104 that is currently storing channel data that has not yet been output as multiplexed data.
As an example, the processing unit 102 may wish to align a memory region for a channel within the memory 104 based on an alignment criterion. This criterion may be, for example, aligning a memory region so that it starts at a 1-byte, 2-byte, or 4-byte boundary within the memory 104. The start address Si for channel Ci may be determined (according to the above-described equations) and may then be adjusted as follows: if alignment to 1 byte is required, then Si (as measured in bits) would be set to be 8⌈Si/8⌉; if alignment to 2 bytes is required, then Si (as measured in bits) would be set to be 16⌈Si/16⌉; and if alignment to 4 bytes is required, then Si (as measured in bits) would be set to be 32⌈Si/32⌉; and so on.
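The alignment adjustment can be sketched as rounding a start address Si (measured in bits) up to the next n-byte boundary. The ceiling formula below is an assumption consistent with the surrounding text, not quoted from the patent.

```python
def align_up(si_bits, n_bytes):
    """Round si_bits up to the next multiple of (8 * n_bytes) bits."""
    boundary = 8 * n_bytes                     # boundary size in bits
    return -(-si_bits // boundary) * boundary  # ceiling division
```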
The example embodiments described above may be modified so that the calculation of start addresses Si involves determining a larger value than that given by the above-mentioned equations, for example by adding a predetermined constant offset to above equations.
In the example embodiments described above, the start address for a memory region is a lower address in the memory 104 than the end address for that memory region. However, it will be appreciated that the roles of the start address and end address for a memory region may be interchanged, so that the start address is higher than the end address for the memory region.
In the example embodiments described above, the memory regions are determined starting from the base address B. However, it will be appreciated that, in a similar manner, the memory regions could be determined working backwards from the end (top) of the memory 104, towards the base address B.
For multiplexing embodiments, it will be appreciated that the quantities of channel data being output from the memory 104 as multiplexed data need not be output from the positions in the channels' memory regions in the order shown in
As an example to demonstrate the improvements made by embodiments of the invention over the above-described prior-art memory management, the demultiplexing embodiment of
For each channel Ci, the value Ai will refer to the amount of channel data to be received for a channel before that channel data is processed by the processing unit 102, i.e. Ai=Szi×Nbi.
In this analysis, it is assumed that the maximum amount of channel data to be received for a channel before that channel data is processed by the processing unit 102 is of a predetermined maximum size M, i.e. Ai≦M for each channel Ci for 1≦i≦Nc.
For values of r=0, 0.1, 0.2, . . . , 0.9, it shall be assumed (for this analysis) that P(rM<Ai≦(r+0.1)M)=0.1, i.e. the probability that the amount of channel data to be received for channel Ci before that channel data is processed by the processing unit 102 lies in the range (rM, (r+0.1)M] is 0.1.
To store the demultiplexed channel data, it shall be assumed that the amount of memory 104 available to the processing unit 102 is αKM, where α is a predetermined value taking one of the values 0, 0.1, 0.2, . . . , 0.9 or 1. Naturally, if α=1, then the amount of memory 104 allocated to store demultiplexed channel data is sufficient for the worst-case scenario in which Ai=M for 1≦i≦K. Similarly, if α=0, then no memory 104 is allocated to store demultiplexed channel data.
According to the above-mentioned prior-art demultiplexer, the demultiplexer assigns to each of the possible K channels an equal amount of memory 104 for storing demultiplexed channel data. Thus, each memory region for the possible K channels is of size αM. Thus, the probability that, during the demultiplexing for a channel Ci, the memory region for the channel Ci will overflow is P(Ai>αM)=1−α. Thus, the probability of at least one of the memory regions overflowing when storing demultiplexed channel data is 1−(1−(1−α))^Nc=1−α^Nc.
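The prior-art analysis above can be checked numerically: with a per-channel overflow probability of 1−α and (assuming the channels overflow independently) Nc active channels, the probability that at least one region overflows is 1−α^Nc.

```python
def p_any_overflow(alpha, nc):
    """Probability that at least one of nc independent memory regions
    overflows, given per-region overflow probability (1 - alpha)."""
    return 1.0 - alpha ** nc
```

For example, with α=0.5 and two channels the probability of at least one overflow is 0.75, rising quickly as the number of active channels grows.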
For the demultiplexing embodiment of
In some embodiments, the amount of memory allocated for storing demultiplexed channel data may be varied according to the number of channels that are currently active. For example, instead of allocating αKM memory for storing demultiplexed channel data, the processing unit 102 may allocate αNcM memory for storing demultiplexed channel data. In this case, for the demultiplexing embodiment of
As can be seen from
Thus, embodiments of the invention help reduce the probability of data loss or data corruption.
Similarly, embodiments of the invention may use a smaller amount of memory for storing channel data for a desired probability of data loss or corruption. For example, in
As can be seen from
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, the connections may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections.
Also, the invention may for example be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. The computer program may be provided on a computer readable data carrier, such as a CD-ROM or diskette, stored with data loadable in a memory of a computer system, the data representing the computer program. The data carrier may further be a data connection, such as a telephone cable or a wireless connection. The computer readable data carrier may be permanently, removably or remotely coupled to an information processing system such as apparatus 100. The computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.; and data transmission media including computer networks, point-to-point telecommunication equipment, and carrier wave transmission media, just to name a few.
Some of the above embodiments, as applicable, may be implemented using a variety of different information processing systems. For example, although
Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
Nistor, Adrian Ioan, Pelly, Jason