A system and method for Viterbi decoding utilizes a general purpose processor with application specific extensions to perform Viterbi decoding operations specified in a Viterbi decoding algorithm stored in memory.
1. A Viterbi decoding system comprising:
a non-transitory memory configured to store a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations; and
a general purpose processor comprising a plurality of application specific extensions, wherein each application specific extension is configured to perform at least one of the plurality of Viterbi decoding operations specified in the Viterbi decoding algorithm stored in the non-transitory memory, and the Viterbi decoding system is configured such that all the Viterbi decoding operations specified in the Viterbi decoding algorithm are performed exclusively within the general purpose processor using at least one of the application specific extensions.
2. The Viterbi decoding system of
3. The Viterbi decoding system of
4. The Viterbi decoding system of
5. The Viterbi decoding system of
6. The Viterbi decoding system of
7. The Viterbi decoding system of
8. The Viterbi decoding system of
9. A method for Viterbi decoding using application specific extensions, the method comprising:
obtaining a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations; and
exclusively performing the plurality of Viterbi decoding operations within a general purpose processor using a plurality of application specific extensions in the general purpose processor.
10. The method of claim 9, further comprising:
performing a Viterbi decoding decision bits generating operation in parallel on least significant bit parts of two path metric input data words and most significant bit parts of the two path metric input data words; and
using a branch metric data word to generate four decision bits for four Viterbi decoding states using at least one of the application specific extensions in the general purpose processor.
11. The method of claim 9, further comprising:
performing a Viterbi decoding add-compare-select operation in parallel on least significant bit parts of two path metric input data words and a branch metric data word and most significant bit parts of the two path metric input data words and the branch metric data word to generate a first path metric output data word and a second path metric output data word using at least one of the application specific extensions in the general purpose processor.
12. The method of claim 9, further comprising:
performing a Viterbi decoding bit interleaving operation on a first input data block and a second input data block to generate an output data word using at least one of the application specific extensions in the general purpose processor, wherein each input data block has sixteen bits ranging from bit number zero to bit number fifteen, the first input data block and the second input data block are packed into an input data word that has thirty two bits, the output data word has thirty two bits ranging from bit number zero to bit number thirty one, the number I bit of the output data word is equal to the number (I−1)/2 bit of the second input data block, the number J bit of the output data word is equal to the number J/2 bit of the first input data block, I is an odd integer from one to thirty one, and J is an even integer from zero to thirty.
13. The method of claim 9, further comprising:
performing a Viterbi decoding bit shift interleaving operation on a first input data block and a second input data block using a Viterbi decoding state number N to generate an output data word using at least one of the application specific extensions in the general purpose processor, wherein each input data block has sixteen bits ranging from bit number zero to bit number fifteen, the first input data block and the second input data block are packed into a first input data word that has thirty two bits, N is an integer that is greater than or equal to zero and smaller than thirty one, the Viterbi decoding state number N is in a second input data word that has thirty two bits, the output data word has thirty two bits ranging from bit number zero to bit number thirty one, the bit number N of the output data word is equal to the bit number zero of the first input data block, the bit number N+1 of the output data word is equal to the bit number zero of the second input data block, and all other bits of the output data word are reset to zero.
14. The method of claim 9, further comprising:
processing a current Viterbi decoding state and a Viterbi decoding decision bit to generate a next Viterbi decoding state using at least one of the application specific extensions in the general purpose processor, wherein the next Viterbi decoding state is an upper Viterbi decoding state when the Viterbi decoding decision bit is set to zero and a lower Viterbi decoding state when the Viterbi decoding decision bit is set to one.
15. The method of claim 9, further comprising:
performing a Viterbi decoding branch metric calculating operation on a first input data block and a second input data block to generate a first output data block and a second output data block using at least one of the application specific extensions in the general purpose processor, wherein each input data block has sixteen bits, the first input data block and the second input data block are packed into an input data word that has thirty two bits, each output data block has sixteen bits, the first output data block and the second output data block are packed into an output data word that has thirty two bits, the first output data block is equal to a sum of the first input data block and the second input data block, and the second output data block is equal to a difference between the first input data block and the second input data block.
16. A Viterbi decoding system comprising:
a non-transitory memory configured to store a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations; and
a general purpose processor comprising a processor core and a plurality of application specific extensions, wherein the processor core includes a plurality of functional units and each application specific extension is configured to perform one of the Viterbi decoding operations specified in the Viterbi decoding algorithm stored in the non-transitory memory, and the Viterbi decoding system is configured such that all the Viterbi decoding operations specified in the Viterbi decoding algorithm are performed exclusively within the general purpose processor using at least one of the application specific extensions.
17. The Viterbi decoding system of claim 16, wherein the plurality of functional units includes:
an addition/subtraction functional unit configured to perform addition functions and subtraction functions;
a multiplication functional unit configured to perform multiplication functions; and
an arithmetic logic functional unit configured to perform logic functions.
18. The Viterbi decoding system of
19. The Viterbi decoding system of
20. The Viterbi decoding system of
Embodiments of the invention relate generally to electronic systems and, more particularly, to a system and method for Viterbi decoding using application specific extensions.
Viterbi decoding is used for decoding convolutional codes and for solving estimation problems in a variety of applications, such as software digital radios and pattern recognition. Because Viterbi decoding places a significant computing burden on general purpose processors, external hardware may be implemented to perform Viterbi decoding. However, the external hardware puts restrictions on Viterbi decoding software optimization, reduces Viterbi decoding software portability, increases development costs for interfacing with the general purpose processors, and increases the risks associated with system integration.
Thus, there is a need for a system and method for Viterbi decoding that facilitates Viterbi decoding software optimization, improves Viterbi decoding software portability, lowers development costs for interfacing with general purpose processors, and reduces system integration risks.
A system and method for Viterbi decoding utilizes a general purpose processor with application specific extensions to perform Viterbi decoding operations specified in a Viterbi decoding algorithm stored in memory.
In an embodiment, a Viterbi decoding system comprises memory and a general purpose processor. The memory is configured to store a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations. The general purpose processor comprises a plurality of application specific extensions, wherein each application specific extension is configured to perform at least one of the Viterbi decoding operations specified in the Viterbi decoding algorithm stored in the memory. The Viterbi decoding system is configured such that all the Viterbi decoding operations specified in the Viterbi decoding algorithm are performed exclusively within the general purpose processor using at least one of the application specific extensions.
In an embodiment, a method for Viterbi decoding using application specific extensions comprises (a) obtaining a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations and (b) exclusively performing the plurality of Viterbi decoding operations within a general purpose processor using a plurality of application specific extensions in the general purpose processor.
In an embodiment, a Viterbi decoding system comprises memory and a general purpose processor. The memory is configured to store a Viterbi decoding algorithm, wherein the Viterbi decoding algorithm specifies a plurality of Viterbi decoding operations. The general purpose processor comprises a processor core and a plurality of application specific extensions, wherein the processor core includes a plurality of functional units and each application specific extension is configured to perform one of the Viterbi decoding operations specified in the Viterbi decoding algorithm stored in the memory. The Viterbi decoding system is configured such that all the Viterbi decoding operations specified in the Viterbi decoding algorithm are performed exclusively within the general purpose processor using at least one of the application specific extensions.
Other aspects and advantages of embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrated by way of example of the principles of the invention.
Throughout the description, similar reference numbers may be used to identify similar elements.
With reference to FIG. 1, a Viterbi decoding system in accordance with an embodiment of the invention includes a general purpose processor 102 and a memory 106. The memory 106 is configured to store a Viterbi decoding algorithm 108, which specifies a plurality of Viterbi decoding operations to be performed by the general purpose processor.
As shown in FIG. 1, the general purpose processor 102 includes a processor core 112 having a plurality of functional units 114, and a plurality of application specific extensions (ASEs) 104.
The ASEs 104 may be implemented in hardware and/or software. In some embodiments, each ASE may be a set of processor instructions for the general purpose processor 102, where the set of processor instructions performs a Viterbi decoding operation specified in the Viterbi decoding algorithm 108 stored in the memory 106. In some embodiments, the ASEs may reuse existing functional units 114 in the processor core 112, which results in more efficient source code that better utilizes processor resources, more flexible and portable software, and less risk than developing and integrating more complex hardware in a system-on-chip (SoC).
The Viterbi decoding algorithm 108 specifies Viterbi decoding operations for the general purpose processor 102. In some embodiments, the Viterbi decoding operations specified in the Viterbi decoding algorithm include Viterbi decoding branch metric summing and subtracting operations that may be used in the branch metric process, Viterbi decoding add-compare-select (ACS) operations that may be used in the ACS process, and Viterbi decoding bit manipulating operations that may be used in the traceback process.
Each ASE 104 of the general purpose processor 102 performs at least one of the Viterbi decoding operations specified in the Viterbi decoding algorithm 108. Embodiments of the ASEs may perform Viterbi decoding branch metric summing and subtracting operations, Viterbi decoding ACS operations, and Viterbi decoding bit manipulating operations specified in the Viterbi decoding algorithm.
Embodiments of the ASEs 104 that are configured to perform Viterbi decoding branch metric summing and subtracting operations are first described.
A “SADDSUBR2_DUAL16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on two sixteen-bit input data blocks A and B, which are packed into a thirty two-bit input data word, to generate two sixteen-bit output data blocks C and D, which are packed into a thirty two-bit output data word, where C=A+B and D=A−B. In some embodiments, the “SADDSUBR2_DUAL16” ASE computes branch metrics for code rate R=½ Viterbi decoding systems. In some embodiments, the “SADDSUBR2_DUAL16” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating sixteen-bit arithmetic.
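The Viterbi decoding branch metric summing and subtracting operation performed by the “SADDSUBR2_DUAL16” ASE may be illustrated by the following C code sketch, in which the packing order (block A in the least significant sixteen bits, block B in the most significant sixteen bits) and the helper name sat16 are illustrative assumptions rather than details specified herein,
#include <stdint.h>
/* saturate a 32-bit intermediate result to the signed 16-bit range */
static int16_t sat16(int32_t v)
{
    if (v > INT16_MAX) return INT16_MAX;
    if (v < INT16_MIN) return INT16_MIN;
    return (int16_t)v;
}
uint32_t SADDSUBR2_DUAL16(uint32_t in)
{
    int32_t a = (int16_t)(in & 0xFFFF);    /* block A, low half  */
    int32_t b = (int16_t)(in >> 16);       /* block B, high half */
    uint32_t c = (uint16_t)sat16(a + b);   /* C = A + B */
    uint32_t d = (uint16_t)sat16(a - b);   /* D = A - B */
    return c | (d << 16);                  /* pack C low, D high */
}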
A “SADDSUBR4_QUAD16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on four sixteen-bit input data blocks A, B, C, and D, which are packed into two thirty two-bit input data words, to generate four sixteen-bit output data blocks E, F, G, and H, which are packed in two thirty two-bit output data words, where E=A+B+C+D, F=A+B+C−D, G=A+B−C+D, and H=A+B−C−D. In some embodiments, the “SADDSUBR4_QUAD16” ASE computes half of the needed branch metrics for code rate R=¼ Viterbi decoding systems. A “SADDSUBR4N_QUAD16” ASE described below computes the other values. In some embodiments, the “SADDSUBR4_QUAD16” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating sixteen-bit arithmetic.
The “SADDSUBR4N_QUAD16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on four sixteen-bit input data blocks A, B, C, and D, which are packed into two thirty two-bit input data words, to generate four sixteen-bit output data blocks E, F, G, and H, which are packed in two thirty two-bit output data words, where E=A−B+C+D, F=A−B+C−D, G=A−B−C+D, and H=A−B−C−D. In some embodiments, the “SADDSUBR4N_QUAD16” ASE computes half of the needed branch metrics for code rate R=¼ Viterbi decoding systems. The “SADDSUBR4_QUAD16” ASE described above computes the other values. In some embodiments, the “SADDSUBR4N_QUAD16” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating sixteen-bit arithmetic.
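The “SADDSUBR4_QUAD16” operation may be sketched in C as follows, reusing the sat16 helper from the sketch above and assuming blocks A and B are packed into the first input data word and blocks C and D into the second, low half first; the “SADDSUBR4N_QUAD16” variant is obtained by negating the sign of B throughout,
void SADDSUBR4_QUAD16(uint32_t in1, uint32_t in2,
                      uint32_t *out1, uint32_t *out2)
{
    int32_t a = (int16_t)(in1 & 0xFFFF), b = (int16_t)(in1 >> 16);
    int32_t c = (int16_t)(in2 & 0xFFFF), d = (int16_t)(in2 >> 16);
    uint32_t e = (uint16_t)sat16(a + b + c + d);  /* E = A+B+C+D */
    uint32_t f = (uint16_t)sat16(a + b + c - d);  /* F = A+B+C-D */
    uint32_t g = (uint16_t)sat16(a + b - c + d);  /* G = A+B-C+D */
    uint32_t h = (uint16_t)sat16(a + b - c - d);  /* H = A+B-C-D */
    *out1 = e | (f << 16);                        /* pack E, F */
    *out2 = g | (h << 16);                        /* pack G, H */
}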
A “SADDSUBR4_QUAD8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on four eight-bit input data blocks A, B, C, and D, which are packed into a thirty two-bit data word, to generate four eight-bit output data blocks E, F, G, and H, which are packed in a thirty two-bit output data word, where E=A+B+C+D, F=A+B+C−D, G=A+B−C+D, and H=A+B−C−D. In some embodiments, the “SADDSUBR4_QUAD8” ASE computes half of the needed branch metrics for code rate R=¼ Viterbi decoding systems. A “SADDSUBR4N_QUAD8” ASE described below computes the other values. In some embodiments, the “SADDSUBR4_QUAD8” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating eight-bit arithmetic.
The “SADDSUBR4N_QUAD8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on four eight-bit input data blocks A, B, C, and D, which are packed into a thirty two-bit data word, to generate four eight-bit output data blocks E, F, G, and H, which are packed in a thirty two-bit output data word, where E=A−B+C+D, F=A−B+C−D, G=A−B−C+D, and H=A−B−C−D. In some embodiments, the “SADDSUBR4N_QUAD8” ASE computes half of the needed branch metrics for code rate R=¼ Viterbi decoding systems. The “SADDSUBR4_QUAD8” ASE described above computes the other values. In some embodiments, the “SADDSUBR4N_QUAD8” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating eight-bit arithmetic.
A “SADDSUBR4_OCT8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding branch metric summing and subtracting operation on four eight-bit input data blocks A, B, C, and D, which are packed into a thirty two-bit input data word, to generate eight eight-bit output data blocks E, F, G, H, I, J, K, and L, which are packed in two thirty two-bit output data words, where E=A+B+C+D, F=A+B+C−D, G=A+B−C+D, H=A+B−C−D, I=A−B+C+D, J=A−B+C−D, K=A−B−C+D, and L=A−B−C−D. In some embodiments, the “SADDSUBR4_OCT8” ASE computes half of the needed branch metrics for code rate R=¼ Viterbi decoding systems. The other branch metrics may be computed by negating some of the elements of the E, F, G, H, I, J, K, and L data blocks. In some embodiments, the “SADDSUBR4_OCT8” ASE may perform the Viterbi decoding branch metric summing and subtracting operation using saturating eight-bit arithmetic.
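The “SADDSUBR4_OCT8” operation may be sketched in C as follows, under analogous illustrative packing assumptions (block A in the least significant byte) and with an eight-bit saturation helper in the style of the sat16 helper above,
static int8_t sat8(int32_t v)
{
    if (v > INT8_MAX) return INT8_MAX;
    if (v < INT8_MIN) return INT8_MIN;
    return (int8_t)v;
}
void SADDSUBR4_OCT8(uint32_t in, uint32_t *out1, uint32_t *out2)
{
    int32_t a = (int8_t)(in & 0xFF);
    int32_t b = (int8_t)((in >> 8) & 0xFF);
    int32_t c = (int8_t)((in >> 16) & 0xFF);
    int32_t d = (int8_t)((in >> 24) & 0xFF);
    uint32_t r[8];
    int i = 0;
    for (int sb = 1; sb >= -1; sb -= 2)          /* +B, then -B */
        for (int sc = 1; sc >= -1; sc -= 2)      /* +C, then -C */
            for (int sd = 1; sd >= -1; sd -= 2)  /* +D, then -D */
                r[i++] = (uint8_t)sat8(a + sb * b + sc * c + sd * d);
    *out1 = r[0] | (r[1] << 8) | (r[2] << 16) | (r[3] << 24);  /* E..H */
    *out2 = r[4] | (r[5] << 8) | (r[6] << 16) | (r[7] << 24);  /* I..L */
}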
Embodiments of the ASEs 104 that are configured to perform Viterbi decoding bit manipulating operations are now described.
A “BIT_INTERLEAVE_DUAL16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit interleaving operation on two sixteen-bit input data blocks A and B, which are packed into a thirty two-bit input data word, to generate a thirty two-bit output data word C, where each bit of C is taken in turn from A and B, for example, C[0]=A[0], C[1]=B[0], C[2]=A[1], C[3]=B[1], C[4]=A[2], C[5]=B[2], etc., which can be mathematically expressed as C[I]=B[(I−1)/2] and C[J]=A[J/2], where I is an odd integer from one to thirty one and J is an even integer from zero to thirty.
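The Viterbi decoding bit interleaving operation performed by the “BIT_INTERLEAVE_DUAL16” ASE may be sketched in C as follows, again assuming block A occupies the least significant sixteen bits of the input data word,
uint32_t BIT_INTERLEAVE_DUAL16(uint32_t in)
{
    uint32_t a = in & 0xFFFF;   /* block A */
    uint32_t b = in >> 16;      /* block B */
    uint32_t c = 0;
    for (int i = 0; i < 16; i++) {
        c |= ((a >> i) & 1u) << (2 * i);      /* C[2i]   = A[i] */
        c |= ((b >> i) & 1u) << (2 * i + 1);  /* C[2i+1] = B[i] */
    }
    return c;
}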
A “BIT_INTERLEAVE_QUAD8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit interleaving operation on four eight-bit input data blocks A, B, C, and D, which are packed into a thirty two-bit input data word, to generate a thirty two-bit output data word E, where each bit of E is taken in turn from A, B, C, and D, for example, E[0]=A[0], E[1]=B[0], E[2]=C[0], E[3]=D[0], E[4]=A[1], and E[5]=B[1]. In some embodiments, a sequence of quad eight-bit ILEQ may generate decision bits that are not in order and need to be interleaved, and the “BIT_INTERLEAVE_QUAD8” ASE may collect the decision bits from the ILEQ and interleave the decision bits into decision words.
A “BIT_SHIFT_INTERLEAVE_DUAL16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit shift interleaving operation on two sixteen-bit input data blocks A and B packed into a thirty two-bit first input data word, and a thirty two-bit second input data word, which includes an integer N that is greater than or equal to zero and smaller than thirty one, to generate a thirty two-bit output data word C, where C=(ext32b(A[0])<<N)|(ext32b(B[0])<<(N+1)) and all other bits of C are reset to zero, where the ext32b function extends a bit into a thirty two-bit data word. In some embodiments, N is a Viterbi decoding state number.
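The Viterbi decoding bit shift interleaving operation performed by the “BIT_SHIFT_INTERLEAVE_DUAL16” ASE may be sketched in C as follows, with the same illustrative packing assumption,
uint32_t BIT_SHIFT_INTERLEAVE_DUAL16(uint32_t in, uint32_t n)
{
    uint32_t a0 = in & 1u;          /* A[0], block A in the low half  */
    uint32_t b0 = (in >> 16) & 1u;  /* B[0], block B in the high half */
    return (a0 << n) | (b0 << (n + 1));  /* all other bits are zero */
}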
A “BIT_SHIFT_INTERLEAVE_QUAD8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit shift interleaving operation on four eight-bit input data blocks A, B, C, and D packed into a thirty two-bit first input data word, and a thirty two-bit second input data word, which includes an integer N that is greater than or equal to zero and smaller than twenty nine, to generate a thirty two-bit output data word E, where E=(ext32b(A[0])<<N)|(ext32b(B[0])<<(N+1))|(ext32b(C[0])<<(N+2))|(ext32b(D[0])<<(N+3)) and all other bits of E are reset to zero, where the ext32b function extends a bit into a thirty two-bit data word. In some embodiments, N is a Viterbi decoding state number. In some embodiments, the “BIT_SHIFT_INTERLEAVE_QUAD8” ASE shifts the decision bits from quad eight-bit ILEQ and interleaves the decision bits into decision words.
A “VSTATE2BIT” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit manipulating operation on a thirty two-bit unsigned integer input data word A and a thirty two-bit second input data word, which includes an integer N that is greater than or equal to zero and smaller than thirty two, to generate a thirty two-bit output data word B, where B=A|(1<<N). In some embodiments, N is a Viterbi decoding state number. In some embodiments, the “VSTATE2BIT” ASE performs the Viterbi decoding bit manipulating operation at the traceback process to accumulate decoded bits packed into thirty two-bit words. In some embodiments, the “VSTATE2BIT” ASE is used to set the Nth decoded bit in a thirty two-bit unsigned “decodedbits” input data word, using the parity of the current state stored in the “state” variable: if (state&0x1) decodedbits=VSTATE2BIT(decodedbits,N). In other words, if the “state” variable is odd, the Nth bit is set to one. If the “state” variable is even, the Nth bit is unchanged and left at its initial value, which should be zero.
A “VNEXTSTATE_LE” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit manipulating operation on a thirty two-bit unsigned integer first input data word and a thirty two-bit second input data word, which includes an integer N that is greater than or equal to zero and smaller than thirty two, to generate a thirty two-bit output data word, where N is the current state for the current step and the output is the most likely state of the previous step (this is part of backtracking, i.e., going backwards through the steps produced by the ACS process, where each step corresponds to one decoded bit). Each state has two possible next states in the previous step.
In some embodiments, N is a Viterbi decoding state number. In some embodiments, the “VNEXTSTATE_LE” ASE is used, for instance, to find which state is most likely to precede the current state, using the value of the decision bit for that state: nextstate=VNEXTSTATE_LE(decisions, state). As used herein, “next state” means the next state after the current state during the traceback process, which is actually the previous state of the current state. The Viterbi decoding bit manipulating operation performed by the “VNEXTSTATE_LE” ASE may be described by the following C code excerpt, in which M denotes the total number of Viterbi decoding states,
unsigned int VNEXTSTATE_LE(unsigned int decisions, unsigned int state)
{
    if (decisions & (1u << state)) {   /* decision bit for this state */
        if (state & 1) return (state - 1) / 2 + M / 2;
        else return state / 2 + M / 2;
    } else {
        if (state & 1) return (state - 1) / 2;
        else return state / 2;
    }
}
A “VNEXTSTATE_BE” ASE in accordance with an embodiment of the invention performs the analogous Viterbi decoding bit manipulating operation for the opposite state-bit ordering, and may be described by the following C code excerpt,
unsigned int VNEXTSTATE_BE(unsigned int decisions, unsigned int state)
{
    if (decisions & (1u << state)) {   /* decision bit for this state */
        if (state & M / 2) return 2 * state - M + 1;
        else return 2 * state + 1;
    } else {
        if (state & M / 2) return 2 * state - M;
        else return 2 * state;
    }
}
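For illustration, the “VSTATE2BIT” and “VNEXTSTATE_LE” operations may be combined in a traceback loop such as the following C sketch, in which the loop structure, the per-step layout of the decisions array (one thirty two-bit decision word per trellis step), and the variable names are assumptions for illustration only,
/* walk backwards one trellis step per decoded bit, recording the
 * parity of each visited state; assumes nbits <= 32 */
uint32_t traceback(const unsigned int *decisions, unsigned int state, int nbits)
{
    uint32_t decodedbits = 0;
    for (int n = nbits - 1; n >= 0; n--) {
        if (state & 0x1)               /* odd state: decoded bit is one */
            decodedbits |= 1u << n;    /* VSTATE2BIT(decodedbits, n)    */
        state = VNEXTSTATE_LE(decisions[n], state);  /* previous state  */
    }
    return decodedbits;
}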
An “ORI_QUAD32” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding bit manipulating operation on four thirty two-bit input data words A, B, C, and D to generate a thirty two-bit output data word E, which is the logical OR combination of the four input data words, E=A|B|C|D. In some embodiments, the “ORI_QUAD32” ASE is used to combine results from the ASEs performing the Viterbi decoding bit interleaving operations and the ASEs performing the Viterbi decoding bit shift interleaving operations described above.
Embodiments of the ASEs that are configured to perform Viterbi decoding ACS operations are now described.
A “VACS_DUAL16” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding ACS operation on four sixteen-bit path metric input data blocks packed in two thirty two-bit input data words, also using two branch metric data blocks packed into a thirty two-bit branch metric data word, to generate four sixteen-bit path metric output data blocks packed in two thirty two-bit output data words. The “VACS_DUAL16” ASE performs the Viterbi decoding ACS operation in parallel on the least significant bit (LSB) side of the path metric input data words and branch metric data word and on the most significant bit (MSB) side of the path metric input data words and branch metric data word. The Viterbi decoding ACS operation performed by the “VACS_DUAL16” ASE may be described by the following C code excerpt,
VACS_DUAL16(&output1, &output2, input1, input2, bmetric)
{
    lsb(output1) = max(lsb(input1) + lsb(bmetric), lsb(input2) - lsb(bmetric));
    msb(output1) = max(lsb(input1) - lsb(bmetric), lsb(input2) + lsb(bmetric));
    lsb(output2) = max(msb(input1) + msb(bmetric), msb(input2) - msb(bmetric));
    msb(output2) = max(msb(input1) - msb(bmetric), msb(input2) + msb(bmetric));
}
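A runnable C sketch corresponding to the excerpt above is given below; the helper names lo, hi, pack, and max16 are illustrative assumptions (lsb() and msb() are taken to select the signed low and high sixteen-bit halves of a thirty two-bit word), and saturation is omitted for brevity,
static int16_t max16(int32_t x, int32_t y) { return (int16_t)(x > y ? x : y); }
static int16_t lo(uint32_t w) { return (int16_t)(w & 0xFFFF); }
static int16_t hi(uint32_t w) { return (int16_t)(w >> 16); }
static uint32_t pack(int16_t l, int16_t h)
{
    return (uint32_t)(uint16_t)l | ((uint32_t)(uint16_t)h << 16);
}
void VACS_DUAL16(uint32_t *out1, uint32_t *out2,
                 uint32_t in1, uint32_t in2, uint32_t bm)
{
    *out1 = pack(max16(lo(in1) + lo(bm), lo(in2) - lo(bm)),   /* lsb(output1) */
                 max16(lo(in1) - lo(bm), lo(in2) + lo(bm)));  /* msb(output1) */
    *out2 = pack(max16(hi(in1) + hi(bm), hi(in2) - hi(bm)),   /* lsb(output2) */
                 max16(hi(in1) - hi(bm), hi(in2) + hi(bm)));  /* msb(output2) */
}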
A “VACS_QUAD8” ASE in accordance with an embodiment of the invention is configured to perform a Viterbi decoding ACS operation on eight eight-bit input data blocks packed in two thirty two-bit input data words using four eight-bit branch metric data blocks packed into a thirty two-bit branch metric data word to generate eight eight-bit output data blocks packed in two thirty two-bit output data words. The “VACS_QUAD8” ASE performs the Viterbi decoding ACS operation in parallel on the LSB side of the input and branch metric data words and on the MSB side of the input and branch metric data words.
A “VDECISION_DUAL16” ASE in accordance with an embodiment of the invention is configured to process four sixteen-bit input data blocks packed into two thirty two-bit path metric input data words, also using two sixteen-bit branch metric data blocks packed into a thirty two-bit branch metric data word and a Viterbi decoding state number N, which is an integer that is greater than or equal to zero and smaller than thirty two, to generate four Viterbi decoding decision bits for the four Viterbi decoding states N, N+1, N+2, and N+3, where each Viterbi decoding decision bit is included in a thirty two-bit Viterbi decoding decision data word. The “VDECISION_DUAL16” ASE performs the Viterbi decoding operation in parallel on the LSB side of the path metric input and branch metric data words and on the MSB side of the path metric input and branch metric data words. The Viterbi decoding operation performed by the “VDECISION_DUAL16” ASE may be described by the following C code excerpt,
int VDECISION_DUAL16(UInt32 input1, UInt32 input2, UInt32 bmetric, UInt32 n)
{
    UInt32 tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7, tmp8;
    UInt32 tmpdec1, tmpdec2, tmpdec3, tmpdec4;
    tmp1 = lsb(input1) + lsb(bmetric);
    tmp2 = lsb(input1) - lsb(bmetric);
    tmp3 = lsb(input2) - lsb(bmetric);
    tmp4 = lsb(input2) + lsb(bmetric);
    tmp5 = msb(input1) + msb(bmetric);
    tmp6 = msb(input1) - msb(bmetric);
    tmp7 = msb(input2) - msb(bmetric);
    tmp8 = msb(input2) + msb(bmetric);
    tmpdec1 = tmp1 >= tmp3 ? 1u << n : 0;
    tmpdec2 = tmp2 >= tmp4 ? 1u << (n + 1) : 0;
    tmpdec3 = tmp5 >= tmp7 ? 1u << (n + 2) : 0;
    tmpdec4 = tmp6 >= tmp8 ? 1u << (n + 3) : 0;
    return tmpdec1 | tmpdec2 | tmpdec3 | tmpdec4;
}
A “VDECISION_QUAD8” ASE in accordance with an embodiment of the invention is configured to process four eight-bit input data blocks using four eight-bit branch metric data blocks packed into a thirty two-bit branch metric data word and a Viterbi state data word to generate eight Viterbi decoding decision bits for the Viterbi decoding states N to N+7. The Viterbi state data word includes a state number N that is an integer greater than or equal to zero and smaller than thirty two. Each of the eight Viterbi decoding decision bits is included in a thirty two-bit Viterbi decoding decision data word.
Although the operations of the method herein are shown and described in a particular order, the order of the operations of the method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
In addition, although specific embodiments of the invention that have been described or illustrated include several components described or illustrated herein, other embodiments of the invention may include fewer or more components to implement less or more functionality.
Furthermore, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.