A technique for resolving access contention in a parallel turbo decoder is described. The technique includes associating a plurality of buffer memories with the subdecoders so that accesses to banks of a shared interleaver memory can be rescheduled. Accesses can be rescheduled to prevent simultaneous accesses to a single bank of the shared interleaver memory based on an interleaver pattern.
1. A memory arbitrator for a parallel turbo decoder for resolving subdecoder access contention to a shared interleaver memory having multiple banks, comprising:
a plurality of n subdecoders configured for parallel sub-block decoding of a turbo code;
a plurality of n buffer memories, each buffer memory associated with a different one of the plurality of subdecoders and having a depth m so as to buffer a plurality of m access attempts to the shared interleaver memory by the associated subdecoder;
a multiplexer coupled between the plurality of n buffer memories and the shared interleaver memory and configured to enable access to the shared interleaver memory by the plurality of buffer memories; and
a scheduler coupled to the plurality of subdecoders and configured to reschedule the accesses to the shared interleaver memory based on an interleaver pattern of the turbo decoder to avoid simultaneous access to at least one of the multiple banks of the shared interleaver memory.
9. A memory arbitrator for a parallel turbo decoder for resolving subdecoder access contention to a shared interleaver memory having multiple banks, comprising:
a plurality of n subdecoders configured for parallel sub-block decoding of a turbo code;
a plurality of n buffer memories, each buffer memory associated with a different one of the plurality of subdecoders and having a depth m so as to buffer a plurality of m access attempts to the shared interleaver memory by the associated subdecoder, wherein each buffer memory is a last-in first-out memory; and
a multiplexer coupled between the plurality of n buffer memories and the shared interleaver memory and configured to enable access to the shared interleaver memory by the plurality of buffer memories.
The present invention relates generally to turbo decoding and more particularly to resolving access contention to a shared interleaver memory in a parallel turbo decoder.
Turbo codes are a powerful technique for reducing the effects of errors in a noisy communication channel. A turbo encoder at a transmitter inserts redundancy into a communicated signal, and a turbo decoder at a receiver uses the redundancy to correct transmission errors. Turbo decoding is, however, computationally complex. In some communication systems, the computational complexity of the turbo decoder sets an upper limit on the data rate that can be communicated through the communication system. Hence, techniques to improve the speed of turbo decoders are highly desirable.
One approach to increasing the speed of a turbo decoder is to perform parallel decoding. In parallel decoding, a block of data to be decoded is broken into a number of sub-blocks, and a number of sub-decoders operate simultaneously, each decoder decoding a different sub-block of data. By using N sub-decoders to decode each block, the time required to decode a block can be reduced by approximately a factor of N.
Parallel decoding presents a number of difficult challenges. To appreciate the difficulties of parallel decoding, the general encoding and decoding process will first be discussed. Usually, data is turbo coded using two component encoders and an interleaver. The component encoders convert input data into output data with added redundancy (thus, for k input bits, n encoded bits are produced, where generally n > k). For example, a component encoder may implement a block or convolutional code. The data encoded by one component encoder is interleaved with respect to the data encoded by the other component encoder, as discussed further below. Interleaving consists of reordering the data in a predefined pattern known to both the transmitter and receiver.
Both parallel and serial turbo code configurations are known in the art. In a parallel configuration, the component encoders operate on the data simultaneously, with a first component encoder operating on the data in sequential order and a second component encoder operating on the data in interleaver order. The outputs of the first and second component encoders are then combined for transmission. In a serial configuration, a first “outer” component encoder operates on the data, the encoded data is interleaved, and a second “inner” component encoder operates on the interleaved encoded data. In either configuration, some of the encoded bits may be deleted (punctured) prior to transmission.
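A minimal sketch in Python of the two configurations just described; enc1, enc2, outer, inner, and interleave are hypothetical stand-ins for the component encoders and the interleaver, and codewords are assumed to be lists:

```python
def parallel_turbo_encode(data, enc1, enc2, interleave):
    # Both component encoders operate on the same data; the second sees
    # it in interleaver order. The outputs are combined for transmission.
    return enc1(data) + enc2(interleave(data))

def serial_turbo_encode(data, outer, inner, interleave):
    # Outer encoder first, then interleaving, then the inner encoder.
    return inner(interleave(outer(data)))
```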
Decoding turbo encoded data is typically performed using an iterative process. Two decoders repeatedly decode the data, each decoder corresponding to one of the component encoders. After the first decoder performs its decoding, the data is deinterleaved and then passed to the second decoder, which performs its decoding. The data is then interleaved and passed back to the first decoder, and the process is repeated. With each iteration, more transmission errors are typically corrected. The number of iterations required to achieve most of the potential error correction capacity of the code depends on the particular code used.
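The iteration loop can be sketched as follows, again with hypothetical component decoder and permutation functions; the fixed iteration count is an assumption, since the required number depends on the particular code:

```python
def turbo_decode(received, decode1, decode2, interleave, deinterleave,
                 iterations=8):
    # Soft information exchanged between the two component decoders.
    soft = [0.0] * len(received)
    for _ in range(iterations):
        soft = decode1(received, soft)   # first component decoder
        soft = deinterleave(soft)        # reorder for the second decoder
        soft = decode2(received, soft)   # second component decoder
        soft = interleave(soft)          # reorder back for the first decoder
    return soft
```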
A turbo decoder can be implemented by using a single decoder engine, which reads and writes results into a memory for each iteration. This memory is sometimes called an interleaver memory, and may be implemented as a ping-pong memory, where previous results are read from one part while new results are written to the other part. The write and read addressing is usually different, corresponding to interleaved or de-interleaved data ordering. For example, data is read (written) in sequential order by sequentially incrementing the address into the interleaver memory. Data is read (written) in interleaved order by using permuted addresses. For example, interleaving can be performed by writing results into the memory at a series of discontinuous permuted addresses corresponding to the interleaver pattern. The data can then be read from interleaver memory using sequential addresses to result in interleaved data for the next decoding iteration. Similarly, deinterleaving can be performed by writing results into the memory at a second series of discontinuous depermuted addresses that have the effect of placing interleaved data back into sequential order, available for sequential addressing for the next decoding iteration. Interleaving and deinterleaving can also be performed during the read operations, using sequential, permuted, or depermuted addresses, in which case writes can be performed using sequential addresses.
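A small Python sketch of this address-permutation scheme, using an illustrative 8-element interleaver pattern not drawn from any particular standard:

```python
def write_interleaved(memory, results, pattern):
    # Write each result at its permuted address; pattern[i] is the
    # interleaved position of the i-th result.
    for i, value in enumerate(results):
        memory[pattern[i]] = value

def inverse_pattern(pattern):
    # Depermuted addresses: writing with this inverse pattern places
    # interleaved data back into sequential order.
    inv = [0] * len(pattern)
    for i, p in enumerate(pattern):
        inv[p] = i
    return inv

# Sequential reads after a permuted write yield interleaved data.
mem = [None] * 8
write_interleaved(mem, list(range(8)), [6, 2, 0, 5, 7, 1, 4, 3])
print(mem)  # [2, 5, 1, 7, 6, 3, 0, 4]
```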
Turning to parallel decoding, each sub-decoder accesses the interleaver memory to read/write data. Hence, the interleaver memory is shared among the sub-decoders; the interleaver memory can therefore present a bottleneck in the performance of a parallel decoder. One approach to reduce this bottleneck is to implement the shared interleaver memory as a multiport memory, allowing multiple reads or writes each clock cycle. Multiport memories tend to be more expensive and complex than single port memories.
Another approach to reduce this bottleneck is to break the interleaver memory into banks, one bank for each sub-block/subdecoder, where the sub-decoders can access their corresponding banks simultaneously. This approach is very appropriate for field programmable gate array (FPGA) based decoder designs, as FPGAs have on-chip memory organized into banks, often called block RAMs. This approach, however, can fail for some interleavers. For example, interleaving is typically implemented across the entire block of data, and thus when reading or writing in permuted or depermuted address order, it may be necessary for a sub-decoder to access a bank corresponding to a different sub-decoder. When two subdecoders attempt to access the same bank at the same time a conflict results.
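The following sketch shows how such conflicts can be detected for a given interleaver pattern, assuming one bank per subdecoder and a contiguous address-to-bank mapping; both assumptions are for illustration only:

```python
def find_bank_conflicts(pattern, n_sub):
    # At cycle t, subdecoder j writes its t-th result at the permuted
    # address pattern[j*sub_len + t]; a conflict occurs when two
    # subdecoders map to the same bank on the same cycle.
    sub_len = len(pattern) // n_sub
    bank_size = sub_len                 # one bank per subdecoder (assumed)
    conflicts = []
    for t in range(sub_len):
        banks = [pattern[j * sub_len + t] // bank_size for j in range(n_sub)]
        if len(set(banks)) < n_sub:
            conflicts.append((t, banks))
    return conflicts

print(find_bank_conflicts([6, 2, 0, 5, 7, 1, 4, 3], 2))
# [(0, [1, 1]), (1, [0, 0])] -- two cycles with same-bank conflicts
```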
One prior solution involves reorganizing the interleaver structure as a conflict free interleaver to ensure that contention into the same bank is avoided. Conflict free interleavers, however, result in regularities in the interleaving structure that can result in reduced coding gain and susceptibility to jamming. Conflict free interleavers can be defined for particular degrees of parallelism, but may not be conflict free for all potentially useful degrees of parallelism. Additionally, many communications systems must comply with predefined interleaver structures which are not conflict free.
It has been recognized that it would be advantageous to develop a technique for reducing contention in a parallel turbo decoder.
One embodiment of the present invention is a method for resolving access contention in a turbo decoder. The turbo decoder can include a plurality of subdecoders and a shared interleaver memory. The method includes providing a plurality of buffer memories and associating a buffer memory with each subdecoder to enable time shifting of accesses to the shared interleaver memory. The method also includes rescheduling the accesses to the shared interleaver memory based on the interleaver pattern to prevent simultaneous accesses to a single bank of the shared interleaver memory.
Another embodiment of the present invention includes a memory arbitrator for a parallel turbo decoder.
Reference will now be made to the exemplary embodiments illustrated in the drawings, and specific language will be used herein to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the inventions as illustrated herein, which would occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the invention.
The shared interleaver memory 102 may be broken into a number of banks P. For example, P may be set equal to N, providing one bank for each subdecoder, although this is not essential. Each bank can be a single port memory, and thus a P bank shared interleaver memory can support P simultaneous accesses, provided that each access is to a different bank. The buffer memories 106 provide buffering which can be used to resolve access contention when two or more subdecoders attempt to write into the same bank of the shared interleaver memory.
Conflicting access attempts can be resolved by using the buffer memory to reschedule accesses, as will now be explained. As a first example, contention resolution for write access is illustrated. For write access buffering, each buffer memory is configured to buffer write accesses to the shared interleaver memory by the associated subdecoder. In one exemplary embodiment, the buffer memory may be implemented by a first-in first-out (FIFO) memory. For cycles in which there is a conflict, the memory access is rescheduled to a later time, and the data is held in the buffer memory until the write can be performed. The table in the accompanying drawing illustrates this rescheduling for an exemplary interleaver.
As can be seen, a buffer memory depth of 2 provides adequate buffering for this exemplary interleaver. The total number of clock cycles required to complete the writing into the shared interleaver memory is 10 cycles; 8 cycles corresponding to the length of the sub-blocks plus 2 additional cycles of overhead to resolve the access contention. In general, an interleaver of length L will use L/N+L0 clock cycles to write the data into the shared interleaver memory, where L0 is the additional overhead due to rescheduled accesses to avoid conflicts. For typical interleavers, L0 is approximately 5 to 10% of L.
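A behavioral simulation sketch of this FIFO write buffering, under the same illustrative pattern and bank-mapping assumptions as above; it is not the patented circuit, and it services the subdecoders in fixed index order:

```python
from collections import deque

def simulate_fifo_writes(pattern, n_sub, n_banks):
    # One result per subdecoder per cycle enters its FIFO; each cycle the
    # arbitrator completes at most one buffered write per bank, oldest
    # entry first. Returns (total cycles, deepest FIFO depth observed).
    sub_len = len(pattern) // n_sub
    bank_size = len(pattern) // n_banks
    fifos = [deque() for _ in range(n_sub)]
    cycle = max_depth = 0
    pending = len(pattern)
    while pending:
        if cycle < sub_len:                      # subdecoders still producing
            for j in range(n_sub):
                fifos[j].append(pattern[j * sub_len + cycle])
        max_depth = max(max_depth, max(len(f) for f in fifos))
        busy = set()                             # banks accessed this cycle
        for f in fifos:
            if not f:
                continue
            bank = f[0] // bank_size
            if bank not in busy:
                busy.add(bank)
                f.popleft()
                pending -= 1
        cycle += 1
    return cycle, max_depth

print(simulate_fifo_writes([6, 2, 0, 5, 7, 1, 4, 3], 2, 2))
# (6, 3): 2 overhead cycles and a worst-case depth of 3 for this pattern
```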
In accordance with another embodiment of the present invention, the buffer memory may include a last-in first-out (LIFO) memory, as is known in the art. Operation in such a case is illustrated by the table in the accompanying drawing.
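Relative to the FIFO sketch above, a LIFO buffer changes only which end of each buffer the arbitrator services; because every buffered entry carries its own permuted address, the final memory contents are identical, and only the conflict pattern and required depth differ. A minimal LIFO service step, reusing the names from the sketch above:

```python
def lifo_service(fifos, bank_size):
    # One arbitration cycle with LIFO buffers: service the newest entry
    # in each buffer, at most one write per bank. Returns writes done.
    busy, done = set(), 0
    for f in fifos:
        if f:
            bank = f[-1] // bank_size   # newest entry, vs. f[0] above
            if bank not in busy:
                busy.add(bank)
                f.pop()                 # pop() instead of popleft()
                done += 1
    return done
```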
A detailed implementation of a write-side memory arbitrator in accordance with an embodiment of the present invention is illustrated in the accompanying drawing.
Turning to the rescheduling strategy in more general terms, one exemplary technique is round-robin scheduling, where the subdecoders are granted access to each bank on a regular rotating basis. For example, at t=1, the first subdecoder has access to the first bank, the second subdecoder to the second bank, and so forth. At t=2, the first subdecoder has access to the second bank, the second subdecoder to the third bank, and so on. Although this approach is simple to implement, it can result in relatively inefficient operation, since a subdecoder whose pending access targets a different bank must wait for its turn.
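In code, the round-robin rotation amounts to a single modular expression (0-based indices assumed for illustration):

```python
def round_robin_bank(j, t, n_banks):
    # At cycle t, subdecoder j is granted bank (j + t) % n_banks, so
    # every subdecoder sees every bank on a regular rotation.
    return (j + t) % n_banks
```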
For example, interleavers typically have somewhat random permutation patterns, and thus the banks to be written to tend to be somewhat random. A better rescheduling approach is to allow most of the accesses to occur at their nominal times and reschedule only the few that have conflicts. Whenever two or more accesses conflict, there is a choice as to which access is allowed to proceed and which is rescheduled. This choice can affect later potential conflicts, as exemplified by the table in the accompanying drawing.
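One illustrative way to bias this choice, not taken from the patent, is to grant the subdecoder whose buffer is currently deepest, so that no single buffer grows without bound; the service loop of the earlier sketch would then iterate in this order instead of fixed index order:

```python
def grant_order(fifos):
    # Indices of the subdecoders, deepest buffer first: the buffer most
    # at risk of overflow gets priority when its bank is contended.
    # This heuristic is an assumption for illustration only.
    return sorted(range(len(fifos)), key=lambda j: -len(fifos[j]))
```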
A detailed implementation of a read-side memory arbitrator in accordance with an embodiment of the present invention is illustrated in the accompanying drawing.
The read-side memory arbitrator can include means for sequentially executing a plurality of buffered memory access requests. For example, read accesses by the buffer memories can be controlled by a scheduler 604, which reschedules access to the shared interleaver memory by the plurality of buffer memories, for example by controlling the buffer memories and the multiplexer. Read access to the shared interleaver memory by a particular subdecoder can be advanced from a nominally required read time by reading the data into the buffer memory associated with that particular subdecoder before it is normally required, e.g., to remove an access contention with another subdecoder. The rescheduling of read accesses can be precomputed and stored in a read only memory. The scheduler can therefore include means for determining the plurality of rescheduled times and means for selecting among the rescheduled times, as discussed above.
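A sketch of such offline precomputation: each read is greedily advanced to the latest conflict-free cycle at or before the time its subdecoder needs the data, and the resulting table could be stored in a read only memory. The lead parameter, the greedy rule, and the bank mapping are all assumptions for illustration, and per-subdecoder port limits are ignored:

```python
def precompute_read_schedule(pattern, n_sub, n_banks, lead=2):
    # schedule[c][b] = (subdecoder, address) read from bank b at cycle c.
    # Reads may start `lead` cycles before decoding, so the value needed
    # at nominal time t must be read by absolute cycle t + lead.
    sub_len = len(pattern) // n_sub
    bank_size = len(pattern) // n_banks
    schedule = [[None] * n_banks for _ in range(sub_len + lead)]
    for j in range(n_sub):
        for t in range(sub_len):                 # nominal read time
            addr = pattern[j * sub_len + t]
            bank = addr // bank_size
            for c in range(t + lead, -1, -1):    # latest free cycle first
                if schedule[c][bank] is None:
                    schedule[c][bank] = (j, addr)
                    break
            else:
                raise ValueError("lead of %d cycles too small" % lead)
    return schedule

sched = precompute_read_schedule([6, 2, 0, 5, 7, 1, 4, 3], 2, 2)
```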
Finally, the flowchart of the accompanying drawing illustrates a method for resolving access contention in a parallel turbo decoder having a plurality of subdecoders and a shared interleaver memory, in accordance with an embodiment of the present invention. The method includes providing a plurality of buffer memories.
The method also includes associating 904 a buffer memory with each subdecoder to enable time shifting of accesses to the shared interleaver memory. For example, a buffer memory macro may be included with the subdecoder macro in FPGA or ASIC definitions. The interconnect of the buffer memory to the subdecoder may be defined by the subdecoder macro, by the buffer memory macro, or separately. An ASIC or FPGA may be configured to physically place each buffer memory near the corresponding subdecoder, simplifying the interconnect between the buffer memories and subdecoders. For example, a macro can be defined where the parallelism N is a parameter. Because of the regularity of the structure provided by associating the buffer memories with the subdecoders, this macro can scale as N is changed. For example, a hard macro optimizing the placement and routing of the subdecoders and buffer memories can be used.
The method also includes rescheduling 906 the accesses to the shared interleaver memory by the plurality of buffer memories to prevent simultaneous accesses to a single bank of the interleaver memory. This rescheduling can be based on the interleaver pattern, as described above. For example, read access contention can be resolved by performing a read access to the shared interleaver memory earlier than its nominally required time and buffering the read result in the buffer memory associated with the subdecoder. Alternately, write access contention can be resolved by performing a write access to the shared interleaver memory later than its nominally required time, holding the write data in the buffer memory associated with the subdecoder until the actual write is performed. Rescheduling may be pre-calculated, hardwired (e.g., using a read only memory as control), or dynamically determined (e.g., using a processor).
From the foregoing, and reiterating to some extent, it will be appreciated that several advantages are provided by the presently disclosed inventive techniques. The foregoing examples are necessarily limited in complexity in the interest of brevity. The principles described herein can be extended for parallel decoders using more than N=2 subdecoders. This extension will be apparent to one of skill in the art having possession of this disclosure. In general, a memory arbitrator structure in accordance with embodiments of the present invention uses relatively shallow depth buffer memories associated with each subdecoder to allow the rescheduling of accesses (read side or write side) to the shared memory. This helps to prevent access conflicts in the shared interleaver memory, enabling increased throughput in a parallel turbo decoder. The memory arbitrator structure is scalable with the degree of parallelism and provides straightforward interconnect between the subdecoders, buffer memories, multiplexer, and shared interleaver memory, making it amenable to macro definitions in an ASIC or FPGA library.
It is to be understood that the above-referenced arrangements are illustrative of the application for the principles of the present invention. It will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth in the claims.
Inventors: Hall, Eric K.; Abbaszadeh, Ayyoob; Ertel, Richard