The present invention is a method and system for providing a scalable memory building block device. The memory building block device includes a plurality of separate memory arrays, decode logic for selecting only one bit from the plurality of memory arrays, and output means for providing that one bit as the output of the device, such that the memory building block device generates only a single bit as its output.
Claims
1. A memory device comprising:
a plurality of separate memory arrays;
decode logic for selecting only one bit from said plurality of memory arrays; and
output means for providing said only one bit as an output of said memory device, wherein said device generates as its output only one bit.
7. A memory device comprising:
a plurality of separate memory arrays;
decode logic for selecting only one bit from said plurality of memory arrays;
output means for providing said only one bit as an output of said memory device, wherein said device generates as its output only one bit;
said memory device including a first memory array separated from a second memory array by a first decode logic;
said memory device including a third memory array separated from a fourth memory array by a second decode logic;
said first memory array being separated from said third memory array by a third decode logic; and
said second memory array being separated from said fourth memory array by a fourth decode logic.
13. A scalable memory having a particular width wordline, said memory comprising:
a plurality of memory devices, each one of said memory devices including:
a plurality of separate memory arrays;
decode logic for selecting only one bit from said plurality of memory arrays; and
output means for providing said only one bit as an output of said memory device, wherein said device generates as its output only one bit;
said particular width wordline being a particular number of bits wide;
each one of said plurality of memory devices providing a different one bit of said particular number of bits;
said particular width wordline being increased from said particular number of bits wide to a second number of bits wide, wherein additional bits are added to said wordline; and
a second plurality of memory devices being added to said memory, wherein each one of said second plurality of memory devices provides only one of said additional bits.
17. A method for scaling a memory having a particular width wordline, said method comprising the steps of:
providing a plurality of memory devices, each one of said memory devices including:
a plurality of separate memory arrays;
decode logic for selecting only one bit from said plurality of memory arrays; and
output means for providing said only one bit as an output of said memory device, wherein said device generates as its output only one bit;
said particular width wordline being a particular number of bits wide;
each one of said plurality of memory devices providing a different one bit of said particular number of bits;
increasing said particular width wordline from said particular number of bits wide to a second number of bits wide, wherein additional bits are added to said wordline;
adding a second plurality of memory devices to said memory, wherein each one of said second plurality of memory devices provides only one of said additional bits.
14. A method for generating a memory device, said method comprising the steps of:
said memory device including a plurality of separate memory arrays;
selecting only one bit from said plurality of memory arrays;
providing said only one bit as an output of said memory device, wherein said device generates as its output only one bit;
said memory device being a four-quadrant memory device, wherein said plurality of separate memory arrays includes four memory arrays;
selecting a row of said first and second quadrants using first row decode logic located between first and second quadrants of said four quadrants;
selecting a row of said third and fourth quadrants using second row decode logic located between third and fourth quadrants of said four quadrants;
selecting a bit from either said selected row of said first quadrant or said selected row of said third quadrant using first column decode logic located between said first and third quadrants;
selecting a bit from either said selected row of said second quadrant or said selected row of said fourth quadrant using second column decode logic located between said second and fourth quadrants; and
providing said only one bit as said output using an I/O portion located between said first row decode logic, second row decode logic, first column decode logic, and second column decode logic, wherein said four-quadrant device produces only one bit as an output, further wherein said output bit is either said bit selected by said first column decode logic or said second column decode logic.
2. The memory device according to
each one of said plurality of separate memory arrays being separated from all other memory arrays of said plurality of separate memory arrays by one of a plurality of decode logic sections.
3. The memory device according to
said memory device being a four-quadrant memory device, wherein said plurality of separate memory arrays includes four memory arrays; and
said four-quadrant memory device outputting a single bit output.
4. The memory device according to
first row decode logic located between first and second quadrants of said four quadrants for selecting a row of said first and second quadrants;
second row decode logic located between third and fourth quadrants of said four quadrants for selecting a row of said third and fourth quadrants;
first column decode logic located between said first and third quadrants for selecting a bit from either said selected row of said first quadrant or said selected row of said third quadrant;
second column decode logic located between said second and fourth quadrants for selecting a bit from either said selected row of said second quadrant or said selected row of said fourth quadrant; and
an I/O portion located between said first row decode logic, second row decode logic, first column decode logic, and second column decode logic for providing said only one bit as said output, wherein said four-quadrant device produces only one bit as an output, further wherein said output bit is either said bit selected by said first column decode logic or said second column decode logic.
5. The memory device according to
said first column decode logic including an upper decode logic, a lower decode logic, and a logic selector; and
said second column decode logic including a second upper decode logic, a second lower decode logic, and a second logic selector.
6. The memory device according to
said upper decode logic including a first plurality of multiplexers, and generating a first output;
said lower decode logic including a second plurality of multiplexers, and generating a second output;
said logic selector receiving as its inputs said first output and said second output, and generating a third output;
said second upper decode logic including a third plurality of multiplexers, and generating a fourth output;
said second lower decode logic including a fourth plurality of multiplexers, and generating a fifth output;
said second logic selector receiving as its inputs said fourth output and said fifth output, and generating a sixth output; and
said I/O portion receiving as its input said third output and sixth output, and generating said output.
8. The memory device according to
said first decode logic for selecting a first bit from either said first memory array or said second memory array and for providing said selected first bit as an output of said first decode logic; and
said second decode logic for selecting a second bit from either said third memory array or said fourth memory array and for providing said selected second bit as an output of said second decode logic.
9. The memory device according to
an I/O portion for selecting either said first bit or said second bit as a selected bit; and
said I/O portion for providing said selected bit as an output of said memory device, wherein only one bit is provided as said output.
10. The memory device according to
said I/O portion being located between said first, second, third, and fourth decode logic.
11. The memory device according to
said first decode logic including a first column decode logic, a second column decode logic, and a logic selector;
said first column decode logic for selecting a first column decode logic output bit from said first memory array;
said second column decode logic for selecting a second column decode logic output bit from said second memory array; and
said logic selector for selecting either said first column decode logic output bit or said second column decode logic output bit and for providing either said first column decode logic output bit or said second column decode logic output bit as said selected first bit as said output of said first decode logic.
12. The memory device according to
said second decode logic including a third column decode logic, a fourth column decode logic, and a second logic selector;
said third column decode logic for selecting a third column decode logic output bit from said third memory array;
said fourth column decode logic for selecting a fourth column decode logic output bit from said fourth memory array; and
said second logic selector for selecting either said third column decode logic output bit or said fourth column decode logic output bit, and for providing either said third column decode logic output bit or said fourth column decode logic output bit as said selected second bit as said output of said second decode logic.
15. The method according to
said first column decode logic includes an upper decode logic, a lower decode logic, and a logic selector; and
said second column decode logic includes a second upper decode logic, a second lower decode logic, and a second logic selector.
16. The method according to
generating a first output using a first plurality of multiplexers included in said upper decode logic;
generating a second output using a second plurality of multiplexers included in said lower decode logic;
providing said first output and said second output to said logic selector as its inputs;
generating a third output using said logic selector;
generating a fourth output using a third plurality of multiplexers included in said second upper decode logic;
generating a fifth output using a fourth plurality of multiplexers included in said second lower decode logic;
providing said fourth output and said fifth output to said second logic selector as its inputs;
generating a sixth output using said second logic selector;
providing said third output and said sixth output to said I/O portion as its inputs; and
generating said output using said I/O portion.
Description
1. Technical Field
The present invention relates generally to memory circuits, and more particularly to a memory building block that is scalable.
2. Description of the Related Art
Memory devices typically include a memory core having an array of memory cells for storage and retrieval of data. Today's memory devices typically include at least one memory cell array organized in rows and columns of memory cells. The rows of memory cells are coupled through word lines and the columns of memory cells are coupled through bit lines. Each column can have a single bit line, for single-ended memory cells, or a complementary bit line pair, for dual-ended memory cells. Although many architectures are possible, a row or word line decoder including a plurality of word line drivers and a column decoder are provided for decoding an address for accessing a particular location of the memory array. Address decode circuitry is included to select a word line based upon the value of the address provided to the memory device.
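Purely as an illustration of this conventional row/column organization (not of any particular device), the address split performed by such decode circuitry can be sketched in a few lines of Python; the field widths and names below are examples only:

    # Illustrative model: a memory array organized as 2**ROW_BITS rows by
    # 2**COL_BITS single-ended columns (example sizes only).
    ROW_BITS = 6   # 64 word lines
    COL_BITS = 6   # 64 bit lines

    def decode_address(address):
        # Upper address bits select the word line (row); lower bits select the column.
        row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)
        col = address & ((1 << COL_BITS) - 1)
        return row, col

    def read_bit(array, address):
        # Activate one word line, then pick one bit from the selected row.
        row, col = decode_address(address)
        return array[row][col]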
Large memory arrays require large drivers and have high internal delays. In addition, compiled memories that reach large configuration ranges can become very slow and consume considerable power due to the increased driver sizes required. A compiled memory is any memory built in a manner that allows expansion while keeping the same general functionality; an example is a single-port SRAM that may be compiled to support numerous memory sizes between 16×32 and 16384×128. To produce a very large address space, memory arrays become large, which results in long access times across that address space.
Therefore, a need exists for a method and device for a scalable memory building block that provides improved access times and supports large address spaces.
The present invention is a method and system for providing a scalable memory building block device. The memory building block device includes a plurality of separate memory arrays, decode logic for selecting only one bit from the plurality of memory arrays, and output means for providing that one bit as the output of the device, such that the memory building block device generates only a single bit as its output.
The above as well as additional objectives, features, and advantages of the present invention will become apparent in the following detailed written description.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects, and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings.
The present invention is a memory building block and a method for using the memory building block to create a larger memory.
Memory building block 10 includes left portion 20, which includes memory core 12 and memory core 16. Left portion 20 also includes decode logic 24, which is used to select one bit from left portion 20 and provide that bit to central portion 26. Decode logic 24 includes column decode logic section 28, column decode logic section 30, and a top/bottom decode logic section 32. Column decode logic section 28 is used to select one of the 64 columns of array 12. Column decode logic section 30 is used to select one of the 64 columns of array 16. Top/bottom decode section 32 is used to select either the output provided by column decode logic section 28 or the output provided by column decode logic section 30. Top/bottom decode section 32 provides the selected output to central portion 26.
Memory building block 10 also includes right portion 22. Right portion 22 includes memory core 14 and memory core 18. In addition, right portion 22 includes decode logic 34, which is used to select one bit from right portion 22 and provide that bit to central portion 26. Decode logic 34 includes column decode logic section 36, column decode logic section 38, and a top/bottom decode logic section 40.
Column decode logic section 36 is used to select one of the 64 columns of array 14. Column decode logic section 38 is used to select one of the 64 columns of array 18. Top/bottom decode section 40 is used to select either the output provided by column decode logic section 36 or the output provided by column decode logic section 38. Top/bottom decode section 40 provides the selected output to central portion 26.
Central portion 26 receives address inputs and provides the output for memory block 10. Central portion 26 receives address information and provides address decode and control information to top row decode 42 and to bottom row decode 44. Top row decode 42 is used to select one of the 64 rows of either array 12 or array 14. Bottom row decode 44 is used to select one of the 64 rows of either array 16 or array 18. Central portion 26 also provides control information to devices in decode logic 24 and decode logic 34. Central portion 26 receives the output bit from top/bottom decode 32 and the output bit from top/bottom decode 40. Central portion 26 then selects one of these two bits and provides that one selected bit as the output of memory block 10.
Each array is arranged in rows and columns. Thus, for the depicted example, there are 64 rows and 64 columns of memory cells in each array. Row decode/wordline drivers 42 and 44 are used to select one row of memory cells at a time. Thus, row decode/wordline drivers 42 select one of the rows of array 12 or one of the rows of array 14. Similarly, row decode/wordline drivers 44 select one of the rows of array 16 or one of the rows of array 18. Because memory arrays 12, 14, 16, and 18 are so small, none of the drivers included in row decode/wordline drivers 42 or 44 needs to be very large.
Control circuitry and address predecoder 80 is included within central portion 26 and is used to provide control signals to both row decode/wordline drivers 42 and 44 to indicate the row to be selected. In addition, control circuitry and address predecoder 80 provides control signals to sense amplifiers and multiplexers throughout memory building block 10 in order to select the bit that is indicated by the address received by control circuitry and address predecoder 80.
Each memory cell is coupled to a sense amplifier utilizing a pair of bit lines. Each sense amplifier (amp) amplifies the differential voltage placed thereon from accessing a memory cell. The output of each sense amp is provided to two-stage 64:1 decode logic implemented utilizing a plurality of multiplexers. These multiplexers may be implemented using, for example, 8:1 multiplexers.
For example, first column decode logic section 28 includes sense amp 82, sense amp 84, and multiplexers 86, 88, and 90. Cell 60 is coupled to sense amp 82, and cell 62 is coupled to sense amp 84. Multiplexer 86 is coupled to a plurality of sense amps, such as sense amps 82 and 84 and other sense amps that are not shown. Multiplexer 88 receives as its inputs the outputs from multiplexers 86, 90, and other multiplexers that are not shown. All of these multiplexers together make up a two-stage, 64:1 column decode logic. The other column decode logic sections 30, 36, and 38 operate in a similar fashion and include sense amps and multiplexers similar to those described above.
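As a simplified sketch of the two-stage, 64:1 column decode described above, eight first-stage 8:1 multiplexers feed one second-stage 8:1 multiplexer. The Python below is illustrative only; the function names and the assignment of address bits to stages are assumptions, not taken from the specification:

    def mux8(inputs, select):
        # 8:1 multiplexer: pass one of eight inputs according to a 3-bit select.
        return inputs[select & 0x7]

    def column_decode_64to1(sense_amp_bits, column_select):
        # sense_amp_bits: 64 sense-amplifier outputs, one per column.
        # column_select: 6-bit column address; here the low 3 bits drive the
        # first stage and the high 3 bits drive the second stage (assumed ordering).
        first_stage = [mux8(sense_amp_bits[g * 8:(g + 1) * 8], column_select & 0x7)
                       for g in range(8)]
        return mux8(first_stage, (column_select >> 3) & 0x7)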
Top/bottom decode 32 includes a multiplexer 92 that receives as its inputs the output from multiplexer 88 and the output from multiplexer 94. Thus, multiplexer 92 is used to select a bit from either array 12 or array 16. In a similar manner, top/bottom decode 40 includes a multiplexer 100 that receives as its inputs the output from multiplexer 102 and the output from multiplexer 104. Multiplexer 100 is used to select a bit from either array 14 or array 18.
I/O portion 26 includes a multiplexer 106 that receives as its inputs the output from multiplexer 92 and the output from multiplexer 100. Thus, multiplexer 106 is used to select a bit from either the left portion 20 of block 10 or the right portion 22 of block 10. Multiplexer 106 then provides its output, depicted as 108, as the output of block 10. As described above, multiplexer 106 provides a single bit output.
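A minimal sketch of the remaining selection down to the single output bit, continuing the same illustrative Python model (the select-signal names are assumptions, though the multiplexer reference numerals follow the description above):

    def mux2(a, b, select):
        # 2:1 multiplexer: return a when select is 0, b when select is 1.
        return b if select else a

    def block_output_bit(left_top, left_bottom, right_top, right_bottom,
                         top_bottom_select, left_right_select):
        # left_top .. right_bottom: the one bit selected from each quadrant's columns.
        left_bit = mux2(left_top, left_bottom, top_bottom_select)     # multiplexer 92
        right_bit = mux2(right_top, right_bottom, top_bottom_select)  # multiplexer 100
        return mux2(left_bit, right_bit, left_right_select)           # multiplexer 106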
Any number of memory building blocks may be used to form a larger memory. In this approach, each memory cell array remains small, such as 64×64 in the described example, and these small arrays are then combined to produce a large memory. The approach of the present invention thus provides a large memory with a very fast access time.
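One way to picture the composition, as an illustrative sketch rather than a prescribed implementation: each one-bit building block supplies a different bit of the word, all blocks receive the same address, and widening the word simply means adding blocks. The function name below is hypothetical:

    def read_word(block_reads, address):
        # block_reads: one read function per one-bit building block; the list
        # length is the word width. Adding bits to the word line corresponds
        # to appending more blocks (and their read functions) to this list.
        word = 0
        for bit_position, read_block in enumerate(block_reads):
            word |= (read_block(address) & 1) << bit_position
        return word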
The present invention is a 1-bit memory building block. In the example depicted, the building block provides a 16K address space. Each of the four quadrants is a memory core that is a 64×64 array of bitcells. Row decode and wordline drivers are located down the middle of the memory block, at the top and bottom of the block. As the bitlines feed inward, they pass through a two-stage, 64:1 column decoder built from 8:1 decoders, and then pass into a 2:1 multiplexer for top or bottom decoding. In the center I/O block of the memory building block, address predecoding is performed in addition to decoding from the right or left portions of the building block. Because the memory cores are small cell arrays, none of the drivers has to be very large. Memory self-timing is simple as well: with the small array sizes, the self-time compensation required for RC effects is minimized. An inverter-based sense scheme or a delay-string-triggered latching sense amplifier could be used.
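For the 16K×1 example above (four 64×64 quadrants), the 14-bit address might be split as sketched below. The exact field ordering chosen by the predecoder is not stated in the description, so the ordering here is an assumption for illustration only:

    def split_block_address(address):
        # 14-bit address for a 16K x 1 four-quadrant block (assumed field order):
        col = address & 0x3F             # 6 bits: one of 64 columns in a quadrant
        row = (address >> 6) & 0x3F      # 6 bits: one of 64 rows in a quadrant
        top_bottom = (address >> 12) & 1 # selects the upper or lower array pair
        left_right = (address >> 13) & 1 # selects the left or right half in the I/O block
        return left_right, top_bottom, row, col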
The building block of the present invention may be used as a standalone memory, in which case the memory core cells have edge cells, or the blocks may be built for abutment with other memory building blocks and tiled in a compiled manner, as depicted in the accompanying drawings.
The present invention provides many advantages over the prior art. Power consumption is minimized due to the quadrant approach and the short bitlines and wordlines that result. The wordline/predecode/write drivers are small. Power consumption is more evenly distributed through each building block and not centralized as it is in prior art memories. Access time is greatly reduced for the memory building block due to the minimal bitline and wordline lengths.
Each building block is fully independent of the others. If the present invention is adopted in a metal programmable chip, for example, any one or more building blocks could easily be left unused so that their metal area could be used for routing. If the building blocks are too large, any quadrant could easily be metal programmed to be disabled.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Jung, Chang Ho, Brown, Jeffrey Scott, Chafin, Craig R.
Patent | Priority | Assignee | Title |
5953261, | Nov 28 1995 | Mitsubishi Denki Kabushiki Kaisha | Semiconductor memory device having data input/output circuit of small occupied area capable of high-speed data input/output |
5973984, | Oct 06 1997 | Mitsubishi Denki Kabushiki Kaisha | Static semiconductor memory device with reduced power consumption, chip occupied area and access time |
6181626, | Apr 03 2000 | AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED | Self-timing circuit for semiconductor memory devices |
6434074, | Sep 04 2001 | AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED | Sense amplifier imbalance compensation for memory self-timed circuits |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
May 02 2003 | JUNG, CHANG HO | LSI Logic Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 014057 | /0739 | |
May 02 2003 | CHAFIN, CRAIG R | LSI Logic Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 014057 | /0739 | |
May 02 2003 | BROWN, JEFFREY SCOTT | LSI Logic Corporation | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 014057 | /0739 | |
May 08 2003 | LSI Logic Corporation | (assignment on the face of the patent) | / | |||
Apr 06 2007 | LSI Logic Corporation | LSI Corporation | CHANGE OF NAME SEE DOCUMENT FOR DETAILS | 033102 | /0270 | |
May 06 2014 | Agere Systems LLC | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 032856 | /0031 | |
May 06 2014 | LSI Corporation | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 032856 | /0031 | |
Aug 14 2014 | LSI Corporation | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 035390 | /0388 | |
Feb 01 2016 | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | LSI Corporation | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS RELEASES RF 032856-0031 | 037684 | /0039 | |
Feb 01 2016 | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | Agere Systems LLC | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS RELEASES RF 032856-0031 | 037684 | /0039 | |
Feb 01 2016 | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | BANK OF AMERICA, N A , AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 037808 | /0001 | |
Jan 19 2017 | BANK OF AMERICA, N A , AS COLLATERAL AGENT | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS | 041710 | /0001 |
Date | Maintenance Fee Events |
Apr 01 2008 | ASPN: Payor Number Assigned. |
Apr 30 2009 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Mar 07 2013 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Jun 16 2017 | REM: Maintenance Fee Reminder Mailed. |
Dec 04 2017 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule |
Nov 08 2008 | 4 years fee payment window open |
May 08 2009 | 6 months grace period start (w surcharge) |
Nov 08 2009 | patent expiry (for year 4) |
Nov 08 2011 | 2 years to revive unintentionally abandoned end. (for year 4) |
Nov 08 2012 | 8 years fee payment window open |
May 08 2013 | 6 months grace period start (w surcharge) |
Nov 08 2013 | patent expiry (for year 8) |
Nov 08 2015 | 2 years to revive unintentionally abandoned end. (for year 8) |
Nov 08 2016 | 12 years fee payment window open |
May 08 2017 | 6 months grace period start (w surcharge) |
Nov 08 2017 | patent expiry (for year 12) |
Nov 08 2019 | 2 years to revive unintentionally abandoned end. (for year 12) |