Techniques related to a high bandwidth interface (HBI) for communication between multiple host devices on an interposer are described. In an example, the HBI repurposes a portion of the high bandwidth memory (HBM) interface, such as the physical layer. A computing system is provided. The computing system includes a first host device and at least a second host device. The first host device is a first die on an interposer and the second host device is a second die on the interposer. The first host device and the second host device are interconnected via at least one HBI. The HBI implements a layered protocol for communication between the first host device and the second host device. The layered protocol includes a physical layer protocol that is configured according to an HBM physical layer protocol.
1. A computing system, comprising:
a first host device comprising a first die on an interposer; and
at least one second host device comprising a second die on the interposer, wherein:
the first host device and the at least one second host device are interconnected via at least one high bandwidth interface (HBI) that implements a layered protocol for communication between the first host device and the at least one second host device, wherein the layered protocol includes a physical layer protocol that is configured according to a high bandwidth memory (HBM) physical layer protocol.
2. The computing system of
3. The computing system of
4. The computing system of
5. The computing system of
6. The computing system of
7. The computing system of
8. The computing system of
9. The computing system of
10. The computing system of
11. The computing system of
12. A method for communication between devices on an interposer, comprising:
sending at least a first signal from a first device on the interposer to a second device on the interposer via a high bandwidth interface (HBI), wherein sending the first signal via the HBI comprises sending the first signal using a layered protocol including a physical layer protocol that is configured according to a high bandwidth memory (HBM) physical layer protocol; and
receiving at least a second signal from the second device on the interposer via the HBI.
13. The method of
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
19. The method of
20. The method of
Examples of the present disclosure generally relate to electronic circuits and, in particular, to a high bandwidth chip-to-chip interface using the high bandwidth memory (HBM) physical interface.
Electronic devices, such as tablets, computers, copiers, digital cameras, smart phones, control systems, and automated teller machines, among others, often employ electronic components such as dies that are connected by various interconnect components. The dies may include memory, logic, or other integrated circuit (IC) devices.
ICs may be implemented to perform specified functions. Example ICs include mask-programmable ICs, such as general purpose ICs, application specific integrated circuits (ASICs), and the like, and field programmable ICs, such as field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like.
ICs have become more “dense” over time, i.e., more logic features have been implemented in an IC. More recently, Stacked-Silicon Interconnect Technology (“SSIT”) allows for more than one semiconductor die to be placed in a single package. SSIT ICs may be used to address increased demand for having various ICs within a single package. Conventionally, SSIT products are implemented using an interposer that includes an interposer substrate layer with through-silicon-vias (TSVs) and additional metallization layers built on top of the interposer substrate layer. The interposer provides connectivity between the IC dies and the package substrate.
Chip-to-chip interfaces (also called interconnects) provide a bridge between host devices, such as between ICs, systems-on-chip (SoCs), FPGAs, ASICs, central processing units (CPUs), graphics processing units (GPUs), etc.
As the data rates that systems can process increase, providing interfaces that can keep up with the processing speed of the chip becomes increasingly difficult. Power-efficient, robust, and low-cost chip-to-chip interfaces are desirable to meet the needs of high-performance systems.
High speed chip-to-chip interfaces sometimes involve tradeoffs among pin count, input/output (I/O) die area, power, etc. Some examples of chip-to-chip interfaces include low voltage complementary metal oxide semiconductor (LVCMOS) I/O, low voltage differential signaling (LVDS) I/O, and high speed serializer/deserializer (SERDES) I/O.
High bandwidth memory (HBM) is a high-performance random access memory (RAM) interface for 3D-stacked dynamic RAM (DRAM) and has been adopted by the Joint Electron Device Engineering Council (JEDEC) standards body. The HBM standard defines a new type of physical interface for communication between an HBM DRAM device and a host device such as an ASIC, CPU, GPU, or FPGA. The HBM physical interface offers a better tradeoff between I/O die area and power than certain other interfaces, and HBM can achieve high bandwidth using less power in a small form factor.
For some systems, a high speed interface is desirable to efficiently integrate multiple host devices on a single interposer. Thus, techniques for a high bandwidth chip-to-chip interface would be useful.
Techniques related to a high bandwidth chip-to-chip interface using the high bandwidth memory (HBM) physical interface are described.
In an example, a computing system is provided. The computing system includes a first host device and at least a second host device. The first host device is a first die on an interposer and the second host device is a second die on the interposer. The first host device and the second host device are interconnected via at least one high bandwidth interface (HBI). The HBI implements a layered protocol for communication between the first host device and the second host device. The layered protocol includes a physical layer protocol that is configured according to a high bandwidth memory (HBM) physical layer protocol.
In another example, a method for communication between devices on an interposer is provided. The method includes sending at least a first signal from a first device on the interposer to a second device on the interposer via a HBI. Sending the first signal via the HBI includes sending the first signal using a layered protocol. The layered protocol includes a physical layer protocol that is configured according to a HBM physical layer protocol. The method includes receiving at least a second signal from the second device on the interposer via the HBI.
These and other aspects may be understood with reference to the following detailed description.
So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of its scope.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.
Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or if not so explicitly described.
Examples of the disclosure relate to techniques and apparatus for a high bandwidth interface (HBI), such as a high speed chip-to-chip interface, that at least partially uses the high bandwidth memory (HBM) physical interface to efficiently integrate host devices on a single interposer. In some examples, the HBI interface uses HBM at the physical layer (PHY) and uses different protocols or adjustments of the HBM for other layers.
Before describing exemplary implementations illustratively depicted in the several figures, a general introduction is provided to further understanding.
An Example Stacked Silicon Interconnect Technology (SSIT) Product
Stacked silicon interconnect technology (SSIT) involves packaging multiple integrated circuit (IC) dies into a single package that includes an interposer and a package substrate. Utilizing SSIT expands IC products, such as FPGA products, into higher density, lower power, greater functionality, and application specific platform solutions with low cost and fast-to-market advantages.
The integrated chip package 110 includes a plurality of IC dies 114 (e.g., IC dies 114(1) and 114(2) are shown by example) connected optionally by a through-silicon-via (TSV) interposer 112 (also referred to as “interposer 112”) to a package substrate 122. The chip package 110 may also have an overmold covering the IC dies 114 (not shown). The interposer 112 includes circuitry (not shown) for electrically connecting the IC dies 114 to circuitry (not shown) of the package substrate 122. The circuitry of the interposer 112 may optionally include transistors. Package bumps 132, also known as “C4 bumps,” are utilized to provide an electrical connection between the circuitry of the interposer 112 and the circuitry of the package substrate 122. The package substrate 122 may be mounted and connected to a printed circuit board (PCB) 136, utilizing solder balls 134, wire bonding or other suitable technique. The PCB 136 can be mounted in the interior of a housing 102 of the electronic device 100.
The IC dies 114 are mounted to one or more surfaces of the interposer 112, or alternatively, to the package substrate 122. The IC dies 114 may be programmable logic devices, such as FPGAs, memory devices, optical devices, processors or other IC logic structures. In the example depicted in
The electrical components of the integrated chip package 110, such as the IC dies 114, communicate via traces formed on electrical interconnect components. The interconnect components having the traces can include one or more of the PCB 136, the package substrate 122, and the interposer 112, among other components.
As mentioned, the HBM standard defines a new type of physical interface for communication between an HBM DRAM device and a host device such as an ASIC, CPU, GPU, or FPGA. In one example, the printed circuit board 136 is a graphics card and the IC 114(1) is a GPU. In this case, the IC 114(1) may include a 3D engine, a display controller, and an HBM controller; and the IC 114(2) may include stacked DRAM dies and an optional base HBM controller die interconnected by through-silicon vias (TSVs) and microbumps. The interface is divided into independent channels, each operating as a data bus.
In some examples, HBM devices have up to 8 independent DRAM channels. Each DRAM channel includes two 64-bit data channels known as pseudo channels (PCs), and one command/address channel shared by the two PCs. Each PC can operate at a maximum data rate of 2000 MT/sec, double data rate, using a 1000 MHz clock. HBM features include: typically a die stack with 1-2 channels per die; 8×128b independent channels; 8 or 16 banks with optional bank grouping; 1 Kb page size; 1-8 Gbit of storage per channel; 2 Gbps (1 GHz) operation (32 GByte/sec per 128 bit channel); burst length (BL) of 4 (thus, the minimum access unit per PC is 32 bytes); 1.2 V (+/−5%) I/O and core voltage (independent); 2.5 V (+/−5%) pump voltage (VPP); unterminated I/O with nominal drive current of 6-18 mA; write data mask (DM) support; error correcting code (ECC) support by using the DM signals, 16 bits per 128 b of data (partial write not supported when ECC is used); data bus inversion (DBI) support; separate read and write data strobe (DQS) signals (differential); separate row and column command channels; command/address parity support; data parity support (in both directions); and address/data parity error indication.
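As a quick check on the figures above, the per-channel and per-stack bandwidth follow directly from the pin rate and channel width. The following is a sketch using the nominal rates quoted in this description; actual devices vary by speed grade:

```python
# Nominal HBM figures quoted above.
channel_width_bits = 128
pin_rate_gbps = 2            # 2 Gbps per pin (1 GHz clock, double data rate)

per_channel_gbytes = channel_width_bits * pin_rate_gbps / 8   # 32 GByte/sec
per_stack_gbytes = 8 * per_channel_gbytes                     # 8 channels -> 256 GByte/sec

# Minimum access unit per pseudo channel: 64-bit PC at burst length 4
min_access_bytes = 64 * 4 // 8                                # 32 bytes
```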
In some cases, however, it may be desirable to have a high-speed interface to interconnect multiple host devices, for example, on the interposer 112 (i.e., rather than a DRAM and a host device). Thus, aspects of the present disclosure relate to implementing portions of the HBM interface as a high-speed interconnect between host devices, which may be on the same interposer.
Example Chip-to-Chip High Bandwidth Interface (HBI) Using HBM
The HBI(s) 206 . . . 206n is a high performance chip-to-chip interface. The HBI(s) 206 . . . 206n may be at least partially based on the JEDEC HBM specifications. In some examples, the HBI interface uses the physical layer (PHY) and I/O as defined by the HBM specification, but uses different protocols or adjustments of the HBM for other layers. For example, since the HBI is an interconnect for host devices, the HBI may dispense with DRAM specific protocols of HBM.
HBI Compatibility with HBM
The HBI interface (e.g., HBI(s) 206 . . . 206n) may be compatible with the HBM PHY and I/O at a data rate up to 2000 MT/s. In some examples, the HBI uses the PHY in a “bit-slice mode”.
The HBI interface (e.g., HBI(s) 206 . . . 206n) may support a user-side interface at one-fourth the HBM data rate (e.g., 500 MHz). While the nominal HBM clock rate described herein may be 1000 MHz and the HBI user-side rate may be 500 MHz, other rates may be used, as actual device rates may vary depending on implementation, speed grades, etc.
HBI Device Symmetry
Portions of the HBM interface may not be symmetrical between a master (e.g., controller) and a slave (e.g., DRAM). For example, the command/address channel is unidirectional.
To ensure symmetry and interoperability with either a master or a slave HBM PHY (or both simultaneously), the HBI interface (e.g., HBI(s) 206 . . . 206n) may use only a subset of the HBM standard interface which is symmetrical, i.e., which makes it possible for either side to transmit or receive data. For example, the HBM “Command/Address”, “DERR”, and “AERR” signals may not be used by the HBI interface.
HBI Multi-Channel Support
The HBI interface (e.g., HBI(s) 206 . . . 206n) may support multiple independent communication channels.
HBI Static Configuration
The HBI interface (e.g., HBI(s) 206 . . . 206n) may be configured and calibrated once at start time. In some examples, the HBI may require little or no maintenance after the initial configuration and calibration.
HBI Dual Simplex Operation
As mentioned above, the HBI interface (e.g., HBI(s) 206 . . . 206n) may provide multiple channels. Each channel may operate in one direction (e.g., output or input).
HBI Scalability
As mentioned above, the HBI interface (e.g., HBI(s) 206 . . . 206n) may provide multiple channels. The number of HBI channels may vary depending on the application. In some examples, an HBI may use an HBM PHY consisting of 8 128-bit data channels; however, a different number of channels may be used. An HBM PHY with 8×128-bit channels may be referred to as an “HBM PHY unit”.
HBI Layered Protocol
The HBI interface (e.g., HBI(s) 206 . . . 206n) may support three protocol layers as shown in
The HBI interface (e.g., HBI(s) 206 . . . 206n) may use the physical interface as a general purpose communication channel between the chips (e.g., the host devices 202, 204 . . . 204n on the interposer 200). The PHY layer may be the first and lowest layer (also referred to as layer 0) and may refer to the circuitry that implements physical layer functions in communications. The PHY may define the electrical and physical specifications of the data connection.
The HBI layer-0 402 is the direct access to the HBM PHY. A “controller bypass” mode may be used in which PHY signals are directly exposed to the PL, and data flows continuously. The HBM standard defines eight 128-bit legacy channels, or sixteen 64-bit pseudo channels. The basic data unit in layer-0 402 is a 32-bit data word. Therefore, thirty-two layer-0 channels are available per HBM PHY unit. The HBM PHY may operate in a 4:1 SERDES (serializer/deserializer) mode, meaning that it provides bus-width conversion and corresponding clock speed conversion at a 4:1 ratio from the user's point of view.
In some examples, on the I/O side, each L0 (i.e., PHY) channel is 32-bit wide, operating at 1000 MHz DDR (2000 MT/s), while on the user side the L0 channel is seen as a single-data-rate 128-bit channel operating at 500 MHz. The subset of HBM signals available for L0 is summarized in the table 500 shown in
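The 4:1 ratio can be sanity-checked with a short calculation; this sketch assumes the nominal rates given above:

```python
# I/O side of one L0 channel
io_width_bits = 32
io_rate_mts = 2000           # 1000 MHz clock, double data rate

# User side of the same channel after 4:1 deserialization
user_width_bits = 128
user_rate_mhz = 500          # single data rate

# Raw bandwidth is preserved across the 4:1 SERDES conversion
assert io_width_bits * io_rate_mts == user_width_bits * user_rate_mhz  # 64,000 Mbit/s each
```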
For the HBI PHY IP (intellectual property), only an HBM PHY may be needed. As discussed above, the HBM PHY may be directly accessible, the HBM PHY may be in a “bit slice mode” that allows continuous data flow per 32-bit I/O word, the I/O direction may be selectable per 32-bit I/O word, and the HBM PHY may be in a 4:1 SERDES mode.
The HBI L1 may provide the DBI functionality as defined in the HBM standard. The purpose of the DBI is to reduce I/O power by minimizing the number of transitions on the data bus.
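To illustrate the idea of AC data bus inversion (inverting a byte when sending it unmodified would toggle more than half of its wires), below is a minimal single-byte-lane sketch; the reset bus state and per-byte DBI bit placement are assumptions for illustration, not details taken from the standard text above:

```python
def dbi_encode(stream, prev=0x00):
    """Invert a byte before transmission when sending it as-is would
    toggle more than 4 of the 8 data wires relative to the last value
    driven on the bus; the DBI bit tells the receiver to un-invert."""
    out = []
    for byte in stream:
        toggles = bin((byte ^ prev) & 0xFF).count("1")
        if toggles > 4:
            tx, dbi = ~byte & 0xFF, 1   # send the complement
        else:
            tx, dbi = byte, 0
        out.append((tx, dbi))
        prev = tx                       # wire state is what was actually sent
    return out

def dbi_decode(pairs):
    """Receiver side: undo the inversion flagged by the DBI bit."""
    return [tx ^ 0xFF if dbi else tx for tx, dbi in pairs]
```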
The HBI L1 may provide the parity protection as defined in the HBM standard. For example, each 32-bit word is protected with one parity bit. The HBI L1 provides parity generation on the transmit side, and parity checking, error logging, and error reporting on the receive side. Error recovery can be implemented external to the HBI.
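A sketch of the per-word parity scheme described above follows; even parity is an assumption here, as the standard defines the exact polarity:

```python
def parity32(word):
    """Fold a 32-bit word down to a single (even) parity bit."""
    word &= 0xFFFFFFFF
    for shift in (16, 8, 4, 2, 1):
        word ^= word >> shift
    return word & 1

def rx_check(word, par, error_log):
    """Receive side: check parity and log errors; recovery (e.g., retry)
    is implemented outside the HBI, as noted above."""
    ok = parity32(word) == par
    if not ok:
        error_log.append(word)
    return ok
```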
As shown in
As shown in
The HBI L1 receive logic may be responsible for achieving and maintaining alignment, detecting alignment errors, and recovering from such errors. Alignment errors may be reported via a status/interrupt register.
The HBI L1 may reorder the bits from the L0, for example, prior to use. The reordering may depend on the die and PHY orientation.
The HBI L1 user-side interface may be defined as shown in the Table 900 in the
For the HBI L1 IP, the L1 function may be implemented as soft logic IP in the PL on the side of the host device1 202.
Ignoring the user side channel and other overhead signals, each HBI L1 channel may sustain a throughput of around 16 GBytes/sec. The total HBI L1 throughput per HBI (for one HBM PHY unit) is therefore 256 GBytes/sec, or 2.048 Tbits/sec.
As shown in
The memory-mapped HBI L2 protocol is illustrated in
The outbound master (inbound slave) channel is used for read and write commands, and the inbound master (outbound slave) channel is used for read and write responses. Each HBI L2 channel may support two virtual channels (VCs). The VCs may ensure independent forward progress of the read and write transactions. There may be separate flow control credit management per VC.
The HBI L2 may not employ read tags or reorder buffers. The HBI L2 may not support ECC.
Features of an HBI protocol layer AXI4 interface are summarized in the Table 1200 in
In some systems, 32 bits of write strobe (WSTRB) are used per data beat for a 256-bit AXI bus to allow any combination of write strobes. However, such flexibility, though allowed, is rarely required. The strobes are often used in single beat partial writes or unaligned burst writes; in both cases, the WSTRB pattern can be encoded with far fewer than 32 bits. In some examples, the memory-mapped HBI L2 may support partial writes only for single-beat transactions in which the WSTRB word has only one contiguous region of nonzero WSTRB bits. In such a data beat there are only three contiguous strobe regions: region 1 in which all WSTRB bits are 0; region 2 in which all WSTRB bits are 1; and region 3 in which all WSTRB bits are 0. Such a case can be fully described using two values: a value N1 describing the number of 0's in region 1 (0-31) and a value N2 describing the number of 1's in region 2 (1-32). 10 bits may be used for the encoding. In some examples, for multi-beat transactions, no partial writes are allowed, i.e., all WSTRB bits must be set. Multi-beat unaligned writes are chopped prior to entering the memory-mapped HBI L2. The memory-mapped HBI L2 hardware may include a detector for violations and debugging of WSTRB restrictions. Allowed WSTRB values are shown in the Table 1300 in the
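To make the 10-bit encoding concrete, the following sketch compresses and expands a strobe word; treating WSTRB bit 0 as the lowest byte lane is an assumption:

```python
def encode_wstrb(wstrb):
    """Compress a 32-bit WSTRB with one contiguous run of 1s into
    (n1, n2): n1 = zeros below the run (0-31, 5 bits), n2 = ones in
    the run (1-32, 5 bits when stored as n2 - 1), 10 bits total."""
    n1 = 0
    while n1 < 32 and not (wstrb >> n1) & 1:
        n1 += 1
    n2 = 0
    while n1 + n2 < 32 and (wstrb >> (n1 + n2)) & 1:
        n2 += 1
    # everything above the run (region 3) must be zero
    if n2 == 0 or (wstrb >> (n1 + n2)) != 0:
        raise ValueError("not a single contiguous strobe region")
    return n1, n2

def decode_wstrb(n1, n2):
    """Rebuild the full 32-bit strobe word from the 10-bit form."""
    return ((1 << n2) - 1) << n1
```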
Transmissions in the memory-mapped HBI L2 may be packetized. The AXI4 protocol has five channels: write address, write data, write response, read address, and read data. The memory-mapped HBI L2 may combine the write address and write data channels, and packetize the transactions into four VCs of packets: a write command packet (which includes both address and data); a read command packet; a write response packet; and a read response packet. The command packets are outbound (from master to slave), while the response packets are inbound (from slave to master).
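A sketch of how the AXI4 channels collapse into the four packet VCs described above follows; the single-word header layout here is a placeholder assumption, not the actual packet format:

```python
from enum import Enum

class Vc(Enum):
    WRITE_CMD = 0    # outbound: write address + write data in one packet
    READ_CMD = 1     # outbound: read address
    WRITE_RESP = 2   # inbound
    READ_RESP = 3    # inbound: read data / response

def packetize_write(awaddr, wdata_beats):
    """Combine the AXI write address and write data channels into one
    write command packet: a header word followed by the data beats."""
    header = awaddr & 0xFFFFFFFF     # placeholder header encoding
    return Vc.WRITE_CMD, [header] + list(wdata_beats)

def packetize_read(araddr):
    """A read command packet carries only the address header."""
    return Vc.READ_CMD, [araddr & 0xFFFFFFFF]
```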
Each VC has separate flow control credit management and can make forward progress independent of other VCs. For example, the outbound channel can issue two credits per cycle, one each for the two inbound VCs, and the inbound channel can issue two credits per cycle, one each for the two outbound VCs. In some examples, the credits are per word, not per packet. Write commands and read commands share the same outbound memory-mapped HBI L2 channel, while read and write responses share the same inbound memory-mapped HBI L2 channel. Packetization improves throughput per wire and is widely used in network-on-chip (NoC) solutions.
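A minimal per-VC credit counter illustrating the word-granular flow control described above; the initial credit count would be sized to the receiver's buffer depth, an implementation detail not specified here:

```python
class VcTx:
    """Transmit side of one virtual channel. Credits are per word,
    not per packet, matching the scheme described above."""
    def __init__(self, initial_credits):
        self.credits = initial_credits   # sized to the far-side buffer

    def return_credit(self, n=1):
        """Called when the receiver frees buffer space."""
        self.credits += n

    def try_send(self, packet_words):
        """Send only if the whole packet fits; a stalled VC does not
        block the other VC, which has its own counter."""
        if self.credits < len(packet_words):
            return False
        self.credits -= len(packet_words)
        return True
```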
The response channel carries read data, read response, and write response packets.
For maximum read throughput, the memory-mapped HBI L2 response channel may sustain a read response every cycle. The read response is allocated 256 bits of data and 12 bits of the L1 user side channel. For maximum write throughput, a write response may be transmitted at most once per two cycles, since the shortest write packet has one header word and one data word and takes two cycles to transmit. Therefore, the write response can be transmitted over two cycles without loss of throughput. The write response channel may be allocated 7 bits of the L1 user side channel: 1 bit to mark the response start, and 6 bits for the first or second half of the 11-bit write response. The read response words (e.g., with different AXI IDs) may be interleaved.
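For illustration, splitting the 11-bit write response across two cycles of the 7-bit side channel might look like the following; the exact bit ordering and start-marker position are assumptions:

```python
def write_resp_flits(bresp11):
    """Split an 11-bit write response into two 7-bit side-channel flits:
    each flit carries 1 start-marker bit plus up to 6 payload bits."""
    first = (1 << 6) | (bresp11 & 0x3F)    # start bit set, low 6 bits
    second = (bresp11 >> 6) & 0x1F         # start bit clear, high 5 bits
    return [first, second]
```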
As discussed above, another example of the HBI L2 is a streaming protocol (e.g., an AXI4-Stream (L2s) protocol). The streaming HBI L2 protocol may use a 256-bit AXI-S interface at 500 MHz mapped to one L1 channel. Thus, one HBM PHY unit can support 16 such AXI-S interfaces. The interface may be configurable as a master (outbound) or a slave (inbound). The streaming HBI L2 protocol may support credit-based flow control, full throughput (e.g., no packetization overhead), and two modes of operation (e.g., a “Normal” mode and a “Simple” mode). The streaming HBI L2 protocol creates a 256-bit data stream. The AXI valid-ready handshake is replaced by credit-based flow control, and all other AXI-S signals are carried over the available 20 bits of the L1 user side channel. The Table 1700 in
The streaming HBI L2 protocol may not support the TSTRB signal. The TKEEP signal may be supported. In some examples, the TKEEP signal allows a streaming packet to start and end on an unaligned boundary, but otherwise the packet must contain a contiguous stream of valid bytes. In the first word of an AXI-S packet (TLAST=0), TKEEP indicates the location of the first valid byte; in the last word of an AXI-S packet (TLAST=1), TKEEP indicates the location of the first invalid byte; in other packet words TKEEP should not be used.
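A sketch of that TKEEP convention for one 256-bit (32-byte) stream word follows; treating TKEEP as a byte-validity mask with byte 0 in the least significant position is an assumption:

```python
def tkeep_for_word(tlast, is_first, first_valid=0, first_invalid=32):
    """Return the 32-bit TKEEP mask for a stream word: an unaligned
    start marks where valid bytes begin (first word, TLAST=0), an
    unaligned end marks where they stop (last word, TLAST=1); middle
    words are fully valid, since packets are otherwise contiguous."""
    if is_first and not tlast:
        return ((0xFFFFFFFF >> first_valid) << first_valid) & 0xFFFFFFFF
    if tlast:
        return (1 << first_invalid) - 1
    return 0xFFFFFFFF  # middle words carry only whole valid bytes
```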
TID may be the source ID. The TID may be useful if multiple streams are interleaved onto a single physical channel. The TDEST is the destination ID. The TDEST may be used to route streaming packets to their final destination. Depending on the application, either TID or TDEST may or may not be required. A total of 8 bits are allocated for both TID and TDEST. The user may choose one of the static configurations shown in the Table 1900 in
The streaming HBI L2 protocol “simple” mode may be a subset of AXI-S in which only flow control is provided. In some examples, the simple mode may be a point-to-point, single-source to single-destination stream, and provide a continuous flow of whole words. In the simple mode, the TID/TDEST, TSTRB/TKEEP, and TLAST signals may be omitted. Instead, the user may be given the full available 20 bits of the side channel as TUSER bits, to be used for any purpose.
For the HBI L2 IP, the L2 function may be implemented as soft logic IP in the PL on the side of the host device1 202.
HBI Reset, Initialization, and Calibration
The dies connected by the HBI, such as the host device 1 202 and the host device(s) 204 . . . 204n, may be reset and initialized independently. For HBI initialization, calibration, and data flow initiation, it is assumed that there are one or more controller entities (e.g., a CPU) responsible for sequencing the process. The controller entities can be on-chip or off-chip, and the communication between the controller entities is done out-of-band (i.e., not via the HBI). For example, there may be a simple micro-controller on each die, and some message passing interface between the dies (such as I2C, SPI, or Ethernet).
The HBI activation steps may include initialization, configuration, link training, FIFO training, and link activation. For the initialization step, the HBI logic (including the PHY) is powered up, reset, provided with a stable clock, and taken out of reset and into the idle, inactive state. For the configuration step, runtime programmable features of the HBI may be initialized with desired values, for example, channel direction, parity, DBI, PHY initialization, self-calibration, and redundant wire assignment. For the link training step, each L0 channel configured as an output transmits a special training pattern that allows the receiving L0 channel on the other die to center the DQS edge relative to the DQ (data) eye. For the FIFO training step, each L0 channel configured as an output transmits a special incrementing pattern that allows the receiving L0 channel on the other die to adjust the receive FIFO such that the FIFO operates near the half full point, providing the most tolerance to jitter. In applications where low latency is desired, the FIFO level may be trained to a different point to reduce latency. For the link activation step, when all previous steps are successfully completed, the data flow may begin. The L1 function may start issuing idle data words and the DQS will toggle continuously. Then user-side traffic can be enabled and real data may start flowing across the HBI.
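The activation sequence can be summarized as an ordered list of steps; this sketch assumes a hypothetical do_step callback into the out-of-band controller entity described earlier:

```python
ACTIVATION_STEPS = [
    "initialization",    # power up, reset, stable clock, enter idle
    "configuration",     # direction, parity, DBI, PHY self-calibration
    "link_training",     # center the DQS edge in the DQ data eye
    "fifo_training",     # settle the receive FIFO near half full
    "link_activation",   # idle words flow, then enable user traffic
]

def bring_up_hbi(do_step):
    """Run the bring-up steps strictly in order; do_step is a
    hypothetical callback into the controller entity that returns
    True when a step completes successfully."""
    for step in ACTIVATION_STEPS:
        if not do_step(step):
            raise RuntimeError("HBI bring-up failed at step: " + step)
```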
HBI Clocking
The HBI-based system may operate as a mesochronous network. For example, the HBM-related clocks on both dies (interconnected by the HBI) may run at the same frequency, but with unknown phase relationships. This may be achieved by both dies sharing the same reference clock used by the PLL in the HBM PHY (or equivalent).
The transmitted data may be source-synchronous. For example, the clock, or DQS, is sent along with the data from transmitter to receiver. In addition, phase and jitter variations may be absorbed in the receive FIFO, which is part of the PHY. The HBM channel clock and clock enable signals (CK_t, CK_c, and CKE) may not be used. Long-term jitter variations between the dies may be controlled such that they do not exceed a level which could overflow or underflow the PHY receive FIFO. For example, the long-term jitter of the 1.0 GHz clock may be maintained such that it does not exceed 1 UI (1000 ps).
HBI Power Management
Coarse grain power management for the HBI may be achieved by the external controller entities terminating activity on both dies and then powering down the HBI link.
HBI Die to Die Wiring
The HBM micro-bump and ballout arrangement has been selected for ease of routing between the master device and the HBM stack. In HBI systems, when both devices (i.e., the host devices interconnected by the HBI) have the same orientation of the PHY ballout when placed on the interposer, the die-to-die wiring may be simple. For example, the die-to-die wiring may follow the signal routing in the HBM protocol between a master device and an HBM stack device. When one die is rotated, wiring becomes more complex. HBI may support both the same-orientation and rotated die cases.
When both dies have the same orientation, the connections may be 1-to-1 (e.g., DQS is connected to DQS, etc.), except that the WDQS_t/c of one chip may be connected to the RDQS_t/c of the other chip, and vice versa. That is, the read and write DQS may be crossed. The HBI may not use the DERR signal.
When one die is rotated, maintaining the same wiring may lead to long wires and complex interposer routing. In some examples, the HBI may use a 1-to-1 wiring on the interposer as shown in
HBI Redundant Data Wires
The HBI may handle redundant data wires according to the HBM standard. The HBM standard defines 8 redundant data wires per 128 data bits, or 2 redundant bits per DWORD. Two lane remapping modes are defined, as detailed below. In HBI, the redundant data wires can be used for lane repair only when both dies have the same orientation.
In Mode 1, one lane per byte may be remapped. No redundant pin is allocated in this mode, and DBI functionality is lost for that byte only; however, other bytes continue to support the DBI function as long as the Mode Register setting for the DBI function is enabled. If the Data Parity function is enabled in the Mode Register and a lane is remapped, both the DRAM and the host may assume the DBI input is “0” for the parity calculation for read and write operations in this mode. In Mode 1, each byte is treated independently.
In Mode 2, one lane per double byte may be remapped. One redundant pin per double byte is allocated in this mode, and DBI functionality is preserved as long as the Mode Register setting for the DBI function is enabled. Two adjacent bytes (e.g., DQ[15:0]) may be treated as a pair (double byte), but each double byte is treated independently.
Certain signals, such as the WDQS_c, WDQS_t, RDQS_c, RDQS_t, PAR, and DERR signals, may not be remapped. In Mode 1, the DBI signal is lost, so DBI pins cannot be interchanged with other pins. Therefore, for the rotated die case, where DBI is wired to DM, Mode 1 may not be used. In Mode 2, no functionality is lost, but PAR cannot be remapped, so Mode 2 may not be used for the rotated die case.
Example Operations
While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.