Category Archives: 3D Integration

Blog Review: October 14, 2013


October 14, 2013

At the recent imec International Technology Forum Press Gathering in Leuven, Belgium, imec CEO Luc Van den hove provided an update on blood cell sorting technology that combines semiconductor technology with microfluidics, imaging and high-speed data processing to detect tumor cells. Pete Singer reports.

Pete Singer attended imec’s recent International Technology Forum in Leuven, Belgium. There, An Steegen, senior vice president of process technology at imec, said FinFETs will likely become the logic technology of choice for the upcoming generations, with high-mobility channels coming into play for the 7 and 5nm generations (2017 and 2019). In DRAM, the MIM capacitor will give way to STT-MRAM. In NAND flash, 3D SONOS is expected to dominate for several generations; the outlook for RRAM remains cloudy.

At Semicon Europa last week, Paul Farrar, general manager of G450C, provided an update on the consortium’s progress in demonstrating 450mm process capability. He said 25 tools will be installed in the Albany cleanroom by the end of 2013, that progress has been made on notchless wafers with a 1.5mm edge exclusion zone, that wafer quality has improved significantly, and that automation and wafer carriers are working.

Phil Garrou reports on developments in 3D integration from Semicon Taiwan. He notes that at the Embedded Technology Forum, Hu of Unimicron looked at panel-level embedded technology.

Kathryn Ta of Applied Materials explains how demand for mobile devices is driving materials innovation. She says that about 90 percent of the performance benefits at the smaller (sub-28nm) process nodes come from materials innovation and device architecture, up significantly from the roughly 15 percent contribution in 2000.

Tony Massimini of Semico says the MEMS market is poised for significant growth thanks to a major expansion of applications in smartphones and automotive. In 2013, Semico expects a total MEMS market of $16.8B, but by 2017 it will have expanded to $28.5B, a 70 percent increase in a mere four years’ time.

Steffen Schulze and Tim Lin of Mentor Graphics look at different options for reducing mask write time. They note that EDA suppliers have developed a number of techniques to control mask write time by reducing shot count, from simple techniques that align fragments in the OPC step to more complex techniques that simplify the data for individual writing passes in multi-pass writing.

If you want to see SOI in action, look no further than the Samsung Galaxy S4 LTE. Peregrine Semi’s main antenna switch on BSOS substrates from Soitec enables the smartphone to support 14 frequency bands simultaneously, for a three-fold improvement in download times.

Vivek Bakshi notes that a lot of effort goes into enabling EUV sources for EUVL scanners and mask defect metrology tools to ensure they meet the requirements for production level tools. Challenges include modeling of sources, improvement of conversion efficiency, finding ways to increase source brightness, spectral purity filter development and contamination control. These and other issues are among topics that were proposed by a technical working group for the 2013 Source Workshop in Dublin, Ireland.

3D-IC: Two for one


September 25, 2013

Zvi Or-Bach, President & CEO of MonolithIC 3D Inc. blogs about upcoming events related to 3D ICs.

This coming October there are two IEEE conferences discussing 3D ICs, both within an easy drive of Silicon Valley.

The first is the IEEE International Conference on 3D System Integration (3D IC), October 2-4, 2013, in San Francisco; just following it, in the second week of October, is the S3S Conference, October 7-10 in Monterey. The IEEE S3S Conference was enhanced this year to include a 3D IC track and accordingly got the new name S3S (SOI-3D-Subthreshold), which indicates the growing importance of, and interest in, 3D IC technology.

This year is special in that both of these conferences will contain presentations on the two aspects of 3D IC technology. The first is 3D IC using through-silicon vias (TSVs), which some call “parallel” 3D; the second is monolithic 3D IC, which some call “sequential” 3D.

This is very important progress for the second type of 3D IC technology. I clearly remember, back in early 2010, attending another local IEEE 3D IC conference, 3D Interconnect: Shaping Future Technology. An IBM technologist started his presentation, titled “Through Silicon Via (TSV) for 3D Integration,” with an apology for the redundancy in the title, stating that if it is 3D integration, it must be TSV!

Yes, we have made quite a lot of progress since then. This year one of the major semiconductor research organizations, CEA-Leti, placed monolithic 3D on its near-term roadmap, followed shortly after by Samsung’s announcement of mass production of a monolithic 3D non-volatile memory: 3D NAND.

We are now learning to accept that 3D IC has two sides, which in fact complement each other. At the risk of over-simplifying, I would say that the main function of the TSV type of 3D IC is to overcome the limitations of PCB interconnect, as exemplified by the well-known Hybrid Memory Cube Consortium, which bridges the gap between the DRAM built by memory vendors and the processors built by processor vendors. At the recent VLSI Conference, Dr. Jack Sun, CTO of TSMC, presented the 1,000x gap that has opened up between on-chip and off-chip interconnect. This clearly explains why TSMC is putting so much effort into TSV technology; see the following figure:

System-level interconnect gaps

On the other hand, monolithic 3D’s function is to enable the continuation of Moore’s Law and to overcome the escalating on-chip interconnect gap. Quoting Robert Gilmore, Qualcomm VP of Engineering, from his invited paper at the recent VLSI Conference: “As performance mismatch between devices and interconnects increases, designs have become interconnect limited. Monolithic 3D (M3D) is an emerging integration technology that is poised to reduce the gap significantly between device and interconnect delays to extend the semiconductor roadmap beyond the 2D scaling trajectory predicted by Moore’s Law…” At IITC 2011 (the IEEE International Interconnect Technology Conference), Dr. Kim presented a detailed study of the effect of TSV size in a four-layer 3D IC versus 2D. The results showed that with a 0.1µm TSV, which is the case in monolithic 3D, the 3D device’s wirelength (and with it power and performance) was equivalent to scaling by two process nodes. The same work showed that a 5.0µm TSV yielded no improvement at all (today’s conventional TSVs are striving to reach the 5.0µm size); see the following chart:

Cross comparison of various 2D and 3D technologies. Dashed lines are wirelengths of 2D ICs. #dies: 4.
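As a back-of-the-envelope illustration (my own simplification, not Dr. Kim’s model), average on-chip wirelength scales roughly with the linear dimension of the die, so splitting a design across N tiers connected by fine-pitch vias shrinks the footprint per tier to about 1/N and the average wire to about 1/sqrt(N). A minimal sketch in Python:

# Illustration only: average wirelength ~ sqrt(die footprint).
def relative_wirelength(num_tiers):
    # Same total logic split evenly across tiers; vertical via length ignored.
    footprint_per_tier = 1.0 / num_tiers
    return footprint_per_tier ** 0.5
print(relative_wirelength(4))  # ~0.5: wires about half as long as in 2D

For four tiers that is roughly a 2x reduction in average wirelength, about what two generations of 0.7x linear scaling would give (0.7 x 0.7 ≈ 0.49), consistent with the result quoted above.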

So as monolithic 3D becomes an important part of the 3D IC space, we are most honored to have a role in these coming IEEE conferences. It starts on October 2nd in San Francisco, when we will present a tutorial that is open to all conference attendees. In this monolithic 3D IC tutorial we plan to present more than 10 powerful advantages opened up by the new dimension for integrated circuits. Some of these are well known, and some have probably never been presented before. These new capabilities will be very important across a variety of markets and applications.

At the S3S conference the following week, we are scheduled to give the plenary talk for the 3D IC track on October 8. The plenary will present three independent paths to monolithic 3D, all using the same materials, fab equipment, and well-established semiconductor processes. These three paths can be used independently or mixed, providing multiple options that different entities can tailor to their own needs.

Clearly, 3D IC technologies are growing in importance, and this coming October brings a golden opportunity to get a ‘two for one’: catch up on the latest and greatest in both TSV and monolithic 3D technologies. Looking forward to seeing you there.

Inside the Hybrid Memory Cube


September 18, 2013

The HMC provides a breakthrough solution that delivers unmatched performance with the utmost reliability.

Since the beginning of the computing era, memory technology has struggled to keep pace with CPUs. In the mid 1970s, CPU design and semiconductor manufacturing processes began to advance rapidly. CPUs have used these advances to increase core clock frequencies and transistor counts. Conversely, DRAM manufacturers have primarily used the advancements in process technology to rapidly and consistently scale DRAM capacity. But as more transistors were added to systems to increase performance, the memory industry was unable to keep pace in terms of designing memory systems capable of supporting these new architectures. In fact, the number of memory controllers per core decreased with each passing generation, increasing the burden on memory systems.

To address this challenge, in 2006 Micron tasked internal teams to look beyond memory performance. Their charter was to consider overall system-level requirements, with the goal of creating a balanced architecture for higher system-level performance with more capable memory and I/O systems. The Hybrid Memory Cube (HMC), which blends the best of logic and DRAM processes into a heterogeneous 3D package, is the result of this effort. At its foundation is a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon vias (TSVs), as depicted in FIGURE 1. An energy-optimized DRAM array provides access to memory bits via the internal logic layer and TSVs, resulting in an intelligent memory device optimized for performance and efficiency.

By placing intelligent memory on the same substrate as the processing unit, each system can do what it’s designed to do more efficiently than previous technologies. Specifically, processors can make use of all of their computational capability without being limited by the memory channel. The logic die, with high-performance transistors, is responsible for DRAM sequencing, refresh, data routing, error correction, and high-speed interconnect to the host. HMC’s abstracted memory decouples the memory interface from the underlying memory technology and allows memory systems with different characteristics to use a common interface. Memory abstraction insulates designers from the difficult parts of memory control, such as error correction, resiliency and refresh, while allowing them to take advantage of memory features such as performance and non-volatility. Because HMC supports up to 160 GB/s of sustained memory bandwidth, the biggest question becomes, “How fast do you want to run the interface?”
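To make the abstraction concrete, here is a minimal conceptual sketch (the class and method names are hypothetical, not HMC’s actual host interface): the host issues plain reads and writes, while sequencing, refresh, and error correction stay hidden behind the interface, whatever memory technology sits underneath.

from abc import ABC, abstractmethod

class AbstractedMemory(ABC):
    # Host-side view: only read/write requests cross the interface.
    @abstractmethod
    def read(self, addr, length): ...
    @abstractmethod
    def write(self, addr, data): ...

class HMCLikeDevice(AbstractedMemory):
    # Stand-in for the logic die; a real device would also handle vault
    # sequencing, refresh, ECC scrubbing, and packetizing of responses.
    def __init__(self):
        self._store = {}
    def read(self, addr, length):
        return bytes(self._store.get(addr + i, 0) for i in range(length))
    def write(self, addr, data):
        for i, b in enumerate(data):
            self._store[addr + i] = b

mem = HMCLikeDevice()
mem.write(0x1000, b"hello")
assert mem.read(0x1000, 5) == b"hello"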

The HMC Consortium
A radically new technology like HMC requires a broad ecosystem of support for mainstream adoption. To address this challenge, Micron, Samsung, Altera, Open-Silicon, and Xilinx collaborated to form the HMC Consortium (HMCC), which was officially launched in October 2011. The Consortium’s goals included pulling together a wide range of OEMs, enablers, and tool vendors to define an industry-adoptable serial interface specification for HMC. The consortium delivered on this goal within 17 months and introduced the world’s first HMC interface and protocol specification in April 2013.
The specification provides a short-reach (SR), very short-reach (VSR), and ultra short-reach (USR) interconnection across physical layers (PHYs) for applications requiring tightly coupled or close proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and computing along with test and measurement equipment.

FIGURE 1. The HMC employs a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon vias (TSVs).

The next goal for the consortium is to develop a second set of standards designed to increase data rate speeds. This next specification, which is expected to gain consortium agreement by 1Q14, shows SR speeds improving from 15 Gb/s to 28 Gb/s and VSR/USR interconnection speeds increasing from 10 to 15–28 Gb/s.

Architecture and Performance

Other elements that separate HMC from traditional memories include raw performance, simplified board routing, and unmatched RAS features. The DRAM within the HMC device is uniquely designed to support sixteen individual and self-supporting vaults. Each vault delivers 10 GB/s of sustained memory bandwidth, for an aggregate cube bandwidth of 160 GB/s. Within each vault there are two banks per DRAM layer, for a total of 128 banks in a 2GB device or 256 banks in a 4GB device. The impact on system performance is significant, with lower queue delays and greater availability of data responses compared to conventional memories that run banks in lock-step. Not only is there massive parallelism, but HMC also supports atomics that reduce external traffic and offload remedial tasks from the processor.
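A quick sanity check of those figures (the DRAM layer counts of four and eight below are inferred from the stated bank totals; they are not given explicitly above):

vaults = 16
bandwidth_per_vault = 10  # GB/s sustained, as stated above
print(vaults * bandwidth_per_vault)  # 160 GB/s aggregate cube bandwidth
banks_per_layer_per_vault = 2
for layers, density in [(4, "2GB"), (8, "4GB")]:  # inferred layer counts
    print(density, vaults * banks_per_layer_per_vault * layers)  # 128 and 256 banks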

As previously mentioned, the abstracted interface is memory-agnostic and uses high-speed serial buses based on the HMCC protocol standard. Within this uncomplicated protocol, commands such as 128-byte WRITE (WR128), 64-byte READ (RD64), or dual 8-byte ADD IMMEDIATE (2ADD8), can be randomly mixed. This interface enables bandwidth and power scaling to suit practically any design—from “near memory,” mounted immediately adjacent to the CPU, to “far memory,” where HMC devices may be chained together in futuristic mesh-type networks. A near memory configuration is shown in FIGURE 2, and a far memory configuration is shown in FIGURE 3. JTAG and I2C sideband channels are also supported for optimization of device configuration, testing, and real-time monitors.
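As a rough sketch of what randomly mixed traffic on such a link looks like (payload sizes follow from the command names; packet framing and queueing are omitted, and the code is purely illustrative):

import random
COMMANDS = {"WR128": 128, "RD64": 64, "2ADD8": 16}  # command: payload bytes
def request_stream(n):
    # Commands of different sizes can be issued in any order on one link.
    for _ in range(n):
        cmd = random.choice(list(COMMANDS))
        yield cmd, COMMANDS[cmd]
traffic = list(request_stream(1000))
print(len(traffic), "requests,", sum(size for _, size in traffic), "payload bytes")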

HMC board routing uses inexpensive, standard high-volume interconnect technologies, routes without complex timing relationships to other signals, and has significantly fewer signals. In fact, 160GB/s of sustained memory bandwidth is achieved using only 262 active signals (66 signals for a single link of up to 60GB/s of memory bandwidth).

FIGURE 2. The HMC communicates with the CPU using a protocol defined by the HMC consortium. A near memory configuration is shown.
FIGURE 3. A far memory communication configuration.

A single robust HMC package includes the memory, memory controller, and abstracted interface. This enables vault-controller parity and ECC correction with data scrubbing that is invisible to the user; self-correcting, in-system lifetime memory repair; extensive device health-monitoring capabilities; and real-time status reporting. HMC also features a highly reliable external serializer/deserializer (SERDES) interface with an exceptionally low bit error rate (BER), supported by cyclic redundancy checks (CRC) and packet retry.

HMC will deliver 160 GB/s of bandwidth, a 15X improvement compared to a DDR3-1333 module running at 10.66 GB/s. With energy efficiency measured in picojoules per bit, HMC is targeted to operate in the 20 pJ/b range. Compared to DDR3-1333 modules that operate at about 60 pJ/b, this represents a roughly 70% improvement in efficiency. HMC also features an almost 90% pin count reduction: 66 pins for HMC versus ~600 pins for a 4-channel DDR3 solution. Given these comparisons, it’s easy to see the significant gains in performance and the huge savings in both footprint and power usage.
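Working those comparisons through with the numbers quoted above (note that the ~70% efficiency figure is a round-up of about 67%):

hmc_bw, ddr3_bw = 160.0, 10.66  # GB/s
print(round(hmc_bw / ddr3_bw, 1))  # ~15x bandwidth
hmc_energy, ddr3_energy = 20.0, 60.0  # pJ/bit
print(round((1 - hmc_energy / ddr3_energy) * 100))  # ~67% lower energy per bit
hmc_pins, ddr3_pins = 66, 600  # one HMC link vs ~600 pins for 4-channel DDR3
print(round((1 - hmc_pins / ddr3_pins) * 100))  # ~89% fewer pins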

Market Potential

HMC will enable new levels of performance in applications ranging from large-scale core and leading-edge networking systems to high-performance computing, industrial automation and, eventually, consumer products.

Embedded applications will benefit greatly from high-bandwidth, energy-efficient HMC devices, especially test and measurement equipment and networking equipment that utilize ASICs, ASSPs, and FPGA devices from both Xilinx and Altera, two Developer members of the HMC Consortium. Altera announced in September that it has demonstrated interoperability of its Stratix FPGAs with HMC to benefit next-generation designs.

According to research analysts at Yole Développement Group, TSV-enabled devices are projected to account for nearly $40B by 2017—which is 10% of the global chip business. To drive that growth, this segment will rely on leading technologies like HMC.

FIGURE 4. Engineering samples are set to debut in 2013, with 4GB production in 2014.

Production schedule
Micron is working closely with several customers to enable a variety of applications with HMC. HMC engineering samples in a 4-link, 31x31x4mm package are expected later this year, with volume production beginning in the first half of 2014. Micron’s 4GB HMC is also targeted for production in 2014.

Future stacks, multiple memories
Moving forward, we will see HMC technology evolve as volume production reduces costs for TSVs and HMC enters markets where traditional DDR-type memory has resided. Beyond DDR4, we see this class of memory technology becoming mainstream, not only because of its extreme performance, but also because of its ability to overcome the effects of process scaling seen in the NAND industry. HMC Gen3 is on the horizon, with a performance target of 320 GB/s and an 8GB density. A packaged HMC is shown in FIGURE 4.

Among the benefits of this architectural breakthrough is the future ability to stack multiple memories onto one chip. •


THOMAS KINSLEY is a Memory Development Engineer and ARON LUNDE is the Product Program Manager at Micron Technology, Inc., Boise, ID.