
A wide array of package-level integration technologies now available to chip and system designers is reviewed.

As the technical challenges of shrinking transistors per Moore's Law become harder and costlier to overcome, fewer semiconductor manufacturers are able to move to the next process node (e.g., 20nm). Therefore, various alternative schemes to cram more transistors into a given footprint without shrinking individual devices are being actively pursued. Many of these involve 3D stacking to reduce both footprint and the length of interconnect between devices.

A leading memory manufacturer has just announced 3D NAND products in which circuit layers are fabricated one over another on the same wafer, resulting in higher device density per unit area without having to develop smaller transistors. However, such integration may not be readily feasible when irregular non-memory structures, such as sensors and CPUs, are to be integrated in 3D. Similar limits also apply to 3D integration of devices that require very different process flows, such as analog with digital processor and memory.

For applications where integration of chips with such heterogeneous designs and processes is required, integration at the package level becomes a viable alternative. For package-level integration, 3D stacking of individual chips is the ultimate configuration in terms of reducing footprint and improving performance by shrinking the interconnect length between individual chips in the stack. Such packages are already in mass production for camera modules that require tight coupling of the image sensor to a signal processor. Other applications, such as 3D stacks of DRAM chips and CPU/memory stacks, are under development. For these applications, 3D modules have been chosen to reduce not just the form factor but also the length of interconnects between individual chips.

Figure 1: Equivalent circuit for interconnect between DRAM and SoC chips in a PoP package.

Interconnects: a necessary evil

To a chip or system designer, the interconnect between transistors or the wiring between chips is a necessary evil: it introduces parasitic R, L, and C into the signal path. For die-level interconnects this problem was recognized at least two decades ago, when RC delay in CPU interconnects became a roadblock to operation above 2GHz. This prompted major changes in materials for wafer-level interconnects. For the conductors, the shift was from aluminum to lower-resistance copper, which enabled a shrink in geometries. For the surrounding interlayer dielectric, which affects the parasitic capacitance, silicon dioxide was replaced by various low-k and even ultra-low-k (dielectric constant) materials, in spite of their poorer mechanical properties. Similar changes were made even earlier in the chip packaging arena, when ceramic substrates were replaced by lower-k organic substrates that also reduced costs. Interconnects in packages and PCBs, too, introduce parasitic capacitance that contributes to signal distortion and may limit the maximum possible bandwidth. The power lost to the parasitic capacitance of interconnects while transmitting digital signals depends linearly on the capacitance as well as on the bandwidth. With the rise in bandwidth even in battery-driven consumer electronics, such as smartphones, power loss in the package or PCB becomes ever more significant (30%) as losses in the chips themselves are reduced through better design (e.g., ESD structures with lower capacitance).
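The linear dependence of I/O power on capacitance and bandwidth described above can be sketched with a back-of-the-envelope model. All numbers below are illustrative assumptions for a smartphone-class interface, not figures from the article:

```python
def switching_power_mw(capacitance_pf, voltage_v, bit_rate_gbps, activity=0.5):
    """Dynamic power spent charging/discharging interconnect capacitance.

    Uses the standard CMOS switching model P = a * C * V^2 * f, where the
    activity factor `a` is the fraction of bit periods with a transition
    (0.5 for random data). Returns milliwatts per signal line.
    """
    c_farads = capacitance_pf * 1e-12
    f_hz = bit_rate_gbps * 1e9
    return activity * c_farads * voltage_v ** 2 * f_hz * 1e3

# Example (assumed values): a 2 pF PoP interconnect, 1.2 V swing, 1.6 Gb/s per pin
print(round(switching_power_mw(2.0, 1.2, 1.6), 3))  # -> 2.304 mW per pin
```

Halving either the capacitance or the bit rate halves this loss, which is why both interconnect shrinks and lower-capacitance ESD structures pay off directly in battery life.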

Improving the performance of package level interconnects

Over a decade ago, the chip packaging world went through a round of reducing interconnect length and increasing interconnect density when, for high-performance chips such as CPUs, traditional peripheral wirebond technology was replaced by solder-bumped area-array flip chip technology. The interconnect length was reduced by at least an order of magnitude, with a corresponding reduction in parasitics and rise in the bandwidth for data transfer to adjacent chips, such as the DRAM cache. However, this improvement in electrical performance came at the expense of mechanical complications: the tighter coupling of the silicon chip to a substrate with a much larger coefficient of thermal expansion (6-10X that of Si) exposed the solder bump interconnects between them to cyclic stress and transmitted some stress to the chip itself. The resulting chip-package interaction (CPI) gets worse with larger chips and weaker low-k dielectrics on the chip.

The latest innovation in chip packaging technology is 3D stacking with through-silicon vias (TSVs), where numerous vias (5µm in diameter and getting smaller) are etched in the silicon wafer and filled with a conductive metal, such as Cu or W. The wafers or singulated chips are then stacked vertically and bonded to one another. 3D stacking with TSVs provides the shortest interconnect length between chips in the stack, with improvements in bandwidth, in the power efficiency of data transmission, and in footprint. However, as we shall see later, 3D TSV technology is delayed not only by the complex logistics issues that are often discussed, but also by actual technical issues rooted in choices made for the most common variant: TSVs filled with Cu, combined with parallel wafer thinning.

Figure 2: Breakdown of capacitance contributions from various elements of intra-package interconnect in a PoP. The total may exceed 2 pF.

Equivalent circuit for packages

PoP (package-on-package) is a pseudo-3D package that uses current non-TSV technologies and is ubiquitous in smartphones. In a PoP, two packages (DRAM and SoC) are stacked one over the other and connected vertically by peripheral solder balls or columns. The PoP package is often mentioned as a target for replacement by TSV-based 3D stacks. The SoC-to-DRAM interconnect in a PoP has four separate elements in series: the wirebond in the DRAM package, the vertical interconnect between the top and bottom packages, and the substrate trace and flip chip bump in the bottom (SoC) package. The equivalent circuit for package-level interconnect in a typical PoP is shown in FIGURE 1.

FIGURE 2 shows that interconnect capacitance in a PoP package is dominated not just by the wire bonds (DRAM) but also by the lateral traces in the substrate of the flip chip package (SoC). Both of these large contributions are eliminated in a TSV-based 3D stack.

In a 3D package using TSVs, the elimination of substrate traces and wire bonds between the CPU and DRAM leads to a 75% reduction in interconnect capacitance (FIGURE 3), with a consequent improvement in maximum bandwidth and power efficiency.

Effect of parasitics

Interconnect parasitics not only cause power loss during data transmission but also distort the waveform of the digital signal. For chips with given input/output buffer characteristics, higher capacitance slows the rising and falling edges [1,2]. Inductance adds noise and constricts the eye diagram. Higher interconnect parasitics therefore limit the maximum bandwidth for error-free data transmission through a package or PCB.

TSV-based 3D stacking

As stated previously, a major reason for developing TSV technology is to improve data transmission between chips (measured by bandwidth and power efficiency) and to go beyond the bandwidth limits imposed by conventional interconnect. Recently a national lab in western Europe reported results [3] of stacking a single DRAM chip on a purpose-designed SoC with TSVs in a 4 x 128-bit wide I/O format at a clock rate of just 200MHz. They were able to demonstrate a bandwidth of 12.8 GB/s (2X that of a PoP with LPDDR3 running at 800MHz). Not surprisingly, the energy per bit reported (0.9 pJ/bit) was only a quarter of that for the PoP case.
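The reported figure follows directly from the interface width and clock rate. A quick sanity check, assuming single-data-rate transfers (an assumption consistent with the quoted numbers):

```python
def wide_io_bandwidth_gbs(channels, bits_per_channel, clock_mhz, transfers_per_clock=1):
    """Peak bandwidth of a wide, slow-clocked memory interface, in GB/s."""
    bits_per_second = channels * bits_per_channel * clock_mhz * 1e6 * transfers_per_clock
    return bits_per_second / 8 / 1e9

# 4 channels x 128 bits at 200 MHz (SDR) -> 12.8 GB/s
print(wide_io_bandwidth_gbs(4, 128, 200))  # -> 12.8
```

Running 512 wires at a modest 200MHz rather than a narrow bus at 800MHz is what delivers the 2X bandwidth at a quarter of the energy per bit: the short, low-capacitance TSV links make the very wide, slow interface practical.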

Despite a string of encouraging results over the last three years from several such test vehicles, TSV-based 3D stacking technology is not yet mature for volume production. This is true of the TSV and manufacturing technology chosen by a majority of developers, namely filling the TSVs with copper and thinning the wafers in parallel but separately, which requires bonding/debonding to carrier wafers. The problems with filling TSVs with copper have been apparent for several years and affect electrical design [4]. The problem arises from the large thermal expansion mismatch between copper and silicon and the stress it causes in the area surrounding copper-filled TSVs, which alters electron mobility and circuit performance. The immediate solution is to maintain keep-out zones around the TSVs; however, this affects routing and the length of on-die interconnect. Since the stress field around copper-filled TSVs depends on the square of the via diameter, smaller-diameter TSVs are now being developed to shrink the keep-out zone.

Only now are the problems of debonding thinned wafers with TSVs, such as fracturing, and of subsequent handling being addressed, through the development of new adhesive materials that can be depolymerized by laser so that thinned wafers can be removed from the carrier without stress.

The above problems were studied and avoided by the pioneering manufacturer of 3D memory stacks. It changed the via fill material from copper to tungsten, which has a much smaller CTE mismatch with silicon, and opted for a sequential bond/thin process for stacked wafers, thereby entirely avoiding any issues from bond/debond or thin-wafer handling.

It is baffling that such alternative materials and process flows for TSVs are not being pursued even by U.S.-based foundries, which seem to take their technical cues instead from a national laboratory in a small European nation with no commercial production of semiconductors!

Figure 3: When TSVs (labeled VI) replace the conventional interconnect in a PoP package, the parasitic capacitance of interconnect between chips, such as SoC and DRAM, is reduced by 75%.

Options for CPU to memory integration

Given the delay in getting 3D TSV technology ready at foundries, it is natural that alternatives like 2.5D (planar MCMs on high-density silicon substrates with TSVs) have garnered a lot of attention. However, the additional cost of the silicon substrate in 2.5D must be justified from a performance and/or footprint standpoint. Interconnect parasitics due to wiring between two adjacent chips in a 2.5D module are significantly smaller than those in a system built on PCBs with packaged chips, but they are orders of magnitude larger than what is possible in a true 3D stack with TSVs. Therefore, building a 2.5D module of a CPU and an adjacent stack of memory chips with TSVs would reduce the size and cost of the silicon substrate but won't deliver performance anywhere near an all-TSV 3D stack of CPU and memory.


Alternatives to TSVs for package level integration

Integrating a non-custom CPU with memory chips in a 3D stack would require the addition of redistribution layers, with a consequent increase in interconnect length and degradation of performance. In such cases it may be preferable to avoid adding TSVs to the CPU chip altogether and instead integrate the CPU with a 3D memory stack via a substrate in a double-sided package configuration. The substrate used is silicon with TSVs and high-density interconnects. Test vehicles for such an integration scheme have been built and their electrical parameters evaluated [5,6]. For cost-driven applications, e.g., smartphones, the cost of the large silicon substrates used above may be prohibitive, and the conventional PoP package may instead need to be upgraded. One approach is to shrink the pitch of the vertical interconnects between the top and bottom packages, quadrupling the number of these interconnects and the width of the memory bus [7,8]. While this mechanical approach would allow an increase in bandwidth, unlike TSV-based solutions it would not reduce I/O power consumption, as nothing is done to reduce the parasitic capacitance of the interconnect discussed previously (FIGURE 3).

A novel concept of "Active Interconnects" has been proposed and developed at APSTL. This concept employs an electrical rather than mechanical approach to match the performance of TSVs [1] and to replace these mechanically complex intrusions into live silicon chips. Compensation circuits on additional ICs are inserted into the interconnect path of a conventional PoP package for a smartphone (FIGURE 4) to create the SuperPoP package, with bandwidth and power efficiency approaching those of TSV-based 3D stacks without having to insert any troublesome TSVs into the active chips themselves.

Figure 4: Cross-section of an APSTL SuperPoP package under development to equal the performance of TSV-based 3D stacks. An integrated circuit with compensation circuits for each interconnect is inserted between the two layers of a PoP for smartphones. This chip contains through vias and avoids insertion of TSVs in the high-value dice for the SoC or DRAM.

A wide array of package-level integration technologies now available to chip and system designers has been discussed. The performance of package-level interconnect has become ever more important to system performance in terms of bandwidth and power efficiency. The traditional approach of improving package electrical performance by shrinking interconnect length and increasing interconnect density continues with the latest iteration, namely TSVs. Like previous innovations, TSVs too suffer from mechanical complications, now magnified by the stress effects of TSVs on device performance. Further development of TSV technology must not only solve the remaining problems of the current mainstream technology (Cu-filled vias and parallel thinning of wafers) but also simplify the process where possible, including adopting the more successful material (Cu-capped W vias) and process choices (sequential wafer bond and thin) already in production. In the meantime, innovative concepts like Active Interconnect, which avoids using TSVs altogether, and the APSTL SuperPoP built on it show promise for cost-driven, power-sensitive applications like smartphones. •

1. Gupta, D., "A novel non-TSV approach to enhancing the bandwidth in 3D packages for processor-memory modules," IEEE ECTC 2013, pp. 124-128.

2. Karim, M. et al., "Power Comparison of 2D, 3D and 2.5D Interconnect Solutions and Power Optimization of Interposer Interconnects," IEEE ECTC 2013, pp. 860-866.

3. Dutoit, D. et al., "A 0.9 pJ/bit, 12.8 GByte/s WideIO Memory Interface in a 3D-IC NoC-based MPSoC," 2013 Symposium on VLSI Circuits Digest of Technical Papers.

4. Yang, J-S. et al., "TSV Stress Aware Timing Analysis with Applications to 3D-IC Layout Optimization," 47th ACM/IEEE Design Automation Conference (DAC), June 2010.

5. Tzeng, P-J. et al., "Process Integration of 3D Si Interposer with Double-Sided Active Chip Attachments," IEEE ECTC 2013, pp. 86-93.

6. Beyene, W. et al., "Signal and Power Integrity Analysis of a 256-GB/s Double-Sided IC Package with a Memory Controller and 3D Stacked DRAM," IEEE ECTC 2013, pp. 13-21.

7. Mohammed, I. et al., "Package-on-Package with Very Fine Pitch Interconnects for High Bandwidth," IEEE ECTC 2013, pp. 923-928.

8. Hu, D.C., "A PoP Structure to Support I/O over 1000," IEEE ECTC 2013, pp. 412-416.

DEV GUPTA is the CTO of APSTL, Scottsdale, AZ ([email protected]).

Inside the Hybrid Memory Cube

September 18, 2013

The HMC provides a breakthrough solution that delivers unmatched performance with the utmost reliability.

Since the beginning of the computing era, memory technology has struggled to keep pace with CPUs. In the mid 1970s, CPU design and semiconductor manufacturing processes began to advance rapidly. CPUs have used these advances to increase core clock frequencies and transistor counts. Conversely, DRAM manufacturers have primarily used the advancements in process technology to rapidly and consistently scale DRAM capacity. But as more transistors were added to systems to increase performance, the memory industry was unable to keep pace in terms of designing memory systems capable of supporting these new architectures. In fact, the number of memory controllers per core decreased with each passing generation, increasing the burden on memory systems.

To address this challenge, in 2006 Micron tasked internal teams to look beyond memory performance. Their goal was to consider overall system-level requirements in order to create a balanced architecture for higher system-level performance, with more capable memory and I/O systems. The Hybrid Memory Cube (HMC), which blends the best of logic and DRAM processes into a heterogeneous 3D package, is the result of this effort. At its foundation is a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon vias (TSVs), as depicted in FIGURE 1. An energy-optimized DRAM array provides access to memory bits via the internal logic layer and TSVs, resulting in an intelligent memory device optimized for performance and efficiency.

By placing intelligent memory on the same substrate as the processing unit, each system can do what it’s designed to do more efficiently than previous technologies. Specifically, processors can make use of all of their computational capability without being limited by the memory channel. The logic die, with high-performance transistors, is responsible for DRAM sequencing, refresh, data routing, error correction, and high-speed interconnect to the host. HMC’s abstracted memory decouples the memory interface from the underlying memory technology and allows memory systems with different characteristics to use a common interface. Memory abstraction insulates designers from the difficult parts of memory control, such as error correction, resiliency and refresh, while allowing them to take advantage of memory features such as performance and non-volatility. Because HMC supports up to 160 GB/s of sustained memory bandwidth, the biggest question becomes, “How fast do you want to run the interface?”

The HMC Consortium
A radically new technology like HMC requires a broad ecosystem of support for mainstream adoption. To address this challenge, Micron, Samsung, Altera, Open-Silicon, and Xilinx collaborated to form the HMC Consortium (HMCC), which was officially launched in October 2011. The Consortium's goals included pulling together a wide range of OEMs, enablers, and tool vendors to define an industry-adoptable serial interface specification for HMC. The consortium delivered on this goal within 17 months, introducing the world's first HMC interface and protocol specification in April 2013.
The specification provides a short-reach (SR), very short-reach (VSR), and ultra short-reach (USR) interconnection across physical layers (PHYs) for applications requiring tightly coupled or close proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and computing along with test and measurement equipment.

FIGURE 1. The HMC employs a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon-vias (TSVs).

The next goal for the consortium is to develop a second set of standards designed to increase data rates. This next specification, expected to gain consortium agreement by 1Q14, shows SR speeds improving from 15 Gb/s to 28 Gb/s and VSR/USR interconnection speeds increasing from 10 Gb/s to 15-28 Gb/s.

Architecture and Performance

Other elements that separate HMC from traditional memories include raw performance, simplified board routing, and unmatched RAS features. The DRAM arrays within the HMC device are designed to support sixteen individual, self-supporting vaults. Each vault delivers 10 GB/s of sustained memory bandwidth, for an aggregate cube bandwidth of 160 GB/s. Within each vault there are two banks per DRAM layer, for a total of 128 banks in a 2GB device or 256 banks in a 4GB device. The impact on system performance is significant, with lower queue delays and greater availability of data responses compared to conventional memories that run banks in lock-step. Not only is there massive parallelism, but HMC also supports atomic operations that reduce external traffic and offload remedial tasks from the processor.
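The bandwidth and bank figures above compose simply. The layer counts in this sketch (four DRAM layers for the 2GB device, eight for the 4GB device) are assumptions inferred from the stated bank totals, not figures given in the article:

```python
VAULTS = 16
VAULT_BW_GBS = 10

# Aggregate cube bandwidth: 16 vaults x 10 GB/s each
print(VAULTS * VAULT_BW_GBS)  # -> 160 GB/s

def total_banks(dram_layers, vaults=VAULTS, banks_per_layer_per_vault=2):
    """Total independent banks in the cube (two banks per DRAM layer per vault)."""
    return vaults * banks_per_layer_per_vault * dram_layers

print(total_banks(dram_layers=4))  # -> 128 banks (2GB device, assuming 4 layers)
print(total_banks(dram_layers=8))  # -> 256 banks (4GB device, assuming 8 layers)
```

The large bank count is what lets the vault controllers keep many requests in flight at once instead of running banks in lock-step.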

As previously mentioned, the abstracted interface is memory-agnostic and uses high-speed serial buses based on the HMCC protocol standard. Within this uncomplicated protocol, commands such as 128-byte WRITE (WR128), 64-byte READ (RD64), or dual 8-byte ADD IMMEDIATE (2ADD8), can be randomly mixed. This interface enables bandwidth and power scaling to suit practically any design—from “near memory,” mounted immediately adjacent to the CPU, to “far memory,” where HMC devices may be chained together in futuristic mesh-type networks. A near memory configuration is shown in FIGURE 2, and a far memory configuration is shown in FIGURE 3. JTAG and I2C sideband channels are also supported for optimization of device configuration, testing, and real-time monitors.

HMC board routing uses inexpensive, standard high-volume interconnect technologies, routes without complex timing relationships to other signals, and requires significantly fewer signals. In fact, 160 GB/s of sustained memory bandwidth is achieved using only 262 active signals (66 signals for a single link of up to 60 GB/s of memory bandwidth).

FIGURE 2. The HMC communicates with the CPU using a protocol defined by the HMC consortium. A near memory configuration is shown.
FIGURE 3. A far memory communication configuration.


A single robust HMC package includes the memory, memory controller, and abstracted interface. This enables vault-controller parity and ECC correction with data scrubbing that is invisible to the user; self-correcting, in-system lifetime memory repair; extensive device health-monitoring capabilities; and real-time status reporting. HMC also features a highly reliable external serializer/deserializer (SERDES) interface with exceptionally low bit error rates (BER), supporting cyclic redundancy check (CRC) and packet retry.

HMC will deliver 160 GB/s of bandwidth, a 15X improvement compared to a DDR3-1333 module running at 10.66 GB/s. With energy efficiency measured in picojoules per bit, HMC is targeted to operate in the 20 pJ/b range. Compared to DDR3-1333 modules that operate at about 60 pJ/b, this represents roughly a two-thirds improvement in efficiency. HMC also features an almost 90% pin-count reduction: 66 pins for HMC versus ~600 pins for a 4-channel DDR3 solution. Given these comparisons, it's easy to see the significant gains in performance and the huge savings in both footprint and power usage.
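The headline comparisons in this paragraph are straightforward ratios of the quoted figures (the ~600-pin DDR3 number is the article's own approximation):

```python
hmc  = {"bw_gbs": 160.0, "pj_per_bit": 20.0, "pins": 66}
ddr3 = {"bw_gbs": 10.66, "pj_per_bit": 60.0, "pins": 600}  # DDR3-1333 module, 4-channel pin count

bandwidth_gain = hmc["bw_gbs"] / ddr3["bw_gbs"]              # ~15x
energy_saving  = 1 - hmc["pj_per_bit"] / ddr3["pj_per_bit"]  # ~67% less energy per bit
pin_saving     = 1 - hmc["pins"] / ddr3["pins"]              # ~89% fewer pins

print(round(bandwidth_gain), round(energy_saving, 2), round(pin_saving, 2))  # -> 15 0.67 0.89
```

Note that 20 pJ/b versus 60 pJ/b is a two-thirds (67%) reduction, not quite the 70% sometimes quoted; the pin-count saving rounds to the "almost 90%" figure in the text.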

Market Potential

HMC will enable new levels of performance in applications ranging from large-scale core and leading-edge networking systems, to high-performance computing, industrial automation, and eventually, consumer products.

Embedded applications will benefit greatly from high-bandwidth and energy-efficient HMC devices, especially applications such as test and measurement equipment and networking equipment that utilize ASICs, ASSPs, and FPGA devices from both Xilinx and Altera, two Developer members of the HMC Consortium. Altera announced in September that it has demonstrated interoperability of its Stratix FPGAs with HMC to benefit next-generation designs.

According to research analysts at Yole Développement Group, TSV-enabled devices are projected to account for nearly $40B by 2017—which is 10% of the global chip business. To drive that growth, this segment will rely on leading technologies like HMC.

FIGURE 4. Engineering samples are set to debut in 2013, with 4GB production following in 2014.

Production schedule
Micron is working closely with several customers to enable a variety of applications with HMC. HMC engineering samples of a four-link 31x31x4mm package are expected later this year, with volume production beginning in the first half of 2014. Micron's 4GB HMC is also targeted for production in 2014.

Future stacks, multiple memories
Moving forward, we will see HMC technology evolve as volume production reduces costs for TSVs and HMC enters markets where traditional DDR-type memory has resided. Beyond DDR4, we see this class of memory technology becoming mainstream, not only because of its extreme performance, but because of its ability to overcome the effects of process scaling as seen in the NAND industry. HMC Gen3 is on the horizon, with a performance target of 320 GB/s and an 8GB density. A packaged HMC is shown in FIGURE 4.

Among the benefits of this architectural breakthrough is the future ability to stack multiple memories onto one chip. •

THOMAS KINSLEY is a Memory Development Engineer and ARON LUNDE is the Product Program Manager at Micron Technology, Inc., Boise, ID.

IC Insights traced the sales of the top 10 semiconductor companies dating back to 1985, in its Research Bulletin dated August 27, 2013.  In 1990, six Japanese companies were counted among the top 10 leaders in semiconductor sales.  In that year—in many ways, the peak of its semiconductor manufacturing and market strength—Japanese companies accounted for 51 percent of total semiconductor capital spending (Figure 1).


North American companies accounted for 31 percent of semiconductor capex in 1990 and the Asia-Pacific region captured 10 percent share, slightly ahead of the eight percent held by European companies.  For perspective, Japan’s share of semi capex in 1990 was 20 points more than North America, 41 points more than Asia-Pacific, and 43 points more than Europe.

After reaching its highest-ever share of capital spending in 1990, Japan relinquished 20 points of market share and within five years trailed North America in semiconductor capital spending.  Economic malaise forced many of Japan's strongest semiconductor companies to trim capex budgets and re-evaluate long-term strategic business plans.  At the same time, Japan was also feeling competitive pressure from South Korea, which had developed a strong memory manufacturing presence of its own, and from Taiwan, where the foundry business model was beginning to prosper.  In 1998, Japan trailed not only the North America region in semiconductor capital spending, but the Asia-Pacific region as well.  Fast-forward to 2010 and Japan and Asia-Pacific had essentially swapped places in terms of semiconductor capex market share.  In 1H13, Japan's share of total semiconductor capital spending had dwindled to seven percent.

Japanese suppliers that are no longer in the semiconductor business include NEC, Hitachi, and Matsushita.  Other Japanese semiconductor companies that have greatly curtailed semiconductor operations include Sanyo, which was acquired by ON Semiconductor; Sony, which cut semiconductor capital spending and announced its move to an asset-lite strategy for ICs; Fujitsu, which sold its wireless group to Intel, sold its MCU and analog IC business to Spansion, and is consolidating its system LSI business with Panasonic’s; and Mitsubishi.

Meanwhile, from 2000 to 1H13, China joined semiconductor companies in South Korea, Taiwan, and Singapore in investing heavily in wafer fabs and advanced process technology.  These investments by Asia-Pacific companies were used primarily to produce DRAM and flash memory and microcontrollers, and to bolster wafer foundry operations.  Asia-Pacific accounted for 53 percent of capex market share in 1H13, down slightly from its 55 percent peak in 2010.

Mostly on account of spending by Intel, GlobalFoundries, Micron, and SanDisk, North America accounted for 37 percent of capital spending in 1H13, a few points higher than the steady 29-33 percent share it has held since 1990.

There are three large European semiconductor suppliers, and each now operates using a fab-lite or asset-lite strategy, which is why capital spending by European companies accounted for only three percent of total capex in 1H13.  IC Insights forecasts that capital spending by Europe-based ST, Infineon, and NXP and all other European semiconductor suppliers combined will amount to less than $1.5 billion in 2013.  Led by Samsung, Intel, and TSMC, nine semiconductor suppliers are each forecast to spend more on their own than Europe will spend collectively in 2013.  In IC Insights' opinion, IC manufacturers currently spending less than $1.0 billion a year on capital outlays will find it just about impossible to continue manufacturing with leading-edge digital process technology, which is why European suppliers now outsource their most critical processing to foundries.

Integrated Device Technology, Inc., the analog and digital company delivering mixed-signal semiconductor solutions, announced that Dr. Ted Tewksbury has resigned as president, chief executive officer and board member, effective August 27, 2013. The Board of Directors has appointed board member Jeffrey McCreary as interim president and CEO.

Jeffrey McCreary has served on IDT's Board since June 2012. A former Texas Instruments senior vice president, McCreary brings thirty years of broad-based semiconductor industry leadership and significant boardroom experience to the role. As interim president and CEO, McCreary will work closely with IDT's current executive team and board of directors to oversee the company's ongoing operations and strategic initiatives. The board has formed a search committee to identify and consider candidates for the permanent president and CEO role.

“On behalf of the board of directors, I want to thank Ted for his many contributions to IDT over the past five years,” said John Schofield, IDT’s Chairman of the Board. “Since joining the company, he has directed IDT’s transformation into a premier analog and mixed signal semiconductor company delivering system level solutions.”

Schofield continued, “As we begin the CEO search, we are fortunate to have Jeffrey McCreary available to serve in the interim role. Jeff possesses a proven track record as a semiconductor industry executive and has spent significant time making vital decisions in the boardroom as well. We are confident he will provide essential leadership for the Company for as long as required.”

“I am excited about the future for IDT and look forward to contributing to the team’s success,” said McCreary. “We are on a path to reach our previously stated financial targets and to continue leveraging our proven strengths in timing solutions, memory interface, RF, serial switching and power management with great new products.”

Jazz Semiconductor Inc., a fully owned U.S. subsidiary of Tower Semiconductor Ltd., has announced the accreditation for trusted status of Jazz Semiconductor Trusted Foundry (JSTF). JSTF has been accredited as a Category 1A Trusted Supplier by the United States Department of Defense as a provider of trusted semiconductors in critical defense applications. JSTF joins a small list of companies accredited by the DoD Trusted Foundry Program, established to ensure the integrity of the people and processes used to deliver national security critical microelectronic components, and administered by the DoD’s Defense Microelectronics Activity (DMEA).

TowerJazz said in its official release that the creation and accreditation of JSTF will help broaden existing business relationships previously disclosed with major defense contractors such as Raytheon, Northrop Grumman, BAE Systems, DRS, Alcatel-Lucent, and L-3 Communications.

“In the United States, there was no ‘pure play’ trusted foundry capability available,” TowerJazz CEO Russell Ellwanger said. “Our aerospace and defense customers asked that we would go this route to enable them greater freedom to serve their great country’s needs; a country that stands as a banner for democratic process throughout the world. Primarily for this purpose, we went beyond our initial commitment to the US State Department to continue support of our ITAR customers and engaged in rounds of discussion with the US Department of Defense toward participation in the Trusted program in our Newport Beach facility. And, as in all activities where one serves purposes of great principle, it is also good business."

“Jazz Semiconductor Trusted Foundry is proud to join the DoD Trusted Foundry Program to enable trusted access to a broad range of on-shore technologies and manufacturing capabilities,” said Scott Jordan, president, JSTF. “The accreditation process adds trust to the existing quality and security systems, improving our level of service to our military and defense customers.”

Entegris, Inc., a developer of contamination control and materials handling technologies for highly demanding advanced manufacturing environments, and imec, a research center in nanoelectronics, announced they are collaborating to advance the development and broaden the adoption of 3D integrated circuits.

3D IC technology, a process by which multiple semiconductor dies are stacked into a single device, is aimed at increasing the functionality and performance of next-generation integrated circuits while reducing footprint and power consumption. It is a key technology to enable the next generation of portable electronics such as smartphones and tablets that require smaller ICs which consume less power.

One of the key steps in the 3D IC manufacturing process is thinning semiconductor wafers while they are bonded to carrier substrates. Handling such thinned 3D IC wafers during production can result in wafer breakage, edge damage, and particle generation. A standardized, fully automated solution that supports the handling of multiple types of wafers would result in a significant cost reduction and pave the way toward further development and scaling of 3D IC technologies. Imec and Entegris are working on a solution to safely transfer and handle multiple kinds of 3D IC wafers without the risk of breakage and other damage that may occur during 3D production.

Read more: Paradigm changes in 3D-IC manufacturing

"We are excited to work with the imec team, which is a key research center leading technology innovation for the semiconductor industry," said Bertrand Loy, president and CEO of Entegris. "Our current collaboration is aimed at leveraging our wafer handling expertise and technology to reduce contamination and breakage by applying full automation to the handling of thin wafers during 3D wafer production. This project builds on our previously completed work with imec to develop dispense and filtration methods to reduce bubble and defect formation during the dispense of material that is used to temporarily bond 3D wafers to carrier substrates," said Loy.

"This collaboration with Entegris aims at developing a solution toward fully automated handling of multiple types of 3D IC wafers," stated Eric Beyne, director of imec’s 3D integration research program. "Such a general solution would imply a significant reduction of the development cost, which is key to the realization of a scalable and manufacturable 3D IC technology."

After experiencing runaway growth in recent years, the OLED display market is gearing up to make another big leap. Flexible OLED technology is expected to bring about an unprecedented change in flat displays, which have ruled the display market for the 20 years since the emergence of the liquid crystal display. Flexible OLED technology has already been introduced in a series of exhibitions and conferences over the last few years, and once commercialized it is expected to reshape the structure of the conventional display industry.

Unlike a conventional rigid OLED screen, a flexible OLED panel is an OLED display that can bend. The concept is attractive because flexibility lets consumer goods manufacturers develop applications in a variety of shapes to maximize usability. For panel makers, the technology can cut manufacturing costs and simplify manufacturing processes by minimizing the use of glass substrates.

More Flexible Displays news

Producing a flexible OLED display requires substrate materials and encapsulation processes that can replace the conventional glass substrate. Before 2010, most prototypes used a metal foil substrate, but the trend has recently shifted to plastic substrates because metal foil has a rough surface and limited flexibility. A wide range of methods is also being studied for alternative encapsulation techniques, encompassing plastic films and thin-film deposition technologies.

Read more: Flexible substrate market to top $500 million in 2020

Still, technological approaches vary from one panel maker to another. The performance, productivity, and cost of a flexible OLED display change significantly with the flexible materials and manufacturing techniques used, which in turn determine its marketability. As a result, the time frames under which panel makers plan to enter the flexible OLED market differ widely.

The “Flexible OLED Competitiveness and Market Forecasts” report from Displaybank, now part of IHS Inc., analyzes the strategies each panel maker is taking to establish flexible OLED displays in the panel market, along with the relevant technological issues, and discusses the growth potential of flexible OLED panels in the existing display market. The report is expected to help panel makers plan their technological approach to the flexible OLED market and devise strategies for a successful foray into the conventional display market with flexible OLED technology.

IC Insights’ recently released August Update to The McClean Report includes Part 1 of an in-depth analysis of the fast-growing IC foundry market.  Part 2 of the IC foundry analysis will be presented in the September Update.

Figure 1: Reported and “final market value” IC foundry sales as a percent of total IC industry sales, 2007-2017.

Figure 1 shows reported IC foundry sales and “final market value” IC foundry sales as a percent of total IC industry sales from 2007 through 2017.  The “final market value” figure is 2.22x the reported IC foundry sales number; this multiplier estimates the IC sales amount (i.e., market value) that is eventually realized when an IC is ultimately sold to the final customer (i.e., the electronic system producer).

An example of how an IC foundry’s “final market value” sales level is determined can be made using Altera. Since a fabless company like Altera purchases PLDs from an IC foundry, and does not incorporate them into an electronic system, Altera is not considered the final end-user of these ICs.  Eventually, Altera resells its foundry-fabricated PLDs to electronic system producers/final end-users such as Cisco or Nokia at a much higher price than it paid the IC foundry for the devices, the difference being Altera’s gross margin.  As a result, a 2.22x multiplier, which assumes a 55 percent industry-wide average gross margin for the IC foundry’s customer base, is applied to the IC foundry’s reported sales to arrive at the “final market value” sales figure.
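The arithmetic behind the multiplier is straightforward: if a foundry’s customers resell chips at an average gross margin g, each dollar of foundry revenue corresponds to 1/(1-g) dollars of end-market sales. A minimal sketch (the function name is illustrative, not from IC Insights):

```python
def final_value_multiplier(gross_margin: float) -> float:
    """Sales multiplier implied by an average customer gross margin.

    If a foundry's customers resell ICs at an average gross margin g,
    each dollar of foundry revenue corresponds to 1 / (1 - g) dollars
    of "final market value" by the time the chips reach system producers.
    """
    return 1.0 / (1.0 - gross_margin)

# IC Insights' industry-wide assumption: 55% gross margin -> ~2.22x
print(round(final_value_multiplier(0.55), 2))  # 2.22
# Estimate for TSMC's customer base: 57% gross margin -> ~2.33x
print(round(final_value_multiplier(0.57), 2))  # 2.33
```

This reproduces both figures used in the report: the industry-wide 2.22x multiplier and the 2.33x multiplier applied to TSMC’s revenues later in the analysis.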

As shown in Figure 1, the total “final market value” sales figure for the IC foundries is expected to represent just over 36 percent of the worldwide $271 billion IC market forecast for 2013, and just over 45 percent of the $359 billion worldwide IC market forecast for 2017.  The “final” IC foundry share in 2017 is forecast to be slightly more than double the 22.6 percent “final” market share the IC foundries held ten years earlier in 2007.

Read more: The changing future of the Asian foundry landscape

To further illustrate the increasingly important role that foundries play in the worldwide IC market, IC Insights applied the “final market value” sales multiplier to TSMC’s quarterly revenues and compared them to Intel’s quarterly IC sales from 1Q11 through 2Q13. Since TSMC’s sales are so heavily weighted toward leading-edge devices, IC Insights estimates that the gross margin for TSMC’s customer base averages 57 percent (a 57 percent gross margin equates to a 2.33x sales multiplier).  Using the 2.33x multiplier, IC Insights believes that TSMC’s “final market value” IC sales surpassed Intel’s IC sales in 2Q13 (Figure 2), and that TSMC currently has more impact on total IC market revenue than any company in the world. Considering that Intel’s IC sales were 45 percent greater than TSMC’s “final market value” IC sales as recently as 1Q12, this was a dramatic change in a very short period of time.

Read more: Reinventing Intel

The “final market value” IC sales figure of TSMC helps explain why the capital expenditures of Intel and TSMC are expected to be fairly close in size this year ($11.0 billion for Intel and $10.0 billion for TSMC) and next year ($11.0 billion for Intel and $11.5 billion for TSMC).  Thus, when comparing the semiconductor capital spending as a percent of sales ratios for IDMs and IC foundries, the foundries’ “final market value” sales levels should be used.

In general, IC foundries have two main types of customers—fabless IC companies (e.g., Qualcomm, Nvidia, Xilinx, AMD, etc.) and IDMs (e.g., Freescale, ST, TI, Fujitsu, etc.).  The success of the fabless IC segment of the market, as well as the movement to more outsourcing by existing IDMs, has fueled strong growth in IC foundry sales since 1998.  Moreover, an increasing number of mid-size companies are ditching their fabs in favor of the fabless business model.  A few examples include IDT, LSI Corp., Avago, and AMD, which have all become fabless IC suppliers over the past few years.  IC Insights believes that the result of these trends will be continued strong growth for the total IC foundry market, which is forecast to increase by 14 percent this year as compared to only 6 percent growth expected for the total IC market.

Figure 2: TSMC’s “final market value” IC sales pass Intel’s IC sales.

In its Research Bulletin dated August 2, 2013, IC Insights published its list of the top semiconductor sales leaders for the first half of 2013. The list showed the usual big-time players that we’ve come to expect like Intel, Samsung, and TSMC, leading the way in semiconductor sales through the first six months of the year. What stood out nearly as much, however, was that only one Japanese company—Toshiba—was present among the top 10 suppliers through the first half of 2013.  Anyone who has been involved in the semiconductor industry for a reasonable amount of time realizes this is a major shift and a big departure for a country that once was feared and revered when it came to its semiconductor manufacturing presence on the global market.

Figure 1 traces the top 10 semiconductor companies dating back to 1985, when Japanese semiconductor manufacturers wielded their influence on the global stage.  That year, there were five Japanese companies ranked among the top 10 semiconductor suppliers.  Then, in 1990, six Japanese companies were counted among the top 10 semiconductor suppliers—a figure that has not been matched by any country or region since.  The number of Japanese companies ranked in the top 10 in semiconductor sales slipped to four in 1995, then fell to three companies in 2000 and 2006, two companies in 2012, and then to only one company in the first half of 2013.

Read more: First half of 2013 shows big changes to the top 20 semiconductor supplier ranking

It is worth noting that Renesas (#11), Sony (#16), and Fujitsu (#22) were ranked among the top 25 semiconductor suppliers in 1H13, but Sony has been struggling to re-invent itself and Fujitsu has spent the first half of 2013 divesting most of its semiconductor operations.

Japan’s total presence and influence in the semiconductor marketplace has waned.  Once-prominent Japanese names now gone from the top suppliers list include NEC, Hitachi, Mitsubishi, and Matsushita. Competitive pressures from South Korean IC suppliers—especially in the DRAM market—have certainly played a significant role in changing the look of the top 10.  Samsung and SK Hynix emulated and perfected the Japanese manufacturing model over the years and cut deeply into sales and profits of Japanese semiconductor manufacturers, resulting in spin-offs, mergers, and acquisitions becoming more prevalent among Japanese suppliers.

  • 1999 — Hitachi and NEC merged their DRAM businesses to create Elpida Memory.
  • 2000 — Mitsubishi divested its DRAM business into Elpida Memory.
  • 2003 — Hitachi merged its remaining Semiconductor & IC Division with Mitsubishi’s System LSI Division to create Renesas Technology.
  • 2003 — Matsushita began emphasizing Panasonic as its main global brand name.  Previously, hundreds of consolidated companies sold Matsushita products under the Panasonic, National, Quasar, Technics, and JVC brand names.
  • 2007 — To reduce losses, Sony cut semiconductor capital spending and announced its move to an asset-lite strategy—a major change in direction for its semiconductor business.
  • 2010 — NEC merged its remaining semiconductor operations with Renesas Technology to form Renesas Electronics.
  • 2011 — Sanyo Semiconductor was acquired by ON Semiconductor.
  • 2013 — Fujitsu and Panasonic agreed to consolidate the design and development functions of their system LSI businesses.
  • 2013 — Fujitsu sold its MCU and analog IC business to Spansion.
  • 2013 — Fujitsu sold its wireless semiconductor business to Intel.
  • 2013 — Elpida Memory was formally acquired by Micron.
  • 2013 — After failing to find a buyer, Renesas announced plans to close its 300mm and 125mm wafer-processing site in Tsuruoka, Japan, by the end of 2013.  The facility makes system-LSI chips for Nintendo video game consoles and other consumer electronics.
  • 2013 — Unless it finds a buyer, Fujitsu plans to close its 300mm wafer fab in Mie.

Besides consolidation, another reason for Japan’s reduced presence among leading global semiconductor suppliers is that the vertically integrated business model that served Japanese companies so well for so many years is not nearly as effective in Japan today.  Due to the closed nature of the vertically integrated business model, when Japanese electronic systems manufacturers lost market share to global competitors, they took their semiconductor divisions down with them.  As a result, Japanese semiconductor suppliers missed out on some major design win opportunities for their chips in many of the best-selling consumer, computer, and communications systems that are now driving semiconductor sales.

It is probably too strong to suggest that in the land of the rising sun, the sun has set on semiconductor manufacturing.  However, the global semiconductor landscape has changed dramatically from 25 years ago. For Japanese semiconductor companies that once prided themselves on their manufacturing might and discipline to practically disappear from the list of top semiconductor suppliers is evidence that competitive pressures are fierce, and that Japan as a country has perhaps not been quick enough to adopt new methods to meet changing market needs.

RFMD today announced it has shipped more than one million RF7196D high-power, high-efficiency CMOS power amplifiers (PAs). The ultra-low cost RF7196D is RFMD’s newest and most innovative CMOS PA, delivering a revolutionary combination of cost, size and performance. It is in mass production in support of multiple high-volume 2G and 3G handset platforms, and shipments are expected to increase rapidly, reaching approximately 10 million units by the end of the September quarter.

RFMD is seeing strong adoption of its CMOS power amplifier technologies in next-generation handset platforms targeting emerging markets. The company is migrating its diverse set of customers of 2G power amplifiers (both GaAs and CMOS) to its ultra-low cost RF7196D and expects shipments will more than double in the December quarter and exceed 100 million units worldwide in calendar 2014.

Eric Creviston, president of RFMD’s Cellular Products Group (CPG), said, "RFMD’s ultra-low cost CMOS PA technology delivers excellent overall performance at highly competitive costs versus prior generations. We intend to launch a broad portfolio of innovative new CMOS products in the coming quarters, and we forecast strong growth in emerging markets across a highly diversified customer set."

Industry analysts forecast the total addressable market for RF applications in emerging markets will increase at a compound annual growth rate of approximately 20 percent through 2018 as next-generation 3G and 4G air standards are introduced, as existing subscribers upgrade their devices, and as new subscribers are added.