Tag Archives: R&D

CMP Slurry Trade-offs in R&D

As covered at SemiMD.com, the CMP Users Group (of the Northern California Chapter of the American Vacuum Society) recently held a meeting in Albany, New York, in collaboration with CNSE/SUNY Polytechnic Institute and SEMATECH. Among the presentations were deep dives into the inherent challenges of CMP slurry R&D.
Daniel Dickmann of Ferro Corporation discussed trade-offs in designing CMP slurries in his presentation, “Advances in Ceria Slurries to Address Challenges in Fabricating Next Generation Devices.” Adding H2O2 to a ceria slurry dramatically alters the zeta-potential of the particles, and thereby alters removal rates and selectivities. For CMP of Shallow Trench Isolation (STI) structures, adding H2O2 to the slurry allows the particle concentration to be lowered from 4% to <2% while maintaining the same removal rate. Reducing the average ceria particle size from 130nm to 70nm reduces scratch defects while maintaining the same removal rate through tuned chemistry, but the company has not yet found chemistries that allow for reasonable removal rates with 40nm diameter particles. Ceria morphology is another variable that must be controlled, according to Dickmann: “It can seem counter-intuitive, but we’ve seen that non-spherical particles can demonstrate superior removal-rates and defectivities compared to more perfect spheres.”
Selectivity is one of the most critical and difficult aspects of the CMP process, and arguably the key distinction between CMP and mere polishing. The more similar the two or more exposed materials, the more difficult it is to design high selectivity into a slurry. Dielectric:dielectric selectivity is generally difficult, and Takeda-san of Fujimi Corporation discussed how to develop a slurry that is highly selective to nitride (Si3N4) over TEOS-oxide (PECVD SiO2 using a tetra-ethyl-ortho-silicate precursor). In general, dielectric CMP is dominated by mechanical forces, so the slurry chemistry must be tuned to achieve selectivity. Choosing a slurry pH below 5 reduces the oxide removal rate while maintaining the nitride removal rate. Legacy nitride slurries have acceptable selectivities but unacceptable edge-over-erosion (EOE) – the localized over-planarization often seen near pattern edges. Reducing the particle size reduces the mechanical force across the surface so that chemical forces dominate removal even more, while EOE can be reduced because negatively charged particles are attracted to the positively charged nitride surface, resulting in local accumulation.
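To first order, such removal-rate trade-offs can be framed with the classic Preston model, in which removal rate scales linearly with downforce and relative pad velocity, and the chemistry is lumped into the Preston coefficient. The Python sketch below is a minimal illustration, with made-up coefficients rather than Ferro or Fujimi data, of how a chemistry that suppresses the oxide coefficient yields nitride:oxide selectivity:

```python
# Minimal Preston-model sketch of CMP removal rate and selectivity.
# All coefficients are illustrative assumptions, not vendor slurry data.

def preston_rate(k_p, pressure_pa, velocity_m_s):
    """Preston equation: removal rate = Kp * P * v, in m/s."""
    return k_p * pressure_pa * velocity_m_s

# Hypothetical Preston coefficients (m^2/N) for two films in one slurry;
# tuning the pH below 5 suppresses k_oxide while k_nitride stays high.
k_nitride = 1.0e-13
k_oxide   = 5.0e-15

P = 2.0e4  # downforce, Pa (~3 psi)
v = 1.0    # pad-wafer relative velocity, m/s

rr_nitride = preston_rate(k_nitride, P, v)
rr_oxide   = preston_rate(k_oxide, P, v)

print(f"nitride rate: {rr_nitride * 1e9 * 60:.0f} nm/min")
print(f"oxide rate:   {rr_oxide * 1e9 * 60:.0f} nm/min")
print(f"nitride:oxide selectivity = {rr_nitride / rr_oxide:.0f}:1")
```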
—E.K.

Batteries? We don’t need no stinking batteries.

We’re still used to thinking that low-power chips for “mobile” or “Internet-of-Things (IoT)” applications will be battery powered…but the near ubiquity of lithium-ion battery cells could be threatened by capacitors and energy-harvesting circuits connected to photovoltaic/thermoelectric/piezoelectric micro-power sources. At ISSCC2015 in San Francisco last week, there were several presentations on novel chip designs that run on mere microwatts (µW) of power, and the most energy-efficient circuit blocks now target nanowatt (nW) levels of power consumption. Two presentations covered nW-scale microprocessor designs based on the ARM Cortex-M0+ core, and a 500nW energy-harvesting interface based on a DC-DC converter operating from 1µW of available power was shown by a team from Holst Centre/imec/KU Leuven working with industrial partner OMRON.
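For a sense of scale, a quick back-of-envelope budget shows how such a harvesting interface could sustain a duty-cycled microcontroller with no battery at all. All numbers below are illustrative assumptions, not figures from the ISSCC papers:

```python
# Back-of-envelope energy budget for a battery-less sensor node.
# All numbers are illustrative assumptions, not ISSCC paper data.

harvested_w = 1.0e-6   # ~1 uW available from the harvester
converter_w = 0.5e-6   # ~500 nW consumed by the DC-DC interface itself
sleep_w     = 50e-9    # MCU sleep power
active_w    = 1.0e-3   # MCU active power during a wake-up burst
burst_s     = 5e-3     # length of one wake-up burst

net_w = harvested_w - converter_w - sleep_w  # power left to bank in a capacitor

burst_j = active_w * burst_s  # energy one burst costs
recharge_s = burst_j / net_w  # time to re-accumulate that energy

print(f"net banked power: {net_w * 1e9:.0f} nW")
print(f"one {burst_s * 1e3:.0f} ms burst costs {burst_j * 1e6:.1f} uJ")
print(f"-> one burst every {recharge_s:.0f} s ({3600 / recharge_s:.0f} bursts/hour)")
```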

Read more on this in “MicroWatt Chips shown at ISSCC,” available at SemiMD.

—E.K.

Oscar for DMD Inventor Hornbeck

Kudos to Dr. Larry J. Hornbeck, to the extended team at Texas Instruments (TI) that has worked on Digital Micromirror Device (DMD) technology, and to the TI executives who continued to fund the R&D through years of initial investment losses. Hornbeck has been awarded an Academy Award® of Merit (Oscar® statuette) for his contribution to revolutionizing how motion pictures are created, distributed, and viewed using DMD technology (branded as the DLP® chip for DLP Cinema® display technology from TI).

The technology now powers more than eight out of 10 digital movie theatre screens globally. Produced with different resolutions and packages, DLP chips also see use in personal electronics, industrial, and automotive markets. The present good times with the DMD are enjoyed only because TI was willing to make a major long-term bet on this novel way to modulate pixel arrays, which required building the most complex Micro-Electro-Mechanical System (MEMS) the world had ever seen.

Development of the DLP chip began in TI’s Central Research Laboratories in 1977 when Hornbeck first created an array of “deformable mirrors” controlled with analog circuits. In 1987 he invented the DMD, and TI invested in developing multiple money-losing generations of the technology over the next 12 years. Finally, in 1999 the first full-length motion picture was shown with DLP Cinema technology, and since then TI claims that the technology has been installed in more than 118,000 theaters around the globe. We understand that TI now makes a nice profit from each chip.

“It’s wonderful to be recognized by the Academy. Following the initial inventions that defined the core technology, I was fortunate to work with a team of brilliant Texas Instruments engineers to turn the first DMD into a disruptive innovation,” said Hornbeck, who has 34 U.S. patents for his groundbreaking work in DMD technology. “Clearly, the early and continuing development of innovative digital cinema technologies by the DLP Cinema team created a definitive advancement in the motion picture industry beyond anyone’s wildest dreams.”

—E.K.

Ferromagnetic Room Temperature Switching

Bismuth-ferrite could make spin-valves that use 1/10th the power of STT

A research team led by folks at Cornell University (along with the University of California, Berkeley; Tsinghua University; and the Swiss Federal Institute of Technology in Zurich) has discovered how to make a single-phase multiferroic switch out of bismuth ferrite (BiFeO3), as shown in an online letter in Nature. Multiferroics, which allow the control of magnetism with an electric field, have been investigated as potential solid-state memory cells for many years, but this is reportedly the first time that reversible switching has been achieved at room temperature. Most importantly, the energy per unit area required to switch these new cells is approximately an order of magnitude less than that needed for spin-transfer torque (STT) switching.

“The advantage here is low energy consumption,” said Cornell postdoctoral associate John Heron, in a press release. “It requires a low voltage, without current, to switch it. Devices that use currents consume more energy and dissipate a significant amount of that energy in the form of heat.”

The trick that Heron and others discovered involves a two-step sequence of partial switching events—using only applied voltages—that add up to full magnetic reversal. Previous theory had held that single-step switching was thermodynamically impossible, and no other groups had reported work on similar two-step switching. Also published in the News & Views section of Nature is “Materials science: Two steps for a magnetoelectric switch,” written by other researchers, which explores the possibilities of using this phenomenon in nanoscale memory chips.

While the thermodynamics of all of this seem incredibly positive, the kinetics of this two-step process have yet to be reported. Also, the effect seems to require specific crystal structures such as that of SrRuO3 in a particular orientation as electrical contacts, instead of the inherently less-expensive randomly oriented metal contacts used for STT cells. Consequently, this could be an inherently slow and expensive technology, and thus limited to niche applications.

—E.K.

NanoParticle Self-Assembly at UofM

Theory and Practice synergize R&D

Sharon C. Glotzer and Nicholas A. Kotov are both researchers at the University of Michigan who were just awarded an MRS Medal at the Materials Research Society (MRS) Fall Meeting in Boston for their work on “Integration of Computation and Experiment for Discovery and Design of Nanoparticle Self-Assembly.” Because surface atoms compose a large percentage of the mass of a nanoparticle, the functional properties of quasi-0D nanoparticles differ significantly from those of 2D thin-films and 3D bulk materials. An example of such a unique functional property is seen in the self-assembly of nanoparticles to form complex structures, which could find applications in renewable energy production, optoelectronics, and medical electronics.
While self-assembly has been understood as an emergent property of nanoparticles, research and development (R&D) has been somewhat limited to experimental trial-and-error due to a lack of theory. Glotzer and Kotov, along with their colleagues, have moved past this limit using a tight collaboration between computational prediction and experimental observation. The computational theorist Glotzer provides modeling of shapes and symmetric structures, while the experimentalist Kotov explores areas involving atomic composition and finite interactions. Kotov and his students create a nanoparticle and look to Glotzer and her group to explain the structure; conversely, Glotzer predicts the formation of certain structures and has those predictions confirmed experimentally by Kotov.
One specific area the two scientists have explored is the formation of supraparticles—agglomerations of tightly packed nanoparticles that are self-limiting in size. The supraparticles are so regular in size and sphericity that they can pack to form face-centered-cubic (fcc) lattice-like structures. The theoretical and computational work, followed by experimental verification, further proved that these supraparticles could be formed from a vast variety of nanoparticles and even proteins, provided they were small enough and had significant van der Waals attraction and electrostatic repulsion forces. This exciting development creates a whole new class of “bionic” materials that may combine biomaterials and inorganics.
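One hedged way to see why supraparticle size self-limits is a charged-droplet toy model: each added nanoparticle contributes a roughly constant van der Waals cohesion energy, while the electrostatic self-energy of the growing charged cluster scales as N^(5/3) (the charged-sphere scaling), so beyond some size further growth costs energy. The sketch below uses arbitrary coefficients purely for illustration, not values from the Glotzer/Kotov work:

```python
# Toy model of self-limiting supraparticle size: vdW cohesion gains a
# roughly constant energy per added particle, while the Coulomb
# self-energy of the cluster grows as N^(5/3). Coefficients are
# arbitrary illustrative choices, not fitted to any experiment.
import numpy as np

a = 1.0   # vdW cohesion energy gained per added particle (arbitrary units)
b = 0.01  # prefactor of the N^(5/3) electrostatic self-energy

N = np.arange(1, 2000)
E = -a * N + b * N ** (5.0 / 3.0)  # total cluster energy vs particle count

n_star = N[np.argmin(E)]  # size at which adding particles stops lowering E
print(f"self-limited supraparticle size: ~{n_star} particles")

# Analytic check: dE/dN = 0  ->  N* = (3a / 5b)^(3/2)
print(f"analytic estimate: ~{(3 * a / (5 * b)) ** 1.5:.0f} particles")
```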
—E.K.

Nakamura Co-Wins Nobel for Blue LEDs

The Nobel Prize in Physics 2014 was awarded jointly to Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura “for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources.” By the late 1980s red and green LEDs had been around for decades, but despite large programs in both academia and industry there had been almost no R&D progress in blue LEDs (this editor did process R&D in an LED fab in that era). Then Akasaki and Amano at Nagoya University showed improved p-doping in GaN due to electron-beam irradiation, leading to the p-n junctions needed to make diodes.

Structure of a blue LED with an InGaN/AlGaN double heterojunction (Source: S. Nakamura, T. Mukai & M. Senoh, Appl. Phys. Lett. 64, 1687, 1994).

From 1989 to 1994, Shuji Nakamura worked at Nichia Chemicals in Tokushima, Japan, where he led a small team of co-workers to achieve a quantum efficiency of 2.7% using a double heterojunction of InGaN/AlGaN (see Figure). With these important first steps, the path was cleared toward the development of efficient blue LEDs and solid-state white lighting. Nakamura-sensei is now a professor at the University of California, Santa Barbara, and co-founder of Soraa Corp., where GaN-on-GaN technology is used to increase efficiency through the elimination of the buffer layers needed with sapphire substrates. The “Tales of Nakamura” article at IEEE Spectrum provides an excellent summary of this extraordinary man’s life story, including the US$600M payout from Nichia that was reduced to US$8M by a higher court.
Incandescent light bulbs lit the 20th century; the 21st century will be lit by LED lamps with high lm/W efficacy. The most recent record is just over 300 lm/W, which compares to roughly 16 lm/W for regular incandescent bulbs and close to 70 lm/W for fluorescent lamps. Since about one fourth of world electricity consumption is used for lighting, LEDs can contribute mightily to saving the Earth’s resources.
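The arithmetic behind those efficacy numbers is simple; here is a quick sketch using the figures cited above plus an illustrative 800-lumen light-output target (roughly one “60W-equivalent” bulb):

```python
# Quick estimate of lighting power saved by moving to high-efficacy LEDs.
# Efficacies are from the text; the 800 lm target is an illustrative choice.

lumens_needed = 800.0  # one "60 W-equivalent" bulb's worth of light

for name, lm_per_w in [("incandescent", 16.0),
                       ("fluorescent", 70.0),
                       ("LED (record)", 300.0)]:
    watts = lumens_needed / lm_per_w
    print(f"{name:13s}: {watts:5.1f} W for {lumens_needed:.0f} lm")

# If ~25% of world electricity goes to lighting, moving from 16 to 300 lm/W
# at constant light output scales that share by the ratio 16/300:
print(f"lighting share could drop from 25% to ~{25 * 16 / 300:.1f}% of consumption")
```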
Shine on!
—E.K.

IBM Shows Graphene as Epi Template

Last month in Nature Communications (doi:10.1038/ncomms5836) IBM researchers Jeehwan Kim et al. published “Principle of direct van der Waals epitaxy of single-crystalline films on epitaxial graphene.” They show the ability to grow sheets of graphene on the surface of 100mm-diameter SiC wafers, the further ability to grow epitaxial single-crystalline films such as 2.5-μm-thick GaN on the graphene, the even greater ability to then transfer the grown GaN film to any arbitrary substrate, and the complete proof-of-manufacturing-concept of using this to make blue LEDs.

(Source: IBM)

The figure above shows the basic process flow. The graphenized-SiC wafer can be re-used to grow additional transferable epi layers. This could certainly lead to competition for the Leti/Soitec/ST “SmartCut” approach to layer transfer, which uses hydrogen implants into epi layers.
The kinetics of growing 100mm-diameter sheets of single-crystalline GaN on graphene are not discussed. Supplemental information in the online article mentions 1 hour at 1250°C to cover the full wafer, but not the thickness grown in that time. From first principles of materials engineering, they must either:

A) Go slow at first to avoid independent islands growing to form a multicrystalline layer, or
B) Initially grow a multicrystalline layer and then zone anneal (perhaps using a scanned laser) to transform it into a single-crystal.
In either case, we would expect that after just a few single-crystalline atomic layers had been slowly grown or annealed, a second, much higher-speed epi process would be used to grow the remaining microns of material. More details can be seen in the EETimes write-up.
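Since the paper does not state growth rates, any throughput estimate is speculative, but a simple two-phase timing sketch shows why the fast second phase would dominate the process economics. All rates below are assumptions for illustration only:

```python
# Two-phase epi timing sketch; growth rates are guesses (the paper does
# not state them), shown only to illustrate why a fast second phase
# would dominate throughput.

ml_thickness_nm = 0.26     # one GaN monolayer along c, ~c/2
nucleation_layers = 10     # slow, single-crystal seed layers (assumed)
slow_rate_nm_per_h = 10.0  # cautious nucleation-phase rate (assumed)
fast_rate_um_per_h = 2.0   # bulk MOCVD-class rate (assumed)
target_um = 2.5            # total film thickness from the paper

seed_nm = nucleation_layers * ml_thickness_nm
t_seed_h = seed_nm / slow_rate_nm_per_h
t_bulk_h = (target_um - seed_nm / 1000.0) / fast_rate_um_per_h

print(f"seed phase: {t_seed_h * 60:.0f} min for {seed_nm:.1f} nm")
print(f"bulk phase: {t_bulk_h:.2f} h for the remaining ~{target_um:.1f} um")
```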
—E.K.

Figure 1: Leti’s 300mm diameter silicon wafer fabrication line on the MINATEC campus in Grenoble, France. In the foreground is space for a new fab intended for work on silicon-photonics. (Source: Ed Korczynski)

Now I know how wafers feel when moving through a fab. Leti in Grenoble, France does so much technology integration that in 2010 it opened a custom-developed people-mover, called the Liaison Blanc-Blanc (LBB), to connect its cleanrooms (“salles blanches” in French) so workers can remain in bunny-suits while moving batches of wafers between buildings. I got to ride the LBB from the 300mm-diameter-wafer silicon CMOS and 200mm-diameter-wafer MEMS fabs (Fig.1) along the cement monorail to the more specialized fab spaces for industrial partners and for nanoelectronics start-ups. This was my first time experiencing this world-exclusive ISO 6 (“Class 1000”) mobile cleanroom, and it very nicely moves people in 3 minutes between cleanroom buildings that would otherwise require 30 minutes of de-gowning, walking, and re-gowning. In the foreground of Fig.1 is space for a new fab intended for silicon-photonics R&D and pilot fabrication.

Figure 2: Leti’s “Liaison Blanc-Blanc” (LBB) ISO 6 mobile cleanroom connects buildings on the MINATEC campus with elevator-like automation along a cement monorail. (Source: Ed Korczynski)

Fig.2 shows the LBB as it passes a Linde gas tower in front of spectacular alpine scenery on the way to Leti’s specialized and start-up fab building. One of Leti’s great strengths is that it does more than lab-scale R&D: it has invested in all of the tools and facilities needed for pilot manufacturing of nanoscale devices. Didier Louis, Leti international communications manager and gracious tour host through the cleanrooms, explained that a pragmatic approach is needed when working with new materials; for example, color coding on wafer-transport carriers indicates whether the wafers inside carry no copper, copper encased by other materials, or exposed copper.
—E.K.

Chasing IC Yield when Every Atom Counts

Increasing fab costs coming for inspection and metrology
At SEMICON West this year, in Thursday morning’s Yield Breakfast sponsored by Entegris, top executives from Qualcomm, GlobalFoundries, and Applied Materials discussed the challenges of achieving profitable fab yield for atomic-scale devices (figure source: ITRS 2013 Yield Chapter). Due to the sensitive nature of the topic, recording was not allowed and copies of the presentations could not be shared.
Qualcomm – Geoffrey Yu
Double-patterning will be needed for metal and via layers as we go below 90nm pitch for the next generations of ICs. Qualcomm is committed to designing ICs with smaller features, but not all companies may need to do so. Fab costs keep going up for atomic-scale devices…and there are tough trade-offs to be made, including possibly relaxing reliability requirements. “Depending on the region. If you’re in an emerging region maybe the reliability requirements won’t be as high,” said Yu. Through-Silicon Vias (TSV) will eventually be used to stack IC layers, but they add cost and will only be used when performance cannot be met with cheaper solutions. “An early idea was to use TSV for logic:memory,” reminded Yu, “but then there was innovation to LPDDR4 allowing it to deliver the same bandwidth with one-half the power of LPDDR3, which delayed TSV.”
GlobalFoundries – Harry Levenson
“A more expensive part could provide a better value proposition for a customer,” reminded Levenson as he discussed the challenges of inspecting next-generation commercial ICs in high-volume manufacturing (HVM). “We still have clear demand for products to run in HVM at the leading edge, but we are now in the world of double-patterning and this applies to optical inspection as well as imaging.” Requirements for inspection and imaging are different, but the same physics applies. In imaging, a Depth of Focus (DoF) of ~140nm is generally preferred, while the same DoF used for inspection of a <140nm thin-film would induce noise from lower levels. E-beam inspection is impractical because of the energy concentration needed to get acceptable throughput (and the challenge gets worse as the pixel area is reduced, inherently slowing down throughput). However, e-beams are helpful because they can detect open contacts/vias in metal levels, since the conductivity of electrons provides additional contrast compared to any possible optical inspection.
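For reference, DoF in such optical systems is commonly estimated with the Rayleigh criterion, DoF ≈ k2·λ/NA². The sketch below uses illustrative wavelength/NA pairs, not actual inspection-tool specifications:

```python
# Rayleigh depth-of-focus estimate: DoF ~ k2 * lambda / NA^2.
# Wavelength/NA pairs are illustrative, not actual tool specs.

def rayleigh_dof_nm(k2, wavelength_nm, na):
    """Return depth of focus in nm for process factor k2."""
    return k2 * wavelength_nm / na ** 2

for wavelength_nm, na in [(266.0, 0.9), (193.0, 0.9), (193.0, 1.35)]:
    dof = rayleigh_dof_nm(1.0, wavelength_nm, na)
    print(f"lambda={wavelength_nm:.0f} nm, NA={na:.2f}: DoF ~ {dof:.0f} nm")
```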
Applied Materials – Sanjiv Mittal
Mittal discussed how the CMOS transistor gate formation process has increased in complexity over the last few device generations: 8x more unit-process steps, 3x higher fab cost, and a need for 50x lower defect levels to maintain yield. “The challenges are immense,” admitted Mittal. “What happens when you try to work on yield improvement when you’re ramping volume? At the same time you’re trying to improve yield by making changes, you’re trying to increase the volume by not making changes.”
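The scale of that challenge follows from the classic Poisson yield model, Y = exp(−A·D0): every added process step is another defect opportunity, so defect density per step must fall roughly in proportion to complexity just to hold yield flat. A minimal sketch with illustrative numbers, not figures from the presentations:

```python
# Poisson yield model: Y = exp(-A * D0), die area A times defect density D0.
# Numbers are illustrative, not from the SEMICON West presentations.
import math

area_cm2 = 1.0  # assumed die area

for d0 in [0.5, 0.1, 0.01]:  # killer-defect densities, defects/cm^2
    y = math.exp(-area_cm2 * d0)
    print(f"D0 = {d0:5.2f} /cm^2 -> yield = {y:.1%}")

# With 8x more unit-process steps, each an added defect opportunity,
# holding yield flat requires roughly 8x lower per-step defect density:
steps_ratio = 8
print(f"same yield at {steps_ratio}x steps needs ~{steps_ratio}x lower D0 per step")
```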
Entegris – Jim O’Neill
O’Neill is CTO of the combined Entegris post-merger with ATMI, and was previously director of advanced process R&D for IBM. Since Entegris provides materials and sub-systems, in the simplest cases the company works to improve IC fab yield by minimizing defects. “However, the role of the materials-supplier should change,” averred O’Neill. “The industry needs bottle-to-nozzle wet chemistry solutions, and applications-based clean gas delivery.” In an exclusive interview with SST/SemiMD, O’Neill provided as an example of a ‘wetted process solution’ a post-CMP clean optimized by tuning the brush polymer composition together with the cleaning chemistry.
ITRS Difficult Challenges for Yield 2013-2020

  • Existing techniques trade-off throughput for sensitivity, but at expected defect levels, both throughput and sensitivity are necessary for statistical validity.
  • Reduction of inspection costs and increase of throughput is crucial in view of CoO.
  • Detection of line roughness due to process variation.
  • Electrical and physical failure analysis for killer defects at high capture rate, high throughput and high precision.
  • Reduction of background noise from detection units and samples to improve the sensitivity of systems.
  • Improvement of signal to noise ratio to delineate defect from process variation.
  • Where does process variation stop and defect start?

—E.K.

Moore’s Law is Dead – (Part 4) Why?

We forgot Moore merely meant that IC performance would always improve (Part 4 of 4)

IC marketing must convince customers to design ICs into electronic products. In 1965, when Gordon Moore first told the world that IC component counts would double in each new product generation, the main competition for ICs was circuits built from discrete components. Moore needed a marketing tool to convince early customers to commit to using ICs, and the best measure of an IC was simply its component count. When Moore updated his “Law” in 1975 (see Part 1 of this series for more details), ICs had clearly won the battle with discretes for logic and memory functions, but most designs still had only single-digit thousands of transistors, so increases in the raw counts still conveyed the idea of better chips.

For almost 50 years, “Moore’s Law” doubling of component counts was a reasonable proxy for better ICs. Also, if we look at Moore’s original graph from 1965, we see that for a given manufacturing technology generation there is a minimal cost/component at a certain component count. “What’s driven the industry is lower cost,” said Moore in 1997. “The cost of electronics has gone down over a million-fold in this time period, probably ten million-fold, actually. While these other things are important, to me the cost is what has made the technology pervasive.”
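That minimum-cost point falls out of two opposing terms: per-die overhead costs amortize over more components, while yield falls as integration grows. A toy model with purely illustrative constants reproduces the U-shaped curve of Moore’s 1965 graph:

```python
# Toy model of Moore's 1965 cost-per-component curve: fixed per-die costs
# amortize over more components, while yield falls as integration grows.
# All constants are illustrative, chosen only to show the U-shape.
import math

def cost_per_component(n, die_overhead=1.0, cost_per_area=0.001, fail_rate=2e-5):
    die_cost = die_overhead + cost_per_area * n  # packaging/test + silicon area
    yield_ = math.exp(-fail_rate * n)            # Poisson yield vs complexity
    return die_cost / (yield_ * n)

for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"{n:>9,} components -> {cost_per_component(n):.5f} cost units each")

# The minimum sits where amortization gains are offset by yield losses;
# each new process generation shifts that minimum to higher counts.
```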

Fast forward to today, and we have millions of transistors working in combinations of “standard cell” blocks of pre-defined functionalities at low cost. Graphics Processor Units (GPU) and other Application Specific Integrated Circuits (ASIC) take advantage of billions of components to provide powerful functionalities at low cost. Better ICs today are measured not by mere component counts, but by performance metrics such as graphics rendering speed or FLOPS.

The limits of lithography (detailed in Part 2 of this blog series) mean that further density improvements will be progressively more expensive, and the atomic limits of physical reality (detailed in Part 3) impose a hard-stop on density at ~1000x of today’s leading-edge ICs. “If we say we can’t improve the density anymore because we run up against all these limitations, then we lose that factor and we’re left with increasing the die size,” said Moore in 1997.

Since the cost of an IC is proportional to the die size, and since the cost/area of lithographic patterning is not decreasing with tighter design-rules, increasing the die size will almost certainly increase cost proportionally. We may not need larger dice with more transistors, however, as future markets for ICs may be better served by the same number of transistors integrated with new functionalities.

International R&D center imec knows as well as any organization the challenges of pushing lithography, junction formation, and ohmic contacts to atomic limits. At the 2014 imec Technology Forum, held the first week of June in Brussels, president and chief executive officer Luc Van den hove’s keynote address focused on applications of ICs in communications, energy, health care, security, and transportation.

TI has been making ICs since they were co-invented by Kilby in 1958, and over a decade ago TI made a conscious decision to stop chasing ever-smaller digital chips. First it outsourced digital chip fabrication to foundries, and in 2012 it began retiring digital communications chips. Without continually shrinking components, how has TI managed to survive? By focusing on design and integration of analog components: in the most recent financial quarter the company posted 58% gross margin on $3.29B in sales.

At The ConFab last month, Dr. Gary Patton, vice president, semiconductor research and development center at IBM, said there is a bright future in microelectronics (as documented at Pete’s Posts blog).

The commercial semiconductor manufacturing industry will continue to see revenue growth in the future. We will process more area of silicon ICs each year, in support of shipping an ever-increasing number of chips worldwide. More fabs will be built, needing more tools and an increasing number of new materials.

Moreover, next-generation chips will be faster or smaller or cheaper or more functional, and so will better serve the needs of new downstream customers. ASICs and 3D heterogeneous chip stacks will create new IC product categories leading to new market opportunities. Personalized health care could be the next revolution in information technologies, requiring far more sensors and communications and memory and logic chips. With a billion components, the possibilities for new designs to create new IC functionalities seem endless.

However, we are past the era when the next chips will be simultaneously faster and smaller and cheaper and more functional. We have to accept the end of Dennard Scaling and the economic limits of optical lithography. Still, we should remember what Gordon Moore meant in 1965 when he first talked about the future of IC manufacturing, because one factor remains the same:

The next generation of commercial IC chips will be better.

Past posts in the blog series:

Moore’s Law is Dead – (Part 1) What defines the end.

Moore’s Law is Dead – (Part 2) When we reach economic limits.

Moore’s Law is Dead – (Part 3) Where we reach atomic limits.

Future posts in this blog will ruminate about new materials, designs, and technologies for the next 50 years of IC manufacturing.

—E.K.