
by M. David Levenson, Editor-in-Chief, Microlithography World

In an interview in Milpitas, CA after SEMICON West, Edward Charrier, VP/GM of KLA-Tencor’s process control information division, described the latest improvements in Prolith, the venerable litho simulation tool.

Prolith 11 supports the most likely double patterning option for the 32nm node, “litho-etch-litho-etch” (LELE), according to Charrier. “Prior to Prolith 11, computational lithography studies assumed that the two exposure steps could be considered independently,” he reported, “but embedded topography from the first pass can disrupt the second exposure.”

Prolith 11 calculates the electric fields inside the resist/hardmask stack using its extensive catalog of material parameters and the topography input by the user, preferably from scatterometry or CD-SEM measurements of the results of the first step. The exposure and development of the second resist film is simulated including the patterned hardmask, substrate topography, and reflectivity. For accuracy at 32nm, electromagnetic field (EMF) effects at the wafer due to the nonuniform film stack have to be included (see figure below). According to Charrier, the results differ significantly from those obtained by assuming planarity, producing shifts in the printed patterns.

A comparison of the electric field in the resist film caused by a plane perpendicular wave at the second pass exposure clearly highlights the complexity of introducing topography. (Source: KLA-Tencor)

Prolith has long strived to provide portable resist models that can be plugged into new situations and predict CDs and profiles accurately. Charrier reported major progress in the last few years in resist characterization and modeling, making 1nm accuracy possible through focus and dose. Of course, the physical models used by Prolith are slower than the heuristics employed by OPC engines, but he insists they are now fast enough to use on the small clips of circuit patterns of interest to R&D and process engineers. LithoWare, the Linux version of Prolith intended for layout and OPC designers, runs the same models more quickly on clusters of up to 120 processors.

Double patterning undoubtedly leads to increases in complexity and cost, but now computational tools are emerging to help with the decision-making. One can only hope that they achieve the same “predictive accuracy” and ease of use that characterized Prolith in previous eras. — M.D.L.

by James Montgomery, News Editor, Solid State Technology

July 30, 2008 – There’s a lot of optimism and interest crowding the photovoltaic space, as witnessed at several Intersolar presentations during SEMICON West (Weds. 7/16) offering different takes on the direction of the PV market. The first of these offered a decidedly contrarian view about the future (or lack of it) for many PV businesses.

Eric Wesoff, senior analyst with Greentech Media, started his high-energy presentation by immediately shifting its focus away from technologies and toward the emerging PV market in general, calling it “innovation in a frothy market.” Few other markets offer the growth promise that PV solar does, tracking at 40% growth for the past decade (see Figure 1) and now a $20B market with promises of 30% annual growth “for decades to come,” he said. Most PV capacity announcements today tout hundreds of megawatts in the next year or two (notably Kyocera, SunPower, Schott, Yingli, and First Solar), with a handful at the top planning gigawatt-scale production (including Q-Cells, Suntech, Solarworld, Brightsource, and Sharp), but it won’t be long before we start hearing about (and requiring) terawatt-scale expansion, he said (see Figure 2).

Figure 1: Global PV market has shown consistent 40% growth. (Source: Prometheus Institute/Greentech Media)
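Wesoff’s growth figures are easy to sanity-check with a compound-growth sketch. The $20B starting size and 30% forward rate are his numbers; the helper function below is purely illustrative:

```python
def project(start_billion, annual_rate, years):
    """Compound a market size forward at a constant annual growth rate."""
    return start_billion * (1 + annual_rate) ** years

# A $20B market growing 30%/year, as Wesoff projects
for year in (5, 10, 20):
    size = project(20, 0.30, year)
    print(f"after {year} years: ${size:,.0f}B")
```

Ten years of 30% growth multiplies the market nearly 14-fold, which is why terawatt-scale talk follows quickly from gigawatt-scale announcements.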

The surge in interest in the PV sector has generated concern about whether this is just another hot market bubble like those seen in the past (e.g., telecommunications and the early Internet buzz). Wesoff sees no such “bubble” in PV, though there are some similarities, e.g., massive VC investments in technology brought out of the lab prematurely. He criticized the “irrational exuberance” among investors, noting that VC money in 1H08 amounted to $1B, with PV taking up 15%-20% of the entire VC asset class. Those other bubbles likewise saw a huge influx of VC money invested in premature technologies, and ultimately resulted in a limited list of survivors, successes, and technology advances, he said.

Figure 2: Cumulative solar installed capacity, in GW. (Source: Greentech Media)

The stampede of new entrants trying to get a piece of the PV market has generated a “herd mentality” among VC supporters, who are scrambling to find the small handful of good IPO candidates from a lot already likely to include some well-known names such as Nanosolar, GT Solar, and even carmaker Tesla, Wesoff noted. He put up two slides stuffed with the names of >100 new entrants to the PV market — all of whom expect to grab the clichéd 10% marketshare in their PV technology niches, ranging from flexible substrates to quantum dots to inks. “For slightly used thin-film equipment by around 2009-2010, here’s your list of people to talk to,” he quipped. One company in particular Wesoff referred to as a “money oxidizer,” whose infamous achievement to date has been a solar panel technology that utterly failed to work in sunlight. Another company has its eyes on another VC round later this year and projects a $1B valuation, without having shipped a single product or booked a single sale, he pointed out. These firms and many others seeking to make a name in PV, Wesoff scolded, are simply “prematurely liberated from the lab into a frothy market.”

For now it’s still too early and expensive for PV technologies such as CIGS and organics, with the mix still mostly silicon vs. thin-film (90%/10%, maybe moving closer to 80%/20%), Wesoff said (see Figure 3). But he reiterated that scale of production seems to be the main hurdle, not technical challenges.

Figure 3: PV technology breakdown by % and region. Note nearly half of all US production was thin-film based in 2005-2006. (Source: Prometheus Institute/Greentech Media)

Ultimately, it won’t be investors or technology that drives solar. Wesoff noted that in Germany, seen as a global leader in PV, the weather brings roughly 200 cloudy days a year. Finding ways to better utilize the sun’s energy in such an environment isn’t what drives the market, he said — it’s establishing the political will to support such businesses and technology development and use. Session host Eicke Weber of Germany’s Fraunhofer Institute, a frequent participant in various Intersolar talks during the week, was seen nodding in agreement from down the table. — J.M.

by Debra Vogler, Senior Technical Editor, Solid State Technology

Susan Felch, principal member of the technical staff, frontend development at Spansion, summarized research she conducted while at Applied Materials — and done in conjunction with IMEC — at the West Coast Junction Technology Group meeting, sponsored by the northern California chapter of the American Vacuum Society (AVS) and held in conjunction with this year’s SEMICON West (July 17) in San Francisco.

Going from 90nm to 65nm to 45nm, scaling ultrashallow junctions (USJs) could be accomplished by controlling diffusion with co-implants (e.g., fluorine, carbon) and by introducing strain to boost the “on” current, Felch explained to the meeting attendees. From 45nm to 32nm, however, more drastic changes had to be used, such as new materials (HK+MG) and lower threshold voltage (Vt) processes to control the amount of diffusion. “As we go from 32nm to 22nm to 16nm, we’ll have to start thinking about changing device architecture. The challenges are getting tougher and tougher,” said Felch. Therefore, precise implantation along with millisecond annealing (MSA) for diffusion-less annealing seems to be the answer for USJ scaling. (Figure 1 from John Borland’s presentation at the event summarizes various MSA options.)

Figure 1: Millisecond anneal as an option for 32nm for a) n-type and b) p-type devices. (Source: IMEC, J.O.B. Technologies)

Reminding the attendees of just how challenging the road to scaled USJs will be, Felch pointed out that even the revised 2007 International Technology Roadmap for Semiconductors (ITRS) requirement for scaling of Lg and Xj for logic devices was relaxed from the previously unrealistic 70Å to 90Å-100Å, made possible because HK+MG lifted some of the burden. “This new requirement is still very, very tough,” she noted, adding that a laser anneal-only approach appears to meet all the requirements for 32nm.

Felch presented data that illustrated the compatibility of MSA with HK+MGs, reviewing several options for HK+MGs: fully silicided/gate last (FUSI, a scheme IMEC has promoted); replacement gate (RPT, promoted by Intel); and metal-insert polysilicon (MIPS, a gate-first approach). For the MIPS approach, materials have to withstand the thermal budget of the anneal, and the new gate stack must be etchable. Additionally, when using laser annealing, the etching has to be perfectly straight, with no footing. Data presented by C. Ortolland at this summer’s VLSI Technology conference (Honolulu, June 19, 2008) shows that dopant implantation and placement are very sensitive to gate profile with diffusion-less anneal. “A good straight profile will give nice Vt roll-off characteristics,” said Felch.

Another advantage of using laser annealing with the MIPS technique is that if a single metal is desired, it can be combined with a capping layer. Laser annealing thus enables the work functions of the two metals to be adjusted. Showing data from IEDM 2007, Felch observed that when using capping layers (for either pMOS or nMOS), the threshold voltage of the gate stack can be adjusted by varying the laser power and annealing temperature. Additional data further illustrated ways to find implant and laser annealing conditions that give the same leakage obtained with spike anneal (which is the baseline reference), but with a tradeoff in low leakage vs. high activation, as the deep Ge-PAI degrades leakage by >1000× (see Figs. 2 and 3).

Figure 2: Diode leakage. (Source: IMEC/C. Ortolland)

Figure 3: Defects positions. (Source: IMEC/C. Ortolland)

MSA must also be compatible with the strain boosters used in logic devices, so laser annealing must be compatible as well. Felch noted, however, that increasing the annealing temperature also increases the number of defects in the SiGe, since its melting temperature (1200°C-1300°C for a 20%-40% Ge concentration) is much lower than that of Si (1410°C). It is therefore necessary, she said, to find process windows in the annealing space that allow slightly higher temperatures for better device activation and less leakage. Data on the effects of laser dwell times shows that at lower Ge concentrations, laser temperatures can be higher — however, higher Ge concentrations will force MSA temperatures lower. “If we go to shorter dwell times for the laser anneal, that will enable slightly higher MSA annealing temperatures,” Felch said.

She also presented data showing a small variation in sheet resistance when using laser anneal, due to the stitching of the laser beam. However, the impact of laser stitching in short-channel devices was found to be negligible compared with other sources of spread.

For 32nm USJs, Felch advocated the need for millisecond anneal (MSA)-only, but noted that various integration issues/compatibilities will need to be dealt with, “that all have their process windows.” The process will have to be compatible with the HK+MG stack if embedded SiGe is going to be used, which places limitations on how high the temperature can go to prevent the onset of defect formation. In order to get the required activation and defect annihilation (for low leakage), higher temperatures will be needed. In the mid-temperature range processes, it is important to minimize poly depletion and also get good mixing of the high-k dielectric and capping layer. “So for different customers’ integration schemes and device structure, we’re going to have to find a proper process window where we can get all of the benefits and compatibilities with MSA,” she said.

At 22nm and beyond, when traditional planar bulk CMOS devices run out of steam, Felch noted that the industry will probably have to go to some sort of multigate scheme such as FinFET. “We’ll get better control over gate-to-channel currents, lower off-state leakage current, better immunity to short channel effects, and higher mobility in these types of devices,” she observed. But from a junction perspective there are some very serious challenges, she added. For one, the thin fins tend to become fully amorphized; the question then becomes, how can they be regrown with good crystalline nature? Other USJ challenges at 22nm and beyond are conformality and defectivity. — D.V.

by M. David Levenson, Editor-in-Chief, Microlithography World

July 23, 2008 – Device scaling is at the heart of Moore’s Law and progress in semiconductors, but technologists increasingly worry about the viability of the next steps. These issues were aired at a SEMICON West TechXpot discussion on Thursday (7/17).

Discussions began with Lars Liebmann of IBM musing about profitability at 32nm and beyond. There are no new lithographic exposure technologies in the offing, he warned, and the economics of computational lithography are different from that of exposure tool upgrades. Whereas the cost of a new scanner can be amortized over two or more entire generations of chips, the investment in computer simulation must be recovered with only a few chip designs. Thus, while computation is cheaper than new hardware, that does not mean that it is more economical for the industry, he said.

To profit in the new sub-sub-wavelength era, Liebmann recommended a systems approach to design that synergistically optimizes the few components that contribute provable value and can be amortized over a broad range of products and a long time horizon. He cited OPC implemented fab-wide as a good example, and mentioned chip-specific design-for-manufacturing (DFM) as a counterexample.

Liebmann also warned against being seduced by “silver bullets” like double-patterning technology (DPT), which he noted would be needed at different nodes for different tasks. DPT was already in production for line ends at 45nm, he noted. [IBM and Infineon have published work using DPT to solve gate linewidth variations — see Microlithography World 17.2 , May 2008, p7.] Liebmann also reported that a Power PC chip had been implemented in a gridded design using PdBrix, without increased area but with better yield. Such “co-optimized prescriptive designs” were the way to continue device scaling profitably, in his view.

Milind Weling of Cadence reviewed the challenges of splitting a design for double patterning, especially when there were multiple options for the patterning technologies. Weling pointed out that the recent SEMATECH Litho Forum came to a consensus that DPT would be essential for the next two nodes or so (to 22nm and possibly beyond), so in his view some investment is warranted — but the challenge is that there are several proposed DPT paradigms requiring diverse models for implementation. Cadence has developed heuristics that provide a complete DP-compliant DFM flow (with verification, density balancing, and split minimization) for all types of DPT, he claimed.

From the design side, making money means doing the least possible computation-driven layout modification, pointed out Michael Buehler-Garcia of Mentor Graphics. He advocated implementing DFM processes across all steps of design, from rule development to tape-out.

Tracy Weed of Synopsys highlighted the importance of yield management for 32nm and beyond. Unpredictable yield was “the elephant in the room,” he worried, and advocated design-for-test, corrective methods, and cost control.

Nobuhito Toyama of DNP described the reticle industry’s struggles with DFM. The first wave (OPC) disturbed the maskmaking infrastructure, while the second wave created wafer-patterning hotspots that were undetectable on the photomask, he reported. Today, tools exist for intelligent fracturing of layouts and to deal with specific problems (like hotspots and slivers), but they have not yet been integrated into an effective system, and the technologies are still chasing the specifications, observed Toyama.

Mark Mason of Texas Instruments concluded by pointing out that device scaling is first and foremost an economics issue; if it doesn’t pay off, it will stop. He pointed to two issues as to why it might stop — potential barriers of fabrication costs and variability — but observed that both could be helped by some form of DFM. One problem is the diversity of DFM terminology and viewpoints, something that he is trying to solve as chairman of the DFM consortium within the SI2 Alliance, working toward common definitions and visions.

Audience members expressed deep skepticism during the Q&A session. Are the EDA partners of the chipmakers actually providing value, or should they just go home and let the IDMs write their own tools? How can you design so that DPT works in multiple foundries? Panel members seemed aware of the problems, but offered no pat solutions. — M.D.L.

(July 22, 2008) NASHUA, NH – On Wednesday, July 16, 2008, the editors and publisher of Solid State Technology (SST) and Advanced Packaging Magazine presented the 2008 ACA Awards, in their usual impromptu style on the show floor at SEMICON West. This was the sixth straight year that attendees of SEMICON West were invited to vote on products they saw at the annual trade show.

Although the award program has traditionally been front-end heavy in new product entries, this year saw a shift in that balance, with three quarters of the products entered falling in the final manufacturing space.

Kristine Collins, publisher of both publications; Pete Singer, Editorial Director of both SST and Advanced Packaging; and Gail Flower, Editor-in-Chief of Advanced Packaging, acknowledged recipients in three categories – best solution to a problem, most innovative product, and best cost-of-ownership – for both front-end and back-end processes.

In wafer processing, the winner for best solution to a problem was Levitronix for its BPS 600 pump. Most innovative product went to Aviza Technology for its Versalis fxP 200/300mm cluster system. And best cost-of-ownership was awarded to NEXX Systems for the Apollo sputter deposition system.

In final manufacturing, the winners are:

Best solution to a problem: Juki CX-1 advanced placement system

Best cost-of-ownership: Kyzen MX2628 aqueous cleaner

Most innovative product: BTU International Pyramax 75A furnace

Congratulations to all the winners!

by Pete Singer, editor-in-chief

July 22, 2008 — There is a lot that can still be done to improve the efficiency of today’s 300mm factories. An early move to 450mm would only carry those inefficiencies forward another generation. That was the main message in an interview with Gerald Goff, principal member of technical staff at Advanced Micro Devices (AMD) in Austin, TX, during last week’s SEMICON West.

Goff described the company’s experience with the transition from 200mm to 300mm, which was done only to keep up with leading-edge process technology. “It’s not our perspective either that 450mm is never going to happen. It seems like it’s going to happen inevitably but the bottom line is that we just think the 2012 vision that’s out there is premature,” Goff said. “We don’t put a real date on it but we like to say we have 20-20 vision, but it’s 2020 and beyond is the timeframe when we will actually start to see some real life to it.”

Goff said “if you look at it from a holistic perspective, there’s really no point in taking all the inefficiencies we have today in our factories with us to 450. There are a ton of things we can be doing to eliminate inefficiencies in our factories at 300mm that are directly applicable to a 450mm fab. If we do the exact same thing that we’re doing today it’s going to multiply those inefficiencies by the square centimeters of the silicon we produce.”
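Goff’s point about inefficiencies multiplying with the silicon area produced can be made concrete. The diameters below are the standard wafer sizes; ignoring edge exclusion is a back-of-the-envelope simplification:

```python
import math

def wafer_area_cm2(diameter_mm):
    """Usable area of a round wafer, ignoring edge exclusion."""
    r_cm = diameter_mm / 10 / 2
    return math.pi * r_cm ** 2

a300 = wafer_area_cm2(300)  # ~707 cm^2
a450 = wafer_area_cm2(450)  # ~1590 cm^2
print(f"area ratio 450mm/300mm: {a450 / a300:.2f}x")  # 2.25x
```

Every inefficiency carried per square centimeter of silicon thus scales by a factor of 2.25 in the move from 300mm to 450mm, which is the multiplication Goff warns about.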

Although it was not called 200mm Prime at the time, Goff said AMD’s transition to 300mm was not driven by a desire to improve fab efficiency, but only by the need to keep up with the latest technology that was only available on 300mm tools. “In Fab 30, we had a very productive 200mm fab. It was your conventional early 200mm fab which was open cassette, sub-Class 1 complete fab environment. We were doing fine, we were getting the volumes that we needed, we had the efficiencies. We had the lowest cycle time in the industry per Sematech data per mask layer. We were doing really well in that space and we had the potential to make some good money out of it. The problem in the 300mm space was all the R&D money shifted from the chambers at 200mm to the chambers at 300mm so the technology ran out. We could have still been very productive in a 200mm factory. We just couldn’t do what we wanted to do because we were leading edge technology wise, so we were forced to be shifted into the 300mm space.”

Goff said he sees the same thing happening in the move from 300mm to 450mm. “There are only so many R&D dollars that are available from either our side of the fence or the suppliers’ side of the fence.” Those R&D dollars are going to be put where the industry decides they are most needed. “If we go focus on a wafer size change and forget about things like EUV and other process technologies that we’ve still got to work on at 300mm — not to mention all the efficiency gains and white-space elimination we can go through in how we operate our factories — then we’re going to be quickly forced into a 450mm environment if the R&D money starts going that way. This doesn’t really seem to make sense to us,” he said. — P.S.

by M. David Levenson, editor-in-chief, Microlithography World

July 17, 2008 – In an interview at SEMICON West, Mark Melliar-Smith, CEO of Molecular Imprints, described two paths for his company: one toward insertion into the semiconductor industry in 2009, and another toward producing a billion disks a year with 20nm features for the hard disk drive industry in 2010. He reported that nine tools have already been sold to hard disk manufacturers (and 20+ to the semiconductor industry), with the Imprio HD-2200 full-wafer tool capable of printing both sides of 65mm disks at a 180/hour clip with <20nm features. Such patterned media will be essential for data storage to maintain its own Moore's law progression (see Fig. 1). For a real factory, though, the lithography tool throughput would need to be 1000 disks/hour; he said MII would soon build those tools, based on a cluster tool concept.

While Melliar-Smith does not expect long lifetimes for imprint templates in such an environment, he reported that Molecular Imprints had already demonstrated a low-cost replication technology that employed the S-FIL process with the Imprio-300. Nick Stacey, MII’s director of marketing and business development, noted that an entire generation of R-θ e-beam tools is being developed for patterned HDD media mastering.
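The gap between the 180 disks/hour of a single tool and the 1000 disks/hour Melliar-Smith cites for a real factory suggests why a cluster-tool design follows naturally. A rough sizing sketch (the 85% utilization derating is an illustrative assumption, not an MII figure):

```python
import math

def heads_needed(target_per_hour, per_head_per_hour, utilization=0.85):
    """Minimum number of imprint heads to hit a target throughput,
    derated by an assumed tool utilization."""
    effective = per_head_per_hour * utilization
    return math.ceil(target_per_hour / effective)

print(heads_needed(1000, 180))  # 7 heads at 85% assumed utilization
```

Even at perfect utilization, six heads would be needed, which is consistent with clustering rather than speeding up a single tool.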

Fig. 1: Imprinted disk drive pattern. (Source: Molecular Imprints)

In semiconductor lithography, MII’s roadmap calls for continued improvements in the overlay of its 300 series tools, with 20nm being achieved routinely by the end of this year. That would allow those interested in ≤32nm manufacturing to do process development and integration, observed Melliar-Smith. In 2009, MII expects to develop its first high volume tools, with 20 wafers/hour throughput and 15nm overlay. Then, one year later, those tools would be clustered into a system capable of litho-cell scale throughput. Since the S-FIL tools are so much smaller than steppers and no track needs to be attached, the footprint of even a 10-head system would be comparable to today’s installations, he reported. The cost of ownership at 22nm is projected to be smaller than DUV double-patterning or EUV, though perhaps not as low as today’s 193nm single-pass immersion (see Fig. 2).

Fig. 2: Litho cost comparison at 22nm (Source: Molecular Imprints)

Melliar-Smith noted that neither EUV nor imprint can employ pellicles to protect the mask (or template) from accumulating defects, but that imprint had the “advantage” of being a 1X technology. Thus, the finely crafted master template could be replicated conveniently into low-cost high-fidelity stampers using the proven S-FIL technology. If a stamper was damaged, it could be discarded; in EUV, a contaminated or damaged 4X mask would have to be entirely re-written.

Imprint has prospered in spite of the fact that the majority of litho development attention and funding has gone into other technologies. Now with the need to ramp up production of two classes of tools — a stepper for semiconductors and a full-disk printer for hard drives — progress may be constrained by investment. Molecular Imprints and its R&D partners have demonstrated the viability of their method. Having done that, the time has come to get on with the next (industrial) stage. — M.D.L.

by James Montgomery, news editor, Solid State Technology

July 16, 2008 – Where to turn for inspiration in a tough 2008 for semiconductor suppliers? Try Yankee baseball great Yogi Berra and famously underappreciated comic Rodney Dangerfield, according to analyst John Housley of Techcet, in a Wednesday talk about materials forecasts at SEMICON West.

Despite handwringing on the semiconductor equipment side of the industry, materials firms actually are still chugging right along with growth as they have for decades, Housley pointed out, adding that in fact he’s found very little correlation between semiconductor sales and materials sales.

In his southern drawl, Housley invoked favorite and relevant Berra-isms about forecasting (“It’s tough to make predictions, especially about the future”) and market slowdowns (“I’m not in a slump, I’m just not hitting,” which he said is roughly equivalent to materials firms “not being in a downturn, we’re just not selling”). Still, he predicts a $43B market this year, and an 8%-9% CAGR through 2011 to $55B.
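Housley’s forecast is internally consistent: compounding his $43B 2008 figure at the quoted 8%-9% CAGR for three years lands at roughly $54B-$56B, in line with the $55B he predicts for 2011.

```python
start = 43.0  # $B, 2008 materials market per Housley
for cagr in (0.08, 0.09):
    size_2011 = start * (1 + cagr) ** 3  # three years of growth to 2011
    print(f"{cagr:.0%} CAGR -> ${size_2011:.1f}B in 2011")
```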


Spending per node for resists/ancillaries and interconnect materials. (Source: Techcet)

Among hot topics touched on in his brief (~25min) presentation, Housley countered general industry concerns about shortages of polysilicon, noting that most if not all of the top 10 polysilicon suppliers plan to double or triple (or more) their capacity in the next five years, and another 16 new players are coming into the market — all together, that could push Si capacity up 5× by 2011, feeding hungry markets in semiconductors and elsewhere. Solar, for example, is going “berserko, bonkers” to get its hands on silicon, he said, emphasizing his point by asking who among the roomful of participants was playing in some way in solar — and nearly all raised their hands.

Housley paused briefly to mention 450mm, noting that many significant questions still need to be answered, e.g. how usage of things like photoresists will be reworked. “God help us,” he said, and quickly moved on.

Housley spent several minutes on why the typical “confrontational” style of procurement relationship between chemicals/materials producers, their suppliers, and their IC customers needs to change right now. First, heavy demand means materials suppliers are having to compete with much bigger players (e.g., in aerospace) for things like titanium, copper, and tungsten — and the other industries that want those materials are less picky about purity or even cost. Second is pricing: years ago, price quotes were good for 30, 60, or 90 days, he said, but today he knows someone who has to call back four hours after a price quote or it will be gone. The risk from either of these pressures is simple, Housley said — consolidation, citing Air Products and Ashland as examples.

For materials firms, which historically feel they have been burdened with shouldering much of the initial cost of new technology development while still getting squeezed on pricing, and thus not reaping the benefits, the best defense is to stick to their guns. “Learning to say ‘no’ is something the people in this business need to learn to do,” Housley said. This hard-line stance also opens up opportunities, he said, citing as an example customers’ increasing desire to find softer, greener strip technologies while still pressing suppliers on prices. Look for “a disruptive technology in resist strip and removal” in the next two years, he predicted.

Rushed to close out his short talk, Housley zipped through a few final slides about continued demand for SiC and for CMP, which he characterized as “the darling of the materials business.” He called out a pie chart showing regional demand because it showed US demand at 18% — a figure Housley said was “very generous” — down from 30% not too long ago. This is a trend that materials firms need to recognize and adapt to, he said.

Housley also spotlighted efforts from Jenoptik to address customers’ desire to get more silicon for their investment, namely by pulling it out of scribe lines. The technology essentially uses a laser to put a hot and a cold spot on a silicon wafer to cleave off excess material for reuse. — J.M.

by Sarah Fister Gale, contributing editor, Solid State Technology

As the semiconductor industry matures and consolidations increase, demand for greater efficiencies in the fab is driving a change in thinking about how and when factories invest in new generations of technology.

Rather than diving into new facility upgrades, many fabs are taking a closer look at the systems they have in place and making incremental changes to improve them, suggests Mihir Parikh, president and CEO of Aquest Systems, a global supplier of automated material handling systems (AMHS). He talked with SST in the days before SEMICON West, where the company is showing off its FabEX technology.

“As industry growth rates decrease to single digits and more companies consolidate, it is critical that factories improve their productivity,” he said. “The industry is not just going to invest in next generation tools and technologies. They are going to do what they can to get the most from their existing fabs.”

Most fabs have some areas where the existing overhead transport (OHT) or overhead shuttle-based AMHS cannot keep pace with demand, creating bottlenecks and traffic jams that “steal productivity from the fab,” Parikh said. Aquest’s FabEX transporter, he explained, moves FOUPs at 3m/sec between interface locations (e.g., stocker and OHT input/output ports), which yields an 8%-20% productivity improvement in areas where fab equipment might otherwise sit idle waiting for FOUPs — and that translates to millions of dollars in savings.
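The 3m/sec figure for FabEX is from the interview; the route length and the slower baseline speed below are illustrative assumptions, included only to show how transit speed turns into saved queue time:

```python
def transit_seconds(route_m, speed_m_per_s):
    """Time for a FOUP to traverse a route at constant speed
    (acceleration and handoff overhead ignored)."""
    return route_m / speed_m_per_s

route = 150  # meters -- illustrative bay-to-bay distance, not a fab spec
for label, speed in (("baseline OHT (assumed 1 m/s)", 1.0), ("FabEX (3 m/s)", 3.0)):
    print(f"{label}: {transit_seconds(route, speed):.0f} s")
```

Cutting per-move transit time by two thirds on congested routes is the mechanism behind the idle-time reduction Parikh describes.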

Improving fab performance and productivity is a big pain point in ever-expanding and increasingly complex fab environments, and Parikh believes using “incremental products and services” (like his FabEX) is the key. “As an industry we’ve got to get the best out of what we have,” he said. — S.F.G.