
DFM Services in the Cloud


February 27, 2013

Joe Kwan is the Product Marketing Manager for Calibre LFD and DFM Services at Mentor Graphics. He is also responsible for the management of Mentor’s Foundry Programs. He previously worked at VLSI Technology, COMPASS Design Automation, and Virtual Silicon. Joe received a BA in Computer Science from the University of California, Berkeley and an MS in Electrical Engineering from Stanford University.

When to Farm Out Your DFM Signoff

The DFM requirements at advanced process nodes not only pose technical challenges to design teams, but also call for new business approaches. At 40nm, 28nm, and 20nm, foundries require designers to perform lithography checking and litho hotspot fixing before tapeout. In the past, DFM signoff has almost always been done in-house. But, particularly for designers who tape out relatively few devices, the better path may be to hire a qualified external team to perform some or all of your DFM signoff as a service.

DRC and DFM have changed dramatically over the past few years. At advanced nodes, you need to be more than just “DRC-clean” to guarantee good yield. Even after passing rule-based DRC, designs can still have yield-detracting issues that lead to parametric performance variability and even functional failure. At the root of the problem is the distortion of those nice rectilinear shapes on your drawn layout when you print them with photolithographic techniques. Depending on your layout patterns and their nearby structures, the actual geometries on silicon may exhibit pinching (opens), bridging (shorts) or line-end pull-back (see Figure 1).

Figure 1: SEM images of pinching and bridging. LPC finds these problems and lets you fix them before tapeout. Litho checking is mandatory at TSMC for 40nm, 28nm and 20nm process nodes.

In the past, these problems were fixed by applying optical proximity correction (OPC) after tapeout, often at the fab. But at 40nm and below, the alterations to the layout must be done in the full design context, i.e. before tapeout, which means that the major foundries now require IC designers to find and fix all Level 1 hotspots. TSMC’s terminology for this is Litho Process Check, LPC.

Usually, design companies purchase the DFM software licenses and run litho checking in-house. This approach has the obvious benefits of software ownership. Designers have full control over when and how frequently they run the checks. The design database doesn’t leave the company’s network. There is a tight loop between updating the design database and re-running verification.

But what if you have not yet set up your own LPC checking flow and need time to plan or budget for software and CPU resources? Or, if you only have a few tapeouts a year? In these cases, you would benefit from the flexibility and convenience of outsourcing the LPC check.

A DFM analysis service is an alternative to purchasing software: it performs the litho checking for you. Here’s how it works: the design house delivers the encrypted design database to a secure electronic drop box. The analysis service then runs TSMC-certified signoff—for example, Calibre LFD—in a secure data center. Your DFM analysis service should demonstrate that it has an advanced security infrastructure that can isolate and secure your IP. Access should be limited to only those employees who need to handle the data. You would get the litho results back, along with potential guides for fixing the reported hotspots. A cloud-based DFM analysis service for TSMC’s 40nm, 28nm, and 20nm process nodes is available from Mentor Graphics.

A DFM service can also be useful when you already have Calibre LFD licenses, but find yourself with over-utilized computing resources. Having a DFM service option gives you flexibility in getting through a tight CPU resource situation or can ease a critical path in your tape-out schedule. The DFM service can run the LPC while you perform the remaining design and verification tasks in parallel.

Whether you use a DFM service or run LPC in-house on purchased software, it is very important to run litho checking early and often. This lets you identify problematic structures early and allows more time to make the necessary fixes. Either way, you now have more flexibility to make the right business decision regarding how to reach DFM signoff.

Tim Turner, the Reliability Center Business Development Manager at the College of Nanoscale Science and Engineering (CNSE), Albany, NY, blogs about the potential of resistive memory and the reliability challenges that must be overcome.

Resistive memory (RRAM, or memristors) is a hot topic right now. RRAM has the potential for single-digit nanoscale parameters (switching speed as fast as 1 ns, area per bit as small as 5 square nm) and is non-volatile.

The technology is based on the formation of a small conductive filament inside an insulator. The filament is formed the first time using a high voltage. After that, a set or reset transformation (conductive to non-conductive or vice versa) is accomplished by moving one or a few atoms an atomic-scale distance. This can be done with a low voltage (less than a volt). This small movement gives a repeatable set or reset that can withstand many cycles.

Conduction in the filament appears to be due to oxygen vacancies existing in a percolation path through the insulator.  A small electric field in the reverse direction causes the migration of these oxygen vacancies in a mechanism similar to electromigration of Al or Cu atoms in a metal line.  Momentum exchange between electrons and the vacancies appears to be the driving force.  The vacancies do not have to move far to open the small filament.  An oxygen vacancy moves an atomic scale distance and the tiny filament opens, allowing an insulator to exist between points in the filament.  Forcing a forward voltage can move the oxygen vacancy back into the area where the filament is conductive.  This small movement can give a 100X change in the conduction through the dielectric.  This is the state change that can be interpreted as the digital signal stored on the memory cell.

The material set used for RRAM is CMOS compatible.  RRAM cells have been made out of Cu/HfOx [1], Al/AlOx/Pt, TiN/AlOx/Pt or even Al/AlOx/CNT (Carbon Nano Tubes)[2].  Most of the work reported to date has been on arrays where the cell is similar to a DRAM, using one transistor and one capacitor [3].  The RRAM cell starts with a capacitor, then forms the filament in the capacitor dielectric.  The advantage  this technology has is the smaller size of the capacitor.  There is no need for deep trenches in the silicon or for thick vertical stacks.   The technology is also non-volatile, so there is no need to refresh the charge every few milliseconds.

In polycrystalline materials, the filaments appear to form along grain boundaries between crystals [7].  For amorphous material there are no grain boundaries, but the material is reported to be able to withstand more cycles before failure [1].

RRAM might also be produced with a simple single-resistor cross-point array (no transistor per cell required). Figure 1 shows an array where each cell is addressed by a row and a column. The conduction in the row/column pair determines whether the cell is set or reset (conductive or insulating). This arrangement has the distinct advantage of allowing the memory array to be printed on top of a logic circuit. Active circuits are required only for the address circuitry, allowing a large memory array to be added with little additional silicon area.

Figure 1: Cross-point RRAM cell
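
To make the row/column read concrete, here is a minimal, purely conceptual Python sketch of selecting one cell and interpreting its conduction as a stored bit. The read voltage, resistances and current threshold are invented placeholder values, and the model deliberately ignores real complications such as sneak-path currents through unselected cells.

```python
# Minimal conceptual model of reading one bit in a cross-point array.
# All numbers are invented placeholders, and sneak-path currents through
# unselected cells are deliberately ignored.

V_READ = 0.2          # read voltage in volts, small enough not to disturb the cell
I_THRESHOLD = 1e-6    # amps; above this current we call the cell "set"

def read_cell(resistances, row, col):
    """Select one row/column pair and infer the stored state from conduction."""
    r_cell = resistances[row][col]
    current = V_READ / r_cell
    return 1 if current > I_THRESHOLD else 0   # 1 = set (filament intact), 0 = reset

# A 2x2 toy array: ~10 kohm with the filament formed, ~1 Mohm with it broken,
# reflecting the roughly 100x set/reset conduction contrast discussed above.
toy_array = [[1e4, 1e6],
             [1e6, 1e4]]

print([[read_cell(toy_array, r, c) for c in range(2)] for r in range(2)])
# -> [[1, 0], [0, 1]]
```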

That is the good news.  Now for the bad news.  What are the technology challenges that prevent you from enjoying this technology today?

The first issue is one of measurement noise. With atomic-scale spacing determining the difference between a set and a reset state, there is some uncertainty in the answer. Sometimes, a bit will not program. Nirmal Ramaswamy of Micron [3] reported that random bits in a large array failed any given write operation. There was an average number of failures for each write of a large array, but different bits failed each time. Every bit apparently has the same probability of failure.
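
One way to picture “an average number of failures per write, but different bits each time” is to simulate an array in which every bit fails independently with the same small probability. The array size and failure probability below are made-up illustrative values, not Micron’s data.

```python
import random

N_BITS = 1_000_000    # assumed array size (illustrative)
P_FAIL = 1e-5         # assumed per-bit, per-write failure probability (illustrative)

def failing_bits():
    """Bits that fail a single write, each failing independently with P_FAIL."""
    return {i for i in range(N_BITS) if random.random() < P_FAIL}

first_write, second_write = failing_bits(), failing_bits()
print(len(first_write), len(second_write))   # each write fails ~N_BITS * P_FAIL bits (~10)
print(len(first_write & second_write))       # but almost never the same bits twice (~0)
```

The expected failure count per write is N_BITS × P_FAIL, while the expected overlap between two consecutive writes is only N_BITS × P_FAIL², which is why the failing population keeps moving around.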

Random Telegraph Noise (RTN) is another issue. The state of the bit will most likely be read by forcing a voltage and measuring current. RTN is caused by trap states in the gate dielectric of a transistor that might address the bit. These traps randomly fill or emit, changing the conduction of the channel. The noise generated by this increases as transistors are scaled. Originally, this was thought to be just the larger impact of a single trap on a smaller-area gate [4], but Realov and Shepard [5] showed that shorter-channel transistors exhibit greater noise than longer transistors with the same total area (below 40nm). Thus, this is a problem that will increase as the technology is scaled. There is also a chance that RTN will be generated by the movement of oxygen vacancies in the filament itself.

Degraeve et al. [6] reported a highly voltage-sensitive disturb in the reset state. Their RRAM cell could withstand 100,000 disturb pulses (100 ns each) at -0.5 volts, but at -0.6 volts the cell could only withstand a little over 100 pulses. They also showed that the sensitivity to disturb could be reduced significantly by balancing and optimizing the set and reset pulses.

Figure 2: Disturb in Reset State
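
As a rough back-of-the-envelope reading of those disturb numbers (about 10^5 pulses tolerated at -0.5 V versus about 10^2 at -0.6 V), and assuming a simple exponential dependence on the disturb voltage, the implied acceleration factor is:

```latex
N(V) \propto e^{-\gamma\,|V|}
\quad\Rightarrow\quad
\gamma \approx \frac{\ln\!\big(10^{5}/10^{2}\big)}{0.6\,\text{V} - 0.5\,\text{V}} \approx 69\ \text{V}^{-1}
```

That is roughly one decade of disturb immunity lost for every 33 mV of additional disturb voltage, under this simplified exponential assumption.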

Optimization of the set and reset pulses also has a strong impact on the set/reset cycling endurance of the cell. Degraeve was able to show up to 10 billion (10 G) set/reset pulses after optimization.

Wu et al. [2] showed the impact of scaling on a cross-point array. According to their model, scaling the technology from 22nm to 5nm increases the parasitic word and bit line resistance from under 10 ohms to almost 100,000 ohms as the line width and thickness are reduced. Adding to the significance of this is the variation in resistance between the closest cell in the array and the furthest cell in the array. This variation could be over 4 orders of magnitude, while the difference between the set and reset resistance is only 2 orders of magnitude. This issue could restrict the size of sub-arrays, compromising the potential area savings of this technology.
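
The geometric part of that increase follows from the resistance of a rectangular wire. The expression below is only the idealized form; the figures in Wu’s model also reflect the sharp rise in effective resistivity from surface and grain-boundary scattering at very narrow line widths, and the near-versus-far variation comes largely from the very different series lengths L seen by the closest and farthest cells.

```latex
R_{\text{line}} = \rho_{\text{eff}}\,\frac{L}{w\,t},
\qquad
\left.\frac{R_{5\,\text{nm}}}{R_{22\,\text{nm}}}\right|_{\text{geometry only, per unit length}}
\approx \left(\frac{22}{5}\right)^{2} \approx 19
```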

As the metal lines are scaled to obtain higher memory densities, the filament that generates the conduction in the cell does not scale.  That means the set and reset pulse currents remain about the same as the array is scaled.  This results in an electromigration issue in the scaled metal lines.

Figure 3: Oxygen Vacancy Filament Determines Set or Reset State of RRAM Memory Cell

RRAM is certainly an appealing technology with its ability to scale the cell to tiny dimensions, good speed, CMOS compatible material set and the possibility of mounting the technology above a logic array.  Unfortunately, the devil is in the details and the list of advantages is balanced by a list of problems that must be overcome before this technology can carve out a space as a memory solution.

References:
[1] Jihan Capulong, Benjamin Briggs, Seann Bishop, Michael Hovish, Richard Matyi, Nathaniel Cady, College of Nanoscale Science and Engineering, “Effect of Crystallinity on Endurance and Switching Behavior of HfOx-based Resistive Memory Devices,” Proceedings of the International Integrated Reliability Workshop, 2012.
[2] Yi Wu, Jiale Liang, Shimeng Yu, Ximeng Guan and H. S. Philip Wong, Stanford University, “Resistive Switching Random Access Memory – Materials, Device, Interconnects and Scaling Considerations,” Proceedings of the International Integrated Reliability Workshop, 2012.
[3] Nirmal Ramaswamy, Micron, “Challenges in Engineering RRAM Technology for High Density Applications,” Proceedings of the International Integrated Reliability Workshop, 2012.
[4] K.K. Hung, P.K. Ko, Chenming Hu and Y.C. Cheng, “Random Telegraph Noise of Deep Sub-Micrometer MOSFETs,” IEEE Electron Device Letters, 1990, http://www.eecs.berkeley.edu/~hu/PUBLICATIONS/Hu_papers/Hu_JNL/HuC_JNL_167.pdf
[5] Simeon Realov and Kenneth L. Shepard, “Random Telegraph Noise in 45nm CMOS: Analysis Using an On-Chip Test and Measurement System,” IEDM, 2010, http://bioee.ee.columbia.edu/downloads/2010/S28P02.PDF
[6] R. Degraeve, A. Fantini, S. Clima, B. Govoreanu, L. Goux, Y. Y. Chen, D. J. Wouters, Ph. Rousset, G. S. Kar, G. Pourtois, S. Cosemans, J. A. Kittl, G. Groeseneken, M. Jurczak, L. Altimime, IMEC, “Reliability of Low Current Filamentary HfO2 RRAM Discussed in the Framework of the Hourglass Set/Reset Model,” Proceedings of the International Integrated Reliability Workshop, 2012.
[7] Gennadi Bersuker, SEMATECH, “Origin of Conductive Filaments and Resistive Switching in HfO2-based RRAMs,” Proceedings of the International Integrated Reliability Workshop, 2012, 1.2-1.

In the second article of the MEMS new product development blog, the importance of the first prototype will be discussed. Theoretical work is valuable and a necessary step in this process, but nothing shows proof of principle and sells a design like a working prototype. It’s something people can touch, observe and investigate, which helps dispel the doubt associated with change. Building multiple prototypes in this first phase is equally important: it begins validation early and either shows repeatability or provides evidence to change design and process direction.

The first prototypes should include both non-functional and functional samples. The non-functional samples are used to test one or more characteristics, such as the burst strength of a pressure sensor element. Fully functional samples can be used to test multiple performance interactions. An interaction might include how the packaging of a MEMS device influences its accuracy or how exposure to environmental conditions affects sensor performance over life. Let’s look at a few examples of how prototypes can influence proper decision making and expedite new product development.

When working with an OEM on the development of a MEMS sensor, the team hit a roadblock: the customer was pursuing one design direction (for very specific reasons) while the sensor team was trying to make a change to improve sensor performance in fluid drainage. The sensor package had two long, narrow ports of specific diameter, and the customer was resistant to change because of envelope size constraints and the need to retrofit legacy products in the field. However, the diameter of the ports was the most important factor in improving drainage. Engineers on both sides threw around theories for months with no common ground achieved. Then a prototype was built with several different port sizes and a drainage study was completed. A video was made showing visual evidence of the test results. It turned out that a 2 mm increase in port diameter resulted in full drainage under gravity, where the previous design held fluid until it was vigorously shaken. When the customer saw the results of the prototype testing in the video, a solution to open up the port diameter was reached in just a few days, including a method to retrofit existing products in production.

For another application, the engineering team needed to develop a method to prevent rotation of a MEMS sensor package. The customer requested that rotation be eliminated with a keying feature added at the end of a threaded port. One method to achieve this is broaching. This method involves cutting a circular blind hole, using a secondary tool to cut the material to a slightly different shape such as a hexagon, and then removing the remaining chip with a post-drill operation. When the idea was first introduced, most experts stated it was crazy to attempt such a feature in hardened stainless steel, and no one quoted the business. However, the team built a prototype to test the idea. The first prototype successfully broached 3 holes before the tool failed due to a large chip in the tool’s tip. The team examined the failure and learned that the chip in the tool resulted from an overly sharp cutting edge. The tool material was also suboptimal for this broaching process, but it had been obtained quickly. Learning from these mistakes, the team chose a more robust material and slightly dulled the cutting edge. These changes improved tool life from 3 to 92 broaches. This was a significant improvement, but not to the point of a robust manufacturing process. Again learning from the prototype, the team saw evidence that heat was playing a role in the failure. This led the team to change to a more robust lubricant (something with a consistency similar to honey). This single additional change improved tool life from 92 to over 1,100 broaches, and it was learned that tool life could be extended further by periodically sharpening and slightly dulling the edge. With further development, over 12,000 broaches were obtained from a single sharpening, with tool life lasting over 96,000 broaches. Hence a prototype quickly showed proof of concept but also led to process and tool design changes that provided a successful solution.

The last example is a fully functional prototype MEMS pressure sensor. Prior to building a prototype, analytical tools such as finite element analysis (FEA) were used to predict interactions between the packaging and the sense element when large external loads were applied to package extremities. These models are highly complex, and misuse of the tool by inexperienced users often results in team skepticism of the results. Colleagues may dismiss work of this nature as "pretty pictures": not very meaningful, or doubtful at best. However, when performed properly with attention to meshing, material properties, boundary conditions, applied loads and solvers, accurate results can be obtained. This allows for multiple design iterations analytically, prior to the first prototype, to ensure the sensor has the highest probability of achieving the desired performance. After finding a design solution where the packaging had less than 0.1% influence on the MEMS sense element performance, prototypes were built to validate both the optimized design (slightly higher cost, better predicted performance) and a non-optimized design (lower cost, lower predicted performance). Upon validation of both prototypes, the team found over 90% correlation between experimental and theoretical results. In addition, the first prototype (although having some flaws) was very functional and performed well enough to be used in a customer validation station. With high correlation between theory and experimentation, the once-questionable results were validated as trustworthy and further FEA could be performed for design optimization.

In each of the case studies reviewed above, early prototypes provided the engineering team with a wealth of information and proof of principle. In some cases, proof of principle is not obtained and the design/process direction needs to change, which is equally valuable information. The first prototypes can also be extremely valuable for influencing colleagues, customers and managers to pursue a particular design or process direction when theory can be disputed at length. In the next article of the blog, the critical design and process steps that lead to successful first prototypes will be discussed.

 

Author Biography:

David DiPaola is Managing Director for DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products. A 16 year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality. His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics. Previously he has held engineering management and technical staff positions at Texas Instruments and Sensata Technologies, authored numerous technical papers and holds 5 patents. To learn more, please visit www.dceams.com.  

Integration is a feature we all look for in our electronic devices. Information readily available on our smart phones is integrated with web-based services and with our personal data on our home computer.  This interoperability that we take for granted is thanks to common software and hardware platforms that are shared by all the elements of this system. Platforms surround us everywhere in our daily lives – the specific model of the car we drive is built on a platform, the electrical systems in our house are on a platform: 110/220V with universal plugs. Platforms?! So I got curious and looked up a more formal definition on Wikipedia:

Platform technology is a term for technology that enables the creation of products and processes that support present or future development.

Why has the concept of platforms been on my mind? Because I hear it more and more often from engineers in the trenches of the post-tapeout flow – people who develop the data preparation sequences that ready their design for manufacturing. They say it is getting increasingly complicated to accommodate all the functional requirements and still meet the TAT (turn-around time) requirements. The 20nm node adds additional complexity to this flow – beyond retargeting, etch correction, fill insertion, insertion of assist features and the application of optical proximity correction, decomposition-induced steps are now required, and they replicate some of these operations for both layers. Industry standards like the OASIS format enable communication between independent standalone tools, but they are not enough to enable extension into new functional areas while maintaining steady overall runtime performance. Users have to be familiar with all the features and conventions of each tool – not an efficient way to scale up an operation.

The oldest and most versatile platform in computational lithography is Calibre. It started with a powerful geometry processing engine and a hierarchical database, and is accessed through an integrated scripting environment using the standard verification rule format (SVRF) and the Tcl-based TVF (Tcl verification format). As the requirements for making a design manufacturable with available lithography tools have grown, so has the scope of functionality available to lithographers and recipe developers. APIs have expanded the programming capabilities: the Calibre API provides access to the database, the lithography API provides access to the simulation engine, the metrology API enables advanced programming of measurement site selection and characterization, and the fracture API enables custom fracture (Figure 1). All of these functions let you both build data processing flows that meet manufacturing needs and encode your very own ideas for the most efficient data processing approach. The additional benefit of a unified platform is that it enables seamless interaction and integration of tools in a data processing flow. If you can cover the full flow within one platform, rather than transferring giant post-tapeout files between point tools, you will realize a much faster turn-around time.

Figure 1: All tools in the Calibre platform are programmed using the SVRF language and tcl extensions and can be customized via a number of APIs – maintaining a common and integrated workflow.

A platform like Calibre is uniformly used in both the physical verification of the design and in manufacturing, so that innovation entering the verification space flows freely over to the manufacturing side without rework and qualification. Examples include the smart fill applications and the decomposition and compliance checks for double-patterning (DP).

The benefits of using a unified software platform in the post-tapeout flow, illustrated in Figure 2, are also leveraged by the EDA vendor—our software developers use the common software architecture in the platform for very fast and efficient development of new tools and features. This reduces the response time to customer enhancement requests. New technologies, like model-based fracture and self-aligned double patterning (SADP) decomposition, were rapidly prototyped on this foundation.

Figure 2: Benefit and scope of a platform solution and the support level provided by Calibre.  

 

A platform not only provides integration and efficient operation at the work-flow level, but it also enables efficiency at the data-center level, considering the simultaneous and sequential execution of many different designs and computational tasks. The tapeout manufacturing system is a complex infrastructure of databases, planning, and tracking mechanisms to manage the entire operation. Common interfaces into the tools used – which are guaranteed by a platform solution – let you track data and information associated with each run and manage interactions and feedback across different jobs. This leads, for example, to improved utilization of the computer system overall as well as better demand and delivery forecasting. Operating a manufacturing system requires a different level of support than single-tool solutions, and the necessary infrastructure has evolved with the development of the components.

Once you start using a unified platform in your post-tapeout flow, you will see how the platform expands and grows. For today’s sub-nanometer technologies, a powerful and flexible platform for computational lithography is part of a successful business strategy.

Author biography

Dr. Steffen Schulze is the Product Management Director for the Mentor Graphics’ Calibre Semiconductor Solutions. He can be reached at [email protected].

When a 300mm wafer is vacuum mounted onto the chuck of a scanner, it needs to be flat to within about 16nm over a typical exposure field, for wafers intended for 28nm node devices.1 A particle as small as three microns in diameter, attached to the back side of the wafer—the dark side, if you will—can cause yield-limiting defects on the front side of the wafer during patterning of a critical layer. The impact of back side particles on front side defectivity becomes even more challenging as design rules decrease.

Studies have shown that a relatively incompressible particle three microns in diameter or an equivalent cluster of smaller particles, trapped between the chuck and the back surface of the wafer, can transmit a localized height change on the order of 50nm to the front side of the wafer.2 With the scanner’s depth-of-focus reduced to 50nm for the 28nm node, the same back side particle or cluster can move the top wafer surface outside the sweet spot for patterning. The CD of the features may broaden locally; the features may be misshapen. The result is often called a defocus defect or a hotspot (Figure 1). These defects are frequently yield-limiting because they will result in electrical shorts or opens from the defective feature to its neighbors.

A particle on the back side of the wafer may remain attached to the wafer, affecting the yield of only that wafer, or it may be transferred to the scanner chuck, where it will create similar defects on the next wafer or wafers that pass through the scanner.

At larger design nodes, back side defects were not much of an issue. The scanner’s depth of focus was sufficient to accommodate a few microns of localized change in the height of the top surface of the wafer. At larger design nodes, then, inspection of the back side of the wafer was performed only after the lithography track and only if defects were found on successive wafers, indicating that the offending particle remained on the scanner chuck, poised to continue to create yield issues for future wafers. In this case corrective measures were undertaken on the track to remove any suspected contamination. The track was re-qualified by sending another set of wafers through it and looking for defectivity at the front side locus of the suspected back side particle. This reactive approach was economically feasible for most devices throughout volume production of 32nm devices.

At the 28nm node, however, lithography process window requirements are such that controlling back side particles requires a more proactive approach. Advanced fabs now tend to inspect the wafer back side before the wafer enters the scanner, heading off any potential yield loss. Scanner manufacturers are also encouraging extensive inspection of the back side of wafers before they enter the track. As we see which lithography techniques unfold for the 16nm and 10nm nodes and beyond, it’s entirely possible that 100% wafer sampling will become the best-known method.

As with inspection of the front side of the wafer, sensitivity to defects of interest (DOI) and the ability to discriminate between DOI and nuisance events are important. Even though particles need to be two to three microns in diameter before they have an impact on front side defectivity, the inspection system ought to be able to detect sub-micron defects, since small defects can agglomerate to form clusters of critical size. Sub-micron sensitivity is beneficial for identifying process tool issues based on the spatial signature of the defects—while high-resolution back side review enables imaging of localized defects, so that appropriate corrective actions can be taken to protect yield. Sub-micron sensitivity also serves to extend the tool’s applicability for nodes beyond 28nm.

For further information on back side inspection equipment or methodologies, please consult the second author.

Rebecca Howland, Ph.D., is a senior director in the corporate group, and Marc Filzen is a product marketing manager in the SWIFT division at KLA-Tencor.

Check out other Process Watch articles: “The Dangerous Disappearing Defect,” “Skewing the Defect Pareto,” “Bigger and Better Wafers,” “Taming the Overlay Beast,” “A Clean, Well-Lighted Reticle,” “Breaking Parametric Correlation,” “Cycle Time’s Paradoxical Relationship to Yield,” and “The Gleam of Well-Polished Sapphire.”

Notes:

1. Assuming a 193nm exposure wavelength, NA = 1.35 and k2 = 0.5, the depth of focus is about 50nm; the worked calculation appears after these notes. Normally 30% of the DOF is budgeted for wafer flatness.

2. Internal studies at KLA-Tencor.
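
Working through note 1 with the common Rayleigh form of the depth-of-focus expression:

```latex
\text{DOF} \approx k_2\,\frac{\lambda}{\text{NA}^2}
           = 0.5 \times \frac{193\,\text{nm}}{1.35^{2}}
           \approx 53\,\text{nm} \approx 50\,\text{nm},
\qquad
0.30 \times 50\,\text{nm} \approx 15\text{--}16\,\text{nm}
```

The second figure is the roughly 16nm wafer-flatness budget quoted at the start of the article.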

By David DiPaola, DiPaola Consulting, LLC.

New product development is an extremely rewarding area of engineering and business. It often brings innovation to unmet needs that can improve quality of life and be extremely profitable for entrepreneurs and large corporations alike. With MEMS technology exploding with new business opportunities, this blog will discuss the critical factors needed for success in the early stage of new product development.

New product development starts with an idea. A product to enable the blind to see is very appealing to consider. However, without a viable business and technical plan to show the path to commercialization, the idea is not worth very much and it’s impossible to influence investors or managers to support it. Hence the first step is to identify an application and a lead customer that a business plan can be developed around. Equally important are a favorable competitive landscape, no or limited patents surrounding the area of interest, and a large impact to society.

Applications that are driven by legislation or regulations are excellent because they have a high likelihood of fruition with definitive timelines. Legislation in automotive resulted in the development of MEMS-based occupant weight sensors that provided feedback to systems used to deploy air bags with different force levels, or not at all, to better protect passengers in the event of an accident. Even better are applications that give consumers what they want. The Argus II Retinal Prosthesis System is a device that partially restores sight for specific forms of blindness. This device provides electrical stimulation of the retina to elicit visual patterns of light that can be interpreted by the brain (see figure). Hence users can recognize doorways and windows and gain greater independence, a highly desired quality with significant impact. Over 1 million people in the US may benefit from this device, and the lead customers are people with profound retinitis pigmentosa.

The Argus II will be the first device to hit the market, and hence the competitive landscape is extremely favorable. Second Sight also benefits from large barriers to entry in this market due to the rigorous FDA approval process. However, competition is on their heels. Nano Retina is developing another device that is smaller, fits entirely within the eye, and promises to provide a greater number of pixels, enabling recognition of humans. Second Sight is also developing a next-generation device that is smaller, places the video camera in the eye, and provides improved vision with a greater number of electrodes.

Timing is another important aspect of new product development. There are limited windows in which a product can be developed and launched. When products are developed without an underlying customer demand, they rarely make it past the R&D phase into commercialization. Often, technologies are developed in universities 15 – 30 years before they become mainstream commercialized products. Conversely, if the product comes to market too late, OEMs have already picked development partners and are reluctant to change suppliers. The application space may also be saturated with competitors, making it difficult to win market share. Depending on the industry, these windows vary in size considerably. A typical cycle in automotive can range from 2 – 5 years, consumer electronics can be as little as 6 months, and Class III biomedical applications can see cycles greater than 10 years. Hence it is important to fully understand market opportunities and have a detailed schedule to demonstrate the product can be launched within this defined window. Equally important, some core technology elements of the design must be developed to a functional point, with limited areas needing major development, or it will be challenging to meet the defined schedule.

For the occupant weight sensor, there was a limited time to engage with OEMs and show proof of concept before production suppliers were chosen after the legislation came into law. The sense element and conditioning electronics were proven in another automotive sensor, and the packaging was the major development piece. The required compliance with government legislation dictated an aggressive schedule for the product development, validation, launch and ramp cycle.

An often mismanaged portion of new product development is the team behind the innovation. A team with robust chemistry, passion and a single leader is key to success. Multiple team leaders and poor chemistry only lead to infighting and redundant efforts. It is also important to limit team size to a critical few to expedite decision making and keep focused on what’s important. Larger teams tend to get distracted with items outside of the core focus and can miss critical details and deadlines, causing product failure. Self-assembled teams starting at the grass-roots level more often than not have excellent chemistry. They begin with an idea generated by 1 – 2 people, and an additional 1 – 3 trusted colleagues are brought in, in support roles, to help manage the work load that often occurs after hours. This natural selection process brings people with similar passions together and weeds out less motivated people, as they do not want the added work load.

An extremely important attribute of successful teams is keeping a low profile and minimizing negative influences from external sources. At a project’s beginning, it seems the vast majority of people are against it or have an opinion on why the project will not be successful. In reality, it is a fear of risk and the unknown. Hence those teams that understand this and maintain a high risk tolerance, while still working to minimize risk, have a definite advantage. Once early project successes are achieved, there will be plenty of time to tell others about the latest innovation. Having an advocate at the vice president level in this early stage is also extremely helpful, because it can channel much-needed funds to the project and keep middle managers without similar vision from halting activity.

Speaking the language of investors and business leaders is critical to getting the financial backing to make the development happen and commercialization a reality. Hence the product’s business plan must show that target profits can be achieved with a reasonable payback time on investment dollars. It is recommended that the plan include low, medium and high production volume estimates, product costs, product selling price and gross revenues. Operational costs, taxes, equipment depreciation, travel, engineering, marketing and overhead costs all need to be captured as accurately as possible. Concluding the analysis with return on investment, net present value and internal rate of return provides a good financial overview of the project.
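
As an illustration of the kind of summary figures such a plan might report, here is a small Python sketch that computes net present value and the payback year from a yearly cash-flow model. Every input (investment, volumes, price, cost, overhead, discount rate) is a made-up placeholder rather than a figure from any real business plan, and taxes, depreciation and IRR are omitted for brevity.

```python
# Hedged illustration only: all inputs below are invented placeholders.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows; index 0 is the initial investment."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_year(cash_flows):
    """First year in which cumulative cash flow turns positive (None if never)."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None

investment  = -2_500_000                                     # year-0 development and tooling
units       = [50_000, 200_000, 400_000, 400_000, 400_000]   # yearly volume ramp
price, cost = 9.00, 5.50                                     # selling price and unit cost, USD
overhead    = 150_000                                        # yearly fixed operating cost

cash = [investment] + [u * (price - cost) - overhead for u in units]
print(f"NPV at 12%: ${npv(0.12, cash):,.0f}, payback in year {payback_year(cash)}")
```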

New product development is an exciting area with many opportunities in MEMS applications.  Identification of your lead customer and application, knowing the competitive and patent landscape, creating high impact products, being sensitive to timing, having small, focused teams, and developing a robust business plan can make a large difference in the success of product commercialization. Please stay tuned for future articles that explore additional aspects to achieve success in new product development.

Author Biography:

David DiPaola is Managing Director for DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products.  A 16 year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality.  His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics.  Previously he has held engineering management and technical staff positions at Texas Instruments and Sensata Technologies, authored numerous technical papers and holds 5 patents. To learn more, please visit www.dceams.com.

By Rebecca Howland, Ph.D., and Tom Pierson, KLA-Tencor.

Is it time for high-brightness LED manufacturing to get serious about process control?  If so, what lessons can be learned from traditional, silicon-based integrated circuit manufacturing?

The answer to the first question can be approached in a straightforward manner: by weighing the benefits of process control against the costs of the necessary equipment and labor. Contributing to the benefits of process control would be better yield and reliability, shorter manufacturing cycle time, and faster time to market for new products. If together these translate into better profitability once the costs of process control are taken into account, then increased focus on process control makes sense.
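
A deliberately simple way to frame that weighing exercise is a yearly benefit-versus-cost screen like the sketch below. Every number is an invented placeholder, and only the yield benefit is modeled; cycle time, reliability and time-to-market gains would add to the benefit side.

```python
# Back-of-the-envelope screen; all figures are invented placeholders.

wafers_per_month  = 10_000
value_per_wafer   = 1_800.0      # USD of good-die value on a fully yielding wafer
yield_gain        = 0.02         # +2 percentage points of yield from added inspection

annual_benefit = wafers_per_month * 12 * value_per_wafer * yield_gain

tool_depreciation  = 2_000_000.0   # annualized inspection-tool cost
labor_and_overhead = 300_000.0     # engineers, service contracts, cleanroom space
annual_cost = tool_depreciation + labor_and_overhead

print(f"benefit ${annual_benefit:,.0f} vs cost ${annual_cost:,.0f} -> "
      f"{'process control pays' if annual_benefit > annual_cost else 'not yet'}")
```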

Let’s consider defectivity in the LED substrate and epi layer as a starting point for discussion. Most advanced LED devices are built on sapphire (Al2O3) substrates. Onto the polished upper surface of the sapphire substrate an epitaxial (“epi”) layer of gallium nitride (GaN) is grown using metal-organic chemical vapor deposition (MOCVD).

Epitaxy is a technique that involves growing a thin crystalline film of one material on top of another crystalline material, such that the crystal lattices match—at least approximately. If the epitaxial film has a different lattice constant from that of the underlying material, the mismatch will result in stress in the thin film. GaN and sapphire have a huge lattice mismatch (13.8%), and as a result, the GaN “epi layer” is a highly stressed film. Epitaxial film stress can increase electron/hole mobility, which can lead to higher performance in the device. On the other hand, a film under stress tends to have a large number of defects.

Common defects found after deposition of the epi layer include micro-pits, micro-cracks, hexagonal bumps, crescents, circles, showerhead droplets and localized surface roughness. Pits often appear during the MOCVD process, correlated with the temperature gradients that result as the wafer bows from center to edge. Large pits can short the p-n junction, causing device failure. Submicron pits are even more insidious, allowing the device to pass electrical test initially but resulting in a reliability issue after device burn-in. Reliability issues, which tend to show up in the field, are more costly than yield issues, which are typically captured during in-house testing. Micro-cracks from film stress represent another type of defect that can lead to a costly field failure.

Typically, high-end LED manufacturers inspect the substrates post-epi, taking note of any defects greater than about 0.5mm in size. A virtual die grid is superimposed onto the wafer, and any virtual die containing significant defects will be blocked out. These die are not expected to yield if they contain pits, and are at high risk for reliability issues if they contain cracks. In many cases nearly all edge die are scrapped. Especially with high-end LEDs intended for automotive or solid-state lighting applications, defects cannot be tolerated: reliability for these devices must be very high.

Not all defects found at the post-epi inspection originate in the MOCVD process, however. Sometimes the fault lies with the sapphire substrate. If an LED manufacturer wants to improve yield or reliability, it’s important to know the source of the problem.

The sapphire substrate itself may contain a host of defect types, including crystalline pits that originate in the sapphire boule and are exposed during slicing and polishing; scratches created during the surface polish; residues from polishing slurries or cleaning processes; and particles, which may or may not be removable by cleaning. When these defects are present on the substrate, they may be decorated or augmented during GaN epitaxy, resulting in defects in the epi layer that ultimately affect device yield or reliability (see figure).

Patterned Sapphire Substrates (PSS), specialized substrates designed to increase light extraction and efficiency in high-brightness LED devices, feature a periodic array of bumps, patterned before epi using standard lithography and etch processes. While the PSS approach may reduce dislocation defects, missing bumps or bridges between bumps can translate into hexes and crescent defects after the GaN layer is deposited. These defects generally are yield-killers.

In order to increase yield and reliability, LED manufacturers need to carefully specify the maximum defectivity of the substrate by type and size—assuming the substrates can be manufactured to those specifications without making their selling price so high that it negates the benefit of increased yield. LED manufacturers may also benefit from routine incoming quality control (IQC) defect measurements to ensure substrates meet the specifications—by defect type and size.

Substrate defectivity should be particularly thoroughly scrutinized during substrate size transitions, such as the current transition from four-inch to six-inch LED substrates. Historically, even in the silicon world, larger substrates are plagued initially by increased crystalline defects, as substrate manufacturers work out the mechanical, thermal and other process challenges associated with the larger, heavier boule.

A further consideration for effective defect control during LED substrate and epi-layer manufacturing is defect classification. Merely knowing the number of defects is not as helpful for fixing the issue as knowing whether the defect is a pit or particle. (Scratches, cracks and residues are more easily identified by their spatial signature on the substrate.) Leading-edge defect inspection systems such as KLA-Tencor’s Candela products are designed to include multiple angles of incidence (normal, oblique) and multiple detection channels (specular, “topography,” phase) to help automatically bin the defects into types. For further information on the inspection systems themselves, please consult the second author.

Rebecca Howland, Ph.D., is a senior director in the corporate group, and Tom Pierson is a senior product marketing manager in the Candela division at KLA-Tencor.

Check out other Process Watch articles: “The Dangerous Disappearing Defect,” “Skewing the Defect Pareto,” “Bigger and Better Wafers,” “Taming the Overlay Beast,” “A Clean, Well-Lighted Reticle,” “Breaking Parametric Correlation,” “Cycle Time’s Paradoxical Relationship to Yield,” and “The Gleam of Well-Polished Sapphire.”

By Gandharv Bhatara, product marketing manager for OPC technologies, Mentor Graphics.

For nearly three decades, semiconductor density scaling has been supported by optical lithography. The ability of the exposure tools to provide shorter exposure wavelengths or higher numerical apertures has allowed optical lithography to play such an important role over such an extended time frame. However, due to technical and cost limitations, conventional optical lithography has reached a plateau with a numerical aperture of 1.35 and an exposure wavelength of 193nm. Although intended for the 32nm technology node, this configuration has been pushed into use for the 20nm technology node.

The continued use of 193nm optical lithography at the 20nm technology node brings with it significant lithography challenges – one of the primary challenges being the ability to provide sufficient process window to pattern the extremely tight pitches. Several innovations in computational lithography have been developed in order to squeeze every possible process margin out of the lithography/patterning process.  In this blog, I will talk about two specific advances that are currently in deployment at 20nm.

The first such innovation is in the area of double patterning. As the pitch shrinks to below 80nm, double patterning becomes a necessary processing/patterning technique. One of the impacts of double patterning on the manufacturing flow is that foundries now have to perform optical proximity correction (OPC) on two separate masks after the layout has been decomposed. There are two approaches available to do this. In the first approach, each mask undergoes a separate OPC process, independent of the other. In the second approach—developed, deployed, and recommended by Mentor Graphics—the two masks are corrected simultaneously. This approach allows critical information, like edge placement error and critical dimension, to be dynamically shared across the two masks. This concurrent double patterning approach (Figure 1) ensures the best quality correction, good stitching across the two masks, and a significantly reduced risk of intra-mask bridging.

 

 

Figure 1: Concurrent double patterning OPC corrects the two decomposed masks at the same time, sharing information between them.
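
The difference between the two approaches can be illustrated with a deliberately oversimplified 1-D toy model (this is not Mentor’s algorithm; the linear proximity model, coefficients and spacings are invented purely for illustration). One edge sits on each decomposed mask, facing the other across a space, and each correction loop moves its drawn edge to cancel the simulated edge placement error (EPE).

```python
# Toy 1-D illustration of independent vs. concurrent double-patterning OPC.
# All numbers and the linear "proximity" model are invented for illustration.

TARGET_A, TARGET_B = 0.0, 80.0   # desired printed edge positions (nm)
K = 0.3                          # assumed proximity coefficient
S_REF = 120.0                    # assumed spacing at which no proximity shift occurs

def printed(a, b):
    """Printed edge positions for drawn edge a (mask A) and drawn edge b (mask B)."""
    squeeze = S_REF - (b - a)            # how much tighter than the reference spacing
    return a + K * squeeze, b - K * squeeze

def independent_opc(iters=20):
    """Each mask is corrected alone, assuming the other stays at its drawn target."""
    a, b = TARGET_A, TARGET_B
    for _ in range(iters):
        pa, _ = printed(a, TARGET_B)     # mask A never sees mask B's correction
        a -= pa - TARGET_A
        _, pb = printed(TARGET_A, b)     # mask B never sees mask A's correction
        b -= pb - TARGET_B
    return a, b

def concurrent_opc(iters=20):
    """Both masks are corrected together; each iteration simulates the current pair."""
    a, b = TARGET_A, TARGET_B
    for _ in range(iters):
        pa, pb = printed(a, b)           # EPE information is shared every iteration
        a -= pa - TARGET_A
        b -= pb - TARGET_B
    return a, b

for name, (a, b) in [("independent", independent_opc()), ("concurrent", concurrent_opc())]:
    pa, pb = printed(a, b)
    print(f"{name:11s} residual EPE: {pa - TARGET_A:+.2f} nm / {pb - TARGET_B:+.2f} nm")
```

In this toy setup the independently corrected masks land a couple of nanometers off target because each one compensates against a stale picture of its neighbor, while the concurrent loop, which re-simulates the current pair every iteration, drives the residual EPE to essentially zero.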

The second innovation is in the area of technical advances in OPC techniques. As the process margin gets tighter, traditional or conventional OPC may not be sufficient to process difficult-to-print layouts. These layouts are design rule compliant but require a more sophisticated approach in order to make them manufacturable. We developed two approaches to deal with this situation. The first is to perform a localized in-situ optimization. This is a computationally expensive approach, which precludes its use as a full-chip technique, but it improves printing by enhancing the process margin for extremely difficult-to-print patterns (Figure 2).

Figure 2: Hotspot improvement with in-situ optimization. The simulated contour lines show an improvement in line width after optimization.

In-situ optimization is integrated within the OPC framework, so it is seamless from an integration standpoint. The second approach is a technique for post-OPC localized printability enhancement. OPC at 20nm typically uses conventional OPC and simple sub-resolution assist features (SRAFs). We developed an inverse lithography technique in which the OPC and the SRAFs have greater degrees of freedom and can employ non-intuitive but manufacturable shapes. This is also a computationally expensive approach, but it allows for significant process window improvement for certain critical patterns and for the maximum possible lithography entitlement. In this approach, you first run OPC and identify lithography hotspots (difficult-to-print patterns), then apply the localized printability enhancement techniques to the hotspots. All the necessary tooling and infrastructure to enable this approach for all major foundries are available.

Both these advances in computational lithography are critical enablers for the 20nm technology node. In my next blog, I will talk about extension of these techniques to the 14nm technology node.

Author biography

Gandharv Bhatara is the product marketing manager for OPC technologies at Mentor Graphics.

 

 

In an IC fab, cycle time is the time interval between when a lot is started and when it is completed. The benefits of shorter cycle time during volume production are well known: reduced capital costs associated with having less work in progress (WIP); reduced number of finished goods required as safety stock; reduced number of wafers affected by engineering change notices (ECNs); reduced inventory costs in case of a drop in demand; more flexibility to accept orders, including short turnaround orders; and shorter response time to customer demands. Additionally, during development and ramp, shorter cycle times accelerate end-of-line learning and can result in faster time to market for the first lots out the door.

Given all the benefits of reducing cycle time, it’s useful to consider how wafer defect inspection contributes to the situation. To begin with, the majority of lots do not accrue any cycle time associated with the inspection, since usually less than 25 percent of lots go through any given inspection point. For those that are inspected, cycle time is accrued by sending a lot over to the inspection tool, waiting until it’s available, inspecting the lot and then dispositioning the wafers. On the other hand, defect inspection can decrease variability in the lot arrival rate—thereby reducing cycle time.

Three of the most important factors used in calculating fab cycle time are variability, availability, and utilization. Of these, variability is by far the most important. If lots arrive at process tools at a constant rate, exactly equal to the processing time, then no lot will ever have to wait and the queue time will be identically zero. Other sources of variability affect cycle time, such as maintenance schedules and variability in processing time, but variability in the lot arrival rate tends to have the biggest impact on cycle time.
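
The outsized role of arrival variability can be seen with the queueing approximation popularized in the Factory Physics text cited in the references: queue time is roughly a variability term times a utilization term times the process time. The numbers below are illustrative only, not measurements from any fab.

```python
# Kingman-style "VUT" approximation for the wait at a single process step:
# queue time ~= V (variability) x U (utilization factor) x T (process time).
# Illustrative numbers only.

def queue_time(ca2, ce2, utilization, process_time_hrs):
    """Approximate expected time a lot waits in queue at one tool."""
    v = (ca2 + ce2) / 2.0                    # arrival + process variability (squared CVs)
    u = utilization / (1.0 - utilization)    # blows up as the tool nears 100% loading
    return v * u * process_time_hrs

PROCESS_TIME, UTILIZATION = 2.0, 0.85        # a 2-hour step on a tool loaded to 85%
for ca2 in (0.25, 1.0, 4.0):                 # smooth, random, and bursty (WIP-bubble) arrivals
    wait = queue_time(ca2, 1.0, UTILIZATION, PROCESS_TIME)
    print(f"arrival variability {ca2:>4}: ~{wait:.1f} hours in queue")
```

At the same 85 percent utilization, smoothing the arrival stream cuts the expected queue time several-fold, which is the mechanism by which avoiding WIP bubbles feeds back into shorter cycle time.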

In the real world lots don’t arrive at a constant rate and one of the biggest sources of variability in the lot arrival rate is the dreaded WIP bubble—a huge bulge in inventory that moves slowly through the line like an over-fed snake. In the middle of a WIP bubble every lot just sits there, accruing cycle time, waiting for the next process tool to become available. Then it moves to the next process step where the same thing happens again until eventually the bubble dissipates. Sometimes WIP bubbles are a result of the natural ebb and flow of material as it moves through the line, but often they are the result of a temporary restriction in capacity at a particular process step (e.g., a long “tool down”).

When a defect excursion is discovered at a given inspection step, a fab may put down every process tool that the offending lot encountered, from the last inspection point where the defect count was known to be in control, to the current inspection step.  Each down process tool is then re-qualified until, through a process of elimination, the offending process tool is identified.

If the inspection points are close together, then there will be relatively few process tools put down and the WIP bubble will be small.  However, if the inspection points are far apart, not only will more tools be down, but each tool will be down for a longer period of time because it will take longer to find the problem.  The resulting WIP bubble can persist for weeks, as it often acts like a wave that reverberates back and forth through the line creating abnormally high cycle times for an extended period of time. 

Consider the two situations depicted in Figure 1 (below). The chart on the top represents a fab where the cycle time is relatively constant. In this case, increasing the number of wafer inspection steps in the process flow probably won’t help.  However, in the second situation (bottom), the cycle time is highly variable. Often this type of pattern is indicative of WIP bubbles.  Having more wafer inspection steps in the process flow both reduces the number of lots at risk, and may also help reduce the cycle time by smoothing out the lot arrival rate.

 

Because of its rich benefits, reducing cycle time is nearly always a value-added activity. However, reducing cycle time by eliminating inspection steps may be a short-sighted approach for three important reasons. First, only a small percentage of lots actually go through inspection points, so the cycle time improvement may be minimal. Second, the potential yield loss that results from having fewer inspection points typically has a much greater financial impact than that realized by shorter cycle time. Third, reducing the number of inspection points often increases the number and size of WIP bubbles. 

For further discussions on this topic, please explore the references listed at the end of the article, or contact the first author.

Doug Sutherland, Ph.D., is a principal scientist and Rebecca Howland, Ph.D., is a senior director in the corporate group at KLA-Tencor.

Check out other Process Watch articles: “The Dangerous Disappearing Defect,” “Skewing the Defect Pareto,” “Bigger and Better Wafers,” “Taming the Overlay Beast,” “A Clean, Well-Lighted Reticle,” “Breaking Parametric Correlation,” “Cycle Time’s Paradoxical Relationship to Yield,” and “The Gleam of Well-Polished Sapphire.”

References

1. David W. Price and Doug Sutherland, “The Impact of Wafer Inspection on Fab Cycle Time,” Future Technology and Challenges Forum, SEMICON West, 2007.

2. Peter Gaboury, “Equipment Process Time Variability: Cycle Time Impacts,” Future Fab International, Volume 11 (6/29/2001).

3. Fab-Time, Inc., “Cycle Time Management for Wafer Fabs: Technical Library and Tutorial.”

4. W.J. Hopp and M.L. Spearman, “Factory Physics,” McGraw-Hill, 2001, p. 325.

In a significant announcement during the SEMICON Japan 450mm Transition Forum that sheds new light on the availability of 450mm wafer processing lithography capability, Kazuo Ushida, president of Nikon Precision Equipment Company, said that the company plans to ship high-volume manufacturing (HVM) lithography tools in 2017 through a joint development effort with a chip maker.  Nikon plans to have ArF immersion 450mm prototype tools in 2015-16.

Other 450mm-related news during SEMICON Japan came from opening keynote speaker Kazumasa Yoshida, representative director and president of Intel K.K., who confirmed Intel’s new 450mm Japan Metrology Center (JMC) in Tsukuba.

Ushida said that requirements for 450mm lithography include higher throughput, improved overlay accuracy, and improved imaging performance. The industry’s expected high-volume 450mm EUV lithography insertion in 2018 will likely be delayed due to insufficient light source progress, mask infrastructure and EUV photoresist development challenges. Furthermore, Ushida said that the very steep EUV technology improvement curve required is not realistic in that timeframe.

Lithography has been a serious concern among other wafer process equipment tool providers, who have been reluctant to invest in 450mm research and development ahead of a viable advanced node patterning solution.  Other equipment companies have been interested in a 450mm litho solution for process development reasons and also as an assurance that chip makers will actually be ready to implement a complete manufacturing line in the timeframe that tool development is being requested.

Ushida’s pledge aligns with the G450C roadmap. G450C vice president and general manager Frank Robertson, also speaking at SEMICON Japan, said that the consortium plans for nano-imprint litho and pitch-doubling capability over the next two years, and “real” 193nm immersion litho capability for tool demonstrations at the unit process level by mid-2014.

Robertson said that G450C has now negotiated a full set of wafer process and metrology tools, with the exception of lithography. In addition to 14nm tool demonstrations, G450C is focused on providing test wafers. Test monitor 450mm wafers are expected to be available in 2Q13, prime wafers in 2Q14 and epi wafers in 1Q15. G450C is giving priority to consortium members and the participating tool makers. They will also provide 450mm test wafers to others via a wafer loan program. Robertson pointed to a wafer loan request process on the G450C website.

According to economic analysis from Akira Minamikawa, vice president, IHS iSuppli Japan, NAND memory and microprocessors warrant the larger wafer size, but DRAM will not require 450mm manufacturing. Minamikawa estimates that twenty 450mm fab lines (up to as many as 50 “at the most”) will be built in a ten-year period. He contrasts that figure with 160 fab lines for 300mm and 240 8-inch fab lines that were built in a similar timeframe.

The need for innovation and collaboration to achieve the cost and time objectives was a common observation from G450C, Nikon and TEL representatives speaking at the Forum.  Each speaker pointed to the important role of SEMI Standards in achieving a successful transition and the need for chip makers, consortia and equipment companies to collaborate in new ways.

Tokyo Electron Limited (TEL) VP and general manager Akihisa Sekiguchi commented that equipment makers face inordinate challenges in the 450mm transition. Concurrent 300mm/450mm R&D and a prolonged 450mm startup impose significant financial risk, and he warned that it may take years for equipment makers to see an ROI. TEL’s proposal is to unify its various internal platforms for 450mm and establish an open platform alliance to share previously proprietary information with the supply chain. Concepts for harmonizing facility connections have been well received by the G450C as a means to reduce installation costs and complexity.

Intel’s new 450mm Japan Metrology Center (JMC) in Tsukuba was referenced in an earlier industry presentation at SEMICON West and has been operational since July, but had not been publicly announced. Yoshida said that the mission of the new facility is to support the 450mm network by providing “quick turn” metrology, improved supplier R&D velocity and a link to the G450C activity in Albany, New York.

Yoshida said that the semiconductor industry is spurred by silicon innovation and penetration.  He referenced smartphone demand, which is expected to grow at a 24 percent compound annual growth rate over the next several years, and tablets with a 53 percent CAGR, as key market drivers.  He further commented that 15 billion intelligent connected devices will be present by 2015 and contribute to an estimated 35 trillion GB of data traffic. He said that the combined market drivers potentially yield semiconductor industry sales with greater than 50 percent upside by 2020 — meaning a worldwide market of about $450 billion. He anticipates that Intel products will represent a 25-26 percent share of the total and therefore the company requires aggressive investment to increase capacity.

The next SEMI 450mm Transition Forum will occur at SEMICON Korea on January 30, 2013.