
David DiPaola is managing director for DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products.  A 17-year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality.  His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics.  He employs a problem solving-based approach, working side-by-side with customers from startups to multi-billion dollar companies.  David also serves as senior technical staff to The Richard Desich SMART Commercialization Center for Microsystems, is an authorized external researcher at The Center for Nanoscale Science and Technology at NIST and is a senior member of IEEE. Previously, he held engineering management and technical staff positions at Texas Instruments and Sensata Technologies. He has authored numerous technical papers, is a respected lecturer and holds five patents. Visit www.dceams.com.

After a functional A-sample prototype is built, it doesn’t take long for a project with market pull to gain traction.  This is usually the point at which a project becomes highly visible within a company and enters the Technology Development Process (TDP). The TDP is made up of multiple phases including concept, prototype, pilot and production, with gates at the end of each phase.  Design and process reviews are required at each gate but may also occur within a phase. These reviews are an open forum for communicating project progress and gaps against technological, business and schedule milestones. Furthermore, the product is constantly evaluated against the market need and any changes in the market that may have occurred. The audience for the reviews at a gate includes peers and management, who provide feedback on the project to date and collectively decide whether additional work is needed to complete the current phase or the completed work is sufficient to allow the project to proceed to the next phase with additional funding.  In certain instances, a project that has not met all of the deliverables may be allowed to proceed to the next phase, but under strict conditions that must be fulfilled within a given timeline.  The goal of the TDP is to focus the team on high quality execution, effectively screen projects so that only the best proceed, and hence accelerate successful innovation and profitability.

The MEMS Industry Group (MIG) Technology Development Process Template is an excellent tool for companies to use to implement the TDP within their organization (Marty et al. 2013). The goal of the template was to create a simplified framework that could be easily customized to fit a company’s needs. The TDP structure shown below is a slightly modified version of the TDP developed by MIG.  In this version there are four major phases, concept, prototype, pilot and production, with three major gates.

 

Figure 1: TDP Structure

 

The concept phase is where ideas are generated and the initial A-samples are developed. It is also where the business case is first generated and the market need is defined.  It is highly desirable to have market pull at this point. The prototype phase is where the design is developed in detail and B-samples are fabricated to support various levels of validation. The outcome of the prototype phase is a design that can be manufactured in volume production. Toward the end of the prototype phase, production tooling is often released. The pilot phase is where production tooling is built and qualified.  In addition, the product is made on production tooling (C-samples) and revalidated. It is important to note that there should be no change in the product design between the last revision in prototype and the first samples off the production tooling. The production phase is the low- to high-volume production ramp. Often customers will require revalidation of products in production once a year for the life of the product.

At each gate, there is a design and process review for the project. For the team to be focused and efficient, there needs to be a clear set of deliverables defined for completion of each phase.  These deliverables range from business and market definition to project technical details to production launch.  The checklist below provides an in-depth set of deliverables for the design reviews at each gate that can be tailored to the specific needs of an organization. Note that a fourth gate is common 3-6 months after production launch to review project status, but it is not depicted in Figure 1.

This table can be downloaded from the following link in PDF format.  Many of the items listed above are self-explanatory.  Others, such as DFMEA and tolerance stacks, are explained in more detail in previous blog posts.

The Technology Development Process is an essential element of successful MEMS new product launches.  The Design Review Checklist can also provide a framework for discussion between management and engineers on the deliverables required to pass a particular gate.  With improved communication and efficient execution of technology development, the TDP is a great tool for accelerating innovation and profitable MEMS products.  In next month’s blog, the necessary attributes of a MEMS engineer for new product development will be discussed.

Works Cited:

Marty, Valerie, Dirk Ortloff, and David DiPaola. "The MIG Technology Development Process Template." MEMS Industry Group, Mar. 2013. Web. 28 Apr. 2013.

 

David DiPaola is Managing Director for DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products.  A 17-year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality.  His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics.  He employs a problem solving-based approach, working side-by-side with customers from startups to multi-billion dollar companies.  David also serves as Senior Technical Staff to The Richard Desich SMART Commercialization Center for Microsystems, is an authorized external researcher at The Center for Nanoscale Science and Technology at NIST and is a Senior Member of IEEE. Previously, he held engineering management and technical staff positions at Texas Instruments and Sensata Technologies. He has authored numerous technical papers, is a respected lecturer and holds five patents.  To learn more, please visit www.dceams.com.

The fourth article of the MEMS new product development blog is Part 2 of the critical design and process steps that lead to successful prototypes.  In the last article, the discussion focused on definition of the customer specification, product research, a solid model and engineering analysis to validate the design direction.  The continuation of this article reviews tolerance stacks, DFMEA, manufacturing assessment and process mapping.       

A tolerance stack is the process of evaluating potential interferences based on the interaction of components’ tolerances.  On a basic level, a cylinder may not fit in a round hole under all circumstances: if the cylinder’s outside diameter is on the high side and the hole’s inside diameter is on the low side, the overlap of their tolerances causes an interference.  This situation becomes complex when multiple components are involved, because the number of variables can quickly reach double digits.  A simple approach to tolerance stacks is the purely linear, or worst case, approach, where full tolerances are added to determine the potential for interference.  However, experience from producing millions of sensors shows this approach is overly conservative and a non-optimal design practice.  If the tolerances of the assembly follow a normal distribution, are statistically independent, are bilateral and are small relative to the dimension, a more realistic approach is the modified root sum of the squares (MRSS) tolerance stack technique.  In this approach, the root sum of the squares of the tolerances is multiplied by a safety factor to determine the maximum or minimum geometry for a set of interrelated components.  The safety factor accounts for cases where the RSS assumptions are not fully true.  This approach is only recommended when four or more tolerances are at play.  If only two tolerances are present, as in the first example above, it is recommended to perform a linear tolerance stack.  In some cases, linear tolerances need to be added to an MRSS calculation (MRSS calculation + linear tolerances = result).  Pin position inside a clearance slot for anti-rotation is one such linear tolerance: the pin can sit anywhere in the slot at any given time and does not follow a normal statistical distribution.

An example of an MRSS tolerance stack is provided below to review this concept in more detail.  Let’s determine whether the wirebond coming off of the sense element will interfere with the metal housing.  A modified RSS tolerance stack shows line-to-line contact, and only a small adjustment in the design is needed to resolve the issue.  The linear tolerance stack shows a significant interference that requires a larger adjustment.  Dimensions and tolerances are illustrative only.

Figure 1: MEMS Sensor Package (mm)

Figure 2: Modified Root Sum Square Versus Linear Tolerance Stack Approaches

MRSS approach:

0.17 > SF*((T1^2 + T2^2 + T3^2 + T4^2 + T5^2)^0.5)

0.17 > 1.2*((0.01^2 + 0.05^2 + 0.025^2 + 0.10^2 + 0.08^2)^0.5) = 0.17

Linear approach:

0.17 > T1 + T2 + T3 + T4 + T5

0.17 > 0.01 + 0.05 + 0.025 + 0.10 + 0.08 = 0.27
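For readers who want to experiment with stacks of their own, the calculation is easy to script.  Below is a minimal Python sketch of both approaches using the illustrative tolerances and 0.17 mm clearance from Figure 2; the 1.2 safety factor follows the MRSS formula above.

```python
from math import sqrt

def linear_stack(tolerances):
    """Worst-case (linear) stack: full tolerances add directly."""
    return sum(tolerances)

def mrss_stack(tolerances, safety_factor=1.2):
    """Modified root sum of squares: RSS of the tolerances scaled by a
    safety factor to cover cases where the RSS assumptions (normal,
    independent, bilateral, small) are not fully true."""
    return safety_factor * sqrt(sum(t ** 2 for t in tolerances))

# Illustrative tolerances from Figure 2 (mm); 0.17 mm is the available
# clearance between the wirebond and the metal housing.
tols = [0.01, 0.05, 0.025, 0.10, 0.08]
clearance = 0.17

for name, stack in (("MRSS", mrss_stack(tols)), ("Linear", linear_stack(tols))):
    verdict = "fits" if stack <= clearance else "interferes"
    print(f"{name}: {stack:.3f} mm -> {verdict}")
# MRSS:   0.168 mm -> fits (line-to-line contact, as in the example)
# Linear: 0.265 mm -> interferes (0.27 rounded, as in Figure 2)
```

Remember the caveat above: MRSS is only recommended when four or more statistically independent tolerances are in play.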

An excellent text on this subject is Dimensioning and Tolerancing Handbook, by Paul J. Drake, Jr. and published by McGraw-Hill.

DFMEA, design failure mode and effects analysis, is another extremely effective tool for identifying troublesome areas of the design that need to be addressed to prevent failures in validation and the field.  Simply put, it is a systematic approach to identifying potential failure modes and their effects and finding solutions to mitigate the risk of a potential failure.  A Risk Priority Number (RPN) is established by rating and multiplying the severity, occurrence and detection of the failure mode (severity*occurrence*detection = RPN).  The inputs to the tool are the design feature’s function, the reverse of the design function, the effect of the desired function not being achieved, and the cause of the desired function not being achieved.  There is also an opportunity to add design controls for prevention and detection.  The outputs are the corrective actions taken to mitigate the risk of a potential failure. Figure 3 shows a brief example of this approach for a MEMS microphone.

Figure 3: Design Failure Mode and Effects Analysis

Further information on DFMEA can be found at Six Sigma Academy or AIAG.  The corrective action section is left out of the illustration for clarity.
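As a sketch of the bookkeeping involved, the Python snippet below scores a few hypothetical failure modes for a MEMS microphone on typical 1-10 DFMEA scales and ranks them by RPN.  The entries and ratings are invented for illustration; they are not the contents of Figure 3.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    feature: str
    mode: str
    severity: int    # 1 (no effect) .. 10 (hazardous)
    occurrence: int  # 1 (remote) .. 10 (almost inevitable)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: severity * occurrence * detection
        return self.severity * self.occurrence * self.detection

# Hypothetical MEMS microphone entries, for illustration only
modes = [
    FailureMode("diaphragm", "stiction to backplate", 8, 4, 5),
    FailureMode("acoustic port", "particle ingress", 6, 5, 3),
    FailureMode("wirebond", "fatigue under vibration", 7, 2, 4),
]

# Rank so corrective actions target the highest-risk items first
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.feature:14s} {m.mode:26s} RPN = {m.rpn}")
```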

It is also extremely important that the manufacturing process be considered from the first day of the design process.  Complete overlap of design and process development is the true embodiment of concurrent design.  The following illustration depicts this well:

Figure 4: Concurrent Design

Hence, before a MEMS design is started, discussions should be initiated with the foundry, component fabrication suppliers and the process engineers responsible for the package assembly.  These meetings are excellent times to review new capabilities, discuss initial ideas and explore new concepts.   Considering the design from a process perspective simultaneously with other design requirements leads to highly manufacturable products, often at the lowest cost.    In essence, the design engineer performs a constant manufacturing assessment with each step in the design phase.  This methodology also encourages process short loops in the design phase to develop new manufacturing steps.  This expedites the prototype process with upfront learning and provides feedback to the design team for necessary changes.  The additional benefit of this approach is that the broader team is on board when prototyping begins, as they had a say in shaping the design.

Another tool for thoroughly understanding the process in the design phase is process mapping.  Using this methodology, process inputs, outputs, flow, steps, variables, boundaries, relationships and decision points are identified and documented.  The level of detail is adjustable: the map can start as a broad overview, with more detail added as the design progresses.  This quickly provides a pictorial view of the process complexity, the variables affecting the design function, gaps, unintended relationships and non-value-added steps.  It can also be used as a starting point for setting up the sample line in a logical order to assemble prototypes, estimating cycle time and establishing rework loops.  To further clarify this method, a partial process map for a deep reactive ion etch process is provided:

Figure 5: Partial Process Map of Deep Reactive Ion Etch Process

This process map is not all-inclusive but illustrates the process flow, critical parameters, inputs and a decision point.  The personal protection equipment, tools used and relationships in the process are omitted for brevity.  With this level of process detail available to the design team, the complexity of feature fabrication can be evaluated, anticipated variation from process parameters can be analyzed, and much more, possibly prompting design changes.
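A process map can also live alongside the design as a simple data structure.  The fragment below sketches a Bosch-style deep reactive ion etch loop in Python; the step names, variables and the depth decision point are hypothetical stand-ins, not the actual content of Figure 5.

```python
# Each step records its key process variables (inputs) and its successor;
# a decision node branches on a measured output, forming the etch loop.
process_map = {
    "load wafer": {"inputs": ["wafer lot", "patterned photoresist mask"],
                   "next": "etch cycle"},
    "etch cycle": {"inputs": ["SF6 flow", "platen RF power", "cycle time",
                              "chamber pressure"],
                   "next": "passivation cycle"},
    "passivation cycle": {"inputs": ["C4F8 flow", "coil RF power", "cycle time"],
                          "next": "measure depth"},
    "measure depth": {"decision": "target depth reached?",
                      "yes": "strip and inspect",
                      "no": "etch cycle"},     # rework loop back into the etch
    "strip and inspect": {"inputs": ["O2 plasma strip", "SEM inspection"],
                          "next": None},
}

# A flat listing already exposes the loop and its critical variables
for step, node in process_map.items():
    detail = node.get("decision") or ", ".join(node["inputs"])
    print(f"{step:20s} {detail}")
```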

Knowledge of, and attention to detail in, these eight critical yet often overlooked steps are essential in the design of highly manufacturable, low cost and robust products.  These methodologies create a strong foundation upon which additional skills are built to provide a balanced design approach.  In next month’s blog, the design review process and a checklist will be discussed to help engineers prepare for this important peer review process.

 

 

Gandharv Bhatara is the product marketing manager for OPC technologies at Mentor Graphics.

The long-expected demise of optical lithography for manufacturing ICs has been delayed again, even though the technology itself has reached a plateau with a numerical aperture of 1.35 and an exposure wavelength of 193nm. Immersion lithography is planned for the 20/22nm node, and with the continued delay of EUV, is now the plan of record for 14nm.

How is it possible to use 193nm wavelength light at 14nm? How can we provide the process window to pattern such tight pitches? The secret lies in computational lithography. For 20nm, the two key innovations in computational lithography involve enabling double patterning with concurrent OPC, and improving difficult-to-print layouts with localized in-situ optimization and inverse lithography techniques.

For 14nm, computational lithography offers more tools for process window enhancement, with better approaches to sub-resolution assist features (SRAFs). SRAFs have been used since the 130nm node for resolution enhancement, but for 14nm, SRAF placement has evolved considerably. SRAF placement has traditionally been based on a set of defined rules, which gives excellent coverage for line-space layouts and moderately good coverage for complex 2D layouts, along with fast runtime. However, the final SRAF coverage is highly dependent on the OPC recipe that the user is able to tune. Setting up these highly tuned recipes for 2D layouts can be time consuming, and they can still prove inadequate on very complex 2D layouts, leading to catastrophic failures in certain locations. The complexity of developing a well-tuned SRAF rules recipe, given the introduction of pixelated sources and more sophisticated contact and via layouts, has driven lithographers away from rules-based solutions and toward model-based approaches.

Two distinct approaches have developed: ILT-assisted rules-based placement and fully model-based placement. In the ILT-assisted approach, you use inverse lithography technology (ILT) analysis to create a golden reference for rules-based SRAF placement. ILT provides the ultimate lithography entitlement, but may not be practical to deploy in manufacturing because of increased mask cost and runtime. So, you use ILT only to find the best rules, and then let a rules-based SRAF tool do the actual placement. This gives a superior process window for critical blocks like SRAM, where the rules are relatively easy to develop.

The second approach is a true model-based approach, where a model is used to determine which areas on the mask would benefit most from SRAFs and also to perform the initial SRAF placement. Model-based SRAF optimization reduces dependence on rules generation and improves SRAF placement. Model-based SRAFs can provide a process window comparable to that provided by ILT tools, but with much lower mask cost and runtime. The model-based approach is particularly useful for random logic designs, where developing rules continues to be challenging. Figure 1 shows a wafer validation done by IMEC, which shows that the process window obtained using model-based SRAFs and dense OPC was the same as that obtained by using an ILT tool.

Given that the ILT-assisted, rules-based approach and the model-based method are each good for different design styles, what if you could combine them easily into a hybrid approach? A hybrid approach combining the best of both solutions provides a single, unified SRAF recipe for SRAM (rules-based) and random logic designs (model-based). This is one of the secrets of 14nm computational lithography: advanced SRAF solutions that provide flexibility, control runtime, and leverage both rules-based and model-based approaches for the most challenging layouts.

Figure 1. Similar process window with model based SRAFs and ILT

 

Figure 2.  A novel hybrid SRAF placement flow guarantees high lithography entitlement and resolves the SRAF development challenge.

David DiPaola is Managing Director for DiPaola Consulting, a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products.  A 16-year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality.  His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics.  Previously, he has held engineering management and technical staff positions at Texas Instruments and Sensata Technologies, authored numerous technical papers and holds five patents.  To learn more, please visit www.dceams.com.

 

In the third article of the MEMS new product development blog, critical design and process steps that lead to successful prototypes will be discussed.  These items include definition of the customer specification, product research, a solid model, engineering analysis to validate design direction, tolerance stacks, DFMEA, manufacturing assessment and process mapping.  With the modeling and analysis tools available, and short loops for both design validation and process development, it is possible, and should be expected, to have functional prototypes on the first iteration.

Read David’s first installment of the MEMS new product development blog.

Thorough review of the customer specification and an understanding of the application are two of the most critical steps in developing a prototype.  Without this knowledge, it’s a guess whether the design will succeed in meeting the performance objectives with next to zero quality problems.  The issues often encountered are that the customer specification is poorly defined, that it does not exist or that there are gaps between customer targets and supplier performance.  It is the responsibility of the lead engineer to work with the customer to resolve these issues in the beginning stages of the prototype design to ensure a functional prototype is achieved that is representative of a product that can be optimized for production.   Furthermore, this specification creates an agreement between the supplier and customer on expectations and scope.  Should either of these change during the project, the deliverables, cost and schedule can be revisited.  Expectations and scope include package envelope, application description, initial and over-life performance specifications, environmental, mechanical and electrical validation parameters, and schedule and quantities for prototype and production.  In this process the supplier and customer review each item of the specification and mark it as acceptable as written or needing modification given current knowledge.  There can also be areas requiring further research and development before an agreement on the topic can be reached.  This entire process is documented and signed by both parties as a formal contract.  Then, as more is learned about both the product design and the application, modifications to the agreement can (and likely will) occur with the consent of both parties.

Product research is another area of significant importance to the prototype process. This research has several branches, including the technology to be used, existing intellectual property, materials, design approaches, analysis techniques, manufacturing processes to support the proposed design direction and standard components available, to name a few.  Product research will also involve reaching out to experts in the different fields that will play a role in the product design.  This is the initial data collection phase: learning from previous work by reading patents, journal articles, conference proceedings and textbooks, and building a team of qualified professionals.  This process is sometimes chaotic and overwhelming while wading through mounds of information in search of a viable design path.  However, this only lasts for a short period as trends start to form, innovation is birthed and a path is forged.

Parametric 3D modeling is no longer a luxury but a must-have in the design and prototype process.  It is essential for visualizing the design, documenting it and analyzing function, geometric properties and potential interferences.  However, use of the solid model should not stop there.  The documented geometry can be imported, through a live link or other means, into various other tools such as CNC machining, finite element analysis, tolerance stack analysis, motion visualization, fabric pattern generation prior to stitching, mold flow analysis, electrical simulation, equipment interaction and process development.  The solid model should be considered the starting point for a much larger analytical model that is used to describe the fabrication, function and performance of the product and its components.  Once the solid model is complete, it is also extremely helpful to make stereolithography (SLA) or 3D printed components that can be felt, observed and often used for preliminary product testing.  For a trivial cost, SLAs can provide a wealth of information prior to prototype and help sell the design to colleagues and customers.

As highlighted in the previous paragraph, engineering analysis is the process used to validate the design and process direction theoretically.  The analysis can range from a manual hand calculation of deflection to sophisticated finite element analysis predicting the strain in the diaphragm of a MEMS pressure sensor due to deformation of the surrounding package under thermal conditions.  The key to successful analysis is not only proper engineering judgment on parameters and attention to detail in model creation, but also validation of the analysis through experimentation or other theoretical means.  For example, the FEA results for a MEMS diaphragm under large deflection can be compared to other theoretical calculations of a round plate under large deflection that have been validated by experiment.  Correlation of the results suggests the model is in the ballpark and can be used to evaluate other parameters such as stress and strain.  In this analysis phase, the global model is often comprised of several smaller models using different analytical means that are then tied back together for a prediction of performance.  With many live links between pieces of analytical software and the power of today’s computers, this process is becoming more efficient with better overall accuracy.
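As a concrete instance of that kind of cross-check, the sketch below computes the classical small-deflection result for a clamped circular plate under uniform pressure; the diaphragm dimensions and silicon properties are assumed values for illustration, not those of any particular design.  If FEA and this formula disagree wildly in the small-deflection regime, the FEA model deserves scrutiny; once the deflection approaches the plate thickness, large-deflection (membrane) terms must be added.

```python
def clamped_plate_center_deflection(q, a, E, h, nu):
    """Center deflection of a clamped circular plate under uniform
    pressure q (classical small-deflection plate theory):
        w0 = q * a**4 / (64 * D),   D = E * h**3 / (12 * (1 - nu**2))
    Valid only while w0 is small compared to the thickness h."""
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))
    return q * a ** 4 / (64.0 * D)

# Assumed example: 1 mm diameter, 20 um thick silicon diaphragm at 700 kPa;
# E = 170 GPa and nu = 0.22 are representative (silicon is anisotropic).
w0 = clamped_plate_center_deflection(q=700e3, a=0.5e-3, E=170e9, h=20e-6, nu=0.22)
print(f"center deflection ~ {w0 * 1e6:.1f} um")
# ~5.7 um, about 0.3x the thickness: already near the edge of
# small-deflection validity, so a large-deflection check is warranted.
```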

To better illustrate the points above, a case study of a MEMS SOI piezoresistive pressure sensor will be reviewed.  This pressure sensor was designed for operating pressures of 1000 – 7000 kPa. Due to the pressure range, the surface area of the sensor bonded to the mating package substrate needed to be maximized while minimizing the overall footprint to increase the number of sensors per wafer.  Hence a deep reactive ion etch was used to obtain near-vertical sidewalls.  A thicker silicon handle wafer was used to provide additional strain isolation from the sensor package while staying within a standard silicon size range for lower cost.  The silicon reference cap provided a stable pressure reference on one side of the sensor diaphragm.  Its geometry was optimized for handling, processing and dicing.

A solid model was created of the design, including the wirebond pads, aluminum traces, interconnects, oxide layers and piezoresistors on the silicon membrane wafer.  In addition, the cap and handle wafers were modeled.  Although not shown here for proprietary reasons, each layer of the membrane was modeled as it would be fabricated in the foundry.  This enabled the development of a process map and flow.  Finite element analysis of the diaphragm under proof pressure loads showed that the yield strength of the aluminum traces could be exceeded in close proximity to the strain gages, which can cause errors in sensor output.  Hence doped transition regions were added to keep the aluminum out of this high stress region.  A comprehensive model of the piezoresistive Wheatstone bridge was created to select resistor geometry and predict the performance of the sensor under varying pressure and thermal conditions.  Strain induced in the gages by the applied operating pressure and the resulting deflection of the diaphragm was modeled using finite element analysis.  A model was also created to determine the approximate energy levels needed to dope both the piezoresistors and the transition regions.  This information was critical in discussions with the foundry in order to design a product optimized for manufacture, as doping levels and geometry were correlated.   Furthermore, short process loops were developed at the NIST Nanofab to optimize etch geometry and validate burst strength.
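A first-order version of such a bridge model fits in a few lines.  The sketch below uses the standard two-divider formulation of a Wheatstone bridge; the resistance, excitation and resistance-change numbers are assumed for illustration and are not the values from this design.

```python
def bridge_output(vin, r1, r2, r3, r4):
    """Wheatstone bridge as two voltage dividers (r1-r2 and r4-r3);
    the output is the difference between the two midpoint voltages."""
    return vin * (r2 / (r1 + r2) - r3 / (r3 + r4))

# Full active piezoresistive bridge: two gauges increase by dR and two
# decrease by dR, which reduces to Vout = Vin * (dR / R).
R, vin = 5000.0, 3.3   # ohms, volts (assumed)
x = 0.012              # fractional resistance change dR/R at full-scale pressure
vout = bridge_output(vin, R * (1 - x), R * (1 + x), R * (1 - x), R * (1 + x))
print(f"bridge output ~ {vout * 1e3:.1f} mV")  # vin * x = 39.6 mV
```

Sweeping x over pressure and temperature (the piezoresistive coefficients are temperature dependent) is what turns this fragment into the kind of performance model described above.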

It is important to note that the sense element was designed with constant feedback from the foundry and their preferences for manufacturing.  In addition, the sense element and packaging were designed concurrently, as there were significant interactions that needed to be addressed.  Designing the sense element and packaging in series would have resulted in a non-optimized design with higher cost.  In the end, a full MEMS sensor specification was developed and provided to the foundry for a production quote and schedule.  By working directly with the foundry, optimizing die size and designing the sensor for optimum manufacture, a cost improvement of over 60% was achieved compared with going to a full service MEMS design and fabrication facility.

 Read David’s previous installment of the MEMS new product development blog.

Figure 1: MEMS SOI Sense Element

Figure 2: DRIE Hole Fabricated at NIST Nanofab

 

Due to the length of these topics, stay tuned for next month’s blog for Part 2 of this article.  In that segment, other critical steps including tolerance stacks, DFMEA, manufacturing assessment and process maps will be reviewed.

Dr. Phil Garrou gives his insight into leading edge developments in 3-D integration and advanced packaging, reporting the latest technical goings on from conferences, conversations, and more.

TSMC / Intel and the Apple A7 processor

Steve Shen of Digitimes reports that TSMC is expected to tape out Apple’s A7 processor on a 20nm process in March and then “…move the chip into risk production in May-June, which will pave the way for commercial shipments in the first quarter of 2014… TSMC will utilize 14-fab to manufacture the A7 chips for Apple.”

J Lien & J Chang of Digitimes report rumors from “institutional investors” that “Samsung is likely to receive 50% of the A7 processor orders, TSMC 40%, and Intel 10%."

Altera and Xilinx returning to PoP?? (I hope not!)

Digitimes also reports that according to the Chinese-language Economic Daily News (EDN), Altera and Xilinx are considering switching to PoP (package on package) technology for their next-generation chips, instead of continuing with TSMC’s 2.5D CoWoS process [link]. “Altera and Xilinx are considering switching their packaging orders to Advanced Semiconductor Engineering (ASE) and Siliconware Precision Industries (SPIL) as both plan to ramp up their PoP packaging capacity in 2013,” EDN said.  TSMC, ASE and SPIL all declined to comment on market reports, EDN said.

Reportedly, yields and costs of trial production using a 20nm production node and the 2.5D CoWoS process at TSMC in past months failed to meet expectations, prompting Altera and Xilinx to seek alternative packaging solutions, said the paper.

[Highly placed contacts at Xilinx and Altera are denying the veracity of this story and I want to believe them.]

DARPA ICECool part II – Applications

Military platforms often cannot physically accommodate the large cooling systems needed for thermal management, meaning that heat can be a limiting factor for the performance of electronics and embedded computers. DARPA introduced the Intrachip/Interchip Enhanced Cooling (ICECool) program in June 2012 to explore ‘embedded’ thermal management. [see IFTLE 119, “ICECool Puts 3D Thermal Issues Back in Focus”]

The premise of ICECool is to bring microfluidic cooling inside the substrate, chip or package, including thermal management in the earliest stages of electronics design. The first track of the program, ICECool Fundamentals, has already begun basic research into microfabrication and evaporative cooling techniques.

Under the new ICECool Applications track, DARPA seeks demonstration of microfluidic cooling in (A) monolithic microwave integrated circuits (MMICs) and (B) embedded HPC modules. For part B, proposers are expected to define and explore intrachip, interchip or hybrid approaches compatible with a 2.5D or 3D configuration, such that they would be compatible with DARPA, DoD, and commercial investments and trends in 3D stacking of silicon chips.

DARPA expects proposed approaches for ICECool Applications to involve embedded thermal management through microfluidic heat extraction in close proximity to the primary on-chip heat source(s), aided by heat flow through high conductivity thermal interconnects and/or thermoelectric devices from sub-millimeter “hot spots” to liquid-cooled miniature passages, as conceptualized in the figure below. An intrachip cooling approach would involve fabricating miniature passages directly into the chip. An interchip approach would utilize the microgap between chips in three-dimensional stacks for the cooling.

DARPA further expects proposers to define and demonstrate intrachip and/or interchip thermal management approaches that are tailored to a specific application and consistent with the materials sets, fabrication processes, and operating environment of the intended application.

Proposers are encouraged to apply their proposed solution to existing liquid-cooled systems, in which the external thermal management hardware can be redesigned into an ICECool design.

In Phase 1, performers will be allowed 12 months to a) design, fabricate, and demonstrate the feasibility of the proposed ICECool concept with an appropriately configured Thermal Demonstration Vehicle (TDV) and b) establish, through electrical-thermal-mechanical co-simulation, the performance enhancement that can be achieved in the targeted electronic module. Performers will be judged on thermal performance versus the stated thermal metrics, the simulated performance that would be achieved utilizing their thermal strategy, and their ability to adhere to this schedule.

In Phase 2, selected teams will be allowed 18 months to implement and validate their ICECool strategy in an operational Electrical Demonstration Vehicle (EDV) module and demonstrate their ability to simultaneously meet both thermal and functional performance metrics. In both Phase 1 and Phase 2, it is expected that performers will have significant performance and reliability modeling tasks that flow from the preliminary models that are developed as part of the proposal process.

The full solicitation may be found at: http://go.usa.gov/25qx. The ICECool Applications BAA closes March 22, 2013.

For all the latest on 3DIC and advanced packaging, stay linked to IFTLE…


Dr. Vivek Bakshi blogs about EUV Lithography (EUVL) and related topics of interest. He has edited two books on EUVL and is an internationally recognized expert on EUV Source Technology and EUV Lithography. He consults, writes, teaches and organizes EUVL related workshops. WWW.euvlitho.com

In order to bring EUVL scanners into high volume manufacturing (HVM) of computer chips, their throughput, currently about 10 wafers per hour (WPH), needs to increase. That brings up three questions: how much do we need to increase the current throughput for HVM insertion, what needs to be done to increase throughput, and how quickly can this increase be achieved?

Throughput of EUVL scanner for HVM insertion

Imaging by an EUVL scanner offers a higher k1 value than is available from 193nm immersion (193i) based lithography. A higher k1 value results in better imaging and lower lithography process complexity, hence the attraction of EUVL as an optical projection lithography.

Today, 193i scanners are used in a double patterning process to print the smallest features needed in HVM. Toward 14 nm and smaller nodes, if EUVL is not ready, chipmakers will need to use quadruple patterning with 193i scanners, combined with increased optical proximity correction (OPC) and design rule restrictions, to print increasingly smaller features. This is not an attractive option for chipmakers, hence their increasing emphasis on EUVL readiness. As manufacturers evaluate switching from double patterning-based 193i to EUVL, throughput is the criterion most often mentioned for the evaluation.

As printing of circuits is a sequential process, in double patterning (DP) the same wafer is exposed twice in a 193i scanner. Between the two exposures, there are many additional processing steps to enable the DP process. Hence, an EUVL scanner needs less than 50% of the throughput of a 193i scanner to achieve a given feature size. In the case of quadruple patterning, an EUVL scanner needs less than 25% of the throughput, due to the four exposures.  After accounting for the additional processes of deposition, etch, ash and metrology, the equivalent throughput of an EUVL scanner may become less than 40% and 20% to compete with double and quadruple patterning, respectively. Thus, to match the throughput of a 200 WPH 193i scanner, we need less than 80 WPH (for DP) or 40 WPH (for quadruple patterning) from an EUVL scanner. This is an important point, as it is often said in the press that an EUVL scanner must reach the throughput of a 193i scanner to be considered equal. (Cost of ownership wise, the Lithography team of the International Technology Roadmap for Semiconductors (ITRS) has already shown that EUVL is more cost-effective than 193i DP for next generation lithography (NGL) [1].)

How to increase EUVL scanner throughput

Of course, EUVL scanners still need to boost their throughput numbers from the current 10 WPH.  For economic reasons, it’s best to have throughput as high as possible from an EUVL scanner.  Although much focus is placed on sources for improving throughput, other things can be done to increase the productivity of an EUVL scanner.

To better understand the challenge, let’s start with a model that estimates the throughput of an EUVL scanner from 1) a given source power, 2) scanner parameters, and 3) the reflection/transmission efficiency of various components [2]. EUVL scanners are not very efficient in transferring photons from source to wafer. Hence, in addition to increasing the number of photons available to the scanner, we can also work to increase its transmission. It is important to note the relationship of scanner throughput to the scanner’s overhead time and resist sensitivity [2]. For example, for 50 W of source power at intermediate focus (IF), a 20 mJ/cm² resist will allow 30 WPH while a 10 mJ/cm² resist will allow 55 WPH. For a 10 mJ/cm² resist at 80 WPH, we need 115 W of power with an 18 s overhead time, while with a 10 s overhead time we need only 50 W of power! [2]
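The shape of that model is easy to reproduce. The Python sketch below implements a deliberately simplified version: exposure time is dose times wafer area divided by the power delivered to the resist, and throughput is 3600 s divided by overhead plus exposure time. The lumped source-to-wafer transmission (0.26%) and the 11 s overhead are assumed values chosen so the output lands near the 30/55 WPH figures quoted above; they are not published scanner parameters, and the full model in [2] has more terms.

```python
def scanner_wph(source_power_w, dose_mj_cm2, overhead_s,
                transmission=0.0026, wafer_area_cm2=707.0):
    """Toy EUVL throughput model in the spirit of [2]; transmission
    lumps all source-IF-to-wafer losses and scan inefficiencies into
    one assumed factor. Area is a full 300 mm wafer."""
    power_at_wafer_w = source_power_w * transmission          # J/s on resist
    energy_per_wafer_j = dose_mj_cm2 * 1e-3 * wafer_area_cm2  # J required
    expose_s = energy_per_wafer_j / power_at_wafer_w
    return 3600.0 / (overhead_s + expose_s)

for dose in (20, 10):  # resist sensitivity in mJ/cm2
    print(f"{dose} mJ/cm2 at 50 W: {scanner_wph(50, dose, overhead_s=11):.0f} WPH")
# 20 mJ/cm2 at 50 W: 30 WPH;  10 mJ/cm2 at 50 W: 55 WPH
```

Playing with overhead_s shows the leverage mentioned above: cutting overhead time directly buys back source power.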

There are additional factors that can help increase throughput. By decreasing the resist sensitivity to out-of-band radiation, the need for spectral purity filters may be eliminated. The reflectivity of the mask, as well as the effective reflectivity of the optics, can be increased as well. The optical throughput of the NXE3300B is supposed to be 50% more than that of the NXE3100 [3], so there is already progress in increasing scanner throughput.

EUV sources are a difficult challenge due to the inherent complexity of reliable and repeatable generation of a high temperature plasma of 40 eV in a production environment.  Current EUV source conversion efficiency (CE) is only 2% (i.e., 2% of input energy is converted into EUV photons). Of these photons, only about 10% can be collected due to the limitations of collector optics, debris mitigation and spectral purity filters. We need improvement in each of these areas to enable higher power and increased throughput. CE of 5.5% has been demonstrated recently, larger collectors are being developed and debris mitigation techniques will continue to improve, all allowing more photons to reach the wafer.
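Those two percentages compound, which is why the drive laser has to be so large. A rough, illustrative power budget follows; the laser power is an assumption for the arithmetic, not a supplier specification.

```python
# Power reaching intermediate focus (IF) = laser power * CE * collection
laser_w = 20e3             # e.g., a 20 kW CO2 drive laser (assumed)
ce, collected = 0.02, 0.10 # ~2% conversion efficiency, ~10% of photons collected

print(laser_w * ce * collected)     # 40.0 W at IF with today's 2% CE
print(laser_w * 0.055 * collected)  # 110.0 W with the demonstrated 5.5% CE
```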

How higher scanner throughput can be achieved quickly

There is no magic bullet, so many innovative solutions are needed to lessen the various loss factors and reach 100 W of source power. Beyond that, we may need different approaches to key source components such as fuel delivery. At the 2012 International Workshop on EUV and Soft X-Ray Sources (Dublin, Ireland, October 8-11), the largest annual gathering of EUV source experts, we can expect discussion of some of these key topics in EUV source development. The workshop will include:

· Several papers on how to increase the CE of both EUV and beyond-EUV (BEUV, 6.x nm) LPP sources

· New designs to allow higher power DPP sources

· Data on the latest spectral purity filters (SPFs) with up to 80% transmission, improved collector optics, and other topics

I look forward to seeing the latest results from the industry’s source experts and will report them on this site.

In summary, I expect a rather slow but steady increase in EUV source power, and I’m still on record as predicting enough throughput by 2014 to allow adoption of EUVL scanners for HVM by leading chipmakers.

References:

1. Lithography Chapter, International Technology Roadmap for Semiconductors (2009).

2. Chapter 3, “EUV Source Technology,” in EUV Lithography, Vivek Bakshi (Editor), SPIE Press 2008, for discussion of a general throughput model for an EUVL scanner.

3. Rudy Peters, ASML presentation at the 2011 EUVL Symposium.

Dr. Vivek Bakshi blogs about EUV Lithography (EUVL) and related topics of interest. He has edited two books on EUVL and is an internationally recognized expert on EUV Source Technology and EUV Lithography. He consults, writes, teaches and organizes EUVL related workshops. WWW.euvlitho.com

Vivek Bakshi, EUV Litho Inc., February 28, 2013

Technical Highlights

The 2013 SPIE Advanced Lithography EUVL Conference started with many of us looking forward to Sam Sivakumar’s kickoff presentation on results from Intel’s EUVL pilot line. Sivakumar pointed out that printing vias and cuts is the real advantage of EUVL over 193nm immersion based lithography (193i). In order to investigate the feasibility of extreme ultraviolet lithography (EUVL), his group produced the same 22 nm products that Intel manufactures using 193i scanners. Products made using EUVL demonstrated equal or better performance and, most importantly, lacked EUV-specific defect modes. He noted source power and particles added to the mask during manufacturing as the two major challenges for EUVL. The source power issue is not new, but particles on pellicles can make EUVL manufacturing prohibitive.

Surprisingly, in the third paper of the session, ASML presented elegant results on the development of EUVL pellicles, with 86% transmission (against the 90% needed), that meet imaging and mechanical requirements and only need some scaling. These pellicles have almost no effect on imaging unless the particles are larger than 1 micron, and they can be fully cleaned as well. Also, if a pellicle breaks by accident, ASML said they can clean the mask using a dry clean process.

Scanner Status by ASML

ASML is essentially an integrator and their update was full of continuous improvements. NXE3300B is a solid improvement over NXE3100. In their presentations, one sees ASML’s style of making innovation and improvement part of business as usual. What I like the most about ASML is that they do not play the "blame game." They never say in public, "if sources are ready, we will have the tool ready." If they become an EUVL source supplier through their acquisition of Cymer, we will see if this attitude changes.

The most important information that I got out of ASML’s presentation was how source power will relate to throughput, a relationship that will help us gauge the progress of EUVL. Scanner stages are ready for 100 wafers per hour (WPH) tools, and if mask fields need to be split for higher numerical aperture (NA), I expect that they will be able to turn this knob a little to partially compensate for the throughput loss. NXE3100 scanners are supposed to have a throughput range of 6-60 WPH and NXE3300B scanners of 50-125 WPH. The ratio of source power to WPH will increase from about 1 now (10 WPH for 10 W with the NXE3100) to 1.25 (43 WPH at 55 W for the NXE3100 in the near future). For the NXE3300B, the ratio will rise to 1.6 (80 W for 50 WPH) and then to 2 (250 W for 125 WPH). I expect this to be due mostly to higher dose requirements, plus a few other factors such as availability and reduced scanner throughput at higher NA.

Source Technology Status

Some progress has been made, but a large gap remains. 40 W in 2014 from Cymer looks promising. I am also somewhat optimistic about 60 W with 100% duty cycle (DC) and long term operation by the end of next year, at least in non-integrated sources.

1) Ushio, maker of laser-assisted discharge produced plasma (LDP) sources, showed that they now have > 80% availability for their 6 W source at IMEC’s NXE3100. They have now demonstrated 51 W at 80% DC for one hour and 74 W at 12% DC for a couple of minutes. As it has taken them a long time to achieve acceptably high availability with 6 W sources, we know that scaling is no small task. It was not clear if LDP will be used for the first NXE3300B prototypes, as was done for the NXE3100.

2) Gigaphoton had > 7 W in 2012 from their Sn laser produced plasma (LPP) sources, but they noted the scaling challenge and went back to the drawing board to address the issues of reliable droplet generation, pre-pulse laser operation for high conversion efficiency (CE) and debris mitigation. After proof of principle of their new design, they are now working to scale up their source from the current 10 W at low duty cycle, using 20 micron droplets and a 5 kW CO2 laser. Their new approach looks technically solid and I am expecting good progress this year. For 250 W Sn LPP sources, they are working on a 40 kW CO2 laser module.

3) Cymer’s sources in the field are averaging 10 W today with > 65% availability. These sources have 0.5% dose stability. For upgrades, they have a 40 W source with 0.2% dose stability that they have run at 100% duty cycle for six one-hour runs. They also had a one-hour run of a 55 W source, and the feasibility of 60 W was demonstrated. This technology is for NXE3100 sources and they expect it to be ready for the scanners by Q3 of this year. They still need to transfer this technology to the NXE3300B, so I am not sure when the 80 W sources needed for these scanners will be ready. I will be delighted if 40-50 W sources are ready and in the field in 2014. The Cymer team has done good work and has a roadmap for 250 W; but inasmuch as they have talked for many years about delivering high levels of source power and have not been able to do so, there was some skepticism in the audience toward their roadmap.

New Technical Solutions

The conference presented a large number of solutions for EUVL challenges, and several were good news:

1) A paper by Nissan Chemical (8682-9) titled "The novel solution for negative impact of out-of-band (OOB) radiation and outgassing by top coat materials in EUVL" provided welcome news about OOB radiation and resist outgassing. A topcoat on the resist was shown to eliminate OOB radiation from the source as well as outgassing. It was a relief, as there has been ongoing discussion about the extent of OOB radiation, its effect on imaging and losses in a spectral purity filter (SPF).  So now we may not have to worry about OOB radiation and SPF losses, and contamination from resists may be a thing of the past.

2) It looks like resist suppliers are working hard to make EUV resists ready, with several good resist papers presented. Among them was a nice review by JSR Micro (8682-28) titled "Novel EUV resist materials and process for 20 nm half-pitch and beyond." EUVL resists need to simultaneously meet the requirements of sensitivity, line edge roughness (LER) and resolution. One challenge that has been pointed out repeatedly is that a higher-than-expected dose is needed for the best possible performance from a given resist. Highly absorbing resists (hybrid resists and resists with metal oxide particles) were presented as options in several papers and may allow us to deal adequately with the increasing dose demand. As these resists will be more sensitive, I think they will provide some relief from the increase in source power requirements coming from shot noise based limitations.

3) Directed self assembly (DSA) was presented by IMEC as an aid for improving EUV resists performance (8682-10). We can expect to see increasing use of DSA in EUV resists.

4) Mask blank defects have been a challenge that has consistently proven hard to mitigate.  Lasertec (8679-17) showed data from their tool that can detect defects 1 nm high and 33 nm wide with 100% accuracy. As shown in many papers, the number of defects in mask substrates and mask blanks remains stubbornly high. However, in the last session of the conference, a paper by IBM (8679-53) delivered good news on mask defect repair for phase and amplitude by nano machining. By looking at mask defects using AIT (the mask inspection tool from CXRO), they were able to model the number of multilayers (ML) that need to be removed from or added to the mask blank so that the Bossung curve for the resulting ML is what would be expected for a defect-free ML! They presented many examples, and I believe that although this process seems laborious, it may be widely adopted, along with mask blank defect reduction, to address this leading challenge for EUVL.

5) As we move to higher NA, absorber thickness becomes a larger issue due to higher shadowing. One solution presented utilizes phase-shift masks, in which a short stack of ML etched into the mask blank, topped by a thin absorber, provides destructive interference to enable thinner absorber layers. New material choices of Ni and Ag were presented in papers as alternatives to the current set of mask absorbers.

6) As EUVL moves to the 10 nm node and below, one option for achieving increasingly smaller patterning is double patterning with EUV. Intel confirmed success for this process in their pilot line and in the last paper of the conference IMEC and AMAT demonstrated 9 nm HP dense L/S patterning using NXE3300B!

New Challenges

The conference also delivered a list of new EUVL challenges. I already mentioned the challenge of particles added to the pellicles. As EUVL is readied for smaller nodes with high NA optics, the angle of incidence on the mask is going to increase. Options to address this issue include 1) breaking the exposure field into two or four parts, 2) adding two additional mirrors to the scanners and 3) increasing mask size from the current 6 inches to 9 inches.

Winfried Kaiser of Zeiss summarized various technical options for the industry.  In his paper, he suggested "going with 6 inch masks with quarter field and 8x magnification" as the best option for 0.5 NA. However, breaking the pattern into many parts will further degrade throughput.  Harry Levinson of GlobalFoundries offered "6x magnification with 9 inch masks" as the best solution for 0.5 NA. He also stressed the need to continue working with 6 inch masks as long as we can. Going to a larger mask means upgrading mask infrastructure tools to handle 9 inch masks, which will be very difficult and could take a couple of years. However, this approach may involve changing only the handling part of the tools, while leaving the key technical core of the tools the same. In any case, moving to 9 inch masks will be painful for mask makers and we can expect to hear more on this topic from them.

Best Papers

The following four papers seemed outstanding to me:

1) A paper by Harry Levinson titled "Considerations for high-numerical aperture EUV" (8679-41) was my first choice. He not only elegantly outlined the technical challenges, he also proposed a comprehensive set of business solutions and challenges to their implementation.

2) A paper by Luigi Scaccabarozzi  of ASML, "Investigation of EUV pellicle feasibility" (8679-3), showed how quickly this supplier has addressed a critical challenge which could have been a showstopper.

3) A paper by Shannon Hill of NIST titled, "Relationship between resist outgassing and witness sample contamination in the NXE outgas qualification using electrons and EUV" (8679-19) was an excellent technical work looking into the mechanism of resist outgassing and contamination. His group has continued to lead in the basic work of understanding the mechanism of contamination in EUV.

4) A paper that I would like to cite for its excellent presentation style was offered by Ken Goldberg of CXRO as "Commissioning a new EUV Fresnel zone plate mask-imaging microscope for lithography generations reaching 8 nm" (8679-44). His outstanding talk set the standard for how to present a complex topic and immense technical achievements in a very elegant way, and the audience was very impressed. I will recommend that SPIE post Ken’s paper on their website as a standard for SPIE authors wishing to make an excellent technical presentation.

Other Observations

– Despite moving the conference to a larger venue, there was still standing room only for key talks.

– 450 mm was not mentioned once in any paper in the EUV sessions!

– Although sources remain the biggest challenge in EUVL, discussion on this topic was limited pretty much to suppliers showing their roadmaps. I spoke to many people about the source power issue and the lack of funding for source R&D. All agreed, but acknowledged that no action by the industry has been taken yet. Part of the issue, as some mentioned, is that source R&D needs cannot be fully addressed until ASML’s acquisition of Cymer is final, as then it will be something for ASML to address.

Summary of HVM Readiness of EUVL

Hynix presented their 2009 cost of ownership (COO) calculations for various next-generation lithography (NGL) techniques. They indicated that the COO for an EUVL scanner at about 35 WPH would be the same as the COO for double patterning. They said the COO equation has not changed much since 2009, although I think it will change somewhat for smaller nodes, since higher source power will be needed for them.

I expect 40 W sources in the field next year. I will be delighted if NXE3300Bs are in the field by the end of 2014 with sources as well, but I am not sure if 80 W sources will be ready by then.  I do not think we will have 100 W sources in the field before 2015. However, I do not want EUVL HVM insertion to shift from 2014 to 2015, so I can win my bet with Lithoguru Chris Mack and claim his Lotus as my own!

Bring Me the Rhinoceros

Last month, I decided to take a three-month introductory course on Zen koans at the local Zen monastery. (For those not familiar with Buddhism, a koan is a question without a real answer, aimed at getting the student to think deeply.) The first koan, which students may study for many years in a traditional Zen monastery, is called the Mu Koan. It goes like this:

A monk asked his Master ZhaoZhou, "Does a Dog have the Buddha nature, or not?"

Master ZhaoZhou replied, "Mu" (Japanese for No).

One of the central ideas in Buddhism is that all things have Buddha nature, so this answer does not make sense. A pupil is supposed to work with this koan for a long time. There is no standard answer, and the master judges each pupil’s answer differently. I had the homework of applying this koan to whatever was happening to me during the week and reporting back what I learned. As I was at the SPIE Advanced Lithography conference, I decided to rephrase the koan as: "250 W is needed for HVM adoption of EUVL and EUVL will be in HVM in the next two years. Does that mean we will have 250 W sources ready?"  Having spent over ten years in the EUVL source business, I think I will answer my own koan with a Mu, while still acknowledging EUVL as the leading technology for the next two years. I will continue the dialogue on this topic in my blog in coming weeks.

I would like to leave my readers with the second Koan from my class called "Bring Me the Rhinoceros," and invite you to contemplate how it relates to the "Art and Science of Making Computer Chips."

One day, Master Yanguan called to his assistant, "Bring me the rhinoceros fan."

The assistant said, "It is broken."

Master Yanguan replied, "In that case, bring me the rhinoceros."

The second koan used here is from the book by John Tarrant titled "Bring Me the Rhinoceros," Shambhala Press, 2012.

Dr. Vivek Bakshi blogs about EUV Lithography (EUVL) and related topics of interest. He has edited two books on EUVL and is an internationally recognized expert on EUV Source Technology and EUV Lithography. He consults, writes, teaches and organizes EUVL related workshops. WWW.euvlitho.com

I got a good bit of feedback on my last blog, in which I discussed the differences between the physics and engineering of EUV sources and the implications of that difference. I was glad to see that it generated some re-evaluation of current thinking (as intended) and would now like to clarify a few points.

First is supplier commitment. One can have lots of great technical options backed by beautiful physics, but if there are no suppliers to turn ideas into commercial products, the technology will go nowhere. EUV source technology will succeed because it has three large suppliers, each with current business experience in supplying light sources for scanners.  In the end, we may not have this many suppliers, due to business and/or technology consolidation, but right now we do. For EUV sources for metrology, there is an even larger number of potential suppliers working to find a way to meet industry requirements. With this backing, and competition among suppliers to outperform one another, we ought to see success.

The real question is whether scanners that can produce ~40 wafers per hour (WPH), which I expect to be ready by 2014, will deliver a cost of ownership (COO) sufficient to convince leading chipmakers to switch from 193nm based technology. The challenge is to estimate the point where the COO of EUVL will cross that of 193nm, making it the more cost-effective technology. Will it be at 15nm or 7nm? For what product, and what wafer size?  I do not have sufficient information to make this prediction right now, but I expect some acceptance of EUVL in high volume manufacturing (HVM) by the end of 2014.

Just because a technology cannot scale up in power does not mean it will serve EUVL poorly during development. Last week I gave the example of synchrotrons. They have provided low-throughput printing to support the development of current EUVL technology, and will continue to do so for future versions of EUVL. So let us continue that very wise investment! Supplier Energetiq has 10 W source technology that has served EUVL very well so far. Present designs may not scale up to the brightness required for mask defect metrology tools, but this supplier is looking at new physics for scaling, as it demonstrated at the last two Source Workshops in Dublin.

So it is a matter of realizing what cannot be done with present physics and finding new ways to achieve scaling. We have seen >5% conversion efficiency and effective debris mitigation techniques at low repetition rates. Let us see how far these approaches can scale up. If they do not (over a reasonable period), then we need to quickly pick up another potential solution from a host of possibilities. These will become available to us if we continue to look for new physics, including the development of new materials and chemistry. We can research the physics of EUVL with a tiny fraction of what we have spent on engineering development of the technology. I still believe in the power of innovation and competition to move us forward, but for this effort we must engage universities, national labs and independent research organizations to generate new ideas leading to new solutions. Only then will we be in a position to solve the persistent problem of low throughput in EUVL scanners.

Dr. Vivek Bakshi blogs about EUV Lithography (EUVL) and related topics of interest. He has edited two books on EUVL and is an internationally recognized expert on EUV Source Technology and EUV Lithography. He consults, writes, teaches and organizes EUVL related workshops. www.euvlitho.com

I am frequently asked by my consulting clients and colleagues when EUV sources will be ready to support high volume manufacturing (HVM) of semiconductors. It is a difficult question to answer, partly because the readiness metrics have been a moving target and partly because the latest performance data is often unclear. For example, how many wafers per hour will make it cost-effective to adopt EUVL over the alternatives of triple or quadruple 193 nm immersion patterning, for a given product at a specified feature size on 300 mm or 450 mm wafers? Is the latest data in pulsed mode and integrated, and over how long an operation?

Even if the targets are clear, there is still uncertainty, because source performance has not improved as quickly as supplier roadmaps predicted. Last week in a press release (see http://optics.org/news/4/1/26), ASML was quoted as saying, “40 W sources are providing good dose controls and will be used in NXE3300B to be shipped in 2013. 60 W sources have been successfully tested with no sign of performance degradation from debris.” But can we take these numbers at face value and expect sources to be ready as promised in the supplier roadmaps?

As EUV source technology has been the main reason for the delay of EUVL in HVM, it is worth spending some time pondering why this is so and what we actually know. When I look at what I know about source technology status, my only data is what source suppliers and chip-makers show at industry conferences. Most presentations are about achievements, which have been significant but not sufficient. Unfortunately, no one talks much about what is not working, except to say, “We’ll fix the problems, and here is our roadmap.”

Given the many delays in HVM-ready EUVL, we should know by now that looking at roadmaps and press releases may not be the best way to predict technology readiness. Presumably, customers who own the latest EUVL scanners get confidential updates on source readiness, so they have a better idea of what needs to be fixed. But these are chip-makers, not source experts, and their information may not extend beyond roadmap predictions, which I suspect are very close to those shown in public by the source suppliers. Of course, I have no idea what additional information source customers may have, except that all of them list the EUV source as the #1 problem in their public presentations.

One of the most repeated statements I hear on this topic is, “The physics is known and it is just an engineering challenge.” In other words, it is all about figuring out how quickly solutions can be engineered. I tend to disagree with this statement, and here’s why:

Let’s start by defining physics and engineering. Per Webster’s dictionary, “physics” is the “science dealing with the properties, changes, and interactions of matter and energy,” while “engineering” is “concerned with putting scientific knowledge to practical uses and with the planning, designing, construction or management of machinery.”

In other words, something is not physically possible if the physics is not there. And even if something is possible at low repetition rates, that does not mean the physics will support power scaling without near-impossible engineering. Figuring out the physics is like seeing our target in a forest: yes, we can see it, but can we build a freeway to it for 24×7 traffic? Take nuclear fusion as an example: the physics is there, but after more than 50 years of research we have yet to power a light bulb from a fusion reactor. At least EUVL scanners are in the field, printing wafers every day for process development. So how large is the remaining engineering challenge for EUV sources? Isn’t finding that out the real challenge in EUVL?

This assertion that “only engineering challenges remain for source technology” is usually backed by data of low to very low repeatability, e.g.: “Yes, we have 70 W, and we got it for 10 s in standalone mode at 10% duty cycle – but that means we know the physics, and all we have to do is engineer this result into a 24×7 product that can be integrated into a scanner.”

You may remember that Xe discharge produced plasma (DPP) sources worked very well but never went beyond 5 W, once we finally figured out that collectable power could never exceed 5 W due to etendue limits (i.e., one can collect light from only a very small part of the plasma). In addition, it is not possible to mitigate all of the heat that higher power produces in Xe DPP sources. So we had to use different physics, changing the fuel to tin, which was easier to engineer for power scaling with DPP; eventually source suppliers put more focus on tin-based laser produced plasma (LPP). But LPP sources use different physics than DPP to heat the plasma, so we again had to work with somewhat different physics to create the new Sn LPP systems. These systems were initially based on 1 µm lasers, and today we use 10 µm lasers, guided by results from lab physics experiments. Now the focus is on other aspects of Sn LPP needed to achieve HVM targets, including 1) changing the delivery system from droplet to mist targets, and 2) pulse shaping and pre-pulsing to increase conversion efficiency. With each new twist, slightly different physics is added to the mix. So I am not sure whether Sn LPP will scale up without our introducing new designs based on somewhat different physics, such as going to ion beam targets or something else.
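
The etendue argument deserves one line of math: collectable in-band power is radiance times etendue, P = L·G, and the optics accept only a fixed etendue G = A·Ω. Below is a minimal sketch; the numbers are illustrative values I chose to echo the ~5 W ceiling, not measured Xe DPP data.

```python
import math

# Etendue-limited collection: P = L * G, where L is in-band radiance
# (W/mm^2/sr) and G = A * Omega is etendue (mm^2 * sr). The scanner
# accepts a fixed etendue, so a bigger plasma adds no collectable power.
# All numbers are illustrative, not measured Xe DPP data.

def collectable_power(radiance, source_area_mm2, solid_angle_sr, max_etendue):
    source_etendue = source_area_mm2 * solid_angle_sr
    return radiance * min(source_etendue, max_etendue)

p = collectable_power(radiance=5.0,               # W/mm^2/sr, assumed
                      source_area_mm2=3.0,        # large pinch plasma, assumed
                      solid_angle_sr=2 * math.pi, # hemispherical emission
                      max_etendue=1.0)            # optics acceptance, assumed
print(f"collectable power ~ {p:.0f} W")  # ~5 W, however big the plasma grows
```

Once the plasma's own etendue exceeds what the optics accept, every extra watt the discharge emits is simply wasted – which is why growing the Xe pinch could never break the ceiling.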

So the question comes down to this: do we have a physics solution that we can engineer? If so, how do we assess that solution? Surprisingly, the size of the machine is not necessarily an indication – we cannot say DPP is superior to LPP just because it is more compact. Synchrotrons, which are rather large machines, generate EUV photons very reliably on a 24×7 basis. In fact, their contribution to EUVL development has been so immense that I do not know where we would be without them. Beyond their size, coherence and cost have been raised as issues for these very reliable sources of EUV photons. Can we reduce their size and cost enough to make synchrotrons a potential source for fabs? Have we looked at them seriously enough in light of current source technology, recent developments and our future needs? Not really, in my opinion – and we need to do this for both plasma and non-plasma based sources.

In short, we have not quite figured out the physics for EUV sources that can be quickly scaled up in power and engineered into products. Some will disagree with me where 100 W sources are concerned, but I think I am probably right for 250 W and 1000 W EUV sources – which will be needed as we move to higher-NA scanners, smaller printed features and 450 mm wafers.
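
As a sanity check on why 250 W and 1000 W keep coming up, here is a rough throughput-to-power model. The optics transmission and exposure duty cycle below are my assumptions for illustration, not scanner specifications; with them, ~125 WPH at 15 mJ/cm² lands near 250 W at intermediate focus, and the higher doses and throughputs that smaller features and bigger wafers demand push toward 1000 W.

```python
import math

# Rough scaling of required in-band power at intermediate focus (IF) with
# throughput and resist dose. The transmission and duty-cycle values are
# assumptions for illustration, not scanner specifications.

def power_at_if(wafers_per_hour, dose_mj_cm2, wafer_diameter_mm=300.0,
                optics_transmission=0.005,   # IF-to-wafer, assumed
                exposure_duty_cycle=0.3):    # fraction of time exposing, assumed
    wafer_area_cm2 = math.pi * (wafer_diameter_mm / 20.0) ** 2
    energy_per_wafer_j = dose_mj_cm2 * 1e-3 * wafer_area_cm2
    power_at_wafer_w = energy_per_wafer_j * wafers_per_hour / 3600.0
    return power_at_wafer_w / (optics_transmission * exposure_duty_cycle)

for wph, dose in [(40, 10), (125, 15), (250, 30)]:
    print(f"{wph:3d} WPH @ {dose:2d} mJ/cm2 -> ~{power_at_if(wph, dose):4.0f} W at IF")
# ~52 W, ~245 W and ~982 W under these assumptions: power requirements grow
# multiplicatively with both throughput and dose.
```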

Dr. Vivek Bakshi blogs about EUV Lithography (EUVL) and related topics of interest. He has edited two books on EUVL and is an internationally recognized expert on EUV Source Technology and EUV Lithography. He consults, writes, teaches and organizes EUVL related workshops. www.euvlitho.com

The 2012 Source Workshop was held Oct. 8-11 in Dublin, Ireland, in the Clinton Auditorium on the campus of University College Dublin. This is the industry’s largest annual gathering of EUV and soft X-ray source experts, who took the opportunity to discuss the latest results from their labs.

A keynote talk was given by Akira Endo of Waseda University and the HiLASE project. He focused on identifying technology areas that need immediate development to enable current sources to reach 100 to 250 W. These areas include droplet generation at 150 kHz via electrostatic acceleration; 500 W solid-state lasers with picosecond pulses and mJ energy for the pre-pulse; and the ability to focus onto 10 µm droplets. He also outlined a roadmap for 1000 W sources at 13.5 nm and 6.x nm.

Dr. Endo also identified other important focus areas, including:

  • Tin vapor control for better EUV collection efficiency. He said ionic debris can be controlled via a magnetic field, and proposed controlling neutral debris with laser resonant ionization of Sn.

  • Scaling of lasers to high power, which will require 25 kW CO2 laser modules. One of the toughest challenges in developing such lasers is the windows, although diamond windows may be the answer.

Vadim Banine of ASML (the EUVL scanner-maker that recently acquired EUV source supplier Cymer) outlined the state of source technology and a path to 1000 W sources. He also listed the top areas needing R&D work to enable power scaling for sources based on discharge-produced plasma (DPP) and laser-produced plasma (LPP). Describing the current status of tin-based LPP and DPP sources, Dr. Banine said Sn LPP has demonstrated 50 W average power at 80% duty cycle, along with scaling to 158 W at 3% duty cycle.

As for Sn DPP, Jeroen Jonkers of Ushio said 74 W at intermediate focus (IF) is possible today in burst mode over a one-hour run. Dr. Jonkers elaborated on this data and presented a development area that may allow DPP to scale to 250 W. Konstantin et al. presented results on ISAN’s new DPP design, which could potentially scale to even higher power than Ushio’s. Such concepts need further investigation to enable power scaling of DPP-based sources.

Although the roadmaps for plasma sources are rather clear, we know that the goals for 1000 W scaling of EUV sources are not easily attainable. After all, suppliers are still working hard to ready 100 W sources with reliable performance.

It was noted in various presentations that power scaling for beyond EUV (BEUV) sources may be even harder. The potential of free electron laser (FEL) based sources to deliver 1.7 kW of BEUV photons at 6.x nm was discussed in a paper by Diana Tuerke of Carl Zeiss. The design presented was for a 3 MHz, 1.7 kW source, housed in a facility costing 200 M Euro with an annual operational cost of 20 M Euro.

Coherent sources are typically not used for lithography, due to the large loss incurred in making the beam incoherent. However, it was very interesting to hear Zeiss mention that they have developed an invention allowing them to use all of the coherent light without loss! This was exciting news indeed, as it may further open the door to coherent sources for lithography.

Highlights of the Workshop

Ulrich Mueller of Carl Zeiss presented the source requirements for mask defect AIMS tools. These tools require high source stability: <0.3% in position and <3.5% in energy, pulse to pulse. Sources will need brightness of >30 W/mm²·sr, with a target of 100 W/mm²·sr. Currently they have 8 W/mm²·sr sources for tool development.

Klaus Bergmann of ILT showed champion data for his Xe DPP metrology source: brightness of 21 W/mm²·sr, operating frequency of 3.3 kHz, conversion efficiency (CE) of 0.35 and source radius of 155 µm at 20 kW input. He sees potential scaling to >50 W/mm²·sr, with a maximum limiting value of 71 W/mm²·sr.
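
For readers wondering how these W/mm²·sr figures relate to raw source power: brightness is just in-band power divided by emitting area and solid angle. A minimal sketch, assuming a disc-shaped plasma radiating into a hemisphere (my simplification, not ILT's model), shows why plasma size matters as much as power:

```python
import math

# Brightness (radiance) = in-band power / (emitting area * solid angle).
# Assumes a disc-shaped source radiating into a hemisphere; values are
# illustrative, not vendor measurements.

def brightness_w_mm2_sr(inband_power_w, source_radius_mm,
                        solid_angle_sr=2 * math.pi):
    area_mm2 = math.pi * source_radius_mm ** 2
    return inband_power_w / (area_mm2 * solid_angle_sr)

# The same 10 W of in-band power, emitted from ever-smaller plasmas:
for radius_um in (200, 100, 50):
    b = brightness_w_mm2_sr(10.0, radius_um / 1000.0)
    print(f"r = {radius_um:3d} um -> {b:6.1f} W/mm^2/sr")
# ~12.7, ~50.7 and ~202.6: shrinking the plasma, not just raising power,
# is what moves a source past the >30 W/mm^2/sr metrology threshold.
```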

Steve Horne of Energetiq proposed a 100 W/mm²·sr high-frequency xenon Z-pinch DPP source for mask metrology. He thinks the physics can be tested in six months – and if successful, the system could be built within two years. The cost of this new system would be similar to that of the present system in the field.

Paul Sheridan of NewLambda Technology described his LPP source as having a CE >1% at a 45 degree viewing angle and a source size of 250 x 400 µm² at intermediate focus (IF). Over 1,000 hours of operation he measured a brightness of 80 W/mm²·sr. For his source, stability at IF is 7% in position and 8% in size.

Larissa Juschkin of RWTH presented theoretical calculations for estimating source brightness requirements for EUV microscopes.

Sergey Zakharov of EPPRA revealed a “plasma lens” design for the capillary discharge Xe DPP source. How this discharge produces a focused EUV beam had been the subject of speculation in the past, and Sergey finally described it for us!

Igor Makhotkin of FOM Institute DIFFER presented results on BEUV optics to support lithography at 6.x nm. Reflectivity for LaN/B based multilayer mirrors, the material of choice for BEUV optics, was reported to be 53.6% at normal incidence for a 175-period multilayer stack. So far, this is the highest experimental value reported for these mirrors.
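
To see why every point of mirror reflectivity matters, recall that an optical train of N multilayer mirrors transmits roughly R^N of the in-band light. A quick sketch, assuming a ten-mirror train (the actual mirror count varies by scanner design), compares the 53.6% result with the ~70% typical of Mo/Si at 13.5 nm:

```python
# System throughput of a multilayer mirror train falls as R**N.
# The ten-mirror count is an assumption for illustration; real scanner
# designs vary. Mo/Si ~70% at 13.5 nm vs. the 53.6% LaN/B result above.

def stack_throughput(reflectivity, n_mirrors=10):
    return reflectivity ** n_mirrors

for label, r in [("Mo/Si at 13.5 nm (~70%)", 0.70),
                 ("LaN/B at 6.x nm (53.6%)", 0.536)]:
    print(f"{label}: 10-mirror throughput ~ {100 * stack_throughput(r):.2f}%")
# ~2.82% vs. ~0.20%: at 6.x nm the optics alone cost another ~14x in
# photons, compounding the BEUV source-power problem.
```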

Leonid Sjmaenok of PhysTeX presented Zr filters with one-pass transmission of 84% for 25 nm thickness and 80 mm aperture frames. Such filters, he noted, now deviate no more than 2 degrees from flatness. These filters will be key elements for controlling out-of-band radiation and debris in sources.

For BEUV lithography sources, Takeshi Higashiguchi of Utsunomiya University proposed a mixed complex target of Gd and Tb for 6.x nm photons. He proposed punch-out (mist) targets for Gd, as droplet generation is very difficult due to Gd’s high melting point. He also suggested phosphorus as a candidate material for BEUV sources.

Soft X-ray (SXR) sources

The Workshop has been successful in bringing together a large gathering of source experts by inviting technologists from the EUV (13.5 nm), BEUV (6.x nm) and soft X-ray (~1-50 nm) regions. Due to a lack of funding for research on EUV sources (despite their being the #1 issue in EUVL), many source experts now work on non-lithography applications of EUV and SXR sources. By widening the scope of the Workshop, we were able to attract EUV source experts who could give us good insights into EUV source development, even though their work may not be focused on lithography.

Two of the keynote presenters for the Workshop were global leaders in soft X-ray source technology. They focused their talks on SXR sources and their potential for non-lithographic applications. Prof. Jorge Rocca of Colorado State University talked about desktop EUV laser and its applications. Prof. Alan Michette of King’s College London discussed biological applications of soft X-ray sources.

The Workshop also featured many other excellent oral presentations and poster sessions on SXR sources and their applications.

M. Selin of KTH Royal Institute of Technology revealed a high-brightness liquid-jet laser-plasma source that enables 10 second exposures for water-window cryo microscopy. He claimed that its brightness of 1.5 × 10¹² photons/(s × mm² × mrad² × line) is the highest of any lab source operating today. He said the 10 s exposures now possible with the new system make this microscope comparable to microscopy based on early synchrotron sources.

James Evans of Pacific Northwest National Lab and the University of California at Davis presented “Whole Cell Cryogenic Soft X-ray Tomography” with a laboratory light source from Energetiq. He pointed out that soft X-ray tomography of whole cells is now commercially available, and said he is working on improved zone plates to get better resolution. A new standalone, higher-brightness non-plasma source with a small footprint is planned. I think such non-plasma based sources may have potential EUVL applications, and I plan to investigate their feasibility.

Summary

The 2012 Source Workshop succeeded in its objective of bringing together more than 80 source R&D experts for discussions and updates. We came away with a list of topics that need focus in scaling sources for current and future generations of the technology. The virtual absence of sales pitches seemed to let participants lower their guard among colleagues, acknowledging the problems we still face today while celebrating the progress made since last year’s Workshop in Dublin. The proceedings of this workshop are available for download at www.euvlitho.com. If your business is EUV or SXR sources, you won’t want to miss our next Source Workshop on Nov. 4-7, 2013, in Dublin.