Category Archives: Blogs

By Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc., www.proplussolutions.com

Twenty years ago, I was part of a group of top-notch researchers at the University of California, Berkeley, advised by renowned professors Dr. Chenming Hu and Dr. Ping K. Ko. It was a privilege to be among an elite team that invented the physics-based, accurate, scalable, robust and predictive MOSFET SPICE model called BSIM3.

What we didn’t anticipate was the impact of our invention on the worldwide semiconductor industry then and 20 years later. BSIM series models helped usher in the new era defined by the availability of standardized compact models that contributed to the success of the foundry-fabless business model.

The commercialization of BSIM is 20 years old as well. (See the news release titled "ProPlus Design Solutions Celebrates 20 Years of BSIMProPlus SPICE Modeling," dated December 11, 2013.) Several members of the group and I formed Berkeley Technology Associates (BTA) in 1993 to promote BSIM3 and to provide commercial-grade parameter extraction software for BSIM3 model creation. Our first product was BSIMPro, which quickly became the industry’s golden model extraction tool, adopted by all leading semiconductor companies.

Initially, the BSIM3 model was used for circuit simulation and CMOS technology development, and later became the first industry-standard compact model. Since then, most ICs designed have used BSIM3 and other BSIM family models, such as BSIM4 and BSIMSOI. Amazingly, the cumulative revenue of those ICs is in the hundreds of billions of dollars.

The advancement of BSIM models and the BSIMPro family of products was largely driven by process technology generations as the semiconductor industry aggressively pushed Moore’s Law. In the early “happy scaling” years, devices could be scaled down easily and BSIM3 was used from 0.5µm down to 0.15µm and, in some cases, down to 90nm with extensions provided by EDA vendors.

BSIM4 was introduced in 2000 for sub-130nm technologies and met the needs of high-speed analog and CMOS RF applications. With continuous geometry down-scaling in CMOS devices, compact models became more complicated as they needed to cover more physical effects, such as gate tunneling current, shallow trench isolation (STI) stress and well proximity effect (WPE).

Over the past 10 years, other varieties of industry-standard compact models have appeared, driven primarily by different system-on-chip (SoC) applications. These include models for silicon on insulator (SOI), bipolar junction transistor and hetero-junction bipolar transistor (BJT/HBT) and advanced CMOS technologies and high-voltage devices.

Early on, BSIMPro established a new methodology for model parameter extraction using GUI-based functionality for a more intuitive and efficient way to develop advanced models. A feature called Equalizer let users tune model parameters by moving their mouse to see the effects on device characteristics. Modeling engineers used the mouse to select multiple regions of device characterization data, while BSIMPro ran mathematical algorithms to achieve the best model fitting. This resulted in a quantum jump in productivity and efficiency as well as improved model quality, an approach that continues through subsequent generations of the BSIMPro family of products, including BSIMProPlus, the current version.

While the BTA name is long gone, many members of the core team continue to work together. BTA merged with an EDA company called Ultima in 2001 and formed Celestry, acquired by Cadence Design Systems in 2003. ProPlus Design Solutions spun out of Cadence in 2006.

The BSIMPro family’s 20-year track record of success is a rarity, as the typical product lifecycle is far less than that. I look back on the past 20 years with a great deal of satisfaction. BSIM3 models had a tremendous impact on the semiconductor industry and continue to do so today. That’s cause for celebration.

By Zvi Or-Bach, President & CEO of MonolithIC 3D

The assertion that Moore made in his April 1965 Electronics paper was:

“Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below).”

[Chart 1]

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”

[Chart 2]

Clearly, Moore’s Law is about cost: Gordon Moore’s observation was that the optimum number of components (nowadays, transistors) for achieving minimum cost would double every year.
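To make the shape of that cost curve concrete, here is a toy numerical model in Python (illustrative, made-up parameter values, not Moore’s data): the fixed per-die overhead is amortized over more components, while die yield falls as the circuit grows, so cost per component first drops and then rises.

import numpy as np

n = np.arange(1, 401)                  # components per circuit
overhead = 1.0                         # fixed cost per packaged die (assumed)
silicon = 0.01                         # silicon cost per component (assumed)
yield_per_component = 0.995            # compound die yield per added component (assumed)

die_yield = yield_per_component ** n
cost_per_component = (overhead + silicon * n) / (n * die_yield)

best = n[np.argmin(cost_per_component)]
print(f"minimum cost per component reached at ~{best} components per circuit")

The exact optimum depends entirely on the assumed numbers; the point is only that a minimum exists and that it moves to higher integration as yield and overhead improve, which is what Moore observed.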

For many years, the reduction of cost per component was directly related to the reduction in feature size – dimensional scaling. But many other technology improvements made important contributions as well, such as increasing the wafer size from 2″ all the way to 12″.

But many observers these days suggest that 28nm will be the optimal feature size with respect to cost for many years to come. Below are some charts suggesting so:

[Chart 3]

And more analytical work by IBS’ Dr. Handel Jones:

[Chart 4]

Graphically presented in the following chart:

[Chart 5]

Recently, EE Times reported “EUV Still Promising on IMEC’s Road Map.” IMEC provided a roadmap for transistor scaling all the way to 5nm, as illustrated in the following chart:

[Chart 6]

Yes, we probably can keep on scaling but, clearly, at escalating complexity and with completely new materials below 7nm. As dimensional scaling requires ever more advanced lithography, it is clear that costs will keep moving up, and the additional complexity of transistor structures and all the other complexities associated with these extreme efforts will most likely drive costs even higher.

Looking at the other roadmap chart provided by IMEC and focusing on the SRAM bit cell in the first row, the situation seems far worse:

[Chart 7]

Since the SRAM bit cell is already 0.081µm² at 28nm, this chart indicates that future transistor scaling is barely applicable to the SRAM bit cell, which effectively is no longer scaling.

Unfortunately, most SoC die area is already dominated by SRAM and predicted to be so even more in the future, as illustrated by the following chart:

[Chart 8]

Source: Y. Zorian, “Embedded memory test and repair: Infrastructure IP for SOC yield,” in Proceedings of the International Test Conference (ITC), 2002, pp. 340–349.

Dimensional scaling was not an integral part of Moore’s assertion in 1965 – cost was. But dimensional scaling became the “law of the land” and, just like other laws, the industry seems fully committed to following it even when it no longer makes sense. The following chart captures Samsung’s view of the future of dimensional scaling for NV memory, and it seems just as relevant to the future of logic scaling.

[Chart 9]

Dr. Bruce McGaughy, CTO and SVP of Engineering at ProPlus Design Solutions, Inc. blogs about the wisdom of Monte Carlo analysis when high sigma methods are perhaps better suited to today’s designs.

Years ago, someone overheard a group of us talking about Monte Carlo analysis and thought we were referring to the gambling center of Monaco and not the computational algorithms that have become the gold standard for yield prediction. All of us standing by the company water cooler had a good laugh. That someone was forgiven because he was a new hire, a recent college graduate with a degree in finance. As a fast learner, he quickly came to understand the benefits of Monte Carlo analysis.

I was recently reminded of this scene because the capacity limitations of Monte Carlo analysis are becoming more acute. No circuit designer would mistake Monte Carlo analysis for a roulette wheel, though chip design may seem like a game of chance today. We continue to use the Monte Carlo approach for high-dimension integration and failure analysis even as new approaches emerge.

Emerging they are. For example, high sigma methods built on proven techniques are becoming more prevalent in the design of airplanes, bridges, financial models, integrated circuits and more. Moreover, high sigma methods are also used in electronic design for various applications and are proving to be accurate through validation in hardware.

New technologies, such as 16nm FinFET, add extra design challenges that require sigma levels greater than six and closer to seven, making Monte Carlo simulation even less desirable.

Let’s explore a real-world scenario using a memory design as an example where process variations at advanced technologies become more severe, leading to a greater impact on SRAM yield.

The repetitive structure of an SRAM design means an extremely low cell failure rate is necessary to ensure high chip yield. Traditional Monte Carlo analysis is impractical in this application. In fact, it’s nearly impossible to finish the needed sampling because it typically requires millions or even billions of runs.

Conversely, a high sigma method can cut Monte Carlo analysis sampling by orders of magnitude. A one-megabit SRAM would require the yield of a bit cell to reach as high as 99.999999% in order to achieve a chip yield of 99%. Monte Carlo analysis would need billions of samples. The high sigma method would need mere thousands of samples to achieve the same accuracy, shortening the statistical simulation time and making it possible for designers to do yield analysis for this kind of application.
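To see why the sample counts differ so dramatically, here is a minimal sketch (a toy one-parameter failure model, not ProPlus’s algorithm) comparing plain Monte Carlo with mean-shifted importance sampling, one common ingredient of high sigma methods, for a failure probability near 1e-8:

import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
t = 5.6            # failure threshold in sigma; the true tail probability is ~1e-8
n = 1_000_000      # an affordable simulation budget

# Plain Monte Carlo: at a 1e-8 failure rate, a million samples will almost
# certainly contain zero failures, so the estimate is useless.
x = rng.standard_normal(n)
p_mc = np.mean(x > t)

# Importance sampling: draw from a distribution centered on the failure
# region, q = N(t, 1), and reweight each sample by the density ratio p/q.
y = rng.normal(loc=t, scale=1.0, size=n)
w = np.exp(0.5 * t**2 - t * y)        # N(0,1)/N(t,1) likelihood ratio
p_is = np.mean((y > t) * w)

print(f"true tail probability : {0.5 * erfc(t / sqrt(2)):.2e}")
print(f"plain Monte Carlo     : {p_mc:.2e}")
print(f"importance sampling   : {p_is:.2e}")

Even a few thousand shifted samples give a usable estimate here, which is the basic reason high sigma methods need orders of magnitude fewer simulations than brute-force Monte Carlo.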

High sigma methods are able to identify and filter sensitive parameters, and identify failure regions. Results are shared in various outputs and include sigma convergence data, failure rates, and yield data equivalent to Monte Carlo samples.

Monte Carlo analysis has had a good long run for yield prediction, but for many cases it’s become impractical. Emerging high sigma methods improve designer confidence for yield, power, performance and area, shorten the process development cycle and have the potential to save cost. The ultimate validation, of course, is in hardware and production usage. High sigma methods are gaining extensive silicon validation over volume production.

Let’s not gamble with yield prediction and take a more careful look at high sigma methods.

About Bruce McGaughy

Bruce McGaughy, CTO and Senior VP of Engineering at ProPlus Solutions in San Jose, CA.


Dr. Bruce McGaughy is chief technology officer and senior vice president of Engineering of ProPlus Design Solutions, Inc. He was most recently the Chief Architect of Simulation Division and Distinguished Engineer at Cadence Design Systems Inc. Dr. McGaughy previously also served as an R&D VP at BTA Technology Inc. and Celestry Design Technology Inc., and later as an Engineering Group Director at Cadence Design Systems Inc. Dr. McGaughy holds a Ph.D. degree in EECS from the University of California at Berkeley.

By Zvi Or-Bach, President & CEO of MonolithIC 3D Inc.

In the 1960s, James Early of Bell Labs proposed three-dimensional structures as a natural evolution for integrated circuits. Since then many attempts have been made to develop such a technology. So far, none have been able to overcome the 400°C process temperature limitation imposed by the use of aluminum and copper in modern IC technologies for the underlying interconnects without great compromises. The “Holy Grail” of 3D IC has been the monolithic 3D, also known as sequential 3D, where a second transistor layer could be constructed directly over the base wafer using ultra-thin silicon – less than 100nm – thus enabling a very rich vertical connectivity.

Accordingly, the industry developed a 3D IC technology based on TSVs (Through-Silicon Vias), where each stratum (wafer) can be processed independently, then, after thinning at least one wafer, placed in a 3D configuration and connected to the other strata with TSVs using a low-temperature (<400°C) process. This independent (parallel) processing has its own advantages; however, the use of thick layers (>50 µm) greatly limits the vertical connectivity, requires the development of all-new processing flows, and is still too expensive for broad market adoption. On the other hand, monolithic 3D IC provides 10,000x better vertical connectivity and would bring many additional benefits, as was recently presented at the IEEE 3D IC conference.

The semiconductor industry is always on the move, and new technologies are constantly being introduced, making change the only constant. For the most part, dimensional scaling has been associated with introducing new materials and challenges, thereby making process steps that were once easy far more complex and difficult. But not so with respect to monolithic 3D IC.

The amount of silicon associated with a transistor structure was measured in microns in the early days of the IC industry and has now scaled down to hundreds and even tens of nanometers. The new generation of advanced transistors has silicon thicknesses measured in nanometers, as illustrated in the following ST Micro slide.

Fig 1

Dimensional scaling has also brought down the amount of time used for transistor activation/annealing, to allow sharper transistor junction definition, as illustrated in the following Ultratech slide.

Fig 2

Clearly the amount of heat associated with transistor formation has reduced dramatically with scaling as less silicon gets heated for far less time.

And unlike furnace heating or RTP annealing, with laser annealing the heat comes from the top and is directed at only a small part of the wafer, as illustrated below.

Fig 3

Fig 4

The following illustrates Excico’s pulsed excimer laser, which can cover a 2×2 cm² area of the wafer.

Fig 5

It is worth noting that this week we learned of good results from utilizing Excico laser annealing for 3D memory enhancement – “Laser thermal anneal to boost performance of 3D memory device.”

These trends help make it practical to protect the first-strata interconnect from the high-temperature process required for second-strata transistor formation. As the high temperature is applied to a small amount of silicon, for a very short time, and over only a small part of the wafer, the total amount of thermal energy required for activation/annealing is now very small.

One of the three most newsworthy topics and papers included in the 2013 IEDM Tip Sheet for the “Advances in CMOS Technology & Future Scaling Possibilities” track was a monolithic 3D chip fabricated using a laser (reported by Solid State magazine “Monolithic 3D chip fabricated without TSVs“). Quoting: “To build the device layers, the researchers deposited amorphous silicon and crystallized it with laser pulses. They then used a novel low-temperature chemical mechanical planarization (CMP) technique to thin and planarize the silicon, enabling the fabrication of ultrathin, ultraflat devices. The monolithic 3D architecture demonstrated high performance – 3-ps logic circuits, 1-T 500ns nonvolatile memories and 6T SRAMs with low noise and small footprints, making it potentially suitable for compact, energy-efficient mobile products.”

Furthermore, in the last two weeks we presented alternative, simulation-based work at the IEEE 3D IC and IEEE S3S conferences. We suggested using smart-cut® for the formation of the second strata (rather than amorphous silicon crystallization), with innovative shielding layers to protect the first-strata interconnect, as illustrated below.

Fig 6

Currently there are at least three different laser annealing systems offered on the market. The shielding layers could be adjusted according to the preferred choice of the laser annealing system. Our simulations show that if an excimer laser such as one offered by Excico is used, then even without these shielding layers the first strata routing layers are not adversely impacted by the laser annealing process.

Summary: In short, dimensional scaling is becoming harder and yet it makes monolithic 3D easier. We should be able to keep scaling one way or the other (or even both), and keep enjoying the benefits.

Note: smart-cut® is a registered trademark of Soitec.

By Joe Kwan, Mentor Graphics

For several technology nodes now, designers have been required to run lithography-friendly design (LFD) checks prior to tape out and acceptance by the foundry. Due to resolution enhancement technology (RET) limitations at advanced nodes, we are seeing significantly more manufacturing issues [1] [2], even in DRC-clean designs. Regions in a design layout that have poor manufacturability characteristics, even with the application of RET techniques, are called lithographic (litho) hotspots, and they can only be corrected by modifying the layout polygons in the design verification flow.

A litho hotspot fix should satisfy two conditions:

  • First, implementing a fix cannot cause an internal or external DRC violation (i.e., applying a fix should not result in completely removing a polygon, making its width less than the minimum DRC width, merging two polygons, or making the distance between them less than the minimum DRC space).
  • Second, the fix must be LFD-clean, which means it should not only fix the hotspot under consideration, but also make sure that it does not produce new hotspots.

However, the layout edges that should be moved to fix a litho hotspot are not necessarily the edges directly touching it. Determining which layout edges to move can be complicated, because getting from a design layout to a printed contour involves a number of complex non-linear steps (such as RET) that alter the original layout shapes, along with optical effects that depend on the surrounding layout context. Since any layout modifications needed to fix litho hotspots must be made by the designer, who is generally not familiar with these post-tapeout processes, it’s pretty obvious that EDA tools need to provide the designer with some help during the fix process.

At Mentor Graphics, we call this help model-based hints (MBH). MBH can evaluate the hotspot, determine what fix options are available, run simulations to determine which fixes also comply with the required conditions, then provide the designer with appropriate fix hints (Figure 1). A fix can include single-edge movements or group-edge movement, and a litho hotspot may have more than one viable fix. Also, post-generation verification can detect any new minimum DRC width or space violations, but it will not be able to detect deleting or merging polygons, so the MBH system must incorporate this knowledge into hint generation. Being able to see all the viable fix options in one hint gives the designer both the information needed to correct the hotspot and the flexibility to implement the fix most suitable to that design.
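As a rough illustration of the first screening condition, the sketch below (hypothetical geometry and design rules, not Calibre’s MBH implementation) enumerates candidate single-edge moves and rejects those that would violate an assumed minimum width or minimum space; the litho re-simulation of the surviving candidates against the second condition is omitted.

from dataclasses import dataclass

MIN_WIDTH = 50    # nm, assumed DRC minimum width
MIN_SPACE = 60    # nm, assumed DRC minimum space

@dataclass
class Rect:
    x1: int
    y1: int
    x2: int
    y2: int       # axis-aligned, x1 < x2 and y1 < y2

def move_right_edge(r: Rect, delta: int) -> Rect:
    return Rect(r.x1, r.y1, r.x2 + delta, r.y2)

def width_ok(r: Rect) -> bool:
    return (r.x2 - r.x1) >= MIN_WIDTH and (r.y2 - r.y1) >= MIN_WIDTH

def space_ok(r: Rect, neighbor: Rect) -> bool:
    # horizontal spacing check for shapes that overlap vertically
    if r.y1 < neighbor.y2 and neighbor.y1 < r.y2:
        gap = max(neighbor.x1 - r.x2, r.x1 - neighbor.x2)
        return gap >= MIN_SPACE
    return True

hotspot = Rect(0, 0, 80, 200)         # shape flagged as a litho hotspot
neighbor = Rect(200, 0, 280, 200)     # nearest neighboring shape

candidates = [-40, -20, 20, 60, 80]   # candidate right-edge moves in nm
legal = [d for d in candidates
         if width_ok(move_right_edge(hotspot, d))
         and space_ok(move_right_edge(hotspot, d), neighbor)]
print("DRC-legal edge moves (nm):", legal)   # -> [-20, 20, 60]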

Figure 1. Litho hotspot analysis with model-based hinting (adapted from “Model Based Hint for Litho Hotspot Fixing Beyond 20nm node,” SPIE 2013)


Another cool thing about MBH systems—they can be expanded to support hints for litho hotspots found in layers manufactured using double or triple patterning, by using the decomposed layers along with the original target layers as an input. This enables designers to continue resolving litho hotspots at 20 nm and below. In fact, we’ve had multiple customers tape out 20 nm chips using litho simulation and MBH on a variety of designs to eliminate litho hotspots.

Of course, it goes without saying that any software solutions generating such hints also need to be accurate and fast. But we said it anyway.

As designers must take on more and more responsibility for ensuring designs can be manufactured with increasingly complex production processes, EDA software must evolve to fill the knowledge gap. LFD tools with MBH capability are one example of how EDA systems can be the bridge between design and manufacturing.

Author:

Joe Kwan is the Product Marketing Manager for Calibre LFD and Calibre DFM Services at Mentor Graphics. He previously worked at VLSI Technology, COMPASS Design Automation, and Virtual Silicon. Joe received a BA in Computer Science from the University of California, Berkeley, and an MS in Electrical Engineering from Stanford University. He can be reached at [email protected].

Blog Review October 14 2013


October 14, 2013

At the recent imec International Technology Forum Press Gathering in Leuven, Belgium, imec CEO Luc Van den hove provided an update on blood cell sorting technology that combines semiconductor technology with microfluidics, imaging and high-speed data processing to detect tumor cells. Pete Singer reports.

Pete Singer attended imec’s recent International Technology Forum in Leuven, Belgium. There, An Steegen, senior vice president of process technology at imec, said FinFETs will likely become the logic technology of choice for the upcoming generations, with high-mobility channels coming into play for the 7 and 5nm generations (2017 and 2019). In DRAM, the MIM capacitor will give way to STT-MRAM. In NAND flash, 3D SONOS is expected to dominate for several generations; the outlook for RRAM remains cloudy.

At Semicon Europa last week, Paul Farrar, general manager of G450C, provided an update on the consortium’s progress in demonstrating 450mm process capability. He said 25 tools will be installed in the Albany cleanroom by the end of 2013, progress has been made on notchless wafers with a 1.5mm edge exclusion zone, they have seen significant progress in wafer quality, and automation and wafer carriers are working.

Phil Garrou reports on developments in 3D integration from Semicon Taiwan. He notes that at the Embedded Technology Forum, Hu of Unimicron looked at panel level embedded technology.

Kathryn Ta of Applied Materials explains how demand for mobile devices is driving materials innovation. She says that about 90 percent of the performance benefits at the smaller (sub-28nm) process nodes come from materials innovation and device architecture, up significantly from the approximately 15 percent contribution in 2000.

Tony Massimini of Semico says the MEMS market is poised for significant growth thanks to major expansion of applications in smart phones and automotive. In 2013, Semico expects a total MEMS market of $16.8B, but by 2017 it will have expanded to $28.5B, a 70 percent increase in a mere four years’ time.

Steffen Schulze and Tim Lin of Mentor Graphics look at different options for reducing mask write time. They note that a number of techniques have been developed by EDA suppliers to control mask write time by reducing shot count— from simple techniques to align fragments in the OPC step, to more complex techniques of simplifying the data for individual writing passes in multi-pass writing.

If you want to see SOI in action, look no further than the Samsung Galaxy S4 LTE. Peregrine Semi’s main antenna switch on BSOS substrates from Soitec enables the smartphone to support 14 frequency bands simultaneously, for a three-fold improvement in download times.

Vivek Bakshi notes that a lot of effort goes into enabling EUV sources for EUVL scanners and mask defect metrology tools to ensure they meet the requirements for production level tools. Challenges include modeling of sources, improvement of conversion efficiency, finding ways to increase source brightness, spectral purity filter development and contamination control. These and other issues are among topics that were proposed by a technical working group for the 2013 Source Workshop in Dublin, Ireland.

By Steffen Schulze and Tim Lin, Mentor Graphics

An upcoming challenge of advanced-node design is the expected mask write time increase associated with the continued use of 193nm wavelength lithography. If nothing is done, then shot count, the major predictor of mask write time, will increase more than 10x. A number of techniques have been developed by electronic design automation (EDA) software suppliers to control mask write time by reducing shot count— from simple techniques to align fragments in the OPC step, to more complex techniques of simplifying the data for individual writing passes in multi-pass writing. These approaches promise a reduction in shot counts anywhere between 10% and 40%. This article describes and compares several techniques, and the merits versus cost of each[1].

Mask write time increase has a number of dimensions. One is the increase in shot count: the number of shots directly correlates to mask write time. The addition of more shapes from OPC contributes as well. Another dimension is the introduction of litho techniques like multi patterning, which adds more masks to the set and hence increases the overall mask writing time. The growth in mask counts can be countered with capacity and won’t be addressed here.

Increased mask write time leads to increased mask cost, which detracts from the benefit of moving to advanced nodes, so taking steps to mitigate or reduce this cost is attractive. However, these additional steps also impose some new cost on the overall process. We introduce a number of techniques that mitigate the impact on mask write time and offer an assessment of the benefit versus the effort of deploying each [2].

Shot count reduction approaches

We analyzed the following shot count reduction approaches:

  • Optimized fracture
  • Pre-fracture jog alignment
  • L-shot
  • Multi-resolution writing (MRW)
  • Optimization-based fracture
  • Optimized OPC output

Optimizing fracture

The baseline shot count is defined by the fracture step – a general polygon can be fractured into elementary figures in a variety of ways – hence the fracture algorithm can be tuned to achieve the minimum shot count.

The fracture step is driven by three metrics:

  • Smallest total shot count
  • Smallest number of outside small figures (shots below the vendor-recommended smallest shot size)
  • Smallest number of dual shot splits for critical CD features

Tuning the heuristic to the node-driven changes in design style and RET/OPC methodology can lead to a reduced shot count. In a recent test on an M1 22nm design, roughly a 2% shot count reduction was achieved.  While the reduction is not large in itself, such improvements have a large cumulative effect over time; and other algorithm improvements, such as small-outside figure reduction, also indirectly improve the shot count. This method is also easy to adopt with minimal cost to the user.
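As a toy illustration of why the fracture heuristic matters, the sketch below (a made-up comb-shaped polygon, using the shapely package; not a production fracture engine) slices the same rectilinear polygon along two different axes and gets two different baseline shot counts.

from shapely.geometry import Polygon, box

def fracture(poly, horizontal=True):
    # Slice a rectilinear polygon into rectangles along one axis.
    minx, miny, maxx, maxy = poly.bounds
    cuts = sorted({y if horizontal else x for x, y in poly.exterior.coords})
    shots = []
    for lo, hi in zip(cuts, cuts[1:]):
        strip = box(minx, lo, maxx, hi) if horizontal else box(lo, miny, hi, maxy)
        piece = poly.intersection(strip)
        parts = getattr(piece, "geoms", [piece])   # handle Multi/GeometryCollection
        shots += [g for g in parts if g.geom_type == "Polygon" and g.area > 0]
    return shots

# Comb-shaped test polygon: a 50x10 bar with three 10x30 teeth on top.
comb = Polygon([(0, 0), (50, 0), (50, 40), (40, 40), (40, 10), (30, 10),
                (30, 40), (20, 40), (20, 10), (10, 10), (10, 40), (0, 40)])

print("horizontal slicing:", len(fracture(comb, horizontal=True)), "shots")   # 4
print("vertical slicing  :", len(fracture(comb, horizontal=False)), "shots")  # 5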

Jog Alignment

One source of small figures is misaligned vertices on opposing sides of polygons, a.k.a. jogs. Misaligned jogs can occur in OPC during fragmentation, when different data levels are merged prior to fracture, or during biasing. When jogs are misaligned even by a small amount, a small trapezoid is required between them. Jog alignment suppresses the small figures by identifying jogs on opposing sides of polygons and aligning them based on user-defined parameters[3]. The principle is shown in Figure 1.

Figure 1. Reduction of shot count by jog alignment using Calibre MASKopt.


Jog alignment is an additional processing step that is inserted into the workflow prior to the fracture. It is conducted with the same tools as in the current flow. Because the mask target is modified, a verification step to assess the mask edge placement error (EPE) is recommended. The downstream processes on the mask writer are not impacted by this method, including the onboard proximity effect correction (PEC) algorithms.
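The snapping principle itself can be sketched in a few lines (hypothetical jog coordinates and a simple nearest-neighbor rule, not the Calibre MASKopt algorithm): a jog on one side of a wire is snapped to the nearest jog on the opposite side when the two are within the user-defined alignment distance, so the thin trapezoid between them, and its extra shot, disappears.

MAX_ALIGN = 4   # nm at mask scale, assumed user parameter

# x-positions (nm) where the wire width steps on its top and bottom edges
top_jogs    = [100, 250, 402, 600]
bottom_jogs = [103, 250, 398, 660]

def align_jogs(top, bottom, max_align):
    aligned = []
    for xb in bottom:
        # snap a bottom-edge jog to the nearest top-edge jog if close enough
        nearest = min(top, key=lambda xt: abs(xt - xb))
        aligned.append(nearest if abs(nearest - xb) <= max_align else xb)
    return aligned

print(align_jogs(top_jogs, bottom_jogs, MAX_ALIGN))
# -> [100, 250, 402, 660]: two misaligned jogs snapped, the distant one left alone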

Jog alignment can yield significant shot count reduction, as illustrated in Figure 2. In an experiment, we applied jog alignment, varying the jog movement distance over a range of 0nm up to 100nm at mask scale. A 34% shot count reduction was achieved without any degradation of the mask (based on the EPE range).

Figure 2.  Jog alignment results (Calibre MASKopt) showing shot count and EPE range versus max. jog alignment at mask scale.


L-Shot

L-shot fracture reduces shot count by expanding the range of geometries that can be written in a single shot[4].   Current e-beam mask writing tools allow triangles or rectangles.  The concept of L-shot fracture is to let the write tools make a single shot in the shape of an “L”.  Overall a shot count reduction of between 20% and 40% has been achieved.

To create an L-shaped shot, an additional aperture is required in the write tools.  Today, two rectangular apertures are used to create rectangular shots of different sizes, but a cross shaped aperture is needed for L-shot.  This requires significant development by the e-beam write tool manufacturers.  This method does not change other fundamentals of the mask writing process.

Multi-resolution writing

Photomasks are conventionally written in two or more passes in which the same data is exposed multiple times with a shifted placement. Each identical exposure integrally multiplies the mask write time.

The objective of multi-resolution writing (MRW) is to jointly customize the shot patterns in both passes.  In particular, one may decompose the exposure into one “detail” pass with about as many shots as the conventional pass and one “coarse” pass with many fewer shots such that the desired image is obtained.  The coarse pass deposits an “average” image and the detail pass “refines” it [5].
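A toy one-dimensional illustration of that decomposition (made-up dose profile and block size, not the production MRW algorithm): the coarse pass writes one block-averaged shot per block, the detail pass writes only the residual, and the two passes still sum to the target dose.

import numpy as np

# Target per-pixel dose along a 1-D cut through the pattern (made-up values).
target = np.array([0, 0, 1, 1, 1, 1, 0.6, 0.6, 1, 1, 0, 0], dtype=float)
BLOCK = 4   # coarse-pass shot width in pixels (assumed)

# Coarse pass: one shot per block at the block-average dose.
coarse = np.repeat(target.reshape(-1, BLOCK).mean(axis=1), BLOCK)
# Detail pass: refines whatever the coarse "average" image missed.
detail = target - coarse

print("coarse pass:", coarse.round(2))    # 3 blocky shots
print("detail pass:", detail.round(2))
assert np.allclose(coarse + detail, target)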

We used prototype model-based MRW software on a 22nm active layer, and the results are shown in Figure 3. The maxDist parameter controls the aggressiveness of MRW, with maxDist = 0 meaning that no MRW is applied. The shot count reduction was up to 33%.

Figure 3: MRW results showing shot count and EPE range versus maxDist at mask scale. It is noteworthy that the EPE range is reduced relative to the baseline, owing to the application of mask process correction as part of the algorithm to secure the mask target.


The deployment of this method requires adjustments both to data preparation and to the mask writing equipment. The MRW software is used before fracture, then the two different data layers are fractured.

Optimization-based fracture

Optimization-based fracture is one method for writing curvilinear masks within a reasonable shot count [6]. In traditional fracture, trapezoids are created to exactly cover the input polygons submitted to the fracture algorithm; shots are abutting and non-overlapping. Optimization-based fracture relaxes those constraints.

Shots can be placed such that they overlap or be non-abutting so that sub-resolution gaps exist. The optimization problem is formulated to minimize the number of shots while maintaining the intended post-OPC pattern on the mask. The solution incorporates an e-beam blur (forward scattering + resist blur) model to properly simulate the overlapping and non-abutting shapes. Allowing for overlapping shots and non-abutting shots expands the solution space and provides the optimization engine more opportunity to reduce the shot count. Experiments demonstrate up to 28% reduction with a limited impact on wafer process window and max EPE. However, it does require an update to the workflow on the current mask writer equipment.

Optimized OPC output

The complexity of the OASIS layout presented to the mask manufacturing process is largely driven by the RET and OPC processes. The insertion of assist features, the decoration of layout shapes, and the representation of smooth target mask contours obtained by inverse lithography methods to tight tolerances all increase the shot count of the output. A number of techniques to reduce the complexity, and hence the mask writing time, can be applied during the application of OPC. In this case, any changes to the output layout are intrinsically verified against the tolerances required by the litho process. The OPC tools referenced in this study (Calibre OPCpro and Calibre nmOPC) offer two main user-controlled options to reduce shot count [7,8].

  • Jog-smoothing – the alignment of adjacent fragments to eliminate vertices prior to the final  iterations
  • Jog-alignment – vertex alignment across the shapes during the fragmentation step

Assessment of mask write time solutions

Deployment of mask write time reduction techniques in a running mask manufacturing line requires changes that will impact the current technology, workflows, and equipment to varying degrees. We aimed for maximum write time reduction at the lowest cost and with the smallest impact to the running operation.

We associated a cost indicator as a relative rating of the effort for the implementation and execution of each technique. Benefit indicators are associated with the potential for shot count reduction. The results of the assessment are displayed in Figure 4.

Figure 4. Benefit and effort assessment for various mask write time reduction techniques.


 

A few observations are noteworthy. The biggest benefit is obtained from optimized OPC output, which also incurs one of the lowest costs of adoption. Optimization-based fracture with dose modulation ranks highest on the cost scale.  All methods modifying the mask shapes impose increasing effort depending on the complexity of the changes.

Optimized OPC output wins

We reviewed several mask write time reduction techniques designed to contain the increase in mask shot count while preserving the results quality. Multiple factors impact the cost associated with shot count reduction – CD control on mask and wafer, hardware and software changes, and data preparation effort. The goal is to get maximum write time reduction at the lowest cost and with smallest impact to the running operation. The technique of optimized OPC output was the clear winner. Post-OPC data simplification techniques of varying complexity follow a steep deployment cost curve and require careful consideration.

ACKNOWLEDGEMENTS

The authors would like to thank their colleagues from Mentor Graphics – A. Elayat, E. Sahouria, P. Thwaite, J. Mellmann, N. Akkiraju, Y. Granik, U. Hollerbach.

 

REFERENCES

  1. A. Elayat, T. Lin, E. Sahouria, S.F. Schulze, “Assessment and comparison of different approaches for mask write time reduction”, Proc. SPIE 8166, 816634 (2011) http://go.mentor.com/23bmz
  2. James Word, Keisuke Mizuuchi, Sai Fu, William Brown, Emile Sahouria, “Mask shot count reduction strategies in the OPC flow”, Proc. SPIE 7028, 70283F (2008) http://dx.doi.org/10.1117/12.799410
  3. Steffen Schulze, Emile Sahouria, Eugene Miloslavsky, “High-performance fracturing for variable shaped beam mask writing machines”, Proc. SPIE 5130, 648 (2003) http://go.mentor.com/236y3
  4. Emile Sahouria and Amanda Bowhill, “Generalization of shot definition for variable shaped e-beam machines for write time reduction”, Proc. SPIE 7823, 78230T (2010) http://go.mentor.com/236y4
  5. Emile Sahouria, “Multiresolution mask writing”, Proc. SPIE 7985, 798503 (2011) http://dx.doi.org/10.1117/12.881929
  6. Timothy Lin, Emile Sahouria, Nataraj Akkiraju, Steffen Schulze, “Reducing shot count through optimization-based fracture”, Proc. SPIE 8166, 81660T (2011) http://dx.doi.org/10.1117/12.897779
  7. Sean Hannon, Travis Lewis, Scott Goad, Kenneth Jantzen, Jianlin Wang, Hien T. Vu, Emile Sahouria, Steffen Schulze, “Reduction of layout complexity for shorter mask write-time”, Proc. SPIE 6730, 67303K (2007) http://go.mentor.com/236y5
  8. Ayman Yehia, “Mask-friendly OPC for a reduced mask cost and writing time”, Proc. SPIE 6520, 65203Y (2007) http://go.mentor.com/236y6

 

Steffen Schulze is product marketing director for Calibre mask data preparation products at Mentor Graphics Corp., 8005 S.W. Boeckman Rd., Wilsonville, OR 97070; ph 800/547-3000, e-mail [email protected].

Tim Lin is technical marketing engineer for Calibre mask data preparation products at Mentor Graphics Corp., 46871 Bayside Parkway, Fremont, CA 94538; ph 800/547-3000, e-mail [email protected].

3D-IC: Two for one


September 25, 2013

Zvi Or-Bach, President & CEO of MonolithIC 3D Inc. blogs about upcoming events related to 3D ICs.

This coming October there are two IEEE conferences discussing 3D IC, both within an easy drive of Silicon Valley.

The first one is the IEEE International Conference on 3D System Integration (3D IC), October 2-4, 2013 in San Francisco, and just following, in the second week of October, is the S3S Conference on October 7-10 in Monterey. The IEEE S3S Conference was enhanced this year to include a 3D IC track and accordingly got the new name S3S (SOI-3D-Subthreshold). This indicates the growing importance of, and interest in, 3D IC technology.

This year is special in that both of these conferences will contain presentations on the two aspects of 3D IC technology. The first is 3D IC using Through-Silicon Vias (TSVs), which some call “parallel” 3D, and the second is monolithic 3D IC, which some call “sequential.”

This is very important progress for the second type of 3D IC technology. I clearly remember back in early 2010 attending another local IEEE 3D IC conference: 3D Interconnect: Shaping Future Technology. An IBM technologist started his presentation, titled “Through Silicon Via (TSV) for 3D integration,” with an apology for the redundancy in his presentation title, stating that if it is 3D integration, it must be TSV!

Yes, we have made quite a lot of progress since then. This year one of the major semiconductor research organizations – CEA Leti – placed monolithic 3D on its near-term roadmap, followed shortly after by a Samsung announcement of mass production of monolithic 3D non-volatile memories – 3D NAND.

We are now learning to accept that 3D IC has two sides, which in fact complement each other. Hoping not to over-simplify, I would say that the main function of the TSV type of 3D IC is to overcome the limitations of PCB interconnect, as manifested by the well-known Hybrid Memory Cube consortium, bridging the gap between the DRAM memories built by the memory vendors and the processors built by the processor vendors. At the recent VLSI Conference, Dr. Jack Sun, CTO of TSMC, presented the 1,000x gap that has opened between on-chip and off-chip interconnect. This clearly explains why TSMC is putting so much effort into TSV technology – see the following figure:

System level interconnect gaps


On the other hand, monolithic 3D’s function is to enable the continuation of Moore’s Law and to overcome the escalating on-chip interconnect gap. Quoting Robert Gilmore, Qualcomm VP of Engineering, from his invited paper at the recent VLSI conference: “As performance mismatch between devices and interconnects increases, designs have become interconnect limited. Monolithic 3D (M3D) is an emerging integration technology that is poised to reduce the gap significantly between device and interconnect delays to extend the semiconductor roadmap beyond the 2D scaling trajectory predicted by Moore’s Law…” At IITC11 (IEEE Interconnect Conference 2011), Dr. Kim presented detailed work on the effect of TSV size for a 3D IC of 4 layers vs. 2D. The results showed that for a TSV of 0.1µm – which is the case in monolithic 3D – the 3D device wire length (power and performance) was equivalent to scaling by two process nodes! The work also showed that a TSV of 5.0µm resulted in no improvement at all (today conventional TSVs are striving to reach the 5.0µm size) – see the following chart:

Cross comparison of various 2D and 3D technologies. Dashed lines are wirelengths of 2D ICs. #dies: 4.


So as monolithic 3D is becoming an important part of the 3D IC space, we are most honored to have a role in these coming IEEE conferences. It will start on October 2nd in San Francisco, when we will present a tutorial that is open to all conference attendees. In this Monolithic 3D IC Tutorial we plan to present more than 10 powerful advantages opened up by the new dimension for integrated circuits. Some of those are well known and some probably have not been presented before. These new capabilities would be very important in various markets and applications.

At the following S3S conference, we are scheduled to give the plenary talk for the 3D IC track on October 8. The plenary talk will present three independent paths to monolithic 3D that use the same materials, fab equipment and well-established semiconductor processes. These three paths could be used independently or mixed, providing multiple options that can be tailored differently by different entities.

Clearly, 3D IC technologies are growing in importance, and this coming October brings golden opportunities to get a ‘two for one’ and catch up on the latest and greatest in TSV and monolithic 3D technologies — looking forward to seeing you there.

David DiPaola is managing director for DiPaola Consulting a company focused on engineering and management solutions for electromechanical systems, sensors and MEMS products.  A 17-year veteran of the field, he has brought many products from concept to production in high volume with outstanding quality.  His work in design and process development spans multiple industries including automotive, medical, industrial and consumer electronics.  He employs a problem solving based approach working side by side with customers from startups to multi-billion dollar companies.  David also serves as senior technical staff to The Richard Desich SMART Commercialization Center for Microsystems, is an authorized external researcher at The Center for Nanoscale Science and Technology at NIST and is a senior member of IEEE. Previously he has held engineering management and technical staff positions at Texas Instruments and Sensata Technologies, authored numerous technical papers, is a respected lecturer and holds 5 patents.  To learn more, please visit http://www.dceams.com.   

Product validation is an essential part of all successful MEMS new product developments. It is the process of testing products under various environmental, mechanical or electrical conditions to simulate life in an accelerated manner. Testing early and often needs to be a daily routine and not just a popular phrase used in meetings. This blog will cover proven methods to accurately perform MEMS product validation while mitigating potential issues that result in repeated tests and inaccurate results.

Measurement system analysis or MSA is a methodology to qualify the measurement system that will be used to characterize the product.  In the context of MEMS, this could be a function test system for characterizing the performance of a MEMS pressure sensor by applying known pressures / temperatures and measuring sensor output.  The first step of MSA is to calculate total system accuracy determined by a tolerance stack of subcomponent errors traceable to NIST reference standards.  This will ensure your test system has the accuracy needed to properly characterize the samples.  In addition, system linearity of the true and measured value with minimal bias change and stability of the measurement system over time should be demonstrated.  Lastly, a Gage R&R (using average and range or ANOVA methods) in percent of process variation (not tolerance) should be completed to demonstrate repeatability and reproducibility for each test system utilized.  An excellent reference for MSA is aiag.org, Measurement System Analysis.   
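As a concrete sketch of the ANOVA-style Gage R&R calculation referenced above (synthetic data and made-up variance levels, for illustration only), the variance components below separate repeatability, reproducibility and part-to-part variation and report %GRR against process variation.

import numpy as np

rng = np.random.default_rng(1)
P, O, R = 10, 3, 3                    # parts, operators, repeat readings
# Simulated sensor readings: true part value + operator bias + random noise
part_effect = rng.normal(0.0, 1.0, size=(P, 1, 1))
oper_effect = rng.normal(0.0, 0.2, size=(1, O, 1))
y = 100.0 + part_effect + oper_effect + rng.normal(0.0, 0.1, size=(P, O, R))

grand  = y.mean()
part_m = y.mean(axis=(1, 2))          # per-part means
oper_m = y.mean(axis=(0, 2))          # per-operator means
cell_m = y.mean(axis=2)               # part-by-operator cell means

ss_part = O * R * np.sum((part_m - grand) ** 2)
ss_oper = P * R * np.sum((oper_m - grand) ** 2)
ss_int  = R * np.sum((cell_m - part_m[:, None] - oper_m[None, :] + grand) ** 2)
ss_err  = np.sum((y - cell_m[:, :, None]) ** 2)

ms_part = ss_part / (P - 1)
ms_oper = ss_oper / (O - 1)
ms_int  = ss_int / ((P - 1) * (O - 1))
ms_err  = ss_err / (P * O * (R - 1))   # repeatability (equipment variation)

oper_var = max(0.0, (ms_oper - ms_int) / (P * R))   # reproducibility (operators)
int_var  = max(0.0, (ms_int - ms_err) / R)          # operator-by-part interaction
part_var = max(0.0, (ms_part - ms_int) / (O * R))   # part-to-part variation

grr = ms_err + oper_var + int_var
print(f"%GRR of process variation: {100 * np.sqrt(grr / (grr + part_var)):.1f}%")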

Verification of the test system setup and function of the equipment is an important step prior to the start of validation. Oftentimes, improper test setup or malfunctioning equipment results in repeated tests and delayed production launches. This is easily avoidable by documenting proper system setup and reviewing the setup thoroughly (every parameter) prior to the start of the test. Equally important, the engineer should verify the system outputs are on target using calibrated tools, after the tools themselves are verified using a known good reference.

We all like to believe that customer specifications are well thought out and based on extensive field and laboratory data. Unfortunately, this is not always the case. Hence it is prudent for engineers to challenge areas of the customers’ specifications that do not appear robust. Neither the customer nor the supplier wins if the product meets the defined specification but fails in the field. The pain of such events is pervasive and extremely costly for all parties. As parts complete laboratory tests, take the added step of comparing the results to similar products in the field at end of life and ensure a similar degraded appearance. Whenever possible, test products to failure in the laboratory setting to learn as much as possible about failure mechanisms. When testing to failure is not possible, perform the validation to 3 – 5X the customer specification to ensure proper margin exists, mitigating the risk of field failures. Furthermore, always take advantage of field tests even if limited in duration. They can provide valuable information missed in a laboratory validation.

As briefly stated earlier, a function test or product characterization is the process of applying known inputs such as pressure, force, temperature, humidity, acceleration, rotation, etc. (sometimes two or more simultaneously), measuring the output of the MEMS product and comparing it to the desired target.  This is completed to ensure the product is compliant with the stated performance specification from the manufacturer.  As product life is accelerated through the validation, the device function should be characterized multiple times during the test to understand product drift and approximate time of failures.  It is recommended to perform function tests three to eight times at periodic (equally spaced or skewed) intervals during the validation after the initial pretest characterization.  As an example, I often test products at intervals of 0, 25, 50, 75 and 100 percent of the validation. 

Use of test standards is highly encouraged as it brings both consistency and credibility to the validations performed. Several organizations develop test standards for general use, such as ASTM, JEDEC, AEC, Military and more. When a product is tested to standards widely accepted in the industry, the intended audience is more likely to accept the results than if an unfamiliar, possibly less stringent test method were applied. Some commonly used standards include ASTM B117 (salt spray), JEDEC JESD22-A106B (thermal shock), Automotive Electronics Council AEC-Q100 (stress test for integrated circuits) and MIL-STD-883 (various environmental tests), just to mention a few. A list of validation standards used across the MEMS industry can be found in the MEMS Industry Group Member Resource Library, Standards Currently in Use at MEMS Companies.

In the validation of MEMS products, it is tempting to perform the testing on units from one wafer that has yielded 1000 pieces.  However, this is a single window in time and does not properly reflect the true process variation that can occur.  A better sampling approach for validation is taking units from multiple wafers within a lot and across multiple wafer lots.  Equally important, differing raw material lots should be used (one example is the starting SOI wafers).  This will ensure supplier, equipment, process, operator and time sensitive factors are well understood.  
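A small sketch of such a sampling plan (hypothetical lot and wafer identifiers, arbitrary sample sizes): validation units are drawn from several wafers in each of several wafer lots rather than from one high-yielding wafer.

import random

random.seed(0)
lots = {f"lot{l}": [f"lot{l}-w{w:02d}" for w in range(1, 26)] for l in range(1, 4)}
WAFERS_PER_LOT, UNITS_PER_WAFER = 3, 5

plan = []
for lot, wafers in lots.items():
    for wafer in random.sample(wafers, WAFERS_PER_LOT):
        for die in random.sample(range(1000), UNITS_PER_WAFER):
            plan.append((lot, wafer, die))

print(len(plan), "validation units across", len(lots), "lots")   # 45 units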

Controls are another method to learn valuable information about the products being validated and the equipment being used. A basic control could be as simple as a product that is function tested at each stage of the test, but does not go through any of the validation (i.e. sits on a shelf at room temperature). This will give an indication that something has gone wrong with your test system should the same errors be seen in both experimental (parts going through validation) and control groups. Another use of a control is testing a product that has previously passed a given validation (control group) while simultaneously testing a product that has undergone a change or is entirely new (experimental group). This will provide information on whether the change had any impact on the device performance or if the new device is as capable as a previous generation.

Lastly, validation checklists are a valuable tool to ensure each test is set up properly before the test begins. Without the checklist, it is easy to overlook a step in the pursuit of starting the test on time to meet a customer’s schedule. Below is a sample validation checklist for thermal shock. This can be modified for other tests as well.

Thermal Shock Validation Checklist

 

  • Perform proper preventative maintenance on the environmental chambers before the start of the test to prevent malfunction during the test
  • Identify appropriate control and experimental groups and ensure proper sampling from multiple wafers and lots
  • Document sample sizes
  • Identify a proper validation standard or customer specification to define the test
  • Document pass / fail criteria for the devices under test
  • Create a test log and record any time an event occurs (i.e. start of test, end of test, devices removed from thermal chamber for testing, etc.)
  • Verify calibration of measurement reference and trace it back to a national standard
  • Verify the measurement reference with an appropriate simple test (e.g., a thermocouple’s accuracy and repeatability with boiling water, room temperature, ice water and other known sources)
  • Measure the temperature of the hot and cold chambers with an accurate and verified reference prior to the start of the test (e.g., thermocouple ± 1°C)
  • Verify chamber temperature is consistent across the part loading
  • Verify the time it takes the thermal load to reach the desired temperature (i.e. -40°C) and that it is within test guidelines
  • Measure the transition time between hot and cold chambers and verify it is within test guidelines
  • Complete all necessary MSA on test equipment and document the results
  • Engrave serial number on each device (paint pen can be easily removed)
  • Document the location of devices in environmental chamber with digital photograph
  • Record serial number and manufacturer for environmental chambers used
  • Determine and document periodic intervals for device function test
  • Continuously monitor environmental chamber temperature for the duration of the test using an appropriate chart recorder
  • Document the location of the thermocouple (photo) and verify it is located close to the parts
  • Monitor device output continuously during the test
  • Check on the environmental chamber daily to ensure no malfunctions have occurred and monitor daily cycle count
  • Create a test-in-process sign with appropriate contact information for support staff; this will likely prevent individuals from accidentally turning off the environmental chamber or changing temperature profiles without notifying you
  • Document any changes to this specification for future reference

Product validation is a critical tool to learn about MEMS performance over a laboratory-based accelerated life. It’s an excellent method to validate theory and ensure product robustness in the field. The due diligence presented in this blog will help engineers avoid seemingly small mistakes that cause repeated tests, inaccurate results and missed customer deadlines.

Steffen Schulze is the director of Marketing, Mask Data Preparation and Platform Applications at Mentor Graphics, serving customers in mask and IC manufacturing.

Who knew that mask process correction (MPC) would again become necessary for the manufacturing of deep ultraviolet (DUV) photomasks? MPC can be called a seasoned technology; it has always been an integral part of the e-beam mask writers to cope with e-beam proximity effects, which can extend up to 15µm from the exposure point. In addition, the mask house has been compensating for process-induced biases. Residual errors were always absorbed by the models created to describe the wafer process; their range and general behavior proved sufficient to capture the effects and secure the accuracy requirements for the wafer lithography.
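That proximity effect is commonly modeled with a double-Gaussian point spread function: a short-range forward-scattering term and a long-range backscattering term. The sketch below (a textbook model with illustrative parameter values, not any vendor's PEC engine) shows how a dense pattern area accumulates extra background dose relative to an isolated feature, which is what the writer's correction must compensate.

import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL = 0.1                           # um per grid pixel
alpha, beta, eta = 0.03, 10.0, 0.6    # forward blur, backscatter range (um), ratio (assumed)

# 100 x 100 um field: a dense array of 1 um lines on the left, an isolated line on the right.
pattern = np.zeros((1000, 1000))
for x in range(100, 500, 20):
    pattern[:, x:x + 10] = 1.0        # 1 um lines on a 2 um pitch
pattern[:, 800:810] = 1.0             # isolated 1 um line

forward = gaussian_filter(pattern, sigma=alpha / PIXEL)
back    = gaussian_filter(pattern, sigma=beta / PIXEL)
dose = (forward + eta * back) / (1.0 + eta)

row = dose[500]
print("dose at center of a dense line :", round(float(row[305]), 3))
print("dose at center of isolated line:", round(float(row[805]), 3))

The dense line receives noticeably more total dose than the isolated one, so without correction the two would print at different sizes.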

At the 40nm node, more complex techniques were considered to cope with significant proximity signatures induced by the dry etch process and the realization that forward scattering in e-beam exposure, which was not accounted for in the machine-based correction, had a significant impact especially in 2D configurations. A number of commercial tools were made available at the time but a broad adoption was thwarted by the introduction of new mask substrates and the associated etch process – OMOG blanks provided for a thinner masking layer and enabled an etch process with a fairly flat signature. The MPC suppliers moved on to the EUV technology domain where a new and thicker substrate with a more complex etch process showed stronger signatures again, along with new effects of different ranges.

So what is changing now? At the 10nm node we are still dependent on the 193nm lithography. And while the wafer resolution is tackled with pitch splitting, the accuracy requirements continue to get tighter – tolerances far below 1nm are required for the wafer model. Phase shift masks are considered with a stack that is higher and displays stronger signatures again. In addition, scatter bars of varying sizes but below 100nm at the mask fall again into the size range with significant linearity effects.

So no wonder that the mask and wafer lithography communities are turning to mask process correction technology again. Wafer models have expanded to a more comprehensive description of the 3D nature of the mask (stack height and width ratio are approaching 1). A recent study presented at SPIE (Sturtevant, et al http://dx.doi.org/10.1117/12.2013748) shows that systematic errors on the mask contribute more than 0.5nm RMS to the error budget.  Figure 1 shows the process biases present in an uncorrected mask for various feature types – dense and isolated lines and spaces. Figure 2 shows how strongly the variability in mask parameters – here stack slope and corner rounding – can influence the model accuracy for a matching wafer model.

Figure 1. Mask CD bias (actual-target) 4X for four different pattern types and both horizontal and vertical orientations.
Figure 2. Impact of changing the MoSi slope (left) and corner rounding (right) on CTR RMS error at nominal condition and through focus.

The authors showed that a proper representation of the mask in a wafer model can improve the modeling accuracy significantly. For example, even an approximation of the corner rounding by a 45-degree bevel can have a significant impact. Likewise, considering the residual linearity and proximity errors improves the modeling accuracy. Figure 3 shows the comparison of residual errors for a variety of test structures in the uncorrected stage and when simulated with the proper mask model. The latter can largely compensate for the observed errors.

Figure 3. Mask CD error (4X) versus target and residual mask process simulation error.

The methods to properly account for these effects during simulation are anchored in mask processing technology. These results have opened the discussion about not only describing the mask but also correcting it. The above-mentioned study revealed a number of additional parameters that are generally assumed to be stable but to which the litho model is in fact highly sensitive – specifically the material properties, the edge slope and substrate over-etch. From these studies and observations one can conclude that mask process characterization and correction will be of increasing importance for meeting the tolerance requirements for wafer modeling and processing – initially through proper description of the residual errors for consideration in the wafer model, but subsequently also through correction. The technology has been available for quite some time and is ready to be used – unless the materials and process community comes through again.

Read other Mentor Graphics blog posts:

Innovations in computational lithography for 20nm

Looking for an integrated tape-out flow