Double-patterning design challenges

by J. Andres Torres and Alexander Tritchkov, Mentor Graphics


With the delay of EUV tools, there is general consensus that the critical layers of 32nm and 22nm semiconductor products will be patterned using ArF lithographic systems [1]. While these 193nm patterning tools have been enhanced with immersion optics (which extend the numerical aperture to 1.35) and polarized illuminators, even these hyper-NA systems lack the native resolution needed to achieve the pattern transfer characteristics required to fully realize the area reduction available at these advanced nodes. To address this limitation, multiple-exposure and double-patterning techniques have been proposed.

Double-patterning (DP) is a method for breaking up a layout so that sub-resolution configurations are separated between two distinct masks. These masks are exposed and processed sequentially to obtain the original design pattern by composing the layout features from the independent patterning steps [1]. While conceptually simple, double-patterning increases the complexity of layout verification and mask data preparation, so manufacturers must be able to identify exactly which layers require double-patterning. Double-patterning also presents significant design challenges, as it is possible to design layout features that are not easily decomposable. To avoid these features, double-patterning requires verification tools that can apply DP-related design checks, spot problem areas, and suggest modifications to ensure effective mask decomposition.

Because double-patterning requires processing critical layers twice, it is often assumed that wafer throughput is cut in half. While that is true for a single critical layer, it is not the case for a finished product: not all layers are critical, so not all layers require double-patterning.
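The throughput argument above is easy to quantify. The following sketch uses illustrative, made-up layer counts (not from the article) and assumes every exposure pass takes roughly equal scanner time:

```python
# Rough estimate of the wafer-throughput impact of double-patterning (DP).
# Illustrative assumption: a flow with 30 patterning layers, of which 4 are
# critical and require two exposure/etch passes each.
def relative_throughput(total_layers, dp_layers, dp_cost=2.0):
    """Lithography throughput relative to a single-exposure flow,
    assuming each exposure pass takes about the same scanner time."""
    passes = total_layers + dp_layers * (dp_cost - 1)
    return total_layers / passes

print(relative_throughput(30, 4))    # ~0.88, far better than the 0.5 often assumed
print(relative_throughput(10, 10))   # 0.5: only if *every* layer is critical
```

Only in the degenerate case where every layer requires DP does throughput actually halve.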

Pitch-splitting double-patterning

There are different types of DP: spacer, and double exposure-double etch (DEDE) in trim and pitch-split modes, each with mesa and trench variants. This discussion focuses on the tradeoffs that layout designers need to be aware of when designing for a pitch-splitting DP approach. We selected this technique over others because it imposes the fewest pattern restrictions and offers the highest possible resolution of any DP process. As such, it is the technique being most widely deployed for 32nm digital designs.

Figure 1 shows a simple example of how pitch-splitting DP is applied to a layout. The original layout (Fig. 1a) is composed of several structures in which the small gaps between polygons need to go into different patterning steps. Figure 1b shows how such small gaps are detected using models or rules. By solving a two-color problem (in which adjacent regions are assigned one of two colors such that no two regions of the same color share an edge), the original features are then decomposed into two sets of structures (Fig. 1c), which will be patterned sequentially to arrive at the complete pattern on the wafer.

Figure 1. Typical DP decomposition: a) original layout; b) separator insertion; c) cuts, coloring, and decomposition.
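The two-color problem described above is equivalent to testing whether the conflict graph is bipartite: features are nodes, and an edge joins any two features whose gap falls below the single-exposure resolution limit (the separator condition). A minimal sketch, with hypothetical feature indices:

```python
from collections import deque

# Sketch of the two-coloring step in pitch-split DP decomposition.
# A valid decomposition exists iff the conflict graph is bipartite;
# odd cycles produce coloring violations.
def color_features(num_features, separators):
    """separators: (i, j) pairs of features too close to share one mask.
    Returns (colors, conflicts): colors[i] in {0, 1} is the mask
    assignment; conflicts lists separator pairs left unsatisfied."""
    adj = [[] for _ in range(num_features)]
    for i, j in separators:
        adj[i].append(j)
        adj[j].append(i)
    colors = [None] * num_features
    for start in range(num_features):
        if colors[start] is not None:
            continue
        colors[start] = 0                 # BFS-color each connected component
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colors[v] is None:
                    colors[v] = 1 - colors[u]
                    queue.append(v)
    conflicts = [(i, j) for i, j in separators if colors[i] == colors[j]]
    return colors, conflicts

# Four features in a chain decompose cleanly onto two masks:
print(color_features(4, [(0, 1), (1, 2), (2, 3)]))
# A triangle of separators (odd cycle) leaves one unresolvable pair:
print(color_features(3, [(0, 1), (1, 2), (0, 2)])[1])
```

The unresolvable pairs returned here correspond to the violation regions (V) discussed in the next section.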

Separators, violations and cuts

In practice, however, things are more complicated. Figure 2 illustrates a Metal1 configuration showing the separators (S), which are used to assign the features to different patterning steps, and the violation regions (V), which mark the least costly locations where a feature, if moved, could resolve the coloring conflict. This cost calculation is based on the number of features that would have to be modified to remove a DP conflict.

Figure 2. Separators and coloring violation regions.
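The cost metric just described can be sketched as a simple minimization over the separator edges of a conflict cycle. The edge pairs and per-edge costs below are hypothetical, purely for illustration:

```python
# Illustrative sketch of the violation-cost calculation: among the separator
# edges of an odd cycle in the conflict graph, report the one whose removal
# (widening that gap) would require modifying the fewest features.
def least_cost_violation(cycle_edges, features_touched):
    """cycle_edges: separator pairs forming a coloring conflict.
    features_touched: maps each pair to the number of features that
    would need to move to widen that gap. Returns the cheapest pair."""
    return min(cycle_edges, key=lambda e: features_touched[e])

cycle = [(0, 1), (1, 2), (0, 2)]
cost = {(0, 1): 3, (1, 2): 1, (0, 2): 2}
print(least_cost_violation(cycle, cost))  # (1, 2): the cheapest gap to fix
```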

Figure 3 depicts how the coloring violation is translated into a weak region (W), which highlights the location in which a minimum gap was not resolved and which could become a difficult-to-pattern region.

Figure 3. Coloring violations translate into sub-optimal DP decomposition.

Because coloring conflicts arise from interactions between neighboring features, different methods can be used to address them. The simplest method verifies each library element and makes it DP-compliant if necessary. Then, to ensure that cell-to-cell interactions do not affect the decomposability of the layout, a large separation is imposed between neighboring cells so that the final layout remains DP-compliant. This approach, while effective, can significantly increase the total area needed to implement a given function.

A second approach also requires each library element to be DP-compliant, but information about the “color” of boundary features is attached to each cell in the library (similar to the process proposed for strong phase-shift compliance [3]). By doing so, this method limits the types of allowed neighbors for a given cell (these restrictions can be incorporated into most layout placement tools). This approach can also be effective, but it requires annotating such information along with the cell itself. By itself, it does not guarantee that interconnect or via layers will be DP-compliant, if such layers also require DP processing.
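A minimal sketch of this second approach, with hypothetical cell names and color annotations (and ignoring the possibility of globally swapping a cell's two colors, which a real placer could also exploit):

```python
# Each library cell carries the mask colors of its boundary features; the
# placer only allows two cells to abut if their facing edges take different
# colors, so the sub-resolution gap between them remains decomposable.
def compatible(left_cell, right_cell):
    """True if the cells may abut without a coloring conflict, i.e. the
    facing boundary features are already assigned to different masks."""
    return left_cell["right_edge_color"] != right_cell["left_edge_color"]

# Hypothetical annotated cells:
nand2 = {"name": "NAND2", "left_edge_color": 0, "right_edge_color": 1}
inv   = {"name": "INV",   "left_edge_color": 1, "right_edge_color": 0}

print(compatible(nand2, inv))  # False: facing edges share color 1
print(compatible(inv, nand2))  # True: colors 0 and 0? no -- 0 vs 0 differ? see test
```

In practice a placer would either reject the incompatible abutment, insert spacing, or flip one cell's entire coloring before placing it.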

The third approach requires no changes to the library, because it relies on verification once the full chip is integrated. While this certainly has the least impact on design practices in the short term, and permits manufacturing teams to fine-tune their production recipes to allow more aggressive configurations, it is the riskiest of the three approaches, because it relies on the proper execution of layout design and manufacturing to resolve every structure that the design rules allow.

Choosing a specific method for addressing DP complexities depends strongly on the target product: some products can readily trade off area for predictability and faster design times, while others require maximum area utilization to remain cost-competitive. The selection of which DP approach to pursue also depends on the present and projected states of the manufacturing process.

Figure 3 shows how coloring conflicts can be prevented by introducing cuts into the original layout. Had the cut not been introduced, many more conflicts would have arisen, and more drastic design changes would have been needed to make the pattern printable. At this point, it is pertinent to ask: What is the best way to cut the original polygons? The answer, again, is that it depends on the functional specification of the intended product.

Figure 4a shows how a T-style cut can introduce a mask constraint that impedes full line-end pullback correction, because there is no room for a hammerhead. Because process control cannot guarantee that a design will always be manufactured at a single nominal process condition, the concept of process-variability (PV) bands was introduced to quantify the pattern transfer accuracy loss a layout incurs under actual process conditions. While the line-end pullback is relatively small at nominal conditions, imaging performance degrades further under process variations (Fig. 4b). This degradation is particularly problematic if vias become exposed, as it can decrease interconnect reliability.

Figure 4. T-Cut style DP decomposition: a) Contours at nominal conditions and highlight of mask constraint; b) PV-bands represent patterning qualities under manufacturing process variations.
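A PV band reduces, in its simplest form, to the spread of a simulated or measured critical dimension across the process-window corners. The corner values below are made-up numbers for illustration only:

```python
# Toy illustration of a process-variability (PV) band metric: evaluate a
# critical dimension (CD) at several dose/focus corners and report the
# band width. Real PV bands are 2D contour envelopes; this reduces the
# idea to a single CD for clarity.
def pv_band(cd_values_nm):
    """Width of the PV band: spread of a CD across process corners."""
    return max(cd_values_nm) - min(cd_values_nm)

# Hypothetical line-end position (nm) at nominal and +/- dose/focus corners:
line_end = [45.0, 41.5, 47.8, 39.2, 46.1]
print(pv_band(line_end))  # ~8.6 nm of line-end variation across the window
```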

Alternatively, it is possible to try to correct the DP conflict by inserting an S-cut, as indicated in Figure 5a. This different style of cut provides more room for the line-end hammerhead.

While such a cut might seem ideal at nominal conditions, since it prevents line-end pullback and does not introduce any patterning limitations, evaluating the solution across the available process window reveals a weak region near where the cut was introduced. Figure 5b shows a weakened metal line that is more likely to pinch under process variations. This condition may also cause systematic yield loss, either through a broken wire or through a wire with higher resistance than assumed during circuit validation, which can push the electrical behavior of the device off target.

Figure 5. S-Cut style DP decomposition: a) contours at nominal conditions; and b) PV-bands represent patterning qualities under manufacturing process variations.

In the T-cut example, in which line-end pullback was caused by insufficient space to extend a hammerhead, the layout designer can use manufacturing-aware tools to guide changes to the original layout, using the available white space to prevent this type of pattern degradation. Figure 6 shows a layout that does not increase the area and yet produces a more robust topology.

Figure 6. T-Cut style DP decomposition after layout change to prevent pullback: PV-bands represent patterning qualities under manufacturing process variations.

Similarly, by making modifications to the layout around the S-cut limitation, one quickly discovers that the pinching problem is not layout-induced: no layout arrangement makes the weak location disappear. In this case, the overlap between the two patterning steps was insufficient, a limitation that can only be corrected by modifying the manufacturing process.

Exploring choices with process modeling

As we have seen in these examples, the layout style determines the type of DP analysis and verification that must be applied to the final design. In general, very restrictive design rules reduce the number of possible DP conflicts, but at the expense of a larger device footprint. Flexible design rules may allow printable configurations with a smaller footprint, but they require extensive layout verification to ensure that the original design intent is faithfully executed in the manufactured product.

These examples also show that there is no universally preferred cut style: no single option removes all DP violations without sacrificing pattern fidelity or making layout changes in the neighborhood of the violation. They only emphasize how manufacturing choices affect design tradeoffs: Is reliability a concern? Is this a high-frequency design? What is the thermal budget for the device? While attaining a complete understanding of every pattern that requires conflict resolution is nearly impossible given time and resource constraints, it is nevertheless evident that communication between manufacturing and design needs to be enhanced so that the correct tradeoffs can be implemented.

Fortunately, advances in process modeling allow such tradeoffs to be explored more rapidly than in the past. By assessing the lithographic performance of a process with simulation, one can explore a much larger number of decomposition and correction methods and verify in silicon only the few most promising candidates (e.g., Figure 7), resulting in more cost-effective process development.

Figure 7. SEM of a 32nm node Metal1 layer processed with DP. NA=1.2, Annular 0.92-0.72, X/Y polarized. (Image courtesy of IMEC.)

Looking ahead

To understand the challenges that double-patterning faces, it is important to remember that moving to new manufacturing nodes has been driven mainly by economics. Higher transistor densities translate directly into manufacturing cost savings and higher FLOPS-per-watt ratios (device performance normalized to the power required to operate it). Both metrics are essential, especially in mobile consumer products, where cost, battery life, and functionality are the main design constraints.


With the move towards fabless models, it is critical that layout designers and manufacturing engineers remain engaged in the discussion of effective design rules that provide the yield, predictability, and cost information IC companies require for their products to be competitive in the marketplace. To reduce lengthy and costly design iterations, the main challenge is to convey the results of formal DP simulations, whenever possible, to IC library developers and place-and-route systems in a way that minimizes the total impact of DP. This lets design teams concentrate on developing the functionality that will be the main source of differentiation for their products. It is also important to strike the right balance between complexity (the use of geometric rules) and accuracy (the use of process simulators) to continue the design productivity gains that have been the basis of the semiconductor industry's rapid growth.

Andres Torres holds a BS in chemical engineering from the National Autonomous U. of Mexico, an MS in chemical engineering from UW-Madison, and a PhD in electrical engineering from the Oregon Graduate Institute. He is the technical lead of the Litho-Friendly Design group at Mentor Graphics, 8005 SW Boeckman Rd, Wilsonville, OR 97070 USA; e-mail [email protected].

Alexander Tritchkov holds an MSc in semiconductor science and technology (with specialization in microlithography) from the Sofia U. of Technology and is an RET Development Engineer at Mentor Graphics.


1. K.M. Monahan, “Enabling Double Patterning at the 32nm Node,” Proc. IEEE Intl. Symposium on Semiconductor Manufacturing (ISSM), 2006, pp. 126-129.

2. A. Tritchkov, et al., “Double-patterning Decomposition, Design Compliance and Verification Algorithms at 32nm hp,” Proc. SPIE 7122, 71220S (2008).

3. L.W. Liebmann, et al., “Layout Optimization at the Pinnacle of Optical Lithography,” Proc. SPIE 5042, 1 (2003).

