
Global shutter image sensors


September 9, 2014

Different global shutter (GS) pixel architectures and technologies are presented and their performance compared.

BY GUY MEYNANTS, CMOSIS, Antwerp, Belgium

CMOS image sensors are in widespread use today in many consumer and professional applications. The typical shutter type for most CMOS image sensors is the so-called Rolling Shutter (RS). This is an inherent property of the 4T active pixel and its derived architectures with shared amplifier readout. The main drawback of an RS CMOS imager is that the start and stop of the exposure are slightly shifted from pixel line to pixel line, resulting in deformation of fast-moving objects (FIGURE 1) or the so-called “jello” effect when the camera vibrates. To avoid this, either a mechanical shutter or a flash is required, and neither is acceptable in many applications.

FIGURE 1. Rolling shutter image artifacts in the spokes of the turning wheel

The alternative is a so-called Global Shutter (GS) pixel based image sensor, in which every pixel of the entire array acquires the image during the same time period. This requires an in-pixel memory element that stores the signal after capture by the photodiode. Interline transfer (IT) CCDs were for many years the technology of choice for GS imagers, due to the combination of a global shutter with low read noise through a correlated double sampling (CDS) output stage. However, compared to CMOS image sensors, CCDs are limited to moderate readout speeds, consume more power, and lack on-chip integration of timing and AD conversion circuitry.

The first generation of GS CMOS imagers suffered from high read noise, due to the lack of CDS on the charge sense node, and from poor shutter efficiency. Today, several techniques have been proposed to combine CDS with GS functionality. Meanwhile, pixel scaling efforts and microlens designs recover the loss in fill factor caused by the in-pixel storage elements required in GS pixels, and have enabled low-noise GS pixel designs with good shutter efficiency. Shutter efficiency quantifies how much the stored pixel value is distorted by incoming light (typically light from an unrelated exposure period that falls on the pixel while it awaits readout). It is calculated as 1 – (sensitivity with shutter closed / sensitivity with shutter open) and is typically wavelength dependent.
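As a simple illustration (not from the article), the following Python sketch applies this definition to two hypothetical sensitivity measurements; the numbers are made up purely for the example.

```python
# Minimal sketch: shutter efficiency from two sensitivity measurements,
# following the definition above. The example values are hypothetical.
def shutter_efficiency(sens_shutter_closed: float, sens_shutter_open: float) -> float:
    """1 - (sensitivity with shutter closed / sensitivity with shutter open)."""
    return 1.0 - sens_shutter_closed / sens_shutter_open

# e.g. 0.05 DN per lux*s leaking in with the shutter closed vs 100 DN per lux*s when open
print(f"{shutter_efficiency(0.05, 100.0):.4%}")   # -> 99.9500%
```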

FIGURE 2. 5-transistor charge domain global shutter pixel (a) and 7-transistor charge domain global shutter pixel (b).

FIGURE 2 shows two global shutter pixels of earlier generations. Fig. 2a is a 5-transistor global shutter pixel, which stores the image on the floating diffusion FD after exposure. At readout, the value sampled on FD is read through the source follower when the pixel row is selected. Then the floating diffusion is reset, and a reference level is read from the pixel. This reference level cancels random fixed offset variations between pixels, which would otherwise cause fixed pattern noise. However, the temporal kTC noise on the floating diffusion sense node is not cancelled, since the reference for each pixel is taken after reading the photosignal, by a new reset of the sense node, which introduces a new random offset error uncorrelated with the signal level. Gate TX2 acts as an anti-blooming drain and is also used to start the exposure. The anti-blooming function is important, since excess charges must not be allowed to flow to FD, where the pixel data of the previous exposure is stored.

The shutter efficiency of such pixels is not very good, typically below 99.9% for green light. The reasons are shown in Fig. 3, a cross-section of the 5T pixel structure. Photons generate electrons in the substrate, which diffuse through the substrate until they reach the pinned photodiode, as shown in green. Some electrons generated deeper in the substrate may be collected directly by an unrelated n+ junction, such as the n+ junction of the charge drain or the drain of the reset transistor, as shown in orange. These charges do not contribute to the photosignal and result in a loss of quantum efficiency. Other charges may diffuse to the junction of the floating diffusion rather than to the photodiode, as shown in red. These disturb the signal stored on the floating diffusion and reduce the shutter efficiency.

This diffusion also explains the wavelength dependency of shutter efficiency for this pixel type: blue light is absorbed close to the surface, so the majority of its electrons are generated inside the pinned photodiode. Part of the electrons generated by red or near-infrared light originate deeper in the silicon and have to diffuse to the photodiode first; some reach the floating diffusion instead, which results in lower shutter efficiency for these longer wavelengths.

Often, a light shield is placed on top of the storage node to improve shutter efficiency and microlenses are used to focus the light onto the photodiode, away from the storage area. Also, a higher doped p-well under the unrelated n+ junctions can be used to reduce the charge diffusion of electrons, thanks to a small potential difference between the epitaxial p- substrate and this higher doped p-well region. A majority of the electrons will prefer to diffuse to the photodiode, where this barrier is not present.

FIGURE 3. Cross-section of a 5T charge domain global shutter pixel

However, a further effect that reduces shutter efficiency, and which is solved neither by light shields nor by this p-well, is that some charge may leak from the photodiode through the transfer gate to the floating diffusion during the next exposure time (see Ileak in FIGURE 3). To include this effect in shutter efficiency measurements, the efficiency should be measured with constant light in a mode where the pixel integrates the next exposure during readout. Often, shutter efficiency is measured while the photodiode is drained through TX2, which cancels this transfer gate leakage but does not match the typical real-world use case for global shutter pixels, where the next image is captured during readout of the current one. Furthermore, dark leakage current of the floating diffusion junction also disturbs the signal sampled on it and is a further source of noise, hot pixels and non-uniformity. This is especially important since the floating diffusion n+/p junction reaches the surface, where leakage currents increase due to surface defects present inside the depletion region of the n+/p junction.

Fig. 2b solves the shutter efficiency and dark leakage issues of the storage node by storing the signal in the voltage domain on capacitor C behind a first source follower, instead of on the floating diffusion. This capacitor can be larger and can be composed of a gate or plate capacitance, which cannot collect electrons straight from the substrate where they are generated by photons. In this way, the shutter efficiency can be improved to above 99.98%. The pixel can be operated with double sampling by reading the reset level as a reference after reading the value sampled on C, but it still lacks correlated double sampling, just like the 5T pixel of Fig. 2a. And some electrons can still be collected from the substrate by the junctions of the switch connecting to the capacitor, which explains why the shutter efficiency is not perfect.

For both pixel types of Fig. 2, the full well charge is proportional to the sense node capacitance, and the noise is proportional to the square root of the sense node capacitance. A typical floating diffusion of 1.6 fF, corresponding to a conversion gain of 100 μV/e-, will operate with a voltage swing of 1V. This corresponds to a saturation level of 10,000 e-. The kTC noise on 1.6 fF is 16 e- RMS. This noise appears on both the signal and reference samples, so it is increased by a factor of sqrt(2) to 23 e- RMS at the sensor output. The dynamic range is then limited to 53 dB in this example, which is clearly lower than for its IT CCD counterparts. Only if the reset level of the floating diffusion before charge transfer is used as the reference for the photosignal can the kTC noise of the sense node be cancelled through CDS and a dynamic range similar to IT CCDs be reached.
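As a back-of-the-envelope check (not part of the original article), the sketch below reproduces the figures quoted above from the stated 1.6 fF capacitance and 1 V swing, assuming room temperature.

```python
# Minimal sketch: conversion gain, full well, kTC noise and dynamic range
# for a 1.6 fF floating diffusion with a 1 V swing (room temperature assumed).
import math

q = 1.602e-19            # electron charge [C]
k = 1.381e-23            # Boltzmann constant [J/K]
T = 300.0                # assumed temperature [K]

C_fd = 1.6e-15           # floating diffusion capacitance [F]
swing = 1.0              # usable voltage swing [V]

conv_gain = q / C_fd                        # ~100 uV/e-
full_well = swing / conv_gain               # ~10,000 e-
ktc_noise = math.sqrt(k * T * C_fd) / q     # ~16 e- RMS per sample
read_noise = math.sqrt(2) * ktc_noise       # ~23 e- RMS after double sampling
dyn_range = 20 * math.log10(full_well / read_noise)   # ~53 dB

print(f"conversion gain: {conv_gain * 1e6:.0f} uV/e-")
print(f"full well: {full_well:.0f} e-, read noise: {read_noise:.1f} e- RMS")
print(f"dynamic range: {dyn_range:.1f} dB")
```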

Charge domain global shutter pixels

FIGURE 4. CDS charge domain global shutter pixel and timing

FIGURE 4 shows a charge transfer pixel [1] with correlated double sampling and its timing scheme. In addition to the 5T GS pixel structure, two extra transfer gates ø2 and ø3 have been added. After exposure, the signal is transferred synchronously in all pixels of the array to gate ø2. During readout, the charge packet stored under ø2 is transferred to the floating diffusion row by row. The floating diffusion is sampled before and after charge transfer in a CDS scheme, which reduces read noise. Read noise values of 4.8 e- RMS [2] and 3 e- RMS [3] have been reported with this structure. The shutter efficiency of such a pixel is limited, since some photocharges generated in the substrate may be collected directly by the storage gate ø2 rather than by the photodiode. [3] reports a shutter efficiency of 99.96%, which is again limited by charge diffusion and leakage current under transfer gate ø1.

It is clear that CDS and global shutter require two memory elements in the pixel. In this case, the floating diffusion and gate ø2 are these two memory elements. Variants of this structure have been proposed, mainly to reduce the area required for charge transfer and storage: a combined ø1/ø2 gate with two different potentials under it [2], a compact ‘pump gate’ replacing ø1/ø2 [3], or a structure where ø2 is replaced by a pinned photodiode [4]. Although these pixels offer the best noise performance, their shutter efficiency is not satisfactory for all applications. A second problem remains: dark current leakage on the storage node ø2. This storage gate is typically a surface channel device (except in [4], where it is a pinned photodiode). For the lowest leakage, the storage device should be a buried channel device. But a buried channel device has lower charge storage capacity per unit area, which may limit the minimum possible pixel size.

Voltage domain global shutter pixel

FIGURE 5 shows a GS pixel structure comprising 8 transistors and two in-pixel capacitors. This is a voltage domain global shutter pixel that memorizes not only the signal level but also the reset level of the floating diffusion on a capacitor behind the first buffer amplifier. The pixel of Fig. 5 shows two storage capacitors connected in series, but other configurations can be considered in which the storage capacitors are connected in parallel or in cascade. The series-connected approach resulted in the most compact pixel design. The timing is also shown in Fig. 5. The image acquisition cycle starts with an exposure of the pinned photodiode. At the end of the exposure period, the reset level Vreset is first sampled on C2, after which charge is transferred to the floating diffusion FD. Then the signal level Vsignal is sampled on C1. During readout, the reset level is first read out from C2. Then C1 and C2 are shorted. Since C1 and C2 are equal in capacitance, the signal read after shorting both capacitors is (Vsignal + Vreset)/2. The readout circuit, typically located in the column amplifier of the image sensor, takes the difference between both pixel readings and amplifies it (by a factor of two) so that Vsignal – Vreset results.
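The readout arithmetic can be summarized in a few lines; the sketch below (illustrative only, assuming ideal switches and exactly equal capacitors) shows how Vsignal – Vreset is reconstructed from the two readings.

```python
# Minimal sketch: arithmetic of the series-capacitor CDS readout described
# above, assuming ideal switches and equal capacitors C1 = C2.
def cds_readout(v_signal: float, v_reset: float, column_gain: float = 2.0) -> float:
    """Reconstruct Vsignal - Vreset from the two pixel readings."""
    reading_reset = v_reset                      # first read: reset level stored on C2
    reading_shared = (v_signal + v_reset) / 2.0  # second read: C1 and C2 shorted
    return column_gain * (reading_shared - reading_reset)

# Hypothetical example: 1.5 V signal level against a 1.2 V reset level.
print(f"{cds_readout(v_signal=1.5, v_reset=1.2):.2f}")   # -> 0.30, i.e. Vsignal - Vreset
```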

FIGURE 5. Voltage domain global shutter pixel with CDS

Fig. 5 shows two timing modes. In mode 1, the S2 pulse remains on during sampling of the second sample Vsignal of the pixel. In mode 2, S2 is opened again before sampling. Mode 1 contains an asymmetric gate-source cross-talk between the two samples.

This causes an extra offset between both readings and increases fixed pattern noise by approximately 30%. However, the temporal read noise is lower. It can be shown that the temporal read noise of the pixel is optimal when C1 is equal to C2. In mode 1, the temporal read noise is given by kT/2C, where C is the capacitance value of C1 and C2; in mode 2 the read noise is kT/C. A more complex model including the noise of the in-pixel transistors has also been developed. Read noise depends strongly on the size of the in-pixel capacitors. In larger pixels, a larger capacitance can be made, and lower read noise can be reached. A 5.5 μm pixel with two in-pixel capacitors of 16 fF each has been made, resulting in 13 and 10 e- RMS in modes 2 and 1, respectively. A larger 6.4 μm pixel with two in-pixel 36 fF capacitors reached 8 e- RMS. On a smaller 3.5 μm pixel, only 8 fF was available, resulting in a read noise of 17 e- RMS.
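For orientation only (this is not the full noise model referred to above), the sketch below evaluates just the capacitor sampling-noise term in electrons for the capacitor values mentioned, assuming a 100 μV/e- conversion gain; the measured figures are higher because in-pixel transistor and readout noise add to this contribution.

```python
# Minimal sketch: kTC sampling-noise contribution of the storage capacitors,
# referred to input electrons via an assumed 100 uV/e- conversion gain.
# Mode 1 -> kT/2C, mode 2 -> kT/C (noise powers, as in the text).
import math

k, T, q = 1.381e-23, 300.0, 1.602e-19
conv_gain = 100e-6            # assumed conversion gain [V/e-]

def sampling_noise_e(C: float, mode: int) -> float:
    noise_power = k * T / (2.0 * C) if mode == 1 else k * T / C   # [V^2]
    return math.sqrt(noise_power) / conv_gain                     # [e- RMS]

for C in (8e-15, 16e-15, 36e-15):
    print(f"C = {C * 1e15:.0f} fF: mode 1 ~ {sampling_noise_e(C, 1):.1f} e-, "
          f"mode 2 ~ {sampling_noise_e(C, 2):.1f} e-")
```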

Full well charge of the 5.5 μm pixels is limited by the swing on the floating diffusion sense node, to about 13,500 e-. This results in a dynamic range of 60 dB for the 5.5 μm pixels. The 6.4 μm pixel reaches a full well charge of 15,000 e-, which results, together with its lower noise, in 65 dB dynamic range.

The shutter efficiency of this 8T GS pixel structure is excellent for several reasons:

1) The capacitors C1 and C2 are implemented as gate or metal-insulator-metal capacitors, which cannot collect charges generated in the substrate. A small contribution of charges collected from the substrate is still possible through the source/drain junctions of the in-pixel switches S1 and S2, but these junctions cover only a very small area of the pixel.

2) If such charges are collected from the substrate, there is a similar chance that they are collected on C1 or on C2. This creates a common-mode offset error on both the signal and reference samples stored on both capacitors, which is cancelled after CDS.

3) An electron collected on C1 or C2 has less impact on the voltage signal than an electron present on the floating diffusion, by the ratio of the capacitance of the floating diffusion to that of the storage capacitor. For example, in the 5.5 μm pixels, the floating diffusion is 1.6 fF and the storage capacitors are 16 fF each. This means that an electron converted on FD causes a signal change of 100 μV, while an electron collected on C1 or C2 causes a shift of only 10 μV.

The reported shutter efficiency for a front-side illuminated (FSI) 8T GS pixel is better than 99.999%. Because the pixel does not rely on light shields and the storage nodes are almost incapable of collecting charges from the substrate, such a pixel can also be used in combination with backside thinning.

Backside illumination and global shutter pixels

Today, backside illuminated CMOS image sensors are widely adopted in consumer applications. This technology was introduced to improve light sensitivity while allowing the pixel pitch to be reduced further, to 1.4 μm and below. The same technology can also help to improve the quantum efficiency and light sensitivity of global shutter pixels. Backside illumination (BSI) can also extend the sensitive spectrum into the near and extreme UV. These wavelengths are blocked in traditional front-side illuminated image sensors due to absorption in the inter-metal dielectric layers on top of the silicon, but they are becoming important in more and more machine vision applications, for example semiconductor inspection.

Since with BSI the photocharges are generated from the backside surface onwards, charge diffusion towards the photodiode becomes more important. This is why obtaining good shutter efficiency is more difficult than with front-side illumination. Light shields are not very effective, since they do not influence charge diffusion. Also, shutter efficiency now becomes worse for shorter wavelengths, since these photons are absorbed closer to the backside surface and generate photocharges further away from the photodiode. In particular for the charge domain global shutter pixels discussed before, it becomes difficult to avoid diffusion to the charge storage element. A voltage domain global shutter pixel with CDS can keep its good shutter efficiency for the reasons mentioned before, such as the lower impact of an electron hitting the storage element on the signal, and the differential operation of the CDS voltage domain global shutter pixel.

An 8T voltage domain global shutter BSI prototype image sensor has been made and reported [6] with a shutter efficiency of 99.996%, well above the acceptance limit for almost all use cases. Read noise and full well charge were not changed by backside illumination. Quantum efficiency can be optimized for the desired wavelength range with an optimized anti-reflective coating.

Scaling of global shutter pixels

The 8T pixel structure contains many components (8 transistors, 2 capacitors) and a significant amount of interconnect routing. The smallest possible pixel pitch in 0.18 μm CMOS is around 5.5 μm. To develop smaller 3.5 μm pixels, the following approaches were taken:

1) The IC technology is switched to a smaller geometry node. CMOSIS developed pixels in a process with 110 nm front-end and 90 nm back-end design rules. This process was initially developed for 1.75 μm shared 4T pixels and allows a narrow interconnect pitch. The height of the interconnect stack is also reduced, which improves the optical performance of the pixel, such as quantum efficiency and angular pixel response.

2) Pixel sharing is employed to share the first source follower in the pixel. Interconnect routing is shared to select pixels from 2 adjacent rows to 2 vertical column busses.

More details are described in [7]. In spite of the scaling, a dynamic range of 58.5 dB is reached on a 3.5 μm global shutter pixel, with a noise level of 17 e- RMS and a full well charge of 14,800 e-. Quantum efficiency is 46% at 550 nm.

Conclusions

CMOS sensors with global shutter pixels can only compete with IT-CCD devices when the pixel allows correlated double sampling (CDS) to keep the temporal read noise low. Mechanisms similar to smear in a CCD degrade the shutter efficiency of global shutter pixels and must be dealt with effectively in the pixel design. One solution is a voltage domain global shutter pixel. Several pixel implementations have been discussed, and the specifications of voltage domain pixels are listed in TABLE 1. Charge domain pixels offer lower read noise at the cost of decreased shutter efficiency and are more difficult to use with backside illumination. Future developments in global shutter pixels will use CMOS scaling for smaller pixel structures, while aiming to at least maintain the performance reached today. Backside illumination can be considered, and has already been demonstrated with voltage domain global shutter pixels.

TABLE 1. Specifications of the voltage domain global shutter pixels discussed in the text.

References
1. S. Lauxtermann, A. Lee, J. Stevens and A. Joshi, “Comparison of Global Shutter Pixels for CMOS Image Sensors”, 2007 International Image Sensor Workshop, Ogunquit, ME, June 2007 (www.imagesensors.org)
2. M. Sakakibara, et al., “An 83dB-Dynamic-Range Single-Exposure Global-Shutter CMOS Image Sensor with In-Pixel Dual Storage”, ISSCC Dig. Tech. Papers, pp. 380-381, February 2012
3. S. Velichko, et al., “Low Noise High Efficiency 3.75 μm and 2.8 μm Global Shutter CMOS Pixel Arrays”, 2013 International Image Sensor Workshop, Snowbird, Utah, June 2013 (www.imagesensors.org)
4. K. Yasutomi, et al., “A Two-Stage Charge Transfer Active Pixel CMOS Image Sensor With Low-Noise Global Shuttering and a Dual-Shuttering Mode”, IEEE Trans. El. Dev., Vol. 58, No. 3, March 2011
5. G. Meynants, “Global shutter pixels with correlated double sampling for CMOS image sensors”, Adv. Opt. Techn., 2013 (2), pp. 177-187
6. G. Meynants, et al., “Backside illuminated Global Shutter CMOS Image Sensors”, 2011 International Image Sensor Workshop, Hokkaido, Japan, June 2011 (www.imagesensors.org)
7. B. Wolfs, et al., “3.5 μm global shutter pixel with transistor sharing and correlated double sampling”, 2013 International Image Sensor Workshop, Snowbird, Utah, June 2013 (www.imagesensors.org)

Fast and predictive 3D resist compact models are needed for OPC applications. A methodology to build such models is described, starting from a 3D bulk image, and including resist interface effects such as diffusion. 

BY WOLFGANG DEMMERLE, THOMAS SCHMÖLLER, HUA SONG and JIM SHIELY, Synopsys, Aschheim, Germany, Mountain View, CA and Hillsboro, OR. 

With further shrinking dimensions in advanced semiconductor integrated device manufacturing, 3D effects become increasingly important. Transistor architecture is being extended into the third dimension, as in FinFETs [1], and multi-patterning techniques are adding complexity to lithographic imaging in combination with substrate topography.

Even on planar wafer stacks, process control gets more and more challenging at the 1X nm technology node, as features are scaled down while exposure conditions remain at 193nm immersion lithography with 1.35 NA. Image contrast decreases, especially at defocus, resulting in high susceptibility to resist height loss and tapered sidewalls; resist profiles may deviate significantly from ideality. Although imaging conditions can be well controlled at nominal exposure conditions, the effect on the process window is usually substantial, as the useful depth of focus has become comparable to the resist film thickness. These dependencies are illustrated in FIGURE 1.

FIGURE 1. Extending 193nm immersion technology to the 1x technology node reveals new patterning challenges.

Random 2D layout structures in particular exhibit weak image areas, where severe resist top loss or footing often occurs, which can result in critical defects in the subsequent etch process. An example of such a weak spot is shown in FIGURE 2a, taken during the early phase of process development [2]. The left clip shows a top-down SEM image of the pattern in resist, taken after the development step. It does not provide any indication of a potential defect in this area. Conventional 2D models represent the bottom contour of the resist profile well. Overlaying the model contour (red line) with the SEM image shows a very good correlation with reality, again giving no motive to apply any layout corrections. However, after etch a bridging hot spot is revealed, as can be seen in the right SEM image. A more detailed analysis of the weak spot area using rigorous simulations indicates a low image contrast and severe resist loss of about 60% at the critical location, as shown in FIGURE 2b. Degenerated 3D resist profiles are one of the main root causes of post-etch hotspots at advanced technology nodes.

FIGURE 2. “Weak lithography spot” often becomes only visible after etch if 2D models are used for correction and verification.

If those “weak litho spots” in a layout are known, localized corrections to mask features can be applied to prevent yield loss. However, the diversity of random logic structures in advanced designs makes it mandatory that compact models are available which reflect the 3D nature of the resist profiles at any location within the chip, and that this information is utilized during optical proximity correction and verification on a full-chip scale. Rigorously tuned compact models provide an efficient approach to achieve this goal, as we outline in the subsequent sections.

Efficient generation of 3D resist compact models

The fundamentals of 3D resist simulation are well captured by rigorous lithography process simulation, which is based on a first-principles physical modeling approach [3-6]. The corresponding simulation results not only provide an accurate representation of the expected 3D resist profile for arbitrary device patterns within a random layout context. Rigorous models are also capable of predicting the impact of process variations, such as focus or dose shifts, or changes in the wafer stack or illumination conditions, on the lithographic performance. This predictive power is achieved by properly separating the various contributions to pattern formation inside the models, for instance addressing optical effects and resist effects individually. Due to their physical nature, the accuracy of optical simulations is limited only by the quality of the input data characterizing the optical conditions in the exposure tool. As the chemical processes in the photoresist are rather complex, the corresponding models utilize a small set of free, physically or chemically motivated parameters. Only a few experimental data points, e.g. from SEM metrology, are required to calibrate those free parameters, ensuring a good match between experiment and simulation over a wide application space. However, this predictive simulation power comes at the expense of run time: the enormous demand for computational resources does not allow rigorous models to be applied on a full-chip scale.

Standard full-chip mask synthesis applications such as optical proximity correction (OPC) or verification are based on the deployment of conventional 2D compact models, i.e. models which represent the resist contours visible in a top-down view. Compact models are optimized for performance. Their accuracy, i.e. the match between model and experiment, is usually achieved by optimizing a large set of fitting parameters, fed by an even larger metrology data set based on CD-SEM measurements. Expansions of a model's application space, e.g. to cover additional feature types, are enabled by extending the training data set for model fitting. However, this approach has limitations, as the effort for gathering additional metrology data can become prohibitive, which is especially true in the case of 3D metrology.

However, as outlined above, 3D models are required to capture hotspots that are introduced through local resist height loss. An obvious extension into the third, vertical dimension is to build individual 2D models at different image depths, representing resist contours of a 3D profile at discrete resist heights. The application of any of the individual 2D models in downstream OPC/LRC tools is straightforward. However, the relevant image depths need to be determined in advance, due to the discrete nature of the methodology itself. The critical resist heights can be predetermined based on etch process results. In practice, a bottom model along with one or two models at critical heights is usually sufficient to detect sites where etch results become sensitive to the resist profile. The models are then calibrated directly at those critical resist heights [7].

One major challenge to support this compact model calibration approach is the preparation of the corresponding metrology data. Conventional, single-plane 2D models already require a significant amount of top-down CD-SEM data, based on a feature set large enough to represent the entire design space. However, only very rough estimates can be made about the actual resist profiles, which is not sufficient for a reliable 3D model calibration.

Several techniques are available to experimentally characterize the three-dimensional shape of a resist profile, such as atomic force microscopy (AFM) or CD-SEM cross-section measurements. Common to all these methods is that they are complex, elaborate, and costly, and therefore not suitable for high-volume metrology data collection.

Alternatively, a carefully calibrated rigorous simulator model can be used to generate virtual 3D resist profile data by outputting CD values at specific heights, for specific features. Due to the underlying physical modeling approach, significantly less experimental data is required for resist model calibration compared to compact model building [8]. A typical calibration data set consists of CD-SEM top-down measurements on a small set of 1D structures, covering critical CDs and pitches, through the process window. In addition, a few 3D reference data points, e.g. from AFM, cross-section measurements, or etch fingerprints, are used to tune the absolute resist height of the profiles in order to match experiment and simulation in all dimensions. This approach not only removes the potential risk of measurement inconsistency between 2D and 3D metrology results, but also opens the door to extensive data collection with minimal fab effort.

The CD data sets, either experimentally determined or virtually generated for a number of discrete heights, are then fed into compact model calibration at multiple imaging planes. The calibration can be independent for each height. It is often found that fitting a separate threshold for each resist height enables a better match between input data and compact model results. This is mainly due to the fact that vertical resist physics, such as z-diffusion and out-diffusion at the boundaries, is not included in the traditional compact modeling approach. Differences are compensated through a variable threshold. In addition, other resist model parameters may also be varied to compensate for the z-direction physical effects. As a result, the common physicality of the model is compromised, as over-fitting takes place.

To demonstrate these dependencies, rigorous simulations based on a calibrated resist model were used to generate reference CD data for over 500 gauges at 9 height positions in the resist film. The gauges represent a real fab process, covering both 1-dimensional and 2-dimensional layout patterns. The process settings between the compact model (ProGen) and the rigorous model (S-Litho) were matched exactly. FIGURE 3a shows the results of a compact model calibration in which the threshold and common resist model parameters were kept constant for all sampling heights. The example profile (left image) shows a clear mismatch between the two modeling approaches, which results in an overall matching error with a root-mean-square (RMS) value of 2.9nm for the entire data set (right image).
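For clarity, and purely as an illustration of the metric rather than of the actual tools, the RMS matching error over a set of gauges and sampling heights can be computed as in the sketch below; the arrays are hypothetical placeholders.

```python
# Minimal sketch: RMS matching error between compact-model and rigorous CDs,
# evaluated over all gauges and sampling heights. The arrays here are
# hypothetical placeholders with shape (num_gauges, num_heights), values in nm.
import numpy as np

cd_compact = np.random.normal(50.0, 3.0, size=(500, 9))   # placeholder data [nm]
cd_rigorous = np.random.normal(50.0, 3.0, size=(500, 9))  # placeholder data [nm]

rms_error = np.sqrt(np.mean((cd_compact - cd_rigorous) ** 2))
print(f"RMS matching error: {rms_error:.1f} nm")
```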

FIGURE 3. Matching 3D resist compact model profiles to rigorous reference data.

These limitations have been overcome by adopting more physical modeling approaches, as used in rigorous simulators, while keeping the model form compact for full-chip applications. To that end, the bulk image is calculated using one set of retained Hopkins kernels, so that the optical intensity can be assessed at any image depth without compromising accuracy. Based on this accurate bulk image, the model has been extended to capture effects present in chemically amplified resists. For instance, acid generation, acid-base neutralization, and lateral as well as vertical diffusion are taken into account. Specific boundary conditions at the resist interfaces are used to account for surface effects. The model is formulated in a continuous form, so that a model slice at any image depth is readily available for use after calibration. While the calibration data is collected at discrete image planes, all planes are calibrated simultaneously using one set of resist parameters to guarantee physical commonality among them. Moreover, the calibration is done carefully in steps to ensure that the optical part accounts for optical effects and the resist model accounts for resist effects.
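As a rough illustration of the kind of z-direction physics involved (this is emphatically not the ProGen model, and all parameters are hypothetical), a vertical acid-diffusion step with out-diffusion at the resist interfaces could be sketched as follows:

```python
# Minimal sketch: explicit finite-difference step of vertical acid diffusion
# in a resist film, with out-diffusion (loss) at the top and bottom interfaces.
# All values are hypothetical and chosen only to keep the scheme stable.
import numpy as np

nz = 50                      # sample points through a 100 nm resist film
dz = 100.0 / nz              # grid spacing [nm]
D = 1.0                      # assumed acid diffusivity [nm^2 per time step]
k_out = 0.2                  # assumed out-diffusion rate at the interfaces

acid = np.ones(nz)           # normalized acid concentration after exposure

def diffuse_step(a: np.ndarray) -> np.ndarray:
    """One explicit diffusion step with lossy boundaries at top and bottom."""
    new = a.copy()
    new[1:-1] += D / dz**2 * (a[:-2] - 2.0 * a[1:-1] + a[2:])
    new[0]  += D / dz**2 * (a[1] - a[0])   - k_out * a[0]    # resist top
    new[-1] += D / dz**2 * (a[-2] - a[-1]) - k_out * a[-1]   # resist bottom
    return new

for _ in range(20):
    acid = diffuse_step(acid)
print(acid.round(3))          # concentration depleted near both interfaces
```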

The corresponding results are shown in FIGURE 3b. The compact modeling approach now takes vertical diffusion effects into account, including out-diffusion at the resist top and bottom, which ensures an excellent match for individual profiles (left image) as well as for the entire data set, resulting in an RMS value of 0.5nm.

Compact resist model portability

The integration of physical effects into compact modeling does not only enable the extension of resist simulation into the third (vertical) dimension, as described in the previous section. Characteristics such as “portability” or “separability,” usually assigned to rigorous models only, now become available within compact modeling as well. Rather than lumping optical and resist effects into a single set of model fitting parameters, the optical system is characterized individually and resist effects are modeled separately, so that they are kept apart from the optical contributions to the modeling result. The cleaner the separation, the more accurately the model captures the resist response to slightly modified optical conditions, i.e. conditions different from those present during calibration.

Typical simple changes to the optical setup are variations of focus and exposure dose. FIGURE 4 shows the 3D profile results for two representative features: one with a nominal CD of 60 nm (Figure 4a) and a wide line with a nominal CD of 200 nm (Figure 4b). The calibration condition corresponds to the center images of Figure 4, with profiles sampled at various heights. To test the compact model prediction, we applied a negative focus offset (Figure 4, left images) and a positive focus offset (right images), and compared the compact model results to profiles determined by rigorous simulation, which served as a reference. The profile changes through focus are very well captured by the compact model, especially the resist top loss at positive defocus (Figure 4, right images). These results are a first demonstration of the predictive power that comes with rigorously tuned compact models. In similar experiments, we have also successfully shown that this modeling concept can be used to investigate unintended printing of sub-resolution assist features by analyzing the 3D resist response [9], and to predict the response to source variations [10].

FIGURE 4. Rigorously tuned 3D resist compact models can predict the impact of process variation on profiles without additional data fitting.

3D resist model based proximity correction

An accurate and predictive 3D resist compact model can be deployed in mask synthesis verification, or lithographic rule check (LRC), to detect weaknesses in resist profiles. For severe hot spots, simple OPC retargeting is not sufficient to mitigate the issues caused by degraded resist profiles. In such a case, the application of rigorously tuned 3D compact models within optical proximity correction (OPC) offers an efficient approach to automatically repair hotspots within the mask synthesis flow. ProGen models exhibit the unique property of being consistently applicable in combination with different mask correction approaches, for instance conventional OPC as well as inverse lithography technology (ILT).

FIGURE 5a shows such a weak spot on an ILT mask where the correction is based on a 2D resist compact model, i.e. just the contours representing the bottom of the resist profile (black contour). However, the rigorous 3D simulation results reveal severe resist pinching at the top of the resist bulk, as displayed in Figure 5b. Looking at the bottom contour alone, such a hotspot would not have been detected. The red contour in Figure 5a represents the corresponding 3D compact model result extracted at the resist top, confirming the rigorous simulation result. Consequently, in order to achieve a more robust mask solution, we now take information from the entire resist profile into the ILT cost function to compute the corresponding correction. The results are shown in Figure 5c, including the bottom resist contour (black) and top resist contour (red) for the modified mask. Although the resist profile sidewall at the location of the weak spot still shows some tapering, the situation has improved significantly over the 2D model based correction. This is confirmed by the rigorous simulation results in Figure 5d, which no longer show any indication of resist pinching.

FIGURE 5. Successful OPC correction of an ILT mask, based on 3D resist compact model input.

The above OPC results, obtained by ILT using 3D resist models, again show that resist profile weaknesses can be corrected in a mask synthesis process with the help of a single predictive, accurate 3D resist compact model. As a result, wafer yields can be greatly improved.

Summary and outlook

In this work, we have outlined the concept of using a rigorous simulation approach to tune and improve compact modeling capabilities. Characteristics such as “productivity,” “portability” or “separability,” usually known only within the context of physical models, can be transferred to compact models and therefore made available for full-chip mask synthesis applications. We have successfully demonstrated this approach by establishing rigorously tuned 3D resist compact models. Those models combine the performance benefit of compact models, required for full-chip mask synthesis applications, with the 3D modeling capabilities and predictivity of rigorous models. We have demonstrated that the rigorously tuned resist model can be carried over to a different lithography process setup, e.g. a different illumination source, without suffering any accuracy degradation. Those models can be deployed in downstream mask synthesis applications such as optical proximity correction or verification without further modifications. As an example, we have performed a 3D resist model assisted mask correction, using ILT, to mitigate potential post-etch hotspots.

The concept of “rigorously tuned compact models” can easily be extended to address other simulation challenges, even beyond the litho process, as shown in FIGURE 6. In fact, it has already been used to improve mask topography simulation capabilities in compact models, and to extend resist modeling properties to capture effects characteristic of negative tone development. We are currently working on utilizing TCAD physical etch simulation to tune etch compact models, which will take simulated 3D resist profiles as input. A combination of TCAD etch tools and rigorous litho simulation can be used to generate compact models which take the underlying wafer topography into account.

FIGURE 6. Extending the concept of “rigorous tuning” to process simulation beyond traditional lithography.

References
1. Wen-Shiang Liao; A high aspect ratio Si-fin FinFET fabricated with 193nm scanner photolithography and thermal oxide hard mask etching techniques; Proc. SPIE 6156, Design and Process Integration for Microelectronic Manufacturing IV, 615612 (March 14, 2006).
2. Aravind Narayana Samy; Role of 3D photo-resist simulation for advanced technology nodes; Proc. SPIE 8683, Optical Microlithography XXVI, 86831E (April 12, 2013).
3. Mohamed Talbi; Three-dimensional physical photoresist model calibration and profile-based pattern verification; Proc. SPIE 7640, Optical Microlithography XXIII, 76401D (March 11, 2010).
4. Chandra Sarma; 3D physical modeling for patterning process development; Proc. SPIE 7641, Design for Manufacturability through Design-Process Integration IV, 76410B (March 11, 2010).
5. Seongho Moon; Fine calibration of physical resist models: the importance of Jones pupil, laser bandwidth, mask error and CD metrology for accurate modeling at advanced lithographic nodes; Proc. SPIE 7973, Optical Microlithography XXIV, 79730X (March 17, 2011).
6. Chandra Sarma; 3D lithography modeling for ground rule development; Proc. SPIE 7973, Optical Microlithography XXIV, 797315 (March 17, 2011).
7. Yongfa Fan; 3D resist profile modeling for OPC applications; Proc. SPIE 8683, Optical Microlithography XXVI, 868318 (April 12, 2013), doi: 10.1117/12.2011852.
8. Ulrich Klostermann; Calibration of physical resist models: methods, usability, and predictive power; J. Micro/Nanolith. MEMS MOEMS, 2009.
9. Cheng-En R. Wu; AF printability check with a full-chip 3D resist profile model; Proc. SPIE 8880, Photomask Technology 2013.
10. Yongfa Fan; Improving 3D resist profile compact modeling by exploiting 3D resist physical mechanisms; Proc. SPIE 9052, Optical Microlithography XXVII, 90520X (March 31, 2014).

BY TOM QUAN, Deputy Director, TSMC

The Prophets of Doom greet every new process node with a chorus of dire warnings about the end of scaling, catastrophic thermal effects, parasitics run amok and . . . you know the rest. The fact that they have been wrong for decades has not diminished their enthusiasm for criticism, and we should expect to hear from them again with the move to 10nm design.

Like any advanced technology transition, 10nm will be challenging, but we need it to happen. Design and process innovation march hand in hand to fuel the remarkable progress of the worldwide electronics industry, clearly demonstrated by the evolution of mobile phones since their introduction (FIGURE 1).

FIGURE 1. The evolution of mobile phones since their introduction.

Each generation gets harder. There are two different sets of challenges that come with a new process node: the process technology issues and the ecosystem issues.

Process technology challenges include:

  • Lithography: continue to scale with 193nm immersion
  • Device: continue to deliver 25-30% speed gain at the same or reduced power
  • Interconnect: address escalating parasitics
  • Production: ramp volume in time to meet end-customer demand
  • Integration of multiple technologies for future systems

Ecosystem challenges include:

  • Quality: optimize design trade-off to best utilize technology
  • Complexity: tackle rising technology and design complexity
  • Schedule: shortened development runway to meet product market window

Adding to these challenges at 10nm is that things get a whole lot more expensive, threatening to upset the traditional benefits of Moore’s Law. We can overcome the technical hurdles but at what cost? At 10nm and below from a process point of view, we can provide PPA improvements but development costs will be high so we need to find the best solutions. Every penny will count at 7nm and 10nm.

FIGURE 2. A new design ecosystem collaboration model is needed due to increasing complexity and shrinking development runways.

Design used to be fairly straightforward for a given technology. The best local optimum was also the best overall optimum: the shortest wire length was best; the best gate density equated to the best area scaling; designing on the best technology resulted in the best cost. But these rules no longer apply. For example, sub-10nm issues test conventional wisdom, since globalized effects can no longer be resolved by localized approaches. Everything has to be co-optimized; keeping PPA scaling at 10nm and beyond requires tighter integration between process, design, EDA and IP. Increasing complexity and shrinking development runways call for a new design ecosystem collaboration model (FIGURE 2).

Our research and pathfinding teams have been working on disruptive new transistor architectures and materials beyond HKMG and FinFET to enable further energy-efficient CMOS scaling. In the future, gate-all-around or narrow-wire transistors could be the ultimate device structure. High-mobility Ge and III-V channel materials are promising for operation at 0.5V and below.

Scaling in the sub-10nm era is more challenging and costly than ever, presenting real opportunities for out-of-box thinking and approaches within the design ecosystem. There is also great promise in wafer-level integration of multiple technologies, paving the way for future systems beyond SoC.

A strong, comprehensive and collaborative ecosystem is the best way to unleash our collective power to turn the designer’s vision into reality.

Ongoing growth in cellular and Wi-Fi applications will continue to drive GaAs device revenues higher. The recently released Strategy Analytics Advanced Semiconductor Applications (ASA) Forecast and Outlook report, “GaAs Industry Forecast: 2013-2018”, and the accompanying spreadsheet model forecast further growth in the GaAs device market in 2014, before competitive technologies and trends act to reduce the growth rates.

The wireless communications segment continues to be the largest user of GaAs devices. Strong demand in this segment helped propel GaAs device revenues up by 11 percent in 2013. GaAs MMIC devices, supplied by device OEMs like Skyworks, RFMD/TriQuint, Avago, ADI/Hittite, M/A-COM Technology Solutions, ANADIGICS, etc. make up nearly 98 percent of all GaAs device revenues. Competitive technologies like silicon and GaN will continue to capture market share from GaAs. This will put the brakes on the growth rate of GaAs device revenue and contribute to an expected decline by 2018.

Strategy Analytics ASA GaAs Forecast

“The GaAs device market has proven incredibly resilient in the face of threats, and manufacturers have been rewarded with nearly 10 years of uninterrupted revenue growth,” Eric Higham, Service Director, Advanced Semiconductor Applications commented. “However, increasing market share for CMOS PAs, GaN and multi-mode and multi-band GaAs PAs will signal a new challenge to revenue growth in the future.”

“The call to action has gone out to all the participants in the GaAs supply chain to provide innovation that will enable GaAs devices to fend off these challenges and continue the trend of revenue growth,” added Asif Anwar, Director in the Strategic Technologies Practice.

Extensive market models detailing the more than 30 market segments that utilize GaAs devices back the forecast.  The ASA service specifically focuses on the different semiconductor technologies like GaAs, GaN, InP, SiC and silicon that compete for end applications in the RF, optical and power electronics segments, resulting in the most robust market data in the industry.

Intel Corporation today announced two new technologies for Intel Custom Foundry customers that need cost-effective advanced packaging and test technologies.

Embedded Multi-die Interconnect Bridge (EMIB), available to 14nm foundry customers, is a breakthrough that enables a lower cost and simpler 2.5D packaging approach for very high density interconnects between heterogeneous dies on a single package. Instead of an expensive silicon interposer with TSV (through silicon via), a small silicon bridge chip is embedded in the package, enabling very high density die-to-die connections only where needed. Standard flip-chip assembly is used for robust power delivery and to connect high-speed signals directly from chip to the package substrate. EMIB eliminates the need for TSVs and specialized interposer silicon that add complexity and cost.

“The EMIB technology enables new on-package functionality that may have been too costly to pursue with previous solutions,” said Babak Sabi, Intel vice president and director, Assembly and Test Technology Development.

Intel also announced the availability of its revolutionary High Density Modular Test (HDMT) platform. HDMT, a combination of hardware and software modules, is Intel’s test technology platform that targets a range of products in diverse markets including server, client, system on chip, and Internet of Things. Until now, this capability was only available internally for Intel products. Today’s announcement makes HDMT available to customers of Intel Custom Foundry.

“We developed the HDMT platform to enable rapid test development and unit-level process control. This proven capability significantly reduces costs compared to traditional test platforms. HDMT reduces time to market and improves productivity as it uses a common platform from low-volume product debug up to high-volume production,” said Sabi.

EMIB is available to foundry customers for product sampling in 2015 and HDMT is available immediately.

STATS ChipPAC Ltd., a provider of advanced semiconductor packaging and test services, announced today that it has shipped over 100 million semiconductor packages with the company’s fcCuBE technology, advanced flip chip packaging with fine pitch copper (Cu) column bumps, Bond-on-Lead (BOL) interconnection and enhanced assembly processes.

fcCuBE technology is well established in the mobile market with the most significant production volume to date in small chip scale packages where the performance, size and cost benefits successfully address customer requirements in smartphones, tablets and wearable devices. The compelling performance and cost advantages of fcCuBE are also accelerating the diversification of this advanced technology into large die packages for consumer and networking applications where very high performance, reliability and processing speeds are imperative.

“The exceptional success of fcCuBE in the mobile market over the last year is a reflection of the complex performance and form factor requirements that our customers face and the clear advantages of this advanced technology. Demand for greater functionality and significantly higher processing speeds in consumer and networking devices is also driving flip chip packaging technology for ICs containing ultra low K dielectrics, very large package sizes, very fine bump pitches and lead-free solder,” said Dr. Han Byung Joon, Executive Vice President and Chief Technology Officer, STATS ChipPAC. “fcCuBE has proven to be a scalable technology that cost effectively addresses the technical requirements for high performance devices.”

In consumer applications such as set top boxes (STB) and digital television (DTV) ICs, higher functionality, faster data rates and increased bandwidth are required for enhanced user interfaces, rich graphics and outstanding audio quality. Wire bonding technology, a popular packaging choice in the past, is often unable to successfully address the increased thermal and electrical performance requirements for next generation consumer applications and, as a result, semiconductor companies are turning to high performance flip chip interconnect to differentiate their products. The BOL interconnection and very fine pitch Cu bumps in fcCuBE technology deliver exceptionally high I/O density and bandwidth with excellent electromigration (EM) performance for high current carrying applications such as STB and DTV ICs at a cost competitive price point for customers.

The functional and performance requirements for networking devices continue to evolve as well, driving demand for larger and thinner packages supporting very high current densities and bandwidth requirements. These high performance devices also require a steady and consistent supply of power which becomes challenging as device functionality increases. In addition, there are yield and reliability concerns that arise from the larger package sizes and very fine pitch interconnection that is required to produce higher I/O densities. fcCuBE technology significantly reduces the substrate layer count and complexity, achieving a thinner, lower cost package with high power integrity, superior control over thermal performance and higher resistance to EM over standard flip chip packages.

“Over the course of the last year, rapidly increasing density, performance and bandwidth challenges have become a driving force for customers who are looking for a powerful, cost effective flip chip technology to support their next generation mobile, consumer and networking applications,” said Chong Khin Mien, Senior Vice President of Product and Technology Marketing, STATS ChipPAC. “The growth in our fcCuBE production volume is a clear vote of customer confidence in our ability to deliver an advanced packaging solution that best meets the cost and performance targets for their specific product requirements.”

Semiconductor Manufacturing International Corporation (SMIC), the largest and most advanced pure-play foundry in China, and Jiangsu Changjiang Electronics Technology Co., Ltd. (JCET), the largest packaging service provider in China, jointly announced today the formation of a joint venture for 12-inch bumping and related testing, following the previously signed joint venture agreement. The joint venture will be established in the Jiangyin National High-Tech Industrial Development Zone (JOIND) in Jiangsu Province, China.

By setting up in Jiangyin National High-Tech Industrial Development Zone, the joint venture can benefit from Jiangyin’s unique location and mature industrial environment to quickly set up the 12-inch wafer bumping and CP testing production line (Middle-End-Of-Line). Meanwhile, the joint venture can also utilize JCET’s nearby advanced back-end packaging production line, which includes Flip-Chip to support advanced Back-End-Of-Line production for 40/45nm, 28nm and below. Together with SMIC’s 12-inch front-end advanced chip production line in development, this will be China’s first-ever domestic 12-inch advanced IC manufacturing supply chain. This supply chain will shorten the overall manufacturing cycle time. More importantly, its close proximity to China’s consumer electronic industry and the world’s largest end-market, will allow our customers to respond with a shorter time-to-market window, and therefore better serve the fast changing consumer electronic market.

Dr. Tzu-Yin Chiu, SMIC’s Chief Executive Officer and Executive Director commented, “The Yangtze River Delta is regionally the strongest, largest, and most developed ecosystem in China’s IC industry. Jiangyin is located in the center of Yangtze River Delta’s ‘Golden Triangle’ comprised of Suzhou, Wuxi and Changzhou, and is only 180km from Shanghai. Furthermore, Jiangyin has good transportation infrastructure and is a hub of human talents. With our strategic partner JCET located in Jiangyin, our joint venture will rely on JCET’s existing manufacturing base and established facilities, thus, the mid and back-end lines will be constructed nearby to increase its dominance in the area, shorten its lead-time, and provide a one-stop service for customers. The initiation and implementation of the project will benefit SMIC’s ramp up of 28nm mass production and will help increase the capability of China’s semiconductor industry.”

Mr. Xinchao Wang, Chairman of JCET stated, “SMIC and JCET’s joint venture in Jiangyin will combine our companies’ strengths and enhance our long-term relationship; furthermore, the joint venture will focus on upgrading the domestic 3D IC industry chain to world-class standards.”

Jian Shen, Mayor of Jiangyin and Director of Jiangyin High-Tech Zone said, “SMIC and JCET are the most prominent companies in China’s semiconductor industry. The joint venture will help Jiangyin to become an important component of China’s most advanced semiconductor ecosystem. The Jiangyin municipal government gives its full support to this project in order to establish Jiangyin as one of the leading integrated circuit manufacturing bases, and to accelerate the growth and development of China’s semiconductor industry.”

New approaches to start-ups can unlock mega-trend opportunities.

BY MIKE NOONEN, Silicon Catalyst, San Jose, CA; SCOTT JONES and NORD SAMUELSON, AlixPartners, San Francisco, CA

The semiconductor industry returned to growth and reached record revenues in 2013, breaking $300 billion for the first time after the industry had contracted in 2011 and 2012 (FIGURE 1).

FIGURE 1. Worldwide semiconductor revenue. Source: World Semiconductor Trade Statistics, February 2014.

However, even with that return to growth, underlying trends in the semiconductor industry are disturbing. The semiconductor cycle continues its gyrations, but overall growth is slowing. Despite 5% year-on-year revenue growth in 2013 (the highest since 2010), the expectation is that semiconductor growth will likely continue at a rate below its long-term trend of 8 to 10% for the next three to five years (FIGURE 2). A 2014 AlixPartners publication, Cashing In with Chips, showed that semiconductor industry growth had slowed to roughly half of its long-term average since the 2010 recovery, with no expectation that it will return to historical growth until at least 2017. Other studies have also shown that semiconductor growth has slowed not only relative to its previous performance but also versus growth in other industries. A study conducted by New York University's Stern School of Business [1] found that the semiconductor industry's revenue growth lagged the average revenue growth of all industries and ranked 60th out of 94 industries surveyed. Surprisingly, the net income growth of semiconductor companies lagged even further behind, ranking 84th out of the 94 industries surveyed, and had actually been negative during the previous five years.

FIGURE 2. Semiconductor revenue growth. Sources: Semiconductor Industry Association and AlixPartners research.

In another study released by AlixPartners that looked at a broader picture of the semiconductor value chain, including areas such as equipment suppliers and packaging and test companies, the research showed that outside of the top 5 companies, the remainder of the 186 companies surveyed had declining earnings before interest, taxes, depreciation, and amortization (FIGURE 3).

FIGURE 3. Spotlight on the top five (fiscal year 2012). Source: AlixPartners Research.

As revenue growth slows, costs increase at a rapid rate

As semiconductor technology advances, the cost of developing a system on chip (SoC) has risen dramatically for leading-edge process technologies. Semico Research has estimated that the total cost of an SoC development (design, intellectual property (IP) procurement, software, and testing) has tripled from 40/45 nanometers (nm) to 20 nm and could exceed $250 million for future 10-nm designs (FIGURE 4) [2]. This does not bode well for an economic progression of Moore’s law, and it means that very few applications will have the volume and pricing power to justify such an outsized investment. If we assume that a 28nm SoC can achieve a 20% market share and 50% gross margins, the end market would have to be worth over $1 billion to recoup R&D costs of $100 million. By 10 nm, end markets would have to exceed $2.5 billion to recoup projected development costs. With few end markets capable of supporting that high a level of development costs, the number of companies willing to invest in SoCs on the leading edge will likely decline significantly with each generation.
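
To make the arithmetic explicit, the short Python sketch below reproduces the break-even figures quoted above; the function name and the 20% share / 50% margin inputs are simply the assumptions stated in the text, not figures from the Semico study.

```python
def required_end_market(rd_cost, market_share=0.20, gross_margin=0.50):
    """Minimum end-market size needed to recoup SoC R&D spending.

    Gross profit available to the vendor is roughly
    end_market * market_share * gross_margin, so break-even requires
    end_market >= rd_cost / (market_share * gross_margin).
    """
    return rd_cost / (market_share * gross_margin)

# 28nm SoC: $100M R&D at 20% share, 50% margin -> $1B end market
print(required_end_market(100e6))   # 1000000000.0
# 10nm SoC: ~$250M projected R&D -> $2.5B end market
print(required_end_market(250e6))   # 2500000000.0
```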

FIGURE 4. Development Costs are Skyrocketing. Source: Semico Research Corp.

What happened to semiconductor start-ups?

The history of the semiconductor industry has been shaped by the semiconductor start-up. Going back to Fairchild, the start-up has been the driving force for growth and innovation. Start-ups helped shape the industry, and they are now some of the largest and most successful companies in the industry. But the environment that lasted from the 1960s until the early 2000s—and that made the success of those companies possible—has changed dramatically. The number of venture capital investments in new semiconductor start-ups in the United States has fallen dramatically, from 50 per year to the low single digits (FIGURE 5). And even though that drop is not as dramatic in other countries — such as China and Israel — it is indicative of an overall lack of investment in semiconductors.

FIGURE 5. Number of seed/series a deals. Source: Global Semiconductor Alliance.

The main reason for the decline is the attractiveness of other businesses for the same investment. In the fourth quarter of 2013, nearly 400 software start-ups received almost $3 billion of funding, whereas only 25 semiconductor start-ups received just $178 million (representing all stages) (FIGURE 6). It seems that (1) the lower cost of starting a software company, (2) the relatively short time frame to realize revenue, and (3) attractive initial-public-offering and acquisition markets possibly make the software start-up segment more interesting than semiconductors.

FIGURE 6. Funding of software and semiconductor start- ups. Source: PwC, US Investments by Industry/Q4 2013.

This situation is unfortunate and has conspired to create a vicious downward cycle (FIGURE 7).

  • Lack of investment limits start-ups.
  • Lack of start-ups limits innovation.
  • Lack of innovation and fewer start-ups limit the number of potential acquisition targets for established companies.
  • Reduced potential acquisition targets in turn limit returns for companies and returns for those who would have invested in start-ups.
  • Limited returns make future investments less likely and continue the cycle of less innovation and lower investment [3].

FIGURE 7. A vicious cycle limits innovation.

Therefore, it is reasonable to conclude that the demise of semiconductor start-ups is a contributing cause to the lackluster results of the overall semiconductor industry. And that demise and those lackluster results are further exacerbated by the rise of activist shareholders who demand a more rapid return on their investment, which possibly reduces the potential for innovation in an industry that has lengthy development cycles.

What about other industries?

It is tempting to think that the semiconductor industry is alone in this predicament, but other industries face similar challenges and have figured out accretive paths forward. For example, biotechnology has some of the same issues:

  • An industry that grows by bringing innovation to market 
  • Similarly lengthy development cycles 
  • Potentially capital intensive at the research and production stages

In addition, the biotech industry faces a challenge the semiconductor world does not — namely, the need for government regulatory approval before moving to production and then volume sales. Gaining that regulatory approval is a go-to-market hurdle that can add years and uncertainty to a product cycle.

However, in spite of its similarities to the semiconductor business and the added regulatory hurdles, the biotech industry enjoys a very healthy venture-funding and start-up environment. In fact, in the fourth quarter of 2013 in the United States, biotech was the second-largest business sector for venture funding in both dollars and total number of deals (FIGURE 8).

FIGURE 8. Funding of software and semiconductor start- ups. Source: PwC, US Investments by Industry/Q4 2013.

Why is this? What do biotech executives, entrepreneurs, and investors know that the semiconductor industry can take advantage of? There are several lessons to be learned.

  • Big biotech companies have made investing, cultivating, and acquiring start-ups key parts of their innovation and product development processes. 
  • Biotech and venture investors identify interesting problems to solve and then match the problems to skilled and passionate entrepreneurs to solve them.
  • Those entrepreneurs are motivated to create and develop solutions much faster and usually more frugally than if they were working inside a large company.
  • The entrepreneurs and investors are creating businesses to be acquired versus creating businesses that will rival major industry players.
  • The acquiring companies apply their manufacturing economies of scale and well-established sales and marketing strategies to bring the newly acquired solutions to market rapidly and profitably.

Meanwhile, several megatrends are driving the high-technology sector and the economy as a whole, and all of them are enabled by semiconductor innovation (FIGURE 9). Among the major trends:

  • Mobile computing will likely continue to merge functions and drive computing power.
  • Security concerns appear to be increasing at all levels: government, enterprise, and personal.
  • Cloud computing will possibly cause an upheaval in information technology.
  • Personalization through technology and logistics appears to be on the rise.
  • Energy efficiency is likely needed for sustainability and a lower cost of ownership.
  • Next generation wireless will likely be driven by insatiable coverage and bandwidth needs.
  • The Internet of things will likely lead to mobile processing at low power with ubiquitous radio frequency.
FIGURE 9. Global internet device installed base forecast. Sources: Gartner, IDC, Strategy Analytics, Machina Research, company filings, BII estimates.

The Internet of Things megatrend alone will result in a tremendous amount of new semiconductor innovation that in turn will likely lead to volume markets. Cisco Systems CEO John Chambers has predicted a $19-trillion market by 2020 resulting from Internet of Things applications [4].

Does it really cost $100 million to start a semiconductor company?

The prevailing conventional wisdom is that it takes $100 million to start a new semiconductor company, and in some cases that covers only the cost of a silicon development. It is true that several companies have recently spent eight- or nine-figure sums to develop their products, but those are very much the exceptions. The reality is that most semiconductor development is not at the bleeding edge, nor does it involve billion-transistor SoCs.

The majority of design starts in 2013 were at 0.13µm, and this year 65, 55, 45, and 40nm design starts are all growing (FIGURE 10). These technologies are becoming very affordable as they mature. And costs will likely continue to decrease as more capacity becomes available, as new companies enter the foundry business, and as former DRAM vendors in Taiwan and new fabs in China come online.

FIGURE 10: 0.13µm has the most design starts; 65nm and 45nm have yet to peak.

Another thing to consider is whether a new company would sell solutions that use existing technology or platforms (i.e., a chipless start-up) or whether a company would choose to originate IP that enables functionality for incorporation into another integrated circuit.

A chipless start-up would add value to an existing architecture or platform. It could be an algorithm or an application-specific solution on, say, a field-programmable gate array, a microcontroller unit or an application-specific standard product. It could also be a service based on an existing hardware platform.

A company developing innovative new functionality for inclusion into another SoC paves a path to getting to revenue quickly. Such IP solution providers would supply functionality for integration not only into a larger SoC but also into the emerging market for 2.5-D and 3-D applications.

In both situations (the chipless start-up and the IP provider), significant cost may be avoided by the use of existing technology or the absence of the need to build infrastructure or capabilities already provided by partners. In addition, those paths have much faster times to revenue as well as inherently lower burn rates, which are conducive to higher returns for investors.

Even for start-ups that intend to develop leading-edge multicore SoCs, a $100-million investment is not inevitable. Take, for example, Adapteva, an innovative start-up in Lexington, Massachusetts. Founded by Andreas Olofsson, Adapteva has developed a 64-core parallel processing solution in 28nm. The processor is the highest gigaflops-per-watt solution available today, beating solutions from much larger and more established companies. However, Adapteva has raised only about $5 million to date, a good portion of which was crowdsourced on Kickstarter. This shows that even a leading-edge multicore SoC can be developed cost-effectively through the use of multiproject wafers and other frugal methods.

Several conclusions can be drawn at this point.

  • Even though the semiconductor industry is growing again, the underlying trends for profitability and growth are not encouraging. 
  • Development costs are rising rapidly for leading-edge SoCs.
  • Historically, start-ups have been engines of growth and innovation for semiconductors.
  • In recent years, venture funding for new semiconductor companies has almost completely dried up. 
  • That lack of investment in semiconductor start-ups has contributed to a vicious downward cycle that will further erode the economics of semiconductor companies.
  • The biotechnology industry has many parallels to the semiconductor industry. Interestingly, biotechnology has a relatively thriving venture-funding and start-up environment, and we can apply that industry’s successful approach to semiconductors.
  • Despite the state of start-ups, it is now one of the most exciting times to be in semiconductors because most of the megatrends driving the economy are either enabled by or dependent on semiconductor innovation. 
  • It does not need to take $100 million to start the typical semiconductor company, because a great deal of innovation will use very affordable technologies and come from chipless start-ups or IP providers that have much lower burn rates and faster times to revenue.
  • Even leading-edge multicore SoCs can be developed frugally (for single-digit millions of dollars) and profitably. 

References

1. http://people.stern.nyu.edu/adamodar/New_Home_Page/datafile/histgr.html

2. Semico Research Corp., “SoC Silicon and Software Design Cost Analysis: Costs for Higher Complexity Continue to Rise,” SC102-13, May 2013.

3. AlixPartners and Silicon Catalyst analysis and experience.

4. Cisco Systems public statements.

Recent developments in wafer bonding technology have demonstrated the ability to achieve improved bond alignment accuracy. 

BY THOMAS UHRMANN, THORSTEN MATTHIAS, THOMAS WAGENLEITNER and PAUL LINDNER, EV Group, St. Florian am Inn, Austria.

Scaling and Moore’s law have been the economic drivers in the planar silicon arena for the last 30 years. During that period, major technology evolutions have been implemented in CMOS processing. The most recent of these evolutions have been extremely complex, including multiple-step lithographic patterning, new strain-enhancing materials and metal oxide gate dielectrics. Despite these great feats of engineering and materials science, the often-predicted “red brick wall” is once again fast approaching and requires evasive action. In fact, several semiconductor suppliers have already shown that the “economic” brick wall has arrived at the 22nm node, where scaling can no longer decrease the cost per transistor [1]. Solutions are getting more difficult to track down in an industry driven by increasing performance at lower cost.

3D-IC integration provides a path to continue to meet the performance/cost demands of next-generation devices while avoiding the need for further lithographic scaling, which requires both increasingly complex and costly lithography equipment as well as more patterning steps. 3D-IC integration, on the other hand, allows the industry to increase chip performance while remaining at more relaxed gate lengths with less process complexity, without necessarily adding cost [1].

While the outlook on 3D-IC integration was initially misty, several paths to integration have since been identified, giving an unobscured view to the future in the third dimension [2]. The current state of 3D-IC integration is analogous to crossing the Alps. There are different options to get over the mountain range: by smart use of the valleys, by more dangerous direct ascent and descent, or by the brute force of tunneling through. In the end, the most economic routes are combinations of all these factors. In 3D-ICs we see a similar process occurring now. Some 3D devices are established in the middle of the fabrication process, referred to as mid-end-of-line (MEOL), while some are established using chip stacking at the back-end-of-line (BEOL). In the future, some 3D stacking will be pulled upstream into the front-end-of-line (FEOL). Which integration scheme will be adopted by a manufacturer depends mainly on the target device, market size and compatibility of processes. The most cost-effective approach to 3D-IC integration should be a combination of all three integration schemes. That said, for many applications 3D-IC integration in FEOL processing offers further potential to pave the way for cost reduction, performance increase and higher power efficiency. Front-end processing is still seen as a purely planar-based process, where the power/performance of the device comes from the silicon. However, many disruptive processes and materials, such as SiGe and other epitaxial layers, have already been implemented to enable device improvements. As a result, the boundary between planar and 3D stacking has already softened, paving the way for heterogeneous integration (e.g., memory on memory, memory on logic, etc.) to become prevalent going forward [3].

FIGURE 1. Comparison of different 3D front-end-of-line integration schemes.

FIGURE 1 provides an overview of different 3D integration process schemes at FEOL. The first integration scheme being considered is layer-by-layer epitaxial growth, which has been a standard process in the semiconductor industry for the last 20 years. However, current epitaxy temperatures, which range from roughly 600°C to 1000°C, make epitaxy unviable for 3D integration today, since metal diffusion and broadening of the dopant distributions in the functional substrate wafer caused by these extreme temperatures would destroy the underlying IC layer. A second integration method is hybrid bonding, whereby a dual damascene copper and silicon oxide hybrid interface serves as both the full-area bonding mechanism and the electrical connection. A third route for 3D integration is the transfer of a thin processed semiconductor layer (ranging from tens to a few hundred nanometers in thickness) using a full-area dielectric bond. In contrast to hybrid bonding, the electrical connection is introduced by a via-last process between early interconnect metal levels on the bottom wafer and the second transferred transistor layer.

Both hybrid bonding and full-area dielectric bonding can be achieved through aligned wafer-to-wafer fusion bonding. However, high interconnect density along with small routing dimensions sets a high bar for the bond alignment precision required for fusion bonding. Fusion bonding is a two-step process consisting of 1) room-temperature pre-bonding and 2) a high-temperature annealing step. This essentially relates to the chemical bonds at the interface: while pre-bonding is based on hydrogen bridges, thermal annealing facilitates the formation of covalent bonds.

FIGURE 2. Calculated surface overlap of metal TSVs for hybrid bonding as a function of wafer-to-wafer alignment accuracy. Comparison of ITRS-roadmap-relevant TSV pitches and diameters reveals that an alignment accuracy of better than 200nm (3σ) is needed to achieve 60% or more TSV overlap for hybrid bonding.

An important benefit of fusion bonding is the widespread availability of bonding materials. Any exotic or novel material suffers a high barrier to adoption in the semiconductor industry, in part because it must comply with many different specifications and requires lengthy and extensive failure analysis to ensure no negative impacts are introduced across the entire chip process. With fusion bonding, however, all integration schemes rely on silicon oxide, silicon nitride or oxy-nitrides as dielectric bonding materials, and on copper or other interconnect metals, all of which are standard in state-of-the-art IC production lines.

Early on, successful fusion bonding required that the bonding material be transformed into a viscous flow, which required extremely high temperatures (ranging from 800°C to 1100°C depending on doping as well as deposition method) [4]. However, major research has been and continues to be invested in interface physics and morphology prior to bonding and their effect on the bonding result. Recent efforts in low-temperature plasma activation bonding have enabled a reduction of the thermal annealing temperature to about 200°C and opened up the possibility for further material combinations [5,6]. In fact, fusion bonding is already being implemented in high-volume production for certain applications, including image sensors and engineered substrates, such as silicon-on-insulator (SOI) wafers. In the case of wafer-to-wafer fusion bonding, the process can be readily introduced into the CMOS process flow, which uses low-k dielectrics and standard metals.

Alignment is key for fusion-bonded 3D-ICs

Minimizing the via dimension for via-last bonding, or the via and bonding pad dimensions for hybrid bonding, is a key requirement for bringing down the cost of 3D devices. Considering that the role of a TSV is essentially “only” signal connection yet it consumes valuable wafer real estate, further miniaturization is the logical consequence. Increasing integration density is a means of regaining valuable active device area. However, a direct consequence of smaller interconnect structures is the need for improved wafer-to-wafer alignment.

As indicated in the cross section of FIGURE 1 for via-last processing after semiconductor layer stacking, lithographic etch masks for the vias need to be aligned to the buried metal layers. Bonding alignment is also key here, since the resist layer must match with contacts on both the bottom and top device layers. In order to minimize loss of silicon real-estate and maintain small wiring exclusion zones, the bond alignment must be within tight specifications and adapt to metal, via and contact nodes, as shown in FIGURE 2.
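
The kind of overlap curve shown in FIGURE 2 can be approximated with simple geometry. The Python sketch below is an illustration rather than the authors’ actual calculation: it models each TSV pad as an ideal circle and treats the alignment error as a worst-case lateral offset between top and bottom pads, so the fractional overlap follows from the standard circle-circle intersection formula.

```python
import math

def tsv_overlap_fraction(diameter, misalignment):
    """Fractional area overlap of two equal circles (idealized TSV pads)
    whose centers are laterally offset by `misalignment` (same units)."""
    r, m = diameter / 2.0, misalignment
    if m >= 2.0 * r:                     # pads no longer overlap at all
        return 0.0
    # Standard lens area for two intersecting circles of equal radius r.
    lens = (2.0 * r**2 * math.acos(m / (2.0 * r))
            - (m / 2.0) * math.sqrt(4.0 * r**2 - m**2))
    return lens / (math.pi * r**2)

# Example: 0.8 um TSV pads with a 200nm worst-case lateral misalignment
print(round(tsv_overlap_fraction(0.8, 0.2), 2))   # ~0.69, i.e. >60% overlap
```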

The semiconductor world would be easy if devices operated at a constant voltage. However, a major concern with 3D-IC/through-silicon via (TSV) integration is the potential introduction of high-frequency response and parasitic effects. Again, bond alignment is of major importance here. Any via within the interconnection network will generate a certain electric field around it. Perfect alignment between individual interconnect layers results in a symmetric electric field, whereas misalignment can cause a local enhancement of the electric field. This in turn can result in an electric field imbalance. Further scaling of interconnects and pitch reduction between vias mean that inhomogeneous electric fields gain importance. Memory stacking and high-bandwidth interfaces with massively parallelized signal buses are particularly sensitive to this issue [2].

Optimizing alignment values

From the above discussion, it becomes clear that wafer-to-wafer alignment accuracy for fusion bonding has to be in line with interconnect scaling. The 2011 edition of the International Technology Roadmap for Semiconductors (ITRS) (at the time of writing, the Assembly and Packaging section of the 2013 ITRS roadmap had not yet been published) specified that for high-density TSV applications, via diameters will be in the range of 0.8-1.5 μm in 2015 [2], which requires an alignment accuracy of 500nm (3σ) in order to establish a good electrical connection. Previous studies have demonstrated that alternative wafer-to-wafer alignment approaches can achieve a post-bond alignment accuracy of better than 250nm for oxide-oxide fusion bonding [7]. The newly introduced SmartView®NT2 bond aligner has demonstrated the ability to achieve face-to-face alignment within 200nm (3σ), as shown in FIGURE 3.
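
As a minimal sketch of how a “200nm (3σ)” figure can be derived from consecutive alignment measurements such as those in FIGURE 3, the snippet below assumes the 3σ value is simply three times the sample standard deviation of the per-axis misalignment; the actual SmartView NT2 metrology procedure may differ, and the offsets listed are hypothetical.

```python
import numpy as np

def three_sigma(offsets_nm):
    """3-sigma alignment figure from measured misalignments (in nm)."""
    offsets = np.asarray(offsets_nm, dtype=float)
    return 3.0 * offsets.std(ddof=1)    # sample standard deviation

# Hypothetical x-axis misalignments from ten consecutive alignments (nm)
x_errors = [12, -45, 30, -8, 55, -22, 40, -60, 5, 18]
print(f"3-sigma (x): {three_sigma(x_errors):.0f} nm")   # ~110 nm
```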

FIGURE 3. SmartView NT2 alignment data for consecutive alignments (left), revealing an alignment accuracy of 200nm (3σ) from the histogram and corresponding normal distribution (right).

Several factors contribute to the global alignment of the wafers besides the in-plane measurement and placement of the wafers relative to each other. In fusion bonding, both wafers are aligned and a pre-bond is initiated. When bringing the device wafers together, wafer stress and/or bow can influence the formation of a bond wave. The bond wave describes the front where hydrogen bridge bonds are formed to pre-bond the wafers. Controlling the continuous formation of this wave, and the parameters that influence it, is key to achieving the tight alignment specifications noted above. In essence, optimizing a fusion bonding process means optimizing the force generated during the bonding.

For example, bowing and warping of processed wafers can be substantial after via etching and filling. TSVs in particular represent local strain centers on a wafer. Minimizing the via size and depth helps to reduce the strain, which heavily influences the shape and travel of the bond wave. At the same time, this bond wave also causes local strain while running through the bonding interface. Any wafer strain manifests in distortion of the wafer, which leads to an additional alignment shift. Process and tool optimization can minimize strain and significantly reduce local stress patterns. Typically, distortion values in production are well below 50nm. Indeed, further optimization of distortion values is a combination of many factors, including not only the bonding process and equipment, but also previous manufacturing steps and the pattern design. To a large extent, plasma activation also determines initial bonding energies, which impact the travel and formation dynamics of the bond wave and consequently wafer distortion.

Conclusion

In summary, aligned fusion wafer bonding is progressing rapidly to support front-end 3D-IC stacking. However, wafer bonding alignment accuracy must improve in order to meet the production requirements for both current and future design nodes. Controlling the local alignment of the wafers is only one aspect. Other important aspects include the initiation, manipulation and control of the bond wave. Recent developments in wafer bonding technology have demonstrated the ability to achieve bond alignment accuracy of 200nm (3σ) or less, which is needed to support the production of the next generation of 3D-ICs.

References

1. Z. Or-Bach, “Is the Cost Reduction Associated with Scaling Over?”, June 18, 2012, http://www.monolithic3d.com/2/post/2012/06/is-the-cost-reduction-associated-with-scaling-over.html

2. ITRS Roadmap, 2011 edition
3. M. Bohr, “The evolution of scaling from the homogeneous era to the heterogeneous era”, IEEE International Electron Devices Meeting, 2011

4. Q.-Y. Tong and U. Gösele, Semiconductor Wafer Bonding: Science and Technology (Wiley Interscience, New York, 1999)

5. T. Plach, et al., “Investigations on Bond Strength Development of Plasma Activated Direct Wafer Bonding with Annealing”, ECS Transactions, 50 (7) 277-285 (2012)

6. T. Plach, et al., “Mechanisms for room temperature direct wafer bonding”, J. Appl. Phys. 113, 094905 (2013)

7. G. Gaudin, et al., “Low temperature direct wafer to wafer bonding for 3D integration”, Proc. IEEE 3D-IC Conference, München, 2010

By Paula Doe, SEMI

Investors are still looking for differentiated technologies that solve high-value problems in semiconductor manufacturing, or that bring semiconductor technology to disruptive applications in other fields, particularly in the medical and environmental sectors, said the leading venture capitalists gathered at the Silicon Innovation Forum at SEMICON West 2014.

“As financial investors have moved to fund more ‘flapping bird’ apps instead of hardware, strategic investors have moved more to early-stage hardware opportunities,” noted Robert Maire, president, Semiconductor Advisors.

Tallwood Ventures general partner George Pavlov concurred that his financial investment firm was making fewer hardware investments because the technology is maturing and there are fewer opportunities, as well as lower margins and lower exit prices. “The app maker gets $1 a shot, which is more than the chip maker,” another VC put it more bluntly. That means that semiconductor investments need creative strategies to reduce risk, such as one recent deal that involved three strategic investors, including a customer and a supplier, all interested in helping the startup succeed. “It’s also important to have a capitalist at the table to assure that the company’s interest comes first,” Pavlov noted, which may involve making difficult moves like rebalancing leadership teams or reconstituting the board of directors. Financial investors can also come in early with an experienced team that can help a company find the right strategic partners it needs and make the introductions.

Strategic investors are getting more involved with early-stage companies to reduce risk even if it means collaborating with the competition. “More and more we are collaborating in investments, and we will see more in the future, in both big and small companies, depending on the size of the problem, when fundamental industry interests are aligned,” said Sean Doyle, director, Intel Capital. “We see greater pull from financial investors to have strategic investors involved from the beginning.” More handholding is needed even before investment. Kurt Petersen, a member of the Band of Angels, noted that three of the group’s members spent two years mentoring a company before it was ready even for angel investment. In fact, a MEMS company may need a strategic investor to even convince a foundry to take it on.

“More than half the investments we’ve made in the last year have been with other strategic investors,” concurred Eileen Tanghal, general manager of Applied Ventures, adding that investing with Intel and Samsung for customer input was especially useful.

Semiconductor startups to watch: The VCs’ current favorites

So where are these investors putting their money in the semiconductor sector these days? Primarily it’s either towards technologies with potential to solve next-generation semiconductor manufacturing challenges, or towards extending conventional semiconductor technology to new fields, from medicine to agriculture. The strategic investors from the venture arms of Samsung, Intel and Applied Materials all cited innovative materials solutions as the investments about which they were currently most excited, particularly Inpria for its high-resolution metal oxide photoresists, SBA Materials for its liquid-phase self-assembled porous ultra-low-k dielectrics, and Voltaix (recently acquired by Air Liquide) for its unique precursor gases for germanium and other chemistries. “We’re making more investments in equipment and materials because it is becoming incredibly difficult to advance the technology,” said Dong-Su Kim, senior director, Samsung Ventures Investment Corp.

The VCs saw a wider range of investment opportunities in applying silicon technology to other fields, especially if time-to-market and development costs can be reduced by re-using existing technology. Peter Moran, general partner, DCM, cited RayVio as a good example: it makes high-power UV LEDs specifically for sterilizing surfaces, with cost and performance that traditional wet or heat methods cannot match. Another of his favorites is battery maker Enovix, which leverages existing thin-film photovoltaics technology and innovates on the battery structure itself for a battery that could potentially store 3X the charge per area. The financial VC worked with strategic investor Cypress, which brought specific manufacturing and scaling expertise from its SunPower experience, while Intel brought its experience in identifying where, globally, was best to build the manufacturing plant. Moran also noted that DCM previously did not consider devices that sold for less than a dollar, but it is now looking at lower-cost devices as long as they are differentiated and high volume, such as ingestible sensors that track whether people have taken their pills.

“The most opportunity is in proliferating silicon technology into other fields, especially in the medical field,” concurred Tanghal, citing Applied Ventures’ investments in Oncoscope’s optical screening for pre-cancerous cells, which aims to significantly improve the accuracy of biopsies compared to the usual random sampling; Twist Bioscience’s platform for large-scale synthetic gene manufacturing; and MTPV Power Corp.’s chips that convert heat to electricity. Applied Ventures is also looking at ongoing opportunities for capturing more value from the inflection point of the emerging Internet of Things, such as supplying the materials for, or the service of, making implantable or ingestible coatings.

The MEMS field continues to come up with new kinds of electromechanical structures for new tasks. Petersen said he was particularly excited about Chirp Microsystems for its ultrasonic gesture recognition, Next Input with its force-sensitive touch screen technology, and Lumedyne Technologies for its completely new, high-accuracy inertial sensor approach.

VC panels choose Amorphyx and Aledia for best startup pitches

The panel of leading investors selected two companies offering disruptive materials/process technologies — and leveraging a collaborative infrastructure — for the best pitches from among 25 selected startups at the event. Aledia says its microwire LEDs grown on 8-inch silicon should cost 2x-3x less than conventional LEDs grown as thin films on sapphire. The ~1µm diameter pillars, with the active quantum well layers grown vertically in concentric layers, provide more light emitting surface area from less material in less time in the MOCVD reactor.  Their small area on the wafer likely helps ease the lattice and thermal mismatch issues compared to blanket GaN on silicon.  Co-founder, president and CEO Giorgio Anania said the company has figured out how to grow regular, high quality pillars through holes in a mask, though lumens/watt remains low and is not the current focus of improvement. Based on the CEA campus in Grenoble, the startup plans to grow only the pillar layer, then send the wafers out to a mainstream CMOS foundry for the rest of the processing.
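
A rough geometric estimate illustrates why vertical microwires can expose more emitting area per unit of wafer area than a planar film; the pillar height and pitch used in the Python sketch below are illustrative assumptions, not Aledia’s published dimensions.

```python
import math

def sidewall_gain(diameter_um, height_um, pitch_um):
    """Emitting sidewall area per unit wafer area for a square array of
    cylindrical microwires, relative to a planar (blanket) emitter."""
    sidewall = math.pi * diameter_um * height_um   # one pillar's sidewall area
    cell = pitch_um ** 2                           # wafer area occupied per pillar
    return sidewall / cell

# Illustrative values: 1 um diameter wires, 10 um tall, on a 3 um pitch
print(round(sidewall_gain(1.0, 10.0, 3.0), 1))     # ~3.5x the area of a planar film
```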

The other winner, Amorphyx, offers a fast-switching, low-cost backplane solution for displays, using a kind of tunneling effect through a near-perfect amorphous sapphire insulating layer in a metal-insulator-metal device. The company is working at ITRI in Taiwan with a production collaborative it put together with three Asian companies, aiming to start joint production in 2015. “This should save $100 of the cost of a $400 display,” claimed CEO and President John Brewer.

Among the other interesting startups pitching to the investors at the event was MEMS microphone startup Baker-Calling, with an innovative simplified design for an AlN piezoelectric MEMS microphone, using four separate triangular plates free to expand and contract so they are less sensitive to film stress than the usual capacitive membranes. CEO Matt Crowley reported the company has sampled prototypes to its strategic investor, and is now bringing up the process at a foundry.

Okeanos Technologies showed its microfluidics desalinization technology, which CEO Tony Frudakis reported uses half the energy to remove salt from water compared to the usual reverse osmosis, because the tiny volumes react better, using an electrochemically mediated process that strips off ions as they pass through the small channel.  However, each pass removes only about 10% of the salt, so multiple cells would be needed to remove all the salt from significant volumes of water.
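
The need for multiple cells can be quantified with a simple compounding estimate, sketched below; the seawater and drinking-water salinity figures are generic illustrative values, not numbers from Okeanos.

```python
import math

def passes_needed(start_ppm, target_ppm, removal_per_pass=0.10):
    """Number of passes to reach a target salinity, assuming each pass
    removes a fixed fraction of the salt remaining in the stream."""
    return math.ceil(math.log(target_ppm / start_ppm) /
                     math.log(1.0 - removal_per_pass))

# Typical seawater (~35,000 ppm) down to ~500 ppm drinking water
print(passes_needed(35_000, 500))   # 41 passes at 10% removal per pass
```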

Inpria leverages grant money for years to take university research toward commercialization

The venture arms of Applied Materials, Intel and Samsung have all recently invested in Inpria, and kept citing it as an example of semiconductor development they were excited about for its potential solution to the key problem of next-generation photoresist resolution. Replacing the long, tangled polymer molecules of traditional photoresist with smaller inorganic molecules enables cleaner edges and reduces collapse of 7nm and 10nm features. CEO Andrew Grenville reported that the line-width roughness with this resist is half that of conventional polymer products (0.7nm vs 1.5nm) on 10nm lines and spaces.

Grenville told the tale of the company’s earlier years of leveraging its capital as it developed the metal oxide cluster technology from Oregon State University, starting with NSF/SBIR funding, then a grant from Oregon’s ONAMI, then joint development funding with potential users. Inpria first developed the material using shared equipment of the ONAMI university network, then the SEMATECH microexposure tool at Lawrence Berkeley National Laboratory, and then in joint development programs at the imec consortium in projects with equipment suppliers and customers, for about five years before the technology was developed enough for angel investors and Applied Materials. This year strategic investors Intel and Samsung joined Applied in further funding, which then attracted more from the Oregon Angel Fund, with its deep semiconductor experience and connections. “We expect we will be interesting for a financial investor in a couple of years,” said Grenville. “It takes leveraging, leveraging, leveraging for capital-efficient development…though the proof will come in 2015 when we go into the fabs.”

The next Silicon Innovation Forum at SEMICON West will be held on July 14, 2015. In addition, SEMICON Europa 2014 (October 7-9) will offer an Innovation Village with a Silicon Innovation Forum.