Backside-illuminated image sensors: Optimizing manufacturing for a sensitivity payoff

November 11, 2011 — Backside-illuminated (BSI) CMOS image sensors capture light directly on the silicon light-sensitive layer. They offer higher sensitivity over a broader spectrum than mainstream frontside-illuminated (FSI) imagers. And in the field of high-end and specialty imagers, they have started to compete with the established charge-coupled device (CCD) technology.

BSIs, however, are more difficult to fabricate than FSIs. They require advanced wafer thinning, surface passivation techniques to maximize sensitivity, and careful substrate engineering to minimize crosstalk. Building on the possibilities of BSI technology, imager researchers are also exploring alternative architectures in which the pixel array and the readout electronics are produced as separate dies and stacked using high-density microbumps and through-silicon vias (TSVs).

Detecting and capturing light with image sensors: CCD versus CMOS

Silicon is an ideal material for image sensors, used in digital cameras and many other products. It absorbs the part of the electromagnetic spectrum that, through a lucky quirk of nature, matches the light that is visible to our eyes.
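
Why the match is so good follows directly from silicon's bandgap of roughly 1.12 eV, a textbook value rather than a figure from this article. The short Python sketch below works out the corresponding absorption cutoff wavelength; it is an illustration only.

    # Photons are absorbed only if their energy exceeds the bandgap.
    # For silicon (Eg ~ 1.12 eV) the cutoff wavelength lambda_c = h*c/Eg
    # is about 1100 nm, comfortably above the visible range of ~380-740 nm.

    h = 6.626e-34      # Planck constant, J*s
    c = 2.998e8        # speed of light, m/s
    eV = 1.602e-19     # one electron volt, in joules

    Eg_silicon = 1.12 * eV                 # silicon bandgap energy
    lambda_cutoff = h * c / Eg_silicon     # longest absorbable wavelength

    print(f"Silicon absorption cutoff: {lambda_cutoff * 1e9:.0f} nm")
    # -> about 1107 nm, so the whole visible spectrum (plus some near-infrared)
    #    generates electron-hole pairs in a silicon photodiode.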

The first commercial sensor chips were CCDs, appearing around 1985. By the early 1990s, the CMOS process was well established and CMOS imagers started to appear, first for low-end imaging applications and for low-resolution high-end applications. Since then, the market has split into two segments. For low-cost, high-volume imagers, CMOS imagers have clearly overtaken CCDs. In high-performance, low-volume applications, CMOS and CCD imagers share the market, mainly because CCD technology still allows for lower noise. In total, CMOS imagers held a 58% market share versus CCDs in 2010; this share is forecast to grow to 66% by 2015 [1].

When light strikes a CCD pixel, it generates a small electrical charge that is stored in the pixel. These pixel charges are then transported, one at a time, to the output stage, where they are converted to voltages; only then, on a separate chip, are those voltages digitized into bits. A CMOS imaging chip, on the other hand, is an active pixel sensor: each pixel has its own circuitry. CMOS image sensors are fabricated using standard CMOS production processes, so they require less specialized manufacturing facilities than CCDs. They also consume less energy, are faster, scale better, and allow on-chip image processing electronics to be integrated.
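
The two readout philosophies can be caricatured in a few lines of Python. The sketch below is a deliberately simplified, hypothetical model, not an actual sensor interface: the "CCD" shifts every charge packet sequentially through a single output stage, while the "CMOS" sensor reads each pixel through its own in-pixel circuitry, addressed by row and column. The 0.9 gain factor is an arbitrary assumption.

    # Toy model of CCD-style versus CMOS-style readout (illustrative only).

    def ccd_readout(pixel_charges):
        """CCD-style: every charge packet is shifted, one at a time,
        through the single output stage that converts charge to voltage."""
        output = []
        for charge in pixel_charges:        # sequential charge transfer
            voltage = charge * 0.9          # assumed charge-to-voltage gain
            output.append(voltage)          # would be digitized off-chip
        return output

    def cmos_readout(pixel_charges, rows, cols):
        """CMOS active-pixel style: each pixel has its own amplifier and is
        addressed by row/column, so conversion happens inside the pixel array."""
        frame = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):               # select a row
            for c in range(cols):           # read the addressed column
                frame[r][c] = pixel_charges[r * cols + c] * 0.9
        return frame

    charges = [0.1, 0.4, 0.7, 1.0]          # charges from a 2x2 pixel array
    print(ccd_readout(charges))
    print(cmos_readout(charges, rows=2, cols=2))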

The roadmap of the image sensor industry is mostly concerned with decreasing the price per pixel while increasing the number of pixels on a given chip area, that is, reducing the pixel pitch. Currently, high-volume sensor production capacity is moving from 200mm fabs to 300mm fabs, with minimum features reaching 65nm and resolutions pushing beyond 16 megapixels [1].

In addition, R&D centers such as imec are also working to improve image quality: not so much by making pixels smaller, but by optimizing pixel performance. That means capturing more photons (improving the quantum efficiency, QE), capturing them in the correct pixels (reducing or eliminating crosstalk), and capturing a larger part of the light spectrum. These solutions find their way into specialty imagers, e.g. for space applications (Figure 1).
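
Both figures of merit have simple definitions: QE is the fraction of incident photons that end up as collected photo-electrons, and crosstalk is the fraction of those electrons collected by the wrong (neighbouring) pixels. The Python sketch below shows how they would be computed from measured counts; the numbers are made up for illustration and are not data from this article.

    # Illustrative figures of merit for a single test pixel (hypothetical numbers).

    incident_photons = 10_000          # photons hitting the pixel during exposure
    electrons_in_pixel = 7_200         # photo-electrons collected in that pixel
    electrons_in_neighbours = 300      # photo-electrons collected by neighbours

    quantum_efficiency = electrons_in_pixel / incident_photons
    crosstalk = electrons_in_neighbours / (electrons_in_pixel + electrons_in_neighbours)

    print(f"QE        = {quantum_efficiency:.0%}")   # 72%
    print(f"Crosstalk = {crosstalk:.1%}")            # 4.0%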

Figure 1. 1 megapixel backside-illuminated hybrid imager consisting of a substrate with a passive photodiode array pixel-wise connected to a CMOS readout circuit using 22.5
