
About the Site


July 1, 2006

Small Times, a division of PennWell, is the leading source of business information and analysis about micro and nanotechnology. Small Times offers full news coverage through its business trade magazine, daily news Web site and weekly e-mail newsletter. Small Times also offers custom research services.

Small Times magazine details technological advances, applications and investment opportunities to help business leaders stay informed about the rapidly changing business of micro and nanotechnology from biotech to defense, telecom to transportation. Small Times spotlights key issues in the industry’s development, along with market intelligence, company profiles and more.

Launched in September 2001, Small Times is a controlled circulation print magazine.

Free subscriptions are available to qualified decision-makers internationally.

Small Times’ staff of editors and reporters brings daily news coverage of small tech from around the world, including:

  • Savvy, in-depth analysis of the latest business developments
  • Breaking news in biotech, consumer goods, defense, energy, environment, IT & telecom and transportation fields
  • Company profiles
  • Patent listings
  • Industry calendar of events
  • A small tech stock index

The site also includes:

  • An online Classified Advertising Section
  • Small Tech Census
  • Small Tech Business Directory

Small Times has run its share of stories that were, well, out of this world. We have tracked the use of MEMS in satellites and covered the development of nanomaterials for spacecraft. Heck, we’ve even written about the space elevator concept without cracking a grin – well, maybe a slight grin. But nanotech experiments in space – now there’s a first, at least for us.

The 43-year-old astronaut who conducted the experiments, Brazilian Air Force Lt. Col. Marcos Pontes, spent eight days aboard the International Space Station. He blasted off March 29 on a Russian-built Soyuz rocket along with commander Pavel Vinogradov of Russia and flight engineer Jeffrey Williams of the United States. Few details were released about the exact nature of Pontes’ experiments, except that nine nanotech-related experiments were on the docket.


Brazilian Space Agency astronaut Marcos Pontes uses a computer in the Destiny laboratory of the International Space Station soon after his arrival. Photo courtesy of NASA

The first Brazilian to go to space, Pontes reportedly brought along a Brazilian flag and a national soccer jersey. If his experiments are anywhere near as successful as the Brazilian soccer team, then expect some impressive results. (The team has consistently been among the best, if not the best, in the world.)

Pontes left his U.S. and Russian compatriots onboard the space station and came back to Earth with the station’s previous crew, returning on April 8.

Vacations are supposed to be about disconnecting from work, finding the downtime to recharge, and experiencing the outside world you ordinarily don’t have the chance to see.

In this day and age it can be difficult to get away from reminders of civilization. Fast food, paved roads and cell phone service seem ubiquitous. But a trip to the northern region of Argentina should have done the trick. No fast food, no paved roads, no cell phone service (well, truth be told, some service in towns with satellite dishes). Enough to disconnect from work?


It may look like another planet but it’s here on Earth. Salinas Grandes – the great salt beds – of northern Argentina sport a pattern reminiscent of the arrangement of atoms in fullerene structures.

Yes, until I laid my eyes on Salinas Grandes, an extraordinary natural salt flat a little over two miles up in the Andes. It’s remote, exotic, and beautiful. And one look at the pattern that ranges for miles across its surface – and its resemblance to the pattern of atoms in a fullerene structure – brought me right back to the workday world. Oh well.

The question of what exactly forms the geometry of the salt flats remains an open one, however. Odds are that at least one of our readers knows. Kudos to the reader – as well as a mention in Small Times’ weekly e-mail newsletter – who can accurately explain what forms this pattern in the salt. E-mail me at [email protected].

– David Forman

MEMS fab services


July 1, 2006

Just a few short years ago the MEMS fab business was decidedly in the doldrums. Today it is catching a tailwind. Product companies are either moving toward or are already in medium and high volume manufacturing – and the latest trend is to outsource. As a result, the fabs that made it through the maelstrom are picking up new business and expanding their offerings. Four leaders shared their perspectives on making MEMS with Small Times’ David Forman.

Q: Describe the market for MEMS fabrication services. Is it growing? Static? Declining?


Bruce Alton
Vice president, marketing & business development
Micralyne Inc.
Edmonton, Alberta

ALTON: We have experienced very significant growth recently – 40 percent to 50 percent revenue growth – and the pipeline of new opportunities looks very strong. I can’t speak for other foundries but my sense is that the industry overall is growing.

VCs are investing more and the failures associated with the optical bust are now a distant memory. Furthermore, the VCs are not investing in MEMS fabrication capacity. Therefore, most new start-ups that incorporate MEMS are going fabless.

HEATON: We’re seeing strong interest in our services and our business is on a fairly steep rise both from new and existing customers. The interest is from both small and large companies – primarily fabless companies or companies with fabs who require more capacity or a more complex materials set.

HERBIG: The MEMS foundry market is certainly growing, but more slowly than expected. From a technology standpoint, integration of CMOS and MEMS is hot.

Q: There are conflicting reports about what is driving new demand for MEMS fab services. We hear biotech, defense, telecom. What applications are driving your fab business?


Peter Gorniak
Business development director
Semefab Ltd.
Fife, Scotland

ALTON: It may be different for other foundries but Micralyne is experiencing very rapid growth related to products for optical telecommunications. More importantly, our customers’ customers are ordering more product so this isn’t an issue of building capacity for anticipated demand. End user demand is real.

Most of the growth we are seeing is from opportunities we would categorize as “medium volume and high value-add”.

While we are obviously pursuing the high volume opportunities, we’re very well set up to handle medium volume. That’s where a lot of growth is coming from.

GORNIAK: Semefab is a provider of MEMS foundry services and therefore is open to all companies seeking silicon processing for MEMS applications. The key application fields where Semefab is seeing growth are in microfluidics, RF MEMS (albeit still in R&D), biomedicine, accelerometers and thin film applications such as pressure sensors. The defense industry is slowly waking up to the possible applications of MEMS in security applications and is seen as a major contributor to business by 2010.

HEATON: IMT is diversified by design and we’re seeing strong interest in several areas. We’ve announced our work with Ion Optics, now part of ICx, in both the defense and industrial spaces for IR emitters and gas sensors. And we’ve announced that we are entering production for an optical telecom device for fiber-to-the-home with Xponent. We are also in high volume production for a switching application for telecom and shipping on the order of two million working MEMS switches each week for that customer. Other areas showing strong potential are IR imaging, RF and MM-wave, and biomedical applications.

HERBIG: We see significant activity in microphone, inertial sensor and bio applications in the telecom, automotive and medical markets.

Q: What are you doing to facilitate process development and transfer, especially for clients who have developed prototypes on equipment that differs from yours?


Monteith Heaton
Vice president, marketing & sales
Innovative Micro Technology
Santa Barbara, Calif.

ALTON: This is a very common issue as many of our customers will – before they come to us – create a process or design or fabricate a few prototypes in an open access or academic fab.

Our more mature customers, however, contact us first before designing their products in detail. They realize that if they design with our processes, their development and time-to-market costs can be reduced and production yields will be higher. This would seem to be common sense, but few companies actually do it.

GORNIAK: MEMS products rely on custom processing. Therefore every new MEMS inquiry demands a new process flow. In many cases the products require an investment in new equipment. This obviously has to be balanced against the potential value of the project. A fine line has to be trodden between accepting new business and earning a return on the investment in new equipment.

HEATON: Our customers range from those with a spec to those with a design, to those that have prototyped or even produced working devices. In all cases, we collaborate closely on both process transfer/development and design for manufacturability. Our skills and experience in these areas enable smooth startup or transfer to our facility of processes even where the tool sets do not match up. Tight collaboration is key and several of our customers have personnel permanently or periodically assigned at our facility. We believe in strong and open partnering as the solution.

HERBIG: You have raised an intriguing issue because the first law of MEMS is “one product/one process.” Companies need to offer a wide variety of process options. X-FAB is continually expanding its portfolio, and is investigating or implementing several new modules, including DRIE (deep reactive ion etching) for bulk micromachining, electroplating, and wafer-level packaging/encapsulation.

Q: What technologies are you using to lower cost and increase reliability?


Volker Herbig
Product marketing manager
X-FAB Semiconductor Foundries AG
Erfurt, Germany

ALTON: We are adding higher-capacity tools and focusing on lean manufacturing. These changes are having a significant impact on cost reduction once a process has been stabilized. Unfortunately, for some key processes, such as lengthy DRIE etches, few if any multi-wafer, high-volume production tools currently exist.

Packaging and test are two areas that have a huge impact on cost and may be part of the services offered by a MEMS foundry. Compared with four or five years ago, much more attention is being paid to these areas.

Finally, the possibility of setting up a fabrication facility in a lower cost environment is coming up more. We are looking at this and the opportunity to reduce labor costs is massive. Having said that, the issue of IP protection has to be carefully addressed. I think it is inevitable that we will see more MEMS production transferred to Asia.

GORNIAK: We cannot answer this just now.

HEATON: Our team and facility were built for production, including SPC/SQC and what we call product engineering. This latter function comprises a team of engineers whose specific role is to identify the key parameters that drive product yield and reliability and to create Pareto charts for them. These parameters are control-charted and tightly monitored with metrology and a variety of functional and electrical tests. This information is continuously applied to increase yields, decrease labor content and increase reliability – all of which drive down costs.

HERBIG: X-FAB aims to develop batch processing for MEMS, and looks for synergies between CMOS operation and MEMS operation, such as doing MEMS wafer batch processing instead of single-wafer processing. We use a KOH batch etch instead of a DRIE single-wafer etch whenever possible. X-FAB also reduces packaging costs by providing wafer-level packaging.

Q: How do you compete with the big foundries that have guaranteed lines of business from parent companies that support their infrastructure?

ALTON: We don’t see competing with these companies as a huge issue and, in fact, we see a trend for them to outsource more to companies like ours.

In terms of their foundry services, the customers we talk to say they have challenges in dealing with these companies. For example, these mainly captive fabs are not able to deviate much from their internal processes or might not have the flexibility to address issues that arise during the early, less defined stages. There’s also a concern that external customers will be treated as second class citizens compared to internal product lines. While we can’t ignore these companies, we believe we can compete with them very effectively.

GORNIAK: Semefab belongs to the Semelab group. However Semefab acts completely independently. Semelab is not involved in MEMS and thus we do not get the benefits of having a guaranteed line. Semefab is quite confident of being able to compete with the big foundries as there are many niche market products appearing for volume production. These all require custom processing and hence the major manufacturers will not get involved in the small initial volumes.

HEATON: Our business growth demonstrates that we are competing very effectively and offering exceptional value to our customers. We’re not so small ourselves with over 30,000 square feet of active fab workspace – and we believe we also offer a unique mix of capabilities not offered elsewhere, including strong collaboration from design to prototyping through volume production. Our product and customer diversity also gives us a broader and growing range of experience that can be applied to solve our customers’ problems. We also continue to see the large foundries with their own products having diminishing interest in their external customers as their internal business ramps up.

HERBIG: This is not an issue for X-FAB because we have exactly this type of support.

By David Forman, James R. Dukart and Elizabeth Gardner

From semiconductor lithography to the imaging of living cells, optical techniques are being challenged by alternative approaches. New technologies for stamping and writing nanoscale features are emerging for manufacturing. Researchers have long been using non-optical scanning techniques for peering into the nanoscale. But light, apparently, has a bright future.


Photolithographic masks like the one held here by a manufacturing technician won’t be displaced by non-optical technologies anytime soon. Photo courtesy of SEMATECH


Immersion lithography poised to process future chips

It was just a few years ago that some pretty exotic forms of chip making were in contention to be the semiconductor industry’s next best bet. But now, analysts say, a modification of conventional optical lithography is going to be sufficient for at least a few more generations of chips, pushing off the semiconductor industry’s day of reckoning.

That has broad implications for nanotech, especially for the purveyors of technologies like nanoimprint lithography that are looking to cement their future on the semiconductor industry’s product development roadmap. It also affects toolmakers that provide the machines currently used for processing chips. When will Moore’s Law have to be amended? Experts say not for at least another five to seven years.

“The whole thrust is to be able to get photolithography at smaller and smaller dimensions,” explained Fred Zieber, a long-time tracker of the semiconductor industry who is the founder and president of Pathfinder Research, a market research firm in San Jose, Calif. “The point is to get features smaller than the wavelength of light that you’re working with. There is a phased progression down to 45, 32 and 22 nanometers, if possible.” But, he said, “To get there they have to solve a whole lot of problems.”


Pictured at left is an array of 29.9 nm wide lines and equally sized spaces created by IBM scientists using immersion lithography. These lines are less than one-third the size of the 90 nm features at right (same magnification) now in mass production by the microchip industry. They are also smaller than the 32 nm size that industry consensus held was the limit for optical lithography techniques. Image courtesy of IBM

The comparatively exotic routes all have major roadblocks. For extreme ultraviolet lithography, in which the lenses ordinarily used to focus ultraviolet light are replaced by mirrors, some of the challenges include creating those perfect mirrors and operating the process in a vacuum. E-beam lithography uses an electron beam rather than photons for processing. Nanoimprint is a stamping, rather than lithographic, process. Engineers across the semiconductor industry are not familiar with these technologies at the level of depth with which they know optical lithography using 193 nm light – today’s cutting edge.

Those difficulties explain why immersion lithography is poised to be the next leading semiconductor process. It has fewer problems. “This (immersion) is the one the industry is committing to,” said Klaus Rinnen, a managing vice president at market research firm Gartner Inc. who tracks semiconductor manufacturing. “In the near future, 193 nanometer immersion will allow an extension of the current infrastructure.” And, he added, he expects it to dominate.

The proof is in the numbers. By Rinnen’s count, two immersion devices were shipped in 2004 and 12 in 2005, and he expects between 20 and 25 more to ship in 2006.

“The first four to five were for R&D,” he said. But now the manufacturers – ASML and Nikon – are shipping second generation systems. By 2009 Rinnen said he expects the industry to ship more than 100 immersion lithography systems. That number would exceed the 97 non-immersion lithography machines, today’s standard, that shipped in 2005.


CAD renderings show ASML’s Twinscan XT:1700i, left, and Nikon’s NSR-S609B. Both lithography systems use immersion technology to produce smaller features than the current manufacturing standards. Images courtesy of ASML and Nikon

Immersion lithography uses the same 193 nm wavelength light as non-immersion lithography. However, with immersion lithography, water or some other liquid is placed between the lens and the semiconductor; the liquid’s higher refractive index increases the effective numerical aperture of the optics, making it possible to print smaller, more densely packed circuits.
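As a rough illustration (the numbers here are typical textbook figures, not from this article), the gain can be sketched with the Rayleigh resolution criterion, R = k1 · λ / NA, where immersion raises the achievable numerical aperture toward the refractive index of water (about 1.44 at 193 nm):

```python
# Illustrative sketch of the Rayleigh resolution criterion R = k1 * wavelength / NA.
# The NA values and k1 factor below are representative, not taken from the article.

def rayleigh_resolution(wavelength_nm, numerical_aperture, k1=0.30):
    """Smallest printable half-pitch, in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

WAVELENGTH = 193.0  # ArF excimer laser, used by both dry and immersion tools
DRY_NA = 0.93       # near the theoretical limit of 1.0 for imaging in air
WET_NA = 1.35       # water between lens and wafer pushes NA above 1.0

dry = rayleigh_resolution(WAVELENGTH, DRY_NA)
wet = rayleigh_resolution(WAVELENGTH, WET_NA)
print(f"dry lithography:       ~{dry:.0f} nm half-pitch")
print(f"immersion lithography: ~{wet:.0f} nm half-pitch")
```

With these assumed values the same 193 nm light drops from roughly the 65 nm regime into the mid-40s, which is consistent with the 45 nm results described elsewhere in this article.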

A big part of immersion lithography’s momentum, says Lawrence Gasman, principal of NanoMarkets LLC, a Glen Allen, Va., market analysis firm focused on emerging nanotechnologies, comes from the fact that it is an evolutionary, rather than revolutionary step. Although manufacturers will have to buy new equipment to stay ahead, all of the investment they have made in training and the development of institutional knowledge will continue to be valuable. The processes for making and cleaning masks, for example, as well as other common tasks, remain pretty much the same with immersion lithography, whereas more revolutionary alternatives will require wide-scale industrial retooling and retraining.

“Most of the companies would like to keep on doing what they are doing,” Gasman said. “If they can keep conventional optical lithography going for a few more years they will do it….Once the whole paradigm changes, all that experience goes out the door.”

In a sense nanoscale processing is following a path previously paved by nanoscale imaging. While optical microscopy is still widely used for looking at the micro world, peering and probing into the nano world is done with tools like atomic force microscopes and atom probes that take advantage of phenomena other than light. Non-optical techniques for nanoscale processing have likewise been around for decades and have been used to make individual samples of devices and prototypes. But, by definition, to be used for manufacturing, the process must become repeatable, cheap and reliable – a set of challenges that research tools for imaging and manipulation don’t have to meet at the same level.

It is still unclear to what scale immersion lithography will work as a production technique. The most advanced chips being produced today are made on a 65 nm scale while 90 nm processing is mainstream. According to the semiconductor industry roadmap, the next standard scales would be 45 nm, followed by 32 nm and then 22 nm.

Already companies are announcing new techniques. In February IBM announced that its scientists had created small, high quality line patterns only 29.9 nm wide using immersion lithography, comfortably under the 32 nm mark that many had previously considered the limit for optical lithography techniques.

Gasman says there is general agreement that optical lithography won’t get past 18 nm. Rinnen thinks immersion will be sufficient to get close – 22 nm. He and other experts say innovations will become more commonplace as more immersion lithography machines come online and more researchers and engineers have access to the technology and gain proficiency with it. The IBM research, by contrast, was done on a test apparatus designed and built at IBM’s research facilities.

Also in February Taiwan Semiconductor Manufacturing Co. (TSMC), the largest semiconductor foundry in the world, announced that it had produced semiconductor wafers within “acceptable parameters” for volume manufacturing using immersion lithography to create 45 nm features on 12-inch wafers.

The company characterized the test as a milestone toward production immersion lithography. Recently TSMC produced multiple test wafers with defect rates as low as three per wafer – better than other immersion results to date, and comparable to dry (that is, non-immersion) lithography results, according to statements by Burn Lin, senior director of TSMC’s micropatterning division. He said that now that the company understands the root causes of the defects, it can focus its attention on improving throughput for high-volume manufacturing.


Intel Corp. announced in January that it had produced fully functional test chips using 45 nm process technology. The company said it will eventually use the technology to make chips in Oregon, Arizona and Israel. Photo courtesy of Intel

A byproduct of immersion lithography’s ramp-up will be that the developers of other technologies will gain a reprieve. Since immersion lithography will push back the demise of Moore’s Law, these technologies have more time to mature in parallel with immersion lithography.

That could be a real boon to developers of the more revolutionary techniques. For example, says Pathfinder’s Zieber, right now extreme ultraviolet lithography is “different enough that the cost would be prohibitive.” But nobody knows what can happen with five to seven years of development. The same goes for e-beam, nanoimprint and other processing technologies.

Extreme ultraviolet lithography remains positioned as the most likely follow-on technology. For starters, it has the backing of Intel Corp., which has integrated the technology in its roadmap and which was one of the first companies to join an industry coalition promoting the technology. Intel has been active in developing the technology itself and has invested in other companies developing solutions for some of EUV’s problems.

In late January, for example, Intel announced an investment in Xtreme Technologies GmbH of Göttingen and Jena, Germany, along with a strategic development agreement. The company, which is a joint venture between a subsidiary of Jenoptik AG and Ushio Inc., is working on developing an extreme ultraviolet light source for photolithography. The development of such a source has been one of the roadblocks in the way of commercializing EUV lithography.

However, other companies have been slower to invest in the technology. And industry coalitions promoting a technology have fallen apart before. A similar coalition devoted to promoting e-beam lithography as a next-generation mainstream production technology stalled out in 2001.

The lack of industry-wide support shouldn’t necessarily derail EUV, according to Gasman. “Intel, after all, is pretty influential in these things.” But, he acknowledged, there’s a flipside to that argument. “The business problem is if Intel wakes up one morning and decides it wants to do something else.”

The adoption of immersion technology will give the industry some time to decide. Of course immersion has its technical challenges too, the analysts say. Among the potential problems are bubbles and watermarks caused by the use of the liquid, residues left behind by the liquid, and damage from particles present in the liquid.

But TSMC claims to have developed techniques that mitigate these problems, and Gartner’s Rinnen says others will too. “I don’t view them as showstoppers,” he said. “I view them as nuisances.”

– David Forman


Next generation manufacturing: the contenders

The race to keep semiconductor manufacturing ahead of Moore’s Law for the next decade or so boils down to a few competing technologies, each of which has its own strengths and weaknesses. The goal of each technology, of course, is to obtain that elusive Triple Crown of micro-manufacturing: low cost, high throughput and increasingly small size. Herein we handicap some of the main contenders:

Immersion lithography

Immersion lithography has the strongest odds to take an early lead, and, in terms of next-generation lithography technology, is certainly fastest out of the gate. In simple terms, immersion lithography is standard optical lithography with wafers, masks and lenses but using water or some other liquid to increase resolution. Companies such as ASML and Nikon are already shipping immersion systems for 45 nm half pitch production, with analysts predicting 20 or more such systems shipped in 2006.

Immersion lithography uses the same wavelength light (193 nm) as non-immersion photolithography, and thus benefits from the installed base of companies and technical staffs already familiar with the process. Primary drawbacks to immersion lithography include costs higher than standard photolithography and defects – primarily watermarks or bubbles – due to the liquid being used for immersion.

That said, Mike Lercel, director of lithography for SEMATECH, said water-based immersion lithography is definitely the horse to back for commercial semiconductor lithography in about 2009 and beyond. “People have actually seen results demonstrated and the results have been very good,” Lercel noted. “The defect levels were a bigger issue two years ago, but it seems the companies have gone off and solved them and we are now down into the single digits.”

Extreme ultraviolet lithography (EUV)

EUV shines new light – literally – on chip manufacturing. Heavily backed by Intel, EUV has been around since at least the late 1990s, with Intel promising high-volume production by about 2009.

EUV is essentially an extension of optical lithography, using 13.5 nm wavelength light from the extreme ultraviolet region of the spectrum. Because light at a 13.5 nm wavelength is absorbed by materials, including the glass of traditional lenses, EUV systems must use reflective surfaces – mirrors – to focus the light. Lithographic masks must also be reflective, and the entire system must be enclosed in a vacuum.

Therein lie the primary challenges for EUV as a production technology – increased costs of materials and tooling, as well as the costs associated with maintaining vacuum conditions in the lab or production facility.


NanoInk’s Nscriptor dip pen nanolithography system allows users to build patterns, layers and structures at resolutions less than 15 nm. Photo courtesy of NanoInk

According to Stefan Wurm, EUV strategy program manager at SEMATECH, EUV has a good chance to supplant immersion lithography by about 2012 and beyond. Attendees of the Litho Forum, a three-day gathering in May of global lithography experts, agreed. They rated EUV’s prospects for 2015 as highly as immersion lithography’s for 2009. A key will be ongoing technical developments. “We have seen great extendibility,” Wurm said. “You can increase throughput by 50 percent just by adding two more mirrors.”

E-beam lithography

E-beam lithography applies the same principles as photolithography, except that the system writes with a beam of electrons instead of light. Electrons have a far shorter wavelength than light, giving e-beam the promise of writing much smaller features than photolithography can.
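Just how much shorter can be sketched with the de Broglie relation for an accelerated electron (an illustrative calculation, not from the article; the 50 kV beam voltage below is an assumed, typical figure):

```python
# Illustrative: de Broglie wavelength of an electron accelerated through V volts,
# using the standard relativistic momentum p = sqrt(2*m*T*(1 + T/(2*m*c^2))).
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_nm(accel_volts):
    """De Broglie wavelength of an electron after acceleration, in nanometers."""
    kinetic = E_CHARGE * accel_volts
    momentum = math.sqrt(2 * M_E * kinetic * (1 + kinetic / (2 * M_E * C**2)))
    return H / momentum * 1e9

# An assumed 50 kV beam gives a wavelength tens of thousands of times shorter
# than the 193 nm light used by today's optical scanners.
print(f"{electron_wavelength_nm(50_000):.4f} nm")
```

At these energies the wavelength is in the picometer range, so e-beam resolution is limited by electron optics, resists and scattering rather than by diffraction.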

The drawbacks to e-beam have always been relatively low throughput and high complexity – as well as high cost – of the exposure tools. That said, the cost of traditional masks continues to rise, making e-beam as a direct-write technology more attractive to chip makers. Some are looking at relatively slow e-beam technology to create critical chip layers with very small patterns while using traditional optical lithography for non-critical layers.

Another development to watch in the e-beam space is the use of multiple beams to increase throughput. For now, SEMATECH’s Lercel commented, e-beam appears to be more applicable to prototyping or very low volume production for research or development. Using multiple beams for volume production, he said, will be at least five years into the future, “if you can prove that it works.”

Nanoimprint lithography

Nanoimprint lithography takes a completely different approach from optical, EUV or e-beam lithography. Michael Falcon, strategic marketing manager for nanoimprint toolmaker Molecular Imprints, called it “almost like stamping DVDs.” The process uses a mold – or master – that has a circuit imprint or other imprint on it, and then imprints or stamps that directly onto a wafer.

Falcon claims nanoimprinting can and will be able to go well beyond 45 nm processing at fractions of the cost of any type of optical or e-beam lithography (including immersion or EUV). That said, he doesn’t foresee nanoimprint supplanting photolithography so much as being chosen in place of other types of lithography for certain applications. Key among these are high-brightness LEDs (solid state lighting) and pattern imprinting of disks in microdrives used in iPods, cell phones, MP3 players and the like.


Molecular Imprints’ Imprio 250 nanoimprint lithography system offers sub-50 nm half pitch resolution, sub-10 nm alignment, integrated magnification control and fully automated wafer and template loading capability. It is intended for device and process prototyping and pre-production, as well as for alignment sensitive lithography applications such as thin film heads and molecular electronics. Photo courtesy of Molecular Imprints

Larry Koecher, chief operations officer of nanoimprint toolmaker Nanonex, concurred. “We’re on the verge of moving nanoimprint into manufacturing,” he said. “It is being accepted quite nicely as an R&D technology in research labs, but it is starting to catch the eye of those who want to move into mass production mode.”

The Litho Forum found nanoimprint generating increased interest in the 2012 to 2015 time frame. “Solving template defects is the real issue,” Lercel said of nanoimprint lithography. “There are those who are not going to use it for the semiconductor space, but for other nanotechnology applications.”

Dip pen nanolithography (DPN)

Dip pen nanolithography uses the tip of an atomic force microscope (its “pen”) to write patterns directly on substrates using molecules (its “ink”).

“It is fundamentally the same thing as dipping a pen in ink and writing on paper,” said Tom Levesque, senior director of DPN global sales for NanoInk, which makes DPN tools. “The material you can deposit can be from small molecules to biological components such as proteins or polymers.”

Examples Levesque gave for the use of DPN include attaching viruses to see how they attack cells or molecules, production of DNA arrays and other medical diagnostic tools and using DPN to functionalize and align carbon nanotubes on a substrate. The technology, he said, allows for “bottom-up” manufacturing.

DPN may be unlikely to replace or displace much optical lithography in the semiconductor industry, at least in the short term. Its primary applications promise to be fast turnaround of prototype material, since the technology is direct-write but doesn’t require the high materials costs or vacuum conditions of other approaches. “Immersion lithography using a $20 million tool for mass production will continue to work,” Levesque predicted. “E-beam will continue to be a specialty. We (DPN) will be a niche in that marketplace for people that have more research functionality in their application.”

– James R. Dukart


Imaging: behold the optical nanoscope

Can microscopy morph into “nanoscopy”? Magnification using visible light and a series of lenses has been around since Galileo’s time. But because of the physics of light, there’s a natural lower limit – about 240 nm – to the size of things that can be viewed with a traditional optical microscope.
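That roughly 240 nm floor follows from the diffraction limit of visible light. As a rough check, the Abbe formula reproduces it; the wavelength and numerical aperture below are illustrative assumptions, not figures from the article:

```python
# Abbe diffraction limit: smallest resolvable feature d = wavelength / (2 * NA).
# Illustrative values: green light (~550 nm), a high-NA immersion objective (NA ~1.15).
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Return the diffraction-limited resolution in nanometers."""
    return wavelength_nm / (2 * numerical_aperture)

print(round(abbe_limit(550, 1.15)))  # ~239 nm, close to the article's 240 nm floor
```

Shorter wavelengths or higher numerical apertures push the limit down, which is exactly the lever the lithography techniques above exploit.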

That’s small, but it’s not small enough to see the tiniest features on new generations of semiconductors, to check for uniformity of nanoparticles or to see viruses or many parts of a live cell. The alternatives – scanning probe technologies such as the scanning tunneling microscope or the atomic force microscope – can “see” things at the atomic level using nanoscale probes that trace surfaces and send back a signal. But they cost six figures and can take several minutes or longer to complete the scan for a single image. And their requirements for sample preparation can preclude making certain kinds of observations – for example, imaging live cells.

As a result many researchers are on a quest to harness the economy, efficiency and versatility of optical microscopy to see things that are supposedly too small to be seen – down to 100, 60 or even 10 nm. And they’re pushing the physical limits of light in various ways.

Aetos Technologies of Auburn, Ala., markets a device called CytoViva, which was developed by a researcher at Auburn University. Using a patented optical system that replaces the condenser on most standard lab microscopes, and a special light source, CytoViva tightly controls how a sample is illuminated. It can image objects in the 100 nm range and can detect objects as small as 10 nm.

“The unit has a fixed geometry that creates a perfect alignment that’s not achievable in traditional microscopes,” said Tom Hasling, director of technology development. “It gives us an extremely good signal-to-noise ratio because there’s not a lot of stray optical noise.” The device produced the first video of Lyme disease bacteria infecting a cell and has also imaged 20 nm polystyrene particles.


These two images show a view of a microscope calibration slide. The image at top was taken with a CytoViva-equipped optical microscope. The image at bottom was captured with a field emission scanning electron microscope. Image courtesy of Aetos Technologies

CytoViva has been on the market since late 2004. Its first installation was a U.S. Department of Agriculture facility in Ames, Iowa, which sponsored the development of the tool as part of its animal and plant health inspection service. The company hopes to have 150 units in the field by the end of the year. It recently introduced a fluorescence module that allows viewers to see both labeled and unlabeled entities at once. Prices range from $10,000 for a basic unit to about $45,000 for a full system, including the microscope, the fluorescence device and a camera.

Because CytoViva operates by shining light through the sample, it can’t be used for solid objects such as computer chips. To address the needs of the semiconductor industry, scientists at the National Institute of Standards and Technology are experimenting with a way to use scanning probe microscopes and optical microscopes together with computers to see features as small as 10 nm.


This image taken with a CytoViva-equipped optical microscope shows a slice of skin tissue with red quantum dot labeled nanoparticles embedded in hair follicles. Image courtesy of Aetos Technologies

Currently the industry relies on scanning probe technologies to do quality control, but they can damage the samples that they’re supposed to be measuring, said Rick Silver, a physicist in NIST’s precision engineering division. “Optical tools are low-cost, high-throughput and nondestructive.”

Silver and his team are developing a technique called phase-sensitive scatter-field imaging. It uses illumination whose wavelength and angle are tailored to the particular target. The target’s general shape is determined through imaging by a scanning electron microscope and an atomic force microscope. Using that information, along with the patterns produced when the light scatters off the target, a computer algorithm can create a precise image of even the tiniest details. Silver said the technique can detect differences of as little as 1 nm between two similar objects.

“This technology is likely to evolve into complex sensors to keep close control on the manufacturing process,” Silver said.

– Elizabeth Gardner

Before there were carbon nanotubes, there was Morinobu Endo. Like the semiconductor industry itself, Endo has been battling manufacturing problems for more than 30 years. Call it Endo’s Law: his process for manufacturing carbon nanotubes has been increasing its yield by a factor of 10 every year for the past 15 years.
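Compounded, that rate is staggering; a one-line sketch of the arithmetic (illustrative only, since the article gives the per-year factor rather than absolute yields):

```python
# "Endo's Law": a 10x yield improvement each year, compounded over 15 years.
annual_factor = 10
years = 15
cumulative = annual_factor ** years
print(f"cumulative improvement: 10^{years} = {cumulative:,}")  # a quadrillion-fold
```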

How has he done it? Hard work, innovation, serendipity…and inspiration from a sneeze. Endo, a professor at Shinshu University in Nagano, Japan, recently shared his perspective on carbon nanotubes, their development and their future applications in an interview with Small Times’ David Forman.

Q: When did commercial activity begin?

We have been working on carbon since 1970, and we had been working already on other kinds of the nanotube materials. We started commercialization after scientific results based on my research. We started by cooperating with a chemical company in Japan for the industrialization of multiwall carbon nanotubes in 1988.

There were a lot of breakthroughs we made. For example, in my carbon nanotube production the past 15 years, we increased 10 to 15 times the production each year. It does not correspond to a scale up of the plant. The (increase in productivity) is from (technical) breakthroughs.

Q: So it’s not only scale up but also innovation?

Innovation, yes. The 10-times increase every year corresponds to some innovation of production.

Q: In process?

In the process. But not only process, also the quality control. Because you know for industrialization – commercialization of materials – we need not only the scientific result but also we need the industrial breakthroughs because we must guarantee the product for the user.

We (need) a very high level of reliability of the product materials. That corresponds to quality control. So the quality is also a very big part of the control.

Q: One of the things I am curious about in the history is that the paper generally considered the first on carbon nanotubes was published in 1991. But you had been working with these materials before. What did you call carbon nanotubes prior to 1991?

I said they are ultrathin carbon fiber with hollow structure, with hollow tube structure.

Q: But it is the same?

It is just the same as the carbon nanotube. In a paper in 1976 I proposed already the growth model at the tip of that kind of hollow tube. We find already the hollow core. This is carbon nanotube already….And already under some broken part the central core extrudes.

At the tip of the tube…I found (that) there is iron (and) the iron grows. The small ultrafine iron grows the tube.

Q: As the leading edge?

The leading edge, yes. So this is the growth mechanism I explained.

After 10 years they asked me to give another paper. We did continuously that work since 1976 and my first paper. At that time, I thought this was very curious. This iron particle, where does it come from? We used sandpaper on the substrate. Sand paper consists of iron oxide. At that time I didn’t know but I unintentionally used this paper. Innovation is very, very curious. One time we used black paper and there are no tubes, no growth.

Q: So you determined that the sandpaper is where the iron is coming from?

Yes, because of the iron oxide. And I constructed this model people now use a lot by using the iron catalysts for chemical vapor deposition growth. At that point I intentionally dispersed the iron particle. In this case, the iron oxide is intentional. Controllability is important for nanotechnology.

I found that the smaller diameter of a catalytic particle gives a higher yield of the materials. So I already find small particles of iron can grow more tubular materials. This I called (the) substrate seeding method. (It is designed) intentionally so I can get a lot of tube.

(The breakthrough came) when I went to a public meeting in Tokyo. This meeting is always in November. In those days the general newspaper wrote about a very bad situation for influenza. It made me think. When people have a cold – and sneeze – it can float about 15 meters.

Oh, (I said), my particle is much smaller than the influenza virus so I can float that catalytic particle and perhaps if I am lucky I’ll get the tube there.

Q: So you were inspired by the sneeze?

Yes. So as soon as I got back from Tokyo I tried to float the catalytic particle like that and I get the same materials. So it was a very strong breakthrough. One week we have the catalytic particle on the substrate. But the cost of the material is about $2,000 per kilogram. Too expensive compared to conventional materials. The next week we have a very big breakthrough because the reproducibility is a thousand times improved because we can provide the catalytic particle continuously.

Q: So what were the first applications?

That is a very important question. I contacted at that time NASA people and aerospace company people and high tech companies in the United States. And they say, “Endo, very beautiful material. Very nice. But even in space we should evaluate the cost of the method.” And so I decided already at that time that since the price of carbon fiber degrades year-by-year that we can’t compete with carbon fiber. I wanted to develop an application field for this material where it is only possible to use our material – hollow tube, carbon nanotube…Endotube, people say.

Q: An application where you must use this material?

Yes, no substitutions. Because if we did substitution perhaps the counterpart would degrade the price.

Fortunately at that time, in 1988, there were very important breakthroughs for electronics. Do you remember? In the 1990s electronics like laptop computers…they were not so good. Basically all electronics used a power line, a power source. The battery was only the auxiliary system.

At that age personal computers changed from the black and white display to color display and Intel starts the high power CPU. So the laptop computer needs more high power. And also the people start the cell phone but they use a very big battery box. And also people use the video camera but people used a very big battery box.

So there is a lot of demand to develop a good battery, a high quality battery. The requirements of energy consumption were increasing for this laptop computer. So the companies were very aggressive in Japan to develop a new battery, the lithium-ion battery. Some companies used our multiwall carbon nanotubes.

Q: And how did they use your materials?

It’s very top secret. I can’t say that part. But anyway the lithium-ion battery starts in 1991 and year by year it increases. In 1991 the worldwide battery production is only 10 million batteries. Last year about 1,000 million sold.

Q: Do all of those batteries use the material?

You know, it’s very expensive with our multiwall tube. It is for the very high quality battery. For very thin (batteries) or very high power it is necessary.

This lithium ion battery provides the era of wireless computing – cordless, portable electronics. If not, we would have the big telephones. So energy storage is very important. And also for the future, for motor vehicle technology. The ideal case could be the electric car, not the fuel cell. Even for fuel cell cars, even for hybrid vehicles, energy storage technology is key. If we use electricity generated by the wind, atomic, coal, whatever, we can get the highest efficiency of a car. Because power plant efficiencies are 42 percent in this country. For gasoline car, it is only 10 percent of the efficiency. Hybrid car is only 20 percent. But if we use electricity generated by power plant we can get twice the efficiency. For that kind of battery this type of material is very important.

Also it is (important) for power tools. Carpenters use very high grade and long-life electronic tools. The lithium-ion battery is very nice for electronics. It’s not powerful like lead-acid battery but recently they have improved very much the performance. The performance must still increase. When the performance increases you can expand the application field. This is very important.
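Endo’s “twice the efficiency” claim can be sanity-checked against the percentages he cites; the battery-and-motor round-trip factor below is an illustrative assumption not given in the interview:

```python
# Efficiency figures from the interview, plus one assumed factor.
power_plant = 0.42    # U.S. power plant generation efficiency, per Endo
gasoline_car = 0.10   # gasoline vehicle, per Endo
hybrid_car = 0.20     # hybrid vehicle, per Endo

battery_motor = 0.90  # assumed battery + electric motor round-trip efficiency
electric_car = power_plant * battery_motor

print(f"EV well-to-wheel: {electric_car:.0%}")          # ~38%
print(f"vs. hybrid: {electric_car / hybrid_car:.1f}x")  # ~1.9x, roughly "twice"
```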

Q: What about the ability to quickly recharge?

That is a good question. So I should say the performance of the lithium-ion battery can improve a lot by putting this multiwall carbon nanotube in the electrode. This is a mechanism. Nanotubes are not the main material but a very important, I should say, an essential additive to get high repeatability and cycle-ability.

Batteries are now a strategic technology because the battery company connects with the cell phone company. The battery company connects with the laptop computer company. And these companies every two years they want to have some special laptop computer. They want to sell some special cell phone. We can know what kind of cell phone will be possible from the battery size and capacity that exists. So that’s very strategic. The battery now is a very strategic component.


The Endo file


Morinobu Endo is a professor in the faculty of engineering at Shinshu University in Nagano, Japan. His current work ranges from basic science to applications of various forms of carbon, carbon nanotubes, new forms of carbon and graphite, nano-porous carbons, graphite intercalation compounds, Li-ion batteries and electric double layer capacitors.

After receiving an M.S. degree from Shinshu University, Endo obtained a Ph.D. from Nagoya University. He is the present chairman of Japan Carbon Society and is one of the international advisory members of CARBON journal. His work developing carbon nanotube processes, specifically a 1976 paper in Journal of Crystal Growth and a 1988 paper in CHEMTECH, the former journal of the American Chemical Society, is considered fundamental.

Industry calls for consistent terminology

By Andreas von Bubnoff

Nano means one billionth of something, but when it comes to labeling consumer products, it can mean vastly different things: nano-sized particles, a nanoscale thin film, even nano-sized air pockets. And some products labeled “nano” don’t contain anything nano at all.


“We encourage people to be transparent when using nanotech,” said Sean Murdock, executive director of the NanoBusiness Alliance, the U.S. trade association. The Alliance is currently compiling a database of products that use nanotech, he said. Photo courtesy of NanoBusiness Alliance

Environmental groups and academic and business experts agree on a need for consistent labeling of nano products to indicate which ones contain a nanomaterial or nanotechnology. But they disagree on why that’s important and how to get there.

Recent inventories of nano products by the Woodrow Wilson International Center for Scholars and by environmental group Friends of the Earth include products as varied as sunscreens that contain nanoscale particles of titanium dioxide and shoe insoles with nano-sized air pockets. David Rejeski, who directs the Project on Emerging Nanotechnologies at the Wilson Center, said during a search for nano products he even came across a “nano” kayak that has nothing nano about it, except that it is small – relative to other kayaks, that is.

There are no labeling guidelines, Rejeski said, and for consumers, it gets very confusing. “Are there nanomaterials in here? Are you applying a nanoscale layer?” he asked. “Are there nanoholes? What is the nanothing?”

In the case of the glass and bathroom sealant “Magic Nano,” which was recalled in Germany in March because it caused breathing problems in consumers, the “nanothing” was supposed to be a nanoscale layer of silicon dioxide that the product creates on surfaces, according to the manufacturer. There had been concern that there were nanoparticles in the product, but the Federal Institute for Risk Assessment (BfR) in Berlin announced on May 26 that the product did not contain nanoparticles.

Mandatory labeling is one way to deal with the confusion, say eight consumer, health and environmental groups. On May 16, the groups, which include Friends of the Earth and the International Center for Technology Assessment, or CTA, filed a petition with the Food and Drug Administration that calls for mandatory labeling of nanotech products.

Currently, companies can call their products nano regardless of whether they contain anything nano or not, said George Kimbrell, staff attorney for the CTA. “There is no regulatory oversight and no standards for labeling,” said Kimbrell, the main author of the petition. “The consumer has no way of knowing.”


“All nanoparticles are not the same,” said Rice University chemistry professor Vicki Colvin. Before there can be labeling, she said, there needs to be consistent terminology. Photo by George Craig

Business representatives agree on the need for consistent labeling, but want to keep it voluntary. “We encourage people to be transparent when using nanotech,” said Sean Murdock of the NanoBusiness Alliance, the U.S. trade association of the nanotech industry. The Alliance, he said, is currently compiling its own database of products that use nanotech.

“I think there is a lot of confusion about what nanotech is,” said John Bailey, executive vice president of science for the Cosmetic, Toiletry and Fragrance Association, an industry group. “A more thoughtful definition would help clarify the issue considerably for some of our consumers.”

But before any labeling can be done, there needs to be consistent terminology, said Vicki Colvin, who studies nanoparticle toxicity at Rice University. “We can’t even agree on what to call a nanoparticle,” she said.

Murdock agrees. “It’s one thing to petition for labeling, assuming that we have the terminology to be consistent with what we are putting on the label,” he said. “We don’t yet.”

That’s about to change. The International Organization for Standardization, or ISO, is developing international standard terminology for nanoparticles and other things nano, said Clayton Teague, director of the National Nanotechnology Coordination Office. The ISO is also developing standards for how to measure nanomaterial content of nano products, he said. Industry will be able to use the standards on a voluntary basis.

Teague expects basic ISO standards, for example on how to define a nanoparticle, to be available in a few years.

But that may not be fast enough, Rejeski said. “The problem is that in two years you will have a whole new generation of products out there on the market.”

What’s more, coming up with consistent “nano” terminology won’t be easy, given that nanoparticles come in so many forms, Colvin said. “All nanoparticles are not the same.”

Tim Harper & Helen Yu

It is fascinating to look back a few years at the early predictions made about the impact of nanotechnologies. Coming off the dot-com boom, most commentators predicted major changes in technology, with nanotechnologies seen through the same high-tech lens as the Internet. Perhaps with 20-20 hindsight now, we should have looked more closely at an industry that has been at the forefront of every industrial change from steam power to globalization: the textile industry.

While the nanotech exhibit in London’s Science Museum is tucked away in a corner, the entrance is dominated by the huge steam engines that drove Britain’s industrial revolution, a revolution in which textiles played a major role. From John Kay’s flying shuttle in 1733 to the first synthetic dyes in 1856, the textile industry has been an early and enthusiastic adopter of new technologies. Perhaps this in part explains why, for the past several years, whenever anyone gives an example of nanotechnologies in everyday life, stain-resistant pants usually top the list.

We set out to look at the real impact of nanotechnologies on the textile industry – beyond the infamous nanopants – to the entire textiles value chain. Working with colleagues at London’s Queen Mary University Unit for Strategic Studies in Nanotechnology and colleagues at London College of Fashion, we were surprised by the results. We found that $13.6 billion worth of textiles using nanotechnologies would be on the market by 2007. Our recent report, “Nanotechnologies and the Textile Market,” predicts that this will rise to $115 billion by 2012.
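Those two figures imply a steep compound annual growth rate; a quick check of the arithmetic, taking the 2007 and 2012 endpoints from the report’s headline numbers:

```python
# Implied CAGR from $13.6B (2007) to $115B (2012): five years of growth.
start, end, years = 13.6, 115.0, 2012 - 2007
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # roughly 53% per year
```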

Let’s take a look behind the numbers.

The textile sector has been radically altered once again by a combination of changing consumer needs, new technologies and globalization. A walk through any shopping mall shows just how far textiles have come in the last hundred years, with the dazzling array of colors, textures and finishes. The industry has come a long way from simply spinning and weaving.


Nanotech market research firm Cientifica predicts the value of nanomaterials in textiles will increase at an escalating pace over the coming years.

In a sector as broad as textiles not all markets are equally affected. In a market such as clothing, where price is a key driver, nanotechnologies will not have a major impact. Of course there will be significant nanotech-related revenues, but it will be hard for nanotechnologies to penetrate much more than one percent of the apparel market. Still, even one percent of a predicted $3 trillion market by 2012 is a sizable business.

In such an atmosphere of change and competition, both manufacturers and retailers are continuously striving to meet the needs of customers and must carefully balance price with performance. Many companies are desperately trying to differentiate themselves from commodity textile markets and to secure a niche through technological innovation, including the use of nanotechnologies. Textile companies, especially in Europe and the United States, hope to find an alternative to the unsustainable strategy of competing purely on price. The highest penetration of nanotechnologies and also the highest growth rates will be in less cost-sensitive applications such as medical, military and sports textiles, where performance is usually more important than price.

In this respect, the textile industry mirrors much of traditional industry, which is shifting from adding value through process innovation to creating value through product innovation. This allows the business models of nanotextile companies to be based more on intellectual property than capital equipment. This is reflected in the fact that almost 50 percent of the companies supplying nanotechnologies for the textile industry are U.S.-based, while most of the manufacturers are in Asia.

Perhaps the most interesting and least predictable sector is non-conventional technical textiles, where nanotechnologies are enabling applications not typically associated with textiles, such as radiation shielding. Some of these applications will make use of synthetic fibers, but many will use modified natural fibers, from cotton to coconut, which may also provide opportunities for developing countries that are rich in natural resources.

The penetration of nanotechnologies into the textile market is certainly not the only example of the rapid uptake of new technology by traditional industries, and while the industry may be a first adopter, it certainly won’t be the last.

Tim Harper is CEO of Cientifica Ltd. He can be reached at [email protected]. Helen Yu is a senior analyst at Cientifica Ltd.

R&D UPDATES


July 1, 2006

Purdue engineers develop “cool” MEMS device

WEST LAFAYETTE, Ind. – Engineers at Purdue University have developed a “micro-pump” cooling device small enough to fit on a computer chip that circulates coolant through channels etched into the chip. The new MEMS device has been integrated onto a silicon chip that is about one centimeter square.

Innovative cooling systems will be needed for future computer chips that will generate more heat than current technology, and this extra heating could damage electronic devices or hinder performance, said Suresh Garimella, a professor of mechanical engineering.


Brian Iverson, a mechanical engineering doctoral student at Purdue, holds up a disk containing several “micro-pump” cooling devices small enough to fit on a computer chip. The tiny pumps circulate coolant through channels etched into the chip. Photo courtesy of David Umberger/Purdue University

Chips in today’s computers are cooled primarily with an assembly containing conventional fans and “heat sinks,” or metal plates containing fins to dissipate heat. But because chips a decade from now will likely contain upwards of 100 times more transistors and other devices, they will generate far more heat than chips currently in use, Garimella said.

The prototype chip contains numerous water-filled micro-channels, grooves about 100 microns wide. The channels are covered with a series of electrodes that receive varying voltage pulses timed so that a traveling electric field is created in each channel. The pumping action comes from electrohydrodynamics, which uses the interactions of ions and electric fields to cause fluid to flow.

Nanostars could shine light on chemical sensing

HOUSTON – Optics research from Rice University’s Laboratory for Nanophotonics (LANP) suggests that tiny gold particles called nanostars could become powerful chemical sensors.

Nanostars, named for their spiky surface, incorporate some of the properties of often-studied photonic particles like nanorods and quantum dots. For example, they deliver strong spectral peaks that are easy to distinguish with relatively low-cost detectors. But Jason Hafner, associate director of LANP, and his team found unique properties too.


Nanostars, named for reasons that are obvious by looking at the image above, could open up new possibilities for sensing applications. Image courtesy of Rice University

An analysis revealed that each spike on a nanostar has a unique spectral signature. Preliminary tests show that these signatures can be used to discern the three-dimensional orientation of the nanostar, which could open up new possibilities for 3-D molecular sensing. Their findings appeared in the journal Nano Letters.

Nanoparticles found to improve ultrasound images

COLUMBUS, Ohio – Nanotechnology may one day help physicians detect the very earliest stages of diseases like cancer, a study in the journal Physics in Medicine and Biology suggests. It would do so by improving the quality of images produced by one of the most common diagnostic tools used in doctors’ offices, the ultrasound machine.

In laboratory experiments on mice, scientists found that nanoparticles injected into the animals improved the resulting images. This study is one of the first reports showing that ultrasound can detect these tiny particles when they are inside the body, said Thomas Rosol, a study co-author and dean of the college of veterinary medicine at Ohio State University. The particles also can brighten the resulting image.

Clemson group develops “carbon dots”

CLEMSON, S.C. – Chemists at Clemson University say they have developed a new type of quantum dot that is the first to be made from carbon. Like their metal-based counterparts, these nano-sized “carbon dots” glow brightly when exposed to light and show promise for a broad range of applications, including improved biological sensors, medical imaging devices and tiny light-emitting diodes, the researchers say.

The carbon-based quantum dots show less possibility for toxicity and environmental harm and have the potential to be less expensive than metal-based quantum dots. Cheap, disposable sensors that can detect hidden explosives and biological warfare agents such as anthrax are among the possibilities envisioned by the researchers.


A microscope image of bacterial spores (Bacillus subtilis) labeled with luminescent carbon nanoparticles. B. subtilis serves as a common model for anthrax research. These carbon nanoparticles, or carbon dots, could lead to safer, disposable biosensors to detect biological warfare agents. Image courtesy of Ya-Ping Sun/Clemson University

“Carbon is hardly considered to be a semiconductor, so luminescent carbon nanoparticles are very interesting both fundamentally and practically,” said study leader Ya-Ping Sun. “It represents a new platform for the development of luminescent nanomaterials for a wide range of applications.” The research was published in the Journal of the American Chemical Society.

Researchers explore nanotubes as minuscule metalworking tools

TROY, N.Y. – Bombarding a carbon nanotube with electrons causes it to collapse with such incredible force that it can squeeze out even the hardest of materials, much like a tube of toothpaste, according to an international team of scientists. The researchers suggest that carbon nanotubes can act as minuscule metalworking tools, offering the ability to process materials as in a nanoscale jig or extruder.

Engineers use a variety of tools to manipulate and process metals. For example, handy “jigs” control the motion of tools, and extruders push or draw materials through molds to create long objects of a fixed diameter. The new findings suggest that nanotubes could perform similar functions at the scale of atoms and molecules, the researchers say. The results also demonstrate the impressive strength of carbon nanotubes against internal pressure, which could make them ideal structures for nanoscale hydraulics and cylinders.


A multi-walled carbon nanotube is partly filled with an iron carbide nanowire. The work is the result of an international collaboration. Photo courtesy of Johannes Gutenberg University/Banhart

“Researchers will need a wide range of tools to manipulate structures at the nanoscale, and this could be one of them,” says Pulickel Ajayan, professor of materials science and engineering at Rensselaer Polytechnic Institute and an author of the paper. “For the time being our work is focused at the level of basic research, but certainly this could be part of the nanotechnology tool set in the future.”

The paper is the latest result of Ajayan’s longtime collaboration with researchers at Johannes Gutenberg University in Mainz, Germany; the Institute for Scientific and Technological Research of San Luis Potosi, Mexico; and the University of Helsinki in Finland. The paper appeared in Science.

UCLA engineers announce spin wave breakthrough

LOS ANGELES – Engineers at the University of California at Los Angeles Henry Samueli School of Engineering and Applied Science announced new semiconductor spin wave research. Adjunct professor Mary Mehrnoosh Eshaghian-Wilner, researcher Alexander Khitun and professor Kang Wang created three novel nanoscale computational architectures using technology they pioneered, called “spin-wave buses,” as the mechanism for interconnection. The three nanoscale architectures are power efficient and possess a high degree of interconnectivity.

“Progress in the miniaturization of semiconductor electronic devices has meant chip features have become nanoscale. Today’s current devices, which are based on complementary metal oxide semiconductor standards, or CMOS, can’t get much smaller and still function properly and effectively. CMOS continues to face increasing power and cost challenges,” Wang said.

In contrast to traditional information processing technology devices that simply move electric charges around while ignoring the extra spin that tags along for the ride, spin-wave buses put the extra motion to work transferring data or power between computer components. Information is encoded directly into the phase of the spin waves. Unlike a point-to-point connection, a “bus” can logically connect several peripherals. The result is a reduction in power consumption and less heat.

Nanotube membranes open possibilities for cheaper desalinization

LIVERMORE, Calif. – A nanotube membrane on a silicon chip the size of a quarter may offer a cheaper way to remove salt from water. Researchers at the Lawrence Livermore National Laboratory have created a membrane made of carbon nanotubes and silicon that may offer, among many possible applications, a less expensive method for desalinization.

Billions of nanotubes act as the pores in the membrane. The super-smooth interior of the nanotubes allows liquids and gases to flow through rapidly, while the tiny pore size blocks larger molecules.

The team measured the flow of liquids and gases through a membrane fabricated on a silicon chip, with carbon nanotubes serving as its pores. The membrane is created by filling the gaps between aligned carbon nanotubes with a ceramic matrix material. The pores are so small that only six water molecules could fit across their diameter.
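As a rough back-of-the-envelope check, the "six water molecules across" figure implies a pore diameter of well under 2 nanometers. A minimal sketch of the arithmetic, assuming a water-molecule diameter of roughly 0.28 nm (an approximate literature value, not a number given in the article):

```python
# Approximate pore diameter implied by "six water molecules across".
# Assumes a water molecule is ~0.28 nm in diameter (an approximate
# literature value; the article does not state this number).
WATER_MOLECULE_DIAMETER_NM = 0.28
MOLECULES_ACROSS = 6

pore_diameter_nm = MOLECULES_ACROSS * WATER_MOLECULE_DIAMETER_NM
print(f"Estimated pore diameter: {pore_diameter_nm:.2f} nm")  # ~1.68 nm
```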

The research resulted from collaboration between Olgica Bakajin and Aleksandr Noy, both recruited to Lawrence Livermore Lab as “Lawrence Fellows” – the laboratory’s initiative to bring in young, talented scientists. The principal contributors to the work are postdoctoral researcher Jason Holt and Hyung Gyu Park, a UC Berkeley mechanical engineering graduate student and student employee at Livermore. The research appeared in the journal Science.

It was just a few years ago that nanoelectronics was, well, hypothetical. Today it is anything but. The world’s leading electronics companies are pushing the nanoelectronics envelope in attempts to do things like develop faster and denser memory, convert electrons into photons, and forestall, at least for a little while longer, the inevitable demise of Moore’s Law.

Competing against these companies – and, at times, collaborating with them – are a group of agile startups with dreams of being the next Intel. And supplying them with the machines they need are a set of innovative toolmakers seeking to provide the picks and shovels of the next industrial revolution.

They all face difficult challenges, both technological and commercial, regardless of their size or their position in the food chain. Therefore, Small Times invited representatives from the sector to contribute articles on both technical and business hurdles and how they have developed, or are developing, a way over them.

As you will read in the following pages, the problems are many, but the creativity employed in overcoming them is significantly greater.

– David Forman


Nanotech will ride many roads to market

By Tom Theis, IBM

Some experts predict that nanotechnology will be a game changer for the IT industry in general and the microelectronics industry in particular, but major changes in such a large and complex industry will not happen overnight. It looks like incremental improvement of the silicon transistor will continue for ten or more years. Nevertheless, the seeds of massive change have been planted, and I believe that some important innovations in nanotechnology are already on a path to the market.

IBM Research manages a broad portfolio of research projects. At any given time, some of these projects are impacting IBM’s revenue and profits while others are still in an exploratory stage.

As an example of exploratory research with a long-term outlook, researchers at our Zurich Research Laboratory just published their observations of electrical contact formation between a single metal atom and an organic molecule (Science, May 26, 2006, “Imaging Bond Formation Between a Gold Atom and Pentacene on an Insulating Surface”).


An illustration of IBM’s Millipede storage device shows how an atomic force microscope tip inscribes data in a polymer surface. Image courtesy of IBM

Another example is our study of a novel physical process for electrically driven light emission from a carbon nanotube transistor. (Science, Nov. 18, 2005, “Bright Infrared Emission from Electrically Induced Excitons in Carbon Nanotubes”).

We don’t yet know how this fundamental knowledge will be applied, but given the history of miniaturization in electronic devices and the strong market incentives for further miniaturization, we are confident that it will be.

At the same time, many of the projects in our nanotech portfolio are much closer to product applications. MRAM (magnetic random access memory) started out as a highly exploratory project at IBM Research in 1996, but today we are thinking about how to take it to market. Millipede, our nanomechanical approach to mass storage of information, is at a similar stage. And of course, some of our innovations in nanostructured materials and fabrication processes are already in product development. Stay tuned for the announcements.

How will these nanotech innovations go to market? Many will be integrated into existing IBM product lines. IBM is a leading supplier of scientific supercomputers, servers and data storage systems, and a leading designer and manufacturer of microprocessors and other key components of those systems. All of these products and businesses place a high premium on performance and thus demand the continuous introduction of new materials and manufacturing processes, and improved devices for logic, memory and communications. In other words, these products are a prime target for our nanotech innovations.

Not every invention and bright idea from our research laboratories will find its way immediately into an IBM product, so we also license our intellectual property to other innovators. More important, we also partner with others to develop new products and market opportunities. That is why we have pioneered the use of our intellectual property portfolio to help create open innovation networks, where companies can share IP as a foundation for the creation of new products. Take a look at your portfolio of intellectual property, your customers and your partners. Is there something we can each bring to the table that might allow us to create an entirely new product or service?

A successful partnership is likely to be based on a shared vision of potential markets, the ability of each partner to share the cost and risk of collaborative development and the ability of each partner to contribute key resources, talent and expertise. Such partnerships can greatly accelerate the movement from laboratory results toward new products and new markets – that is, they can yield innovation that matters.


Tom Theis is director of physical sciences at IBM’s T.J. Watson Research Center (www.watson.ibm.com) in Yorktown Heights, N.Y.


Bright light from tiny tubes

By Jia Chen, IBM

Today information typically travels as photons in optical fibers deep beneath the ocean and across continents. Yet we access the information as electronic signals through computers, cell phones, Blackberries and iPods. In our wildest imagination, is it possible that one day all information could be ferried by photons, which travel much faster than electrons?

At IBM, we made a novel ultra-bright and ultra-small light source using carbon nanotubes – a breakthrough in nanophotonic devices, and a step closer to transmitting all information with photons. The nano-“flash-light” emits at a wavelength of 1 to 2 micrometers, a range widely used by the telecommunications industry to send information through optical fibers, and is 1,000 times brighter and more efficient than previously demonstrated devices.

Carbon nanotubes are hollow cylindrical tubes with all of their atoms on the surface. They are known to have excellent mechanical and electrical properties. For example, they have 100 times the tensile strength of steel at one-sixth the weight, and can carry 1,000 times more electricity in a tiny area than metals such as silver and copper. This work shows that they also have promising optoelectronic applications.
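Taken together, those two mechanical figures imply a striking strength-to-weight ratio. A minimal sketch of the arithmetic, using only the ratios quoted above:

```python
# Specific strength (strength-to-weight) implied by the figures above:
# 100x the tensile strength of steel at one-sixth the weight.
STRENGTH_VS_STEEL = 100
WEIGHT_VS_STEEL = 1 / 6

specific_strength_vs_steel = STRENGTH_VS_STEEL / WEIGHT_VS_STEEL
print(f"Specific strength vs. steel: ~{specific_strength_vs_steel:.0f}x")  # ~600x
```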


This schematic of an array of carbon nanotube devices that emit bright infrared light depicts the “waterfall” architecture that speeds up electrons and helps them form tightly-bound electron-hole pairs, thereby generating light. Image courtesy of IBM

The conventional approach to producing photons from electrical signals is to bring negative charges (electrons) and positive charges (holes) together for them to neutralize each other and emit photons. In the past, electrons and holes were introduced from the two electrodes of a carbon nanotube device separately, and the chances that they would meet each other and emit photons were so low that finding applications for them seemed a distant possibility.

In our new devices, only one type of charge carrier (either electron or hole) is needed to produce light, which is much easier to realize than previous methods.

We played a little trick on the charges (e.g., electrons) moving along nanotubes. We found a way to speed up the electrons by creating a “waterfall” landscape for them, which allowed them to pick up enough energy to create tightly-bound electron-hole pairs. The electrons and holes within each pair then neutralize each other, emitting light 1,000 times brighter than previously reported. We were able to coerce the electrons to convert their energy to light instead of dissipating it as heat.

Our method greatly improved the electron-to-light conversion efficiency such that every electron injected into the carbon nanotube participates in the light conversion process. A simple way to create the waterfall landscape is to vary the dielectric constant of the substrate the nanotube rests on (e.g., by removing part of the underlying support from a nanotube). The new method generates about 100,000 times more photons per unit area per unit of time than large-area light-emitting diodes, and from an area that is a trillionth of the emitting area of a regular 60W tungsten filament light bulb.

These nanoscale light sources conveniently use the same fabrication processes as semiconductor silicon devices. They have the potential to be built into complex light-based circuitry with the same footprint as silicon electronic components, enabling the integration of both optics and electronics on the same chip. Information can then be ferried not only with electrons, but also with light.

With the aggressive miniaturization of semiconductor chips, the metal wirings currently used to connect the different components on a single chip will suffer increasingly from problems such as lack of speed and unacceptable levels of power dissipation, eventually limiting chip performance. For instance, in a Pentium 4, more than 50 percent of its power is consumed by metal interconnects. These on-chip light sources could eventually provide an attractive alternative as optical connections that generate less heat and support far higher bandwidth than metal wires.

In addition, the light wavelength from the nanophotonic devices can be tuned by using carbon nanotubes with different diameters. Hence one can make nanotube emitters of both infrared and visible light. The devices can also be made on a flexible substrate. The efficient generation of focused light could plausibly be used for optical probing, for on-chip variable-wavelength light sources for bio-sensors, and for manipulation and spectroscopic analysis in the nanoscale regime, where physical limitations make it impossible to focus light.


Jia Chen is a research staff member of nanometer scale science and technology at IBM’s T. J. Watson Research Center (www.watson.ibm.com) in Yorktown Heights, N.Y.


The business case for universal memory

By Greg Schmergel, Nantero

Nantero is a product-focused intellectual property company whose goal is to develop NRAM – a high-speed, high-density nonvolatile random access memory. In other words, we want to develop a universal memory capable of replacing SRAM, DRAM and/or flash depending on the application.

In the field of nanotechnology, a product focus is not a given. Some business plans and even companies become so focused on the technology itself that they lose sight of the market, which is easy to do given how remarkable nanotechnology is. Nantero’s technology is itself quite exciting – we’re using millions and millions of moving nanotubes to store data – but we strive to maintain a focus on the products and our customers. The memory technology used in end products helps deliver performance improvements and new features. That is what consumers want and expect. We need to deliver memory chips that provide added value over existing and emerging competitors and that become a must-buy for makers of electronic devices.

Electronics designers today choose between several types of memory chips, including DRAM (dynamic random access memory), SRAM (static random access memory) and flash. Each type of memory has its advantages and disadvantages. DRAM is cheap and high-density, but it needs constant refreshing and it’s volatile, so when the power is turned off the data disappears. SRAM is very fast, does not need refreshing, and is easily embeddable in a chip alongside logic, but the cells are very large and it’s also volatile. Flash is non-volatile, but it is comparatively slow, requires block erase, and is not easily embeddable in a chip alongside logic. Almost everyone would prefer to use a universal memory, which combines all the positive attributes of DRAM, SRAM and flash, if only one existed.
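The tradeoffs just described can be captured in a small lookup table. A sketch, with the pros and cons taken directly from the descriptions above (the attribute wording is shorthand, not an industry taxonomy):

```python
# Memory-type tradeoffs as described in the text above.
MEMORY_TYPES = {
    "DRAM": {"pros": ["cheap", "high-density"],
             "cons": ["needs constant refresh", "volatile"]},
    "SRAM": {"pros": ["very fast", "no refresh needed", "embeddable alongside logic"],
             "cons": ["large cells", "volatile"]},
    "flash": {"pros": ["non-volatile"],
              "cons": ["comparatively slow", "block erase", "not easily embeddable"]},
}

# A universal memory would combine the pros of all three types.
universal_pros = sorted({p for t in MEMORY_TYPES.values() for p in t["pros"]})
print(universal_pros)
```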


Flash memory today is available not only in a variety of sizes but also packaged in formats for every conceivable brand of digital camera or handheld computer. A memory architecture that offered the non-volatility of flash along with the speed of SRAM and density of DRAM could be used for many purposes. Photo courtesy of SanDisk

Most manufacturers of memory and logic devices are actively seeking to replace the memory they currently produce or embed alongside logic, because they need to eliminate current memory manufacturing and performance limitations. Their end customers are also demanding that the new memory technologies satisfy future end product specifications. This activity requires significant resources and commitment and would be considered successful only if the new memory production cost is competitive and scalable for many years to come.

Given that a universal memory would have a market in the tens of billions of dollars per year, there have been many large efforts to develop one over the past few decades, with two of the most significant being FRAM (ferroelectric random access memory) and MRAM (magnetic random access memory). Neither has made it to market as a scalable and cost-effective memory, so the field remains open.

Nantero’s memory, called NRAM, is intended to combine the non-volatility of flash with the speed of SRAM and the density of DRAM. Importantly, the manufacturing process is simple, with only one additional mask layer, and requires no new capital equipment. In addition, NRAM’s basic concept is scalable down to below 5 nanometers, which means that it would be a viable memory design for decades to come.

We stay in close contact with the end customers and the electronic device manufacturers who would purchase and integrate the memory. There is tremendous excitement across multiple applications about what could be done with a true universal memory, and all of these customers would benefit by differentiating themselves from their competitors. Laptops would turn on instantly, cell phones would be more powerful and have longer battery life, and a variety of new products could be designed, taking advantage of a substantial increase in the amount of memory that could be embedded in microcontrollers and logic chips such as ASICs or FPGAs.

Nanotechnology often carries with it a misperception that it is closer to science fiction than commercial reality, with complex, breakthrough products being decades away. In reality, companies are already in the later stages of developing highly innovative products that deliver benefits that simply could not be achieved without the control of matter at the molecular level.


Greg Schmergel is co-founder and CEO of Nantero Inc. (www.nantero.com) of Waltham, Mass.


Leaping the hurdles to making nanoelectromechanical memory

By Thomas Rueckes, Nantero

Nantero’s goal is to make a memory that combines the non-volatility of flash with the speed of SRAM and the density of DRAM. Our approach involves using carbon nanotubes to make a nanoelectromechanical memory called NRAM. Many industry experts have predicted that carbon nanotubes will play a critical role in the future of the semiconductor industry. However, those experts also tend to predict that this future will not come to pass for a decade or more. This is because of some substantial challenges Nantero has had to overcome before moving carbon nanotubes beyond fabrication of single devices and into mass production.

The first issue has been a major roadblock – carbon nanotubes are grown from a metal catalyst, which is frequently iron, and they are generally grown in dirty environments, leading to the introduction of even more metals. Production semiconductor fabs have very strict requirements and will not allow levels of metal that might contaminate other materials in the fab and damage wafers. So off-the-shelf nanotube material would not be allowed in any CMOS fab in the world. Nantero had to resolve this by developing a process for purifying carbon nanotubes to meet semiconductor industry standards, meaning only a few parts per billion of metal can remain mixed in with the carbon. Having done this, we entered into a partnership with Brewer Science Inc. to enable the supply chain and mass produce this CMOS-compatible nanotube material.

A second major issue is the placement of the carbon nanotubes. A single walled carbon nanotube is approximately 1 nanometer in diameter, and may be a micron in length. Generally, research into carbon nanotube devices is done by growing the nanotubes directly on the wafer and then measuring their properties, but this is by no means a scalable process. And certainly the nanotubes cannot be individually positioned in a mass production process. Nantero resolved this issue by developing a process for spin coating the nanotubes onto the wafers, and then patterning them using lithography and etching. This process is compatible with existing semiconductor process tools and results in nanotubes being located only where required by design.


This illustration of Nantero’s NRAM architecture shows how carbon nanotube ribbons deflect in response to an electric charge. Van der Waals forces hold the ribbon down, making the memory non-volatile. Image courtesy of Nantero

These issues are examples of how complex it can be to transition from a laboratory environment to a mass production environment. However, once carbon nanotubes can in fact be utilized in production, all sorts of possibilities open up, including Nantero’s NRAM.

NRAM uses carbon nanotube nanoelectromechanical switches to represent bits. In the off-state, the resistance of the bit is in the gigaohm range, whereas in the on-state the resistance of the bit is in the kiloohm range. Thus it is simple to read out the bit and determine its state. The bit state can be changed very rapidly since the carbon nanotubes have a very small mass and move only a very short distance measured in nanometers. And the bits are permanently non-volatile, due to van der Waals forces, which bind them in place in the on-state. Importantly, fabricating NRAM is an elegant process requiring only one additional mask layer, meaning that NRAM is not a costly addition to a process. These advantages are what make NRAM a potential universal memory.
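That gigaohm-versus-kiloohm contrast can be sketched as a simple threshold test. This is a minimal illustration of the readout logic only; the threshold value and function names are hypothetical, not Nantero's actual sense circuitry:

```python
# Illustrative NRAM bit readout. The off-state resistance is in the
# gigaohm range and the on-state in the kiloohm range, so any
# threshold between the two ranges cleanly separates the states.
READ_THRESHOLD_OHMS = 1e6  # illustrative midpoint between the ranges

def read_bit(resistance_ohms: float) -> int:
    """Return 1 for the low-resistance on-state, 0 for the off-state."""
    return 1 if resistance_ohms < READ_THRESHOLD_OHMS else 0

print(read_bit(1e3))  # on-state (kiloohms) -> 1
print(read_bit(1e9))  # off-state (gigaohms) -> 0
```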

In addition, NRAM is intended to be drop-in compatible and integrated easily into existing systems by using existing peripheral circuitry and fitting into existing standard packaging. This means that electronics manufacturers would not have to redesign their laptops, PDAs, cell phones, game consoles, or other devices in any way to take advantage of NRAM.


Thomas Rueckes is chief technology officer of Nantero Inc. (www.nantero.com) of Waltham, Mass.


Making TEM economically viable in semi manufacturing

By Kevin Fahey, FEI

As the semiconductor industry passes through the 65 nm technology node, controlling many of the processes used to manufacture devices will require resolution beyond the capability of scanning electron microscopy (SEM). Transmission and scanning transmission electron microscopy (S/TEM, collectively) provide higher resolution alternatives, but manufacturers have resisted their adoption in mainstream process control and failure analysis applications because of burdensome sample preparation requirements. Perhaps equal in importance to the growing need for better resolution is the need for three-dimensional information, brought on by the increasing complexity of device structures.

DualBeam systems, which combine a focused ion beam (FIB) for cross sectioning with an SEM for imaging, have been widely accepted for their ability to provide access to the third dimension. Now FIB-based TEM sample preparation promises to be an important enabling technology in the transition from SEM to S/TEM for mainstream semiconductor process control and failure analysis.

As shown in the chart accompanying this article, there are many factors driving the transition to TEM. The chart compares the critical length scales for a number of important processes at the various technology nodes from 65 nm down to 22 nm. Clearly many processes already require TEM for adequate control, even at the 65 nm node. The majority of processes are already in the transition region of the chart and many are in the region where only TEM provides adequate capability.


As semiconductor manufacturing moves to progressively smaller technology nodes, more processes require S/TEM capability to achieve adequate control. Data courtesy of FEI

In few industries is the saying “time is money” more true than in semiconductor manufacturing. Processes are highly integrated and automated. Delays at any point in the process flow translate directly into reduced profitability. The problem is exacerbated by the billion-dollar capital investment required for a new fab. Seconds count. Anything that can accelerate the return of an errant process to full yield is valuable. Thus the requirement that manufacturers move from the relatively quick SEM, which can provide results in minutes, to the laborious TEM, which has historically required days, is most unwelcome.

FIB-based sample preparation offers both temporal and economic advantages. A fully integrated tool set can provide first results in less than two hours and produce multiple samples per hour. Although the initial capital investment in a full suite of tools is measured in millions of dollars, cost-of-ownership modeling demonstrates a total cost per analysis as low as $400. Typical cost per analysis for SEM is in the $150 to $200 range.

One of the most important economic benefits of FIB-based sample preparation is the ability to extract a location-specific sample from a wafer and return the wafer to the process. In process development and integration this eliminates variables introduced by looking at different test wafers for each process step. In production it eliminates the cost of scrapping an entire wafer – potentially worth thousands of dollars – in order to obtain a single measurement.
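A minimal sketch of that economics, using the per-analysis figures quoted above (the $5,000 wafer value below is an assumption for illustration; the article says only that a patterned wafer is potentially worth thousands of dollars):

```python
# Per-analysis costs quoted in the text.
FIB_TEM_COST = 400.0  # cost-of-ownership model, full FIB/TEM tool suite
SEM_COST = 200.0      # upper end of the quoted $150-$200 SEM range

premium_per_analysis = FIB_TEM_COST - SEM_COST

# Returning the sampled wafer to the process avoids scrapping it.
# Assumed wafer value, for illustration only:
WAFER_VALUE = 5000.0

analyses_covered = WAFER_VALUE / premium_per_analysis
print(f"One saved wafer offsets the premium on {analyses_covered:.0f} analyses")
```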

Returning to the equation of time with money, two cycle times are of critical importance in semiconductor manufacturing – the development cycle and the process control cycle. The first determines the time required to bring up a new process or bring a new product into production. Fewer, faster cycles get the product to market first, permitting premium pricing and increased profit.

Once a product enters high volume production, profitability is determined primarily by process yield. Process control seeks to maintain maximum yields. Anything that shortens the process control feedback cycle contributes to profitability by detecting yield excursions sooner and determining the root cause faster, ultimately reducing the length of the excursion and its impact on average yield.

Kevin Fahey is general manager of the NanoElectronics fab division at FEI Co. (www.feico.com) of Hillsboro, Ore.


Enabling the transition to high-res imaging in semi manufacturing

By Todd Henry, FEI

The continuing reduction in device size has pushed the imaging and analytical requirements of many semiconductor processes beyond the capabilities of scanning electron microscopy (SEM), requiring a transition to transmission (TEM) and scanning transmission electron microscopy (STEM, or collectively S/TEM) for mainstream applications. S/TEM provides both atomic scale resolution and much better material contrast, but the transition has been impeded by the significantly greater requirements for S/TEM sample preparation – specifically, the requirement for samples thin enough to transmit electrons (100 nm or less).

SEM scans a finely focused beam of electrons over the sample surface and synchronously detects various signals caused by interactions between the beam electrons and the sample atoms. The sequentially acquired signal is assembled into a virtual image that associates signal strength with instantaneous beam location. In semiconductor applications SEM resolution is typically two to three nm. The primary limitation on resolution is beam spreading within the sample.


Focused ion beam-based S/TEM sample preparation uses the ion beam to remove material from either side of the targeted feature until the remaining structure is thin enough to transmit electrons. Images courtesy of FEI

TEM illuminates the entire imaged region of the sample simultaneously with a relatively broad electron beam. It forms a real, magnified image from transmitted electrons using lenses located beyond the sample. STEM may be thought of as a hybrid between SEM and TEM. It scans the sample with a finely focused beam and, using a detector located beyond the sample, constructs a virtual image from transmitted electrons. Both TEM and STEM require very thin samples. The thin sample eliminates much of the beam spreading that degrades SEM imaging and is the primary reason STEM resolution is so much better.

STEM can be performed in some SEMs (at voltages typically less than 30kV) by adding a detector below the thin sample. Dedicated S/TEMs operate at voltages as high as 300kV and can switch easily between modes depending upon specific imaging and analytical requirements. In addition to improved resolution, S/TEM provides greatly enhanced material contrast – essential for distinguishing the many component layers of advanced device designs.


The S/TEM image at right demonstrates higher resolution and better material contrast than the SEM image at left. Images courtesy of FEI

Over the last decade, microscopists have developed new techniques based on the use of focused ion beams (FIB) to expedite sample preparation in order to make S/TEM viable for routine semiconductor applications. These techniques can be highly automated and can offer significant improvements in speed and reliability, as well as reductions in the skills required of the technician.

FIB is similar to SEM in its use of a finely focused beam of charged particles; however, ions are much more massive than electrons and can be used to remove (sputter) material from the sample, much like a microscopic sand blaster. For S/TEM sample preparation the FIB is used to remove material from both sides of the desired thin section and to cut the section free from the bulk sample.

One of the most important advantages of FIB in this application is its ability to navigate precisely to the location of the targeted feature – often a defect detected by routine inspection. FIB may be combined with SEM in an instrument known as a DualBeam where the SEM is configured to look directly at the FIB cross section, thus providing very fine control of the milling process. This can be critical in determining the proper end-point for milling, especially if the target is a one-of-a-kind defect.

Todd Henry is director of the semiconductor fab business at FEI Co. (www.feico.com) of Hillsboro, Ore.


Survival means learning to adapt

By Barry Weinbaum, NanoOpto

As CEO of NanoOpto, a venture-backed product company, I have learned the hard way during the past five years that you cannot make it without a healthy mix of many ingredients, including good fortune and a little bit of luck. Startups are like teenagers finding their way in the grown-up world. Expectations are often misguided and things don’t turn out the way you initially planned. For success in nanotechnology, as with life in general, evolution and adaptation are critical.

NanoOpto designs, develops and manufactures a broad range of discrete and integrated passive optical components based on our proprietary nanotechnology-based processes. Our products are intended to displace traditional optical components used in consumer electronics and communications products worldwide. Specifically, we serve four markets: digital imaging, for which we make a suite of nano-filters for cell phone camera modules; communications, for which we make a family of optical isolators and discrete polarizers for communications network applications; projection display, for which we make optical waveplates, filters and polarizers embedded in projection TVs and metrology systems; and optical drives, for which we make a suite of passive optical components for products like DVD players.

To survive, we have had to be extremely flexible in our business model and market approach. We were financed in early 2001 to use nanotechnology to build next-generation optical components for communications networks. In retrospect, we had chosen the absolute worst time to start a telecom components company. Our markets – and our customers – were crashing down around us.

Bold decisions were required, and we used every resource at our disposal to crawl through a narrow porthole and make it to the other side. As a result, we are a vastly different company today. We have adapted our technology and ourselves to receptive markets. We learned through our customers that it does not matter if you are a nanotech company. It only matters that you offer a better and more cost-effective way of doing things, and that you can execute and deliver. Therefore, we chose a market diversification strategy because our technology platform was capable of such a move, and because the economic impact and improved value we could provide mattered to customers.


Learning to turn on a dime: In the wake of the optical crash earlier this decade, NanoOpto charted a new course into products like the infrared cutoff filters pictured here, which sit between the lens and the image sensor in a digital camera to filter out infrared light. Photo courtesy of NanoOpto

If adapting your strategy to meet real market needs marks the onset of a company’s “adolescence,” then moving into manufacturing is the teenage years. Over the past 18 months, as we began to generate volume-based revenues, operational issues moved front and center. We required manufacturing capabilities that could build products in volume, at high yield and at reasonable cost.

Showing interesting samples was no longer compelling on its own; it became critically important to meet customer commitments with volume product deliveries. Customer relationships would be made or broken with people who had invested their own personal capital believing we would deliver on our claims. And to achieve the right market reach, we needed a distribution network in critical geographies.

For us, that meant we had to supplement what we considered to be an already strong team, adding critical new skills to the company and integrating new people into the team. In other words, we had to become a real company – all the while maintaining and growing our investor syndicate to demonstrate increasing value so we would have the runway to achieve our fullest potential.

Five years into the experience, we have moved from an idea to a product development engine, from samples to volume, from concept to quality. We feel the company is poised to become an adult. As a result, the issues we face today have little to do with nanotechnology per se and everything to do with solving basic business problems.

Though it has been over-hyped, nanotechnology remains full of enormous potential. Like most technology disruptions, it won’t have a sudden impact. Rather, it will make its mark on everyday society over a long period of time by establishing better or more economical ways of doing things. That’s the path to becoming a successful adult, rather than a young prodigy that ends up as a mere flash in the pan.


Barry Weinbaum is president and chief executive of NanoOpto Corp. (www.nanoopto.com) of Somerset, N.J.