
July 2, 2009: Researchers at Georgia Tech have developed a new statistical analysis technique that could lead to more precise and reliable measurements of nanomaterials and nanostructures.

The technique, “sequential profile adjustment by regression” (SPAR), identifies and removes system bias, noise, and equipment-based artifacts, which at the nanoscale may be only slightly weaker than the true signals of interest. It can also help reduce the amount of experimental data required to draw conclusions, and help distinguish true nanoscale phenomena from experimental error.

“At the nanoscale, small errors are amplified,” explained Zhong Lin Wang from Georgia Tech’s School of Materials Science and Engineering. “This new technique applies statistical theory to identify and analyze the data received from nanomechanics so we can be more confident of how reliable it is.”

Specifically, the research focused on a data set measuring the deformation of zinc oxide nanobelts to determine the material’s elastic modulus. Theoretically, applying force to a nanobelt with the tip of an atomic force microscope should produce consistent linear deformation. The experimental data, however, showed that sometimes less force appeared to create more deformation, the deformation curve was not symmetrical, and simple data-correction techniques didn’t solve the mystery. “The measurements they had done simply didn’t match what was expected with the theoretical model,” noted Georgia Tech professor C.F. Jeff Wu. The new modeling technique “uses the data itself to filter out the mismatch step-by-step using the regression technique,” he said.
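The article does not spell out the SPAR algorithm itself, but the step-by-step idea of using regression to strip systematic bias from a measured profile can be sketched. The minimal Python sketch below is illustrative only: the polynomial bias model, helper names, and toy data are assumptions, not the published method.

```python
# A minimal, generic sketch of sequential regression-based profile adjustment.
# The published SPAR algorithm is not reproduced here; the model terms,
# helper names, and toy data are illustrative assumptions only.
import numpy as np

def rms_from_linear(x, y):
    """RMS deviation of y from its own best linear fit in x."""
    fit = np.polyval(np.polyfit(x, y, 1), x)
    return np.sqrt(np.mean((y - fit) ** 2))

def sequential_adjustment(x, y, n_passes=3, bias_degree=3):
    """Iteratively fit a smooth trend, keep its linear part, and subtract the
    remainder as systematic bias, so the profile approaches the expected
    linear force-deflection behavior."""
    adjusted = y.astype(float).copy()
    for _ in range(n_passes):
        trend = np.polyval(np.polyfit(x, adjusted, bias_degree), x)
        linear = np.polyval(np.polyfit(x, adjusted, 1), x)
        adjusted -= (trend - linear)   # remove only the non-linear portion
    return adjusted

# Toy AFM-style profile: linear elastic response + asymmetric bias + noise.
rng = np.random.default_rng(0)
force_nN = np.linspace(0, 100, 50)
profile = 0.5 * force_nN + 0.002 * force_nN**2 + rng.normal(0, 0.5, 50)

adjusted = sequential_adjustment(force_nN, profile)
print("RMS deviation from linearity, raw:     ", round(rms_from_linear(force_nN, profile), 3))
print("RMS deviation from linearity, adjusted:", round(rms_from_linear(force_nN, adjusted), 3))
```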


Georgia Tech researchers illustrate how their new technique improves measurement of nanostructure properties, in this case a graph of elastic modulus of nanobelts. (Georgia Tech Photo: Gary Meek)

In addition to correcting the errors, the technique’s precision could make it easier to produce reliable experimental data on nanostructure properties. “With half of the experimental efforts, you can get about the same standard deviation as following the earlier method without the corrections,” Wu stated. The technique also targets industrial manufacturing environments — i.e., commercialization — “because industrial users cannot afford to make a detailed study of every production run […] the significant experimental errors can be filtered out automatically,” Wu noted.

Future work will apply the statistical technique to the analysis of nanowire properties; the nanowires’ different structure will require a separate model, but the same SPAR approach will be used to correct data errors, Wu noted. The technique also will be applied to past research, possibly generating new findings. “What may have seemed like noise could actually be an important signal,” Wang said. “This technique provides a truly new tool for data mining and analysis in nanotechnology.”

The research, sponsored by the NSF, was published June 25 by the journal Proceedings of the National Academy of Sciences.


SEM images showing a zinc oxide nanobelt on a trenched substrate. An atomic force microscope tip was scanned along the length of the nanobelt with a constant force applied. A series of such scans with the application of different forces produced a bending profile of the nanobelt. These bending measurements were then evaluated using the new SPAR technique to provide information on the nanostructure’s elastic modulus. (Images courtesy of Zhong Lin Wang/Georgia Tech)

July 1, 2009: Researchers at the U. of Michigan have developed a microfluidic device that helps analyze the mechanical behavior of biofilms, colonies of bacteria behind most human infectious diseases.

The “lab-on-a-chip” measures the biofilms’ resistance to pressure. “Think of biofilms as materials that respond to forces, because how they live in the environment depends on that response,” explained Mike Solomon, associate prof. and senior author of the paper, in a statement. And mechanical forces within the body apply here too. “We think a lot of host defense boils down to doing some kind of physical work on these materials, from commonplace events like hand-washing and coughing to more mysterious processes like removing them out of the bloodstream during a serious infection,” he said. “Until you know when the materials will bend or break, you don’t really know what the immune system has to do from a physical perspective to fight this opponent.”

The channel-etched chip (see image), made from a flexible polymer, can study samples of 10μm-50μm biofilms containing 50-500 bacterial cells. Findings suggest greater elasticity than measured with previous methods, and a “strain hardening response” — i.e., as pressure increased, so did the biofilms’ resistance.

The experiments were performed on colonies of Staphylococcus epidermidis and Klebsiella pneumoniae, which are known to cause infections in hospitals; the researchers say the microfluidic device could also be used to measure the resistance of other soft-solid materials, in sectors ranging from consumer products to food science to biomaterials and pharmaceuticals.

The work, partly funded by the National Institutes of Health and the National Institute of General Medical Sciences, was originally published online in February by Langmuir and is the cover story in the journal’s July 7 print edition.


(Source: U. of Michigan/College of Engineering)

with Walt Trybula

The Texas Emerging Technology Fund (ETF) was formed in 2004 to support statewide nanotech commercialization and research efforts, with initial funds totaling $200 million: 50% earmarked for commercialization efforts, 25% to reward research “superiority,” and 25% for matching grants. The current legislative session is expected to keep funding allocations roughly the same as last year: $185M, distributed 70% to commercialization, 15% to research, and 10% to matching grants.

A recent report authored by Walt Trybula, director of the Nanomaterials Application Center at Texas State University-San Marcos, analyzes the state’s funding programs that promote economic development and research commercialization. Trybula, an IEEE and SPIE Fellow, discusses what’s working in Texas’ state support for nanotech ventures, what can be improved, and what’s next.

Q: What has the ETF done right, and where has it not lived up to hopes? And how do you think the ETF should be adjusted to maximize its impact and support for the state’s nanotech efforts?

What we have witnessed is that the fund has brought a number of creative companies to the forefront. The requirements for commercialization awards have evolved; they now call for identifying the university researchers involved and coupling them into the development to be accomplished, which has resulted in some win–win partnerships. The requirements to receive an award have also produced a greater amount of teaming between industry and universities, mentoring of emerging companies by sources of capital, and involvement of the Regional Centers of Innovation and Commercialization.

The report proposes the development of two distinct categories. Public-Private Partnerships [PPP] offer the opportunity to further couple industry and university researchers. The NDCC [Nanomaterials Design Commercialization Center] is only one of a number of examples already happening in Texas. Proof-of-Concept [POC] funding would enable the development of creative ideas that are close to prime time but require a “little” more development. By that point the creative types have probably exhausted all sources of money and are just shy of being able to finish; the POC could provide this final step. We are talking about relatively small amounts of funding to bring this to completion.

Q: What is Texas doing right to be considered a best practice for supporting/accelerating statewide economic growth, in the eyes of other states–and by venture capitalists?

The addition of pre-seed funding to the commercialization portion of the fund has permitted fledgling companies to reach out for funding, which is especially important in today’s economic climate. The state of Illinois reportedly is creating a $15 million fund that, while at a much smaller scale, has a strong resemblance to the commercialization portion of the Texas fund. Imitation is a strong form of flattery.

From a VC perspective, the mentoring and guidance provided to these emerging companies helps focus their efforts on both the commercialization of their technology development and the development of a viable business. Volunteers throughout the state [representing technology, business, and finance] who evaluate proposals and assist the companies in moving toward commercialization number in the multiple hundreds. Being on a proposal review committee, I have witnessed the improvement in the quality of proposals submitted over the last few years. What we have is a community of people throughout the state who are working to help companies succeed in commercializing their product development, which will result in an increase in quality jobs in their communities.

Another advantage of this statewide effort is that business leaders throughout the state are communicating with each other more frequently and comparing best practices, which can then be implemented in their own areas. I talk with people throughout Texas, and for this report had roughly 100 volunteers participate in providing information.

Q: Most of the Texas industry “clusters” have some nanotech component: e.g. Advanced Technologies and Manufacturing Cluster (ATMC), and the NDCC which targets aerospace. How have these clusters changed since 2004; how must they change from 2009 and beyond?

With all six identified Texas industry clusters representing key areas of current employment and the potential for significant increases in job creation, the technologies required to support these clusters become extremely important. Nanotechnology has the potential to advance developments in all of the critical areas.

Business evolution is an absolute necessity due to both rapidly changing technology and global competition. A knowledge infrastructure, like Silicon Valley or Austin, is actually built on a very large number of unsuccessful attempts. The ability to discuss ideas with colleagues who have a similar background is very important in permitting product commercialization along the path that has a high probability of success.

Q: Your report cites how welcoming SEMATECH was a key early win for Texas…but semiconductors are hurting today as much or more as other industries; and SEMATECH has largely uprooted its leading-edge development work to Albany, NY. What’s the next shining example for future industry R&D work in Texas, if not purely semiconductors? How and where can those lessons/skills be transferred to other activities and partnerships?

The MCC [Microelectronics and Computer Technology Corporation] was the early consortium model that was used to develop the SEMATECH model. There are still a number of semiconductor manufacturing sites in Texas; however, the development and construction of new facilities witnessed in the 1990s is probably gone forever anywhere in the world. This industry is evolving to a different model of development and supply that is unlike the industry of the 1980s and 1990s.

However, the basic fundamentals of the semiconductor process can be employed in other processes. Photovoltaic technology being developed throughout the world has a strong reliance on adapting the existing processes being used in manufacturing today. There are some portions of semiconductor manufacturing, especially metrology, which can be applied to the development of nanomaterials manufacturing.


Regional technology/manufacturing clusters. (Source: State of Texas)

A significant amount of medical device development incorporates aspects of nanotechnology, but the absolute “must-have”–I do not like to use the term “killer-app” in this context!–has yet to be identified. With the largest medical complex in the country in the city of Houston and the research being done at universities and medical centers, the development and manufacturing of significant medical breakthroughs has begun.

With the size of the state and its large population, it is very difficult to identify a single region or technology and declare that it has the key presence. Aerospace has a large presence in the Dallas-Fort Worth area as well as the Panhandle, and one cannot ignore the NASA Johnson Space Center. The medical community has multiple locations throughout the state, with some fairly significant concentrations in the larger cities. Texas has four, maybe five, of the top 20 largest cities in the country; the Austin-San Antonio corridor has 265,000 college students within about 120 miles. There are many opportunities for creative people to move forward to commercialize their efforts.

Q: Collaboration among universities is a growing trend, pooling efforts amongst themselves and with industry. Can you address how in general, and in this particular group of Texas institutes of higher education, you are seeing university collaboration–and perhaps also, increased competition?

Collaboration among universities is really a logical progression of the increasing cost of sophisticated equipment and the related expenses for maintenance and upkeep. In 2007, Texas voters approved the creation of a 10-year, $3B cancer research fund, the Cancer Prevention and Research Institute of Texas (CPRIT), with research spread across multiple sites. The six State of Texas university systems, along with private medical research institutions, have agreed upon a common set of terms and conditions to apply to this research effort.

We have not worked out all the details of collaborative efforts, but we have multiple schools working together to increase the productivity of our research efforts; this includes private schools. While there will always be competition among schools and even among professors within the same institution, no single institution has all the tools and researchers it needs to solve every problem. Collaborating provides the ability to multiply their resources and researchers at any given institution by bringing in complementary capabilities to enhance research efforts. Yes, the researchers must have defined responsibilities and leadership on the various projects to minimize the complications that could arise without them.

There is a perception that all it takes to improve research capabilities is to go out and acquire equipment. In reality, it is the equipment plus the expertise of the researchers and the support of the technicians that provides the ability to do advanced research and development. While both Rice University and the University of Houston have excellent materials characterization capabilities, combining the expertise of the researchers at both institutions and focusing on the most appropriate equipment to do the evaluations of [their] capabilities [makes them] among the best in the world.

The MEMS Industry Group, a trade association representing the MEMS and microstructures industries, has launched a MEMS Marketplace online “matchmaking” portal that enables MEMS companies to connect with prospective customers and partners.

The portal is designed for companies in the entire MEMS supply chain, from material suppliers to original equipment manufacturers. It also provides a networking forum for MEMS companies interested in collaborative or customer relationships. Users can search for specific products and services offered by MEMS device manufacturers, foundries, wafer suppliers, equipment suppliers, MEMS-specific software providers and market research analysts. Information is broken down into four search options: by category, product, company, and industry.

Participating companies can manage their profiles, update product/service listings, and post recent press releases by logging in to the profile management area of MEMS Marketplace.


Nano to fight flying fish? USGS seeks funding

Advanced BioNutrition Corp. and the US Geological Survey are looking to partner on a project to evaluate whether nanotechnology can control flying carp in Wisconsin waterways.

According to the local Onalaska Holmen Courier-Life, a USGS biologist asked the Lake Onalaska Protection and Rehabilitation District if it could test MicroMatrix, a product that delivers a range of bioactive compounds in animal and human foods, at its French Island facility to see if it will work with flying carp and other aquatic invasive species. The USGS doesn’t have the roughly $3 million it would cost to fund the first year of the three- to five-year study; so while Advanced BioNutrition builds a working prototype the USGS can test, local officials are writing to congressional reps, fishing for funding.


Winners, losers in 2008 MEMS standings

Overall sales for the top 30 MEMS manufacturers inched up 2% in 2008 to $5.5B, held back in large part (and to little surprise) by the global economic malaise, according to a recent report by Yole Développement. A closer look, however, reveals more distinct shifts and patterns among suppliers and product segments.

The top two suppliers, HP and Texas Instruments, both saw sales decline, but still held onto their positions by a wide margin. New No. 3 STMicroelectronics moved ahead of Robert Bosch, followed by Canon and Seiko Epson (all separated by ~$16M); Freescale came in seventh, with the top 10 rounded out by Lexmark, Analog Devices, and Avago (all separated by a small margin, ~$19M or <10% of sales). On the other end of the scale, Kionix and Micralyne entered the top 30 rankings for the first time; Delphi and Sanyo were bumped out. (Yole notes its rankings only include providers of silicon MEMS chips.)


Preliminary top 30 worldwide MEMS manufacturers based on estimated 2008 MEMS revenues.

Top growth in 2008 went to Kionix (70%), with 18 of the top 30 MEMS manufacturers showing sales growth vs. 2007. FormFactor (-51%) suffered the biggest decline among the 12 firms that saw sales slide year-on-year.

The economic crisis was felt across the MEMS spectrum, but its depth varied among product sectors. Automotive was likely hurt the worst (-10% to -20%), though within this sector emerging devices such as tire pressure monitoring systems fared better than mature products such as airbag accelerometers. Individual companies within sectors fared differently, too. Systron Donner sank 14% to 13th place, and VTI lost 10% (in euros), attributed to a slowdown in the automotive market. Bosch, meanwhile, though it also suffered a 10% decrease, is better positioned with a new 200mm fab ready to start production when conditions improve, Yole surmised.

Consumer markets obviously were hurt too, but results also varied by sector. Makers of ink-jet heads saw a 15% decrease in sales, with unit shipments declining as well. Inertial MEMS products, however, generally saw growth in the range of several percent, with STM and ADI pacing this sector. ST got a boost from a 42% (in euros) jump in its accelerometer business.

Other snippets from the Yole report:

  • Avago’s 2008 sales ($183M, #10 overall) don’t include the $38M (Yole estimate) attributed to Infineon’s former bulk acoustic wave (BAW) filter business, which it bought in September – that would probably be enough to leapfrog #9 ADI. Measurement Specialties, meanwhile, enjoyed a five-spot boost through its integration of Intersema (in January 2008, adding ~$17M).
  • Boehringer Ingelheim microParts enjoyed 9% growth (in euros) thanks to the biomedical market.
  • Texas Instruments saw its DLP chip sales sink about 13% (in US $).
  • Panasonic (#16, $124M, up three spots) likely has taken market share in gyroscopes from Murata (#20, $86M, -4%) and SSS (#30, $30M, -14%), Yole says.
  • As for the two who fell off the list: Sanyo stopped its foundry activity, and Delphi “has dramatically reduced its MEMS staff,” Yole pointed out.

Editor’s note: Yole and SEMI will again publish their MEMS supply chain market report as a benefit for SEMI members, detailing key developments impacting equipment and materials suppliers, and the market outlook for these sectors.


MEMS sector takes hit from auto, economy slump

As the economy slogs along, big-ticket consumer purchases such as cars have dried up; shipments slipped 8% in 2008 and are expected to sink 19% in 2009. And that’s bad news for, among others, suppliers of automotive electronics, noted iSuppli in a pair of reports tracking the sector.

Notable casualties of the hurting auto sector are MEMS sensor suppliers, whose technology is used for applications such as vehicle stability control, airbags, and satellite navigation. MEMS sensor companies saw sales decline more than the auto industry itself in 2008 (-6% to -15%). Industry leader Robert Bosch GmbH led the pack with $429M in sales (~80% of that for internal consumption in its automotive subsystems) and a 6.1% Y/Y decline (Figure 1).


Figure 1. Global automotive MEMS supplier sales.

The strain on automotive MEMS suppliers has caused casualties. Systron Donner Automotive, the world’s second-largest supplier of quartz MEMS gyroscopes for cars (behind Bosch), was shut down by French parent Schneider Electric, which laid off all engineers and left a skeleton crew to meet contractual commitments. “This is a major turnaround for a company that sold nearly $105 million worth of MEMS vehicle dynamics gyroscopes in 2008,” noted Richard Dixon, iSuppli senior analyst for MEMS, in a statement. “The company was under competitive siege and already was beginning to lose market share at its key long-time customer, Continental, to Panasonic, which is offering a cheaper product.”

Meanwhile, Infineon has said it wants to sell off its Norwegian unit Sensonor to private investors. “The recent downturn […] has especially hit the market for tire pressure monitoring sensors (TPMS),” Dixon said. Shedding the unit “will help balance Infineon’s books in the short term and has little impact on its market-leading position.” Some process steps done at Sensonor’s site in Horten, Norway, will be merged with Infineon’s TPMS production in Austria, simplifying the supply chain, he said. “But the major impact is to Infineon’s capability to innovate, as the Sensonor group represented an R&D team par excellence.”

Government mandates for multiple MEMS-driven capabilities such as gyroscopes, accelerometers, and pressure sensors for tires and brakes have kept the sector from sliding any further. “In the past TPMS has been presented as the new El Dorado of the automotive MEMS market,” wrote Dixon. “Today TPMS is a US market due to a mandate that required fitment in all cars by the end of 2007.” He forecasts the auto MEMS sector will return to “healthy” growth in 2010 and double-digit revenue growth in 2011, as such systems become mandatory on vehicles in the US starting in 2012 and in the EU starting in 2014 (Figure 2).


Figure 2. MEMS market in US $M for automotive applications, 2006-2013.

One key point about that growth hinges on which type of TPMS will be adopted. “Indirect” TPMS uses an algorithm to model wheel rotation speed, requiring only one sensor (which makes the system less expensive). Direct TPMS, now being deployed in the US, uses separate sensors inside each tire to detect pressure levels (and is more accurate, and more expensive). The EU is still working on its regulations, and which system to require is as yet undecided; final regulations are due in November. “Much of the growth in the future market will hinge on whether or not indirect systems can meet the accuracy requirements of the European mandates,” Dixon wrote.
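As a rough illustration of the indirect approach described above, the sketch below flags a wheel that is spinning measurably faster than the others, since an under-inflated tire has a smaller rolling radius. The threshold and wheel-speed figures are illustrative assumptions; production systems use far more sophisticated models.

```python
# A minimal sketch of the idea behind "indirect" TPMS: infer a soft tire from
# wheel-speed (ABS) data rather than a pressure sensor in each tire.
# Thresholds and data here are illustrative assumptions only.
from statistics import mean

def flag_underinflated(wheel_speeds_rpm, threshold=0.02):
    """Flag wheels spinning measurably faster than the average, which suggests
    a smaller rolling radius, i.e. a loss of pressure."""
    avg = mean(wheel_speeds_rpm)
    return [i for i, rpm in enumerate(wheel_speeds_rpm)
            if (rpm - avg) / avg > threshold]

# Front-left, front-right, rear-left, rear-right (rear-left is soft).
print(flag_underinflated([750.0, 751.0, 772.0, 749.0]))   # -> [2]
```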

Government mandates aren’t just key to growth in the auto MEMS sector, they’ve significantly reshaped the landscape of suppliers. “Taking a technology that has only been used in luxury cars in the past and putting it into every car, including those that cost less than $10,000, is a big challenge for the major established players in the MEMS market,” Dixon noted (citing as Exhibit A the Schneider Electric/Systron Donner shuttering).

Another key factor in auto MEMS growth is China, which is poised to become the global leader in auto production in 2009. “Unlike India, China is not a low-cost market for cars and there is a higher sensor content in the cars it makes,” pointed out Jérémie Bouchaud, iSuppli principal analyst for MEMS, adding that China also imports a lot of cars. The biggest opportunities are in power-train sensors; “safety is not expected to be the biggest driver in terms of sensor suppliers,” he noted.


NYU touts DNA-enabled nanoparticle glue

Researchers at New York University say they’ve created a method to precisely bind nanoparticles into larger structures that overcomes a “sticky” problem and enables creation of stable, sophisticated microscopic and macroscopic structures.

The work, reported in an advance online publication of Nature Materials, confronts the challenge of self-replication, in which the number of objects doubles in each cycle rather than growing linearly, as a route to fabricating microscopic objects with a sophisticated architecture.

Their solution? Coat micrometer-sized particles with short stretches of DNA (“sticky ends”), each with a particular sequence of DNA building blocks; those with complementary sequences form reversible bonds at a certain temperature. Thus, the particles can be organized in a controlled fashion onto a template, and then released again.


The novel DNA ‘sticky ends’ can form intra-particle loops and hairpins (e.g. schemes II & III), giving more control over the particles’ interactions than conventional sticky ends that can only form inter-particle bridges (scheme Ia).

DNA-mediated interactions are known, but selectively binding only subsets of particles (rather than all of them) into structures has proven difficult. So the researchers at NYU’s Center for Soft Matter Research and the university’s Department of Chemistry focused on a particular type of DNA sequence that can fold like a hairpin and bind to neighboring “sticky ends.” Rapidly lowering the temperature, they determined, caused the sticky ends to fold up on the particle before they could bind to other sticky ends. This protection lasted long enough (a few minutes) for the sticky ends to find binding partners on other particles moved around by optical traps, thus building a structure (see the movie caption below). “We can finely tune and even switch off the attractions between particles, rendering them inert unless they are heated or held together–like a nano-contact glue,” said Mirjam Leunissen, the study’s lead author, in a statement.


Micrometer-sized particles, functionalized with self-protective scheme II sticky ends, are collected in a circular array of point-like optical traps. Relatively low system temperature gives good self-protection of the sticky ends and thus ample time to release superfluous particles from doubly occupied traps without forming unwanted doublets. Near the end of the movie, we shrink the array to bring the particles in close proximity.

Potential applications include ordering arrays of these particles into optical devices such as sensors and photonic crystals. The same organizational principles also apply to smaller nanoparticles, which have a range of useful electrical, optical, and magnetic properties, NYU noted.

The work was supported by the NSF’s Materials Research Science and Engineering Center (MRSEC) program, the Keck Foundation, and the Netherlands Organization for Scientific Research.


Russian officials at odds over nanotech success

Recent media reports indicate Russia wants to better track its nanotech production, and is traveling the globe to press its interests and grow its position in the worldwide market–but a top government official isn’t convinced that current efforts are up to the job.

Nobody currently knows how much “nanotechnology production” there is in the country, pointed out Anatoly Chubais, the head of state-owned nanotech business group RUSNANO, which is teaming with the State Statistics Service to develop a tracking system. RUSNANO reportedly wants to help Russian companies win 3% of the global nanotechnology market by 2015 as part of the government’s drive to diversify the economy, according to the Moscow Times.

Russia isn’t just examining its own domestic capabilities. It has proposed pumping at least $10M into Canada’s nanotech industry, according to a report by Canwest News Service; the investment could inject life into the sector, according to Neil Gordon, former head of the now-defunct Canadian NanoBusiness Alliance, who told Small Times contributing editor Howard Lovy that the “Canadian government [has] ignored the massive economic development opportunity from nanotechnology.”

RUSNANO’s Chubais also reportedly was in Israel in March to deepen discussions about ways the two countries can cooperate in nanotechnology development, meeting with President Shimon Peres (a longtime nanotech-development advocate) and Prime Minister-designate Benjamin Netanyahu, reported the Itar-Tass news service. RUSNANO representatives met with Israeli scientists and businessmen last fall.

But Russia’s top government official isn’t fully on board with the idea that state-owned corporations are up to the job in nanotech. “[Rusnano] is the kind of instrument that sometimes works and sometimes doesn’t work at all,” said Russian President Dmitry Medvedev, quoted by the Moscow Times. The group, he added, is a “large structure that has a lot of money and that still has to understand how to correctly spend it,” so that it is not blamed for wasting it in the future.


Ears have nanoscale ‘flexoelectric’ motors

Utah and Texas researchers have discovered what they call a “nanoscale motor” in the human ear: hair-like tubes atop “hair cells” that dance back and forth, acting as “flexoelectric motors” that amplify sound mechanically.

Previous research elsewhere indicated that hair cells (each ~10µm × 30-100µm) within the cochlea of the inner ear can “dance” (elongate and contract) to help amplify sounds. The new study, led by Richard Rabbitt, professor and chair of bioengineering at the University of Utah College of Engineering, with colleagues at Utah and Baylor College of Medicine in Houston, shows sounds also may be amplified by the back-and-forth flexing or “dancing” of “stereocilia,” the 50-300 hair-like nanotubes (1-10µm × ~200nm) projecting from the top of each hair cell. This flexing converts an electric signal generated by incoming sound into mechanical work (more flexing of the stereocilia), thereby amplifying the sound via a “flexoelectric effect.”

The tops of the stereocilia tubes are connected by protein filaments; at those connection points is an “ion channel” that opens and closes as the bundle of stereocilia sway back and forth. When the channel opens, electrically charged calcium and potassium ions flow into the tubes, which changes the electric voltage across the membrane encasing each stereocilium, making the tubes flex and dance even more. Such flexoelectricity amplifies the sound and ultimately releases neurotransmitter chemicals from the bottom of the hair cells, sending the sound’s nerve signal to the brain.


Cross-section of part of the cochlea, the fluid-filled part of the inner ear that converts vibrations from incoming sounds into nerve signals that travel to the brain via the auditory nerve. University of Utah and Baylor College of Medicine researchers found evidence that stereocilia–bundles of tiny hair-like tubes atop “hair cells” in the cochlea–dance back and forth to mechanically amplify incoming sounds via what is known as the “flexoelectric effect.”

The researchers estimate the combined flexoelectric amplification by hair cells and the stereocilia atop them enables humans to hear the quietest 35-40 dB of their range of hearing. Rabbitt says the flexoelectric amplifiers are needed to hear sounds quieter than the level of comfortable conversation; the cells are said to be sensitive enough to detect sounds almost as small as those caused by Brownian motion.

The researchers’ calculations and computer simulations deduced that “a longer stereocilium was more efficient if it was receiving low-frequency sounds,” while shorter stereocilia most efficiently amplified high-frequency sound.

In addition, the researchers speculate that flexoelectrical conversion of electricity into mechanical work also might be involved in processes such as memory formation and food digestion. The stereocilia involved in amplifying hearing are similar to other tube-like structures in the human body, such as villi in the gut, dendritic spines on the signal-receiving ends of nerve cells, and growth cones on the signal-transmitting axon ends of growing nerve cells.

The study, part of an effort by researchers to understand the amazing sensitivity of human hearing, is published in PLoS ONE, a journal published by the Public Library of Science.


IMEC paves way to deep-brain stimulation

IMEC says it has created a prototype multi-electrode stimulation and recording probe for deep-brain stimulation, which beyond the medical applications highlights the opportunities in the healthcare market for design tool developers.

Brain implants for electrical stimulation of specific brain areas are used as a last-resort therapy for brain disorders such as Parkinson’s disease, tremor, or obsessive-compulsive disorder. Conventional deep-brain stimulation probes use millimeter-size electrodes which stimulate a large area of the brain, “and have significant unwanted side effects,” IMEC notes. However, more precise stimulation and recording is achievable with electrodes as small as neurons built using semiconductor process technology, design tools, and electronic signal processing, notes Wolfgang Eberle, senior scientist and project manager at IMEC’s bioelectronics research group.

IMEC’s design and modeling strategy relies on finite-element modeling of the electrical field distribution around the brain probe (using multiphysics simulation software COMSOL); this also enabled investigation of the mechanical properties of the probe during surgical insertion and the effects of temperature. Results indicate that adapting the penetration depth and field asymmetry allows steering the electrical field around the probe, which results in high-precision stimulation. Another key was development of a mixed-signal compensation scheme enabling multi-electrode probes capable of stimulation as well as recording, needed to realize closed-loop systems.
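IMEC’s actual modeling used COMSOL finite-element simulation; as a much cruder back-of-the-envelope illustration of the field-steering idea, one can superpose point-current-source potentials, V = I/(4πσr), from two electrode contacts in a homogeneous medium and watch how splitting the current shifts the potential at a nearby target. The geometry, conductivity, and currents below are assumptions for illustration only, not IMEC’s model.

```python
# Not IMEC's COMSOL finite-element model: a back-of-the-envelope illustration
# of field steering, superposing point-current-source potentials
# V = I / (4*pi*sigma*r) from two electrode contacts in a homogeneous medium.
# Geometry, conductivity, and currents are illustrative assumptions.
import numpy as np

sigma = 0.3          # assumed tissue conductivity, S/m (typical textbook value)
contacts = np.array([[0.0, 0.0, 0.0],     # contact positions along the probe, mm
                     [0.0, 0.0, 1.0]])

def potential_uV(point_mm, currents_uA):
    """Potential (microvolts) at a point, by superposition over all contacts."""
    v = 0.0
    for pos, i_uA in zip(contacts, currents_uA):
        r_m = np.linalg.norm(point_mm - pos) * 1e-3       # mm -> m
        v += (i_uA * 1e-6) / (4 * np.pi * sigma * r_m)    # volts
    return v * 1e6

target = np.array([0.5, 0.0, 0.8])   # a nearby "target" location, mm
print("all current on contact 0:", round(potential_uV(target, [100, 0]), 1), "uV")
print("current split 30/70:    ", round(potential_uV(target, [30, 70]), 1), "uV")
```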


A prototype multi-electrode stimulation and recording probe for deep-brain stimulation.

The result of IMEC’s work is creation of brain implants consisting of multiple electrodes with simultaneous stimulation and recording. Prototype probes with 10µm-size electrodes and various electrode topologies have been built; the new design approaches also point to ways to achieve more effective stimulation with fewer side effects, reduced energy consumption (due to focusing the stimulation current on the desired brain target), and closed-loop control adapting the stimulation based on the recorded effect.


Ex-EPA official calls for new agency to oversee nanotech

Existing health and safety agencies are unable to cope with the risk assessment, standard setting, and oversight challenges of advancing nanotechnology–so J. Clarence (Terry) Davies, in a recent paper, “Oversight of Next Generation Nanotechnology,” calls for a new Department of Environmental and Consumer Protection to oversee product regulation, pollution control, and monitoring and technology assessment.

The proposed agency would foster more integrated oversight and a unified mechanism for product regulation to deal with current problems like toxics in children’s toys and newer challenges like nanotechnology. A more integrated approach to pollution control was necessary even before EPA was created, and since that time the need has only increased, according to Davies.

“Federal regulatory agencies already suffer from under-funding and bureaucratic ossification, but they will require more than just increased budgets and minor rule changes to deal adequately with the potential adverse effects of new technologies,” he says. “New thinking, new laws, and new organizational forms are necessary. Many of these changes will take a decade or more to accomplish, but there is an urgent need given the rapid pace of technological change to start thinking about them now.”

Davies served during the George H.W. Bush administration as Assistant Administrator for Policy, Planning and Evaluation at the US Environmental Protection Agency. In 1970, as a consultant to the President’s Advisory Council on Executive Organization, he co-authored the plan that created EPA. As a senior staff member at the Council on Environmental Quality, he wrote the original version of what became the Toxic Substances Control Act (TSCA).


MIT virus battery could power cars

MIT researchers have genetically engineered viruses to build both the positively and negatively charged ends of a lithium-ion battery with energy capacity and power comparable to the batteries being considered for hybrid cars; the technology could also be used for personal electronic devices.

The new batteries, described in the April 2 online edition of Science, could be manufactured with a cheap and environmentally benign process: The synthesis takes place at/below room temperature and requires no harmful organic solvents, and the materials that go into the battery are non-toxic.

In a traditional lithium-ion battery, lithium ions flow between a negatively charged anode (usually graphite) and the positively charged cathode (usually cobalt oxide or lithium iron phosphate). Angela Belcher and co-researchers already had engineered viruses that coat themselves with cobalt oxide and gold and self-assemble to form a nanowire.

Their latest work extends that effort by building a cathode to pair with the anode. Because most candidate materials for cathodes are highly insulating (non-conductive), the team genetically engineered viruses (a common bacteriophage) that first coat themselves with iron phosphate, then grab hold of carbon nanotubes to create a network of highly conductive material. As the viruses recognize and bind specifically to the CNTs, each iron phosphate nanowire can be electrically “wired” to conducting carbon nanotube networks. Electrons travel along the CNT networks, percolating throughout the electrodes to the iron phosphate and transferring energy.

In lab tests, batteries with the new cathode material could be charged and discharged at least 100× without losing any capacity; that’s fewer charge cycles than currently available lithium-ion batteries, but Belcher predicts “much longer” lifetimes. The prototype is packaged as a typical coin cell battery, but the technology allows for the assembly of very lightweight, flexible, and conformable batteries that can take the shape of their container. Future work will pursue better batteries using materials with higher voltage and capacity, such as manganese phosphate and nickel phosphate, and from there look to improve the technology and processes for commercial production.


Catilin, Ames develop algae ‘nanofarming’

Algae as a biofuel feedstock is a promising field. Up to 10,000 gallons of oil can be produced on a single acre of land; the US Department of Energy extrapolates that replacing all the petroleum fuel in the US would require 15,000 sq. miles, less than 1/7 the area devoted to corn production (and just a tad bigger than Maryland). One of the challenges in creating promising biofuels from algae is that extracting the oil tends to kill the organisms. To this end, researchers at Iowa State University and the DoE’s Ames Laboratory say they have developed a “nanofarming” technology that safely harvests oil from the algae so the pond-based “crop” can keep on producing–and keep costs low.
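A quick back-of-the-envelope check of the acreage figures quoted above can be run as follows; the US corn acreage and Maryland’s area are not from the article and are assumed here purely for comparison.

```python
# Back-of-the-envelope check of the area figures quoted above. The US corn
# acreage (~87 million acres) and Maryland's area (~12,400 sq. miles) are
# outside the article and are assumptions for comparison only.
ACRES_PER_SQ_MILE = 640

algae_sq_miles = 15_000
algae_acres = algae_sq_miles * ACRES_PER_SQ_MILE
oil_gallons = algae_acres * 10_000          # 10,000 gal/acre, per the article

corn_acres = 87_000_000                      # assumption (USDA-scale figure)
maryland_sq_miles = 12_400                   # assumption

print(f"algae area: {algae_acres/1e6:.1f} M acres -> {oil_gallons/1e9:.0f} B gallons of oil")
print(f"fraction of corn acreage: {algae_acres/corn_acres:.2f} (1/7 = {1/7:.2f})")
print(f"vs Maryland: {algae_sq_miles/maryland_sq_miles:.2f}x its area")
```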

The “nanofarming” technology uses nanoparticles to extract oil from the algae; once the algal oil is extracted, a separate solid catalyst from Catilin will be used to produce ASTM (American Society for Testing and Materials) and EN certified biodiesel.

Ames Labs, Iowa State, and Ames spinoff Catilin are involved in a three-year cooperative R&D project; phases one and two will cover the culturing and selection of microalgae as well as development of the specific nanoparticle-based extraction of algal oil, and catalyst technologies for production of biodiesel. Phase three will focus on scale-up of the catalyst and pilot plant testing on conversion to biodiesel.


Stanford sets new record for smallest letters

A novel technique is enabling Stanford researchers to push individual molecules into specifically arranged patterns, and reclaim their title of producers of the world’s smallest letters.

The researchers encoded 35 bits of information per electron and wrote the letters “S” and “U” (of course) composed of 0.3nm bits, a feat that edges out researchers at Japan’s Hitachi, who in 1991 set the record for microscopic calligraphy by chiseling 1.5nm-tall letters into a crystal. The demonstration suggests information could be stored far more densely, providing greater speed and storage capacity for modern computers.

Using a scanning tunneling microscope, researchers Hari Manoharan and Christopher Moon arranged individual carbon monoxide molecules on a copper surface in a complicated 2D pattern with a void in the middle, into which electronic versions of the letters were projected. The electrons naturally flowing across the copper surface scatter off the carbon monoxide molecules, projecting holographic patterns of the letters into the void. Essentially, the pattern functioned as a molecular hologram, illuminated with electrons instead of light, they claim.


Molecular holograms are fashioned with scanning tunneling microscope manipulation. When illuminated by two-dimensional electron gas, a three-dimensional holographic projection is created.

“Imagine the copper as a very shallow pool of water into which we put some rocks [the carbon monoxide molecules],” said Manoharan, in a statement. “The water waves scatter and interfere off the rocks, making well-defined standing wave patterns.” If the rocks are positioned just right, the wave patterns will form into letters.
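A toy numerical version of that analogy, not the actual electron-holography calculation, can be written by summing circular waves scattered from a few point “rocks” and inspecting the interference pattern; the scatterer positions and wavelength below are arbitrary assumptions.

```python
# A toy version of the "rocks in a shallow pool" analogy: sum circular waves
# scattered from point scatterers and look at the interference pattern. This
# is plain 2D wave interference, not the actual electron-holography math;
# scatterer positions and wavelength are arbitrary.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
scatterers = [(-2.0, 0.0), (2.0, 0.0), (0.0, 2.5)]   # the "rocks"

x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
field = np.zeros_like(x)
for sx, sy in scatterers:
    r = np.hypot(x - sx, y - sy) + 1e-9              # avoid divide-by-zero
    field += np.cos(k * r) / np.sqrt(r)              # outgoing circular wave

# Bright and dark fringes appear where the scattered waves interfere.
print("intensity range:", field.min().round(2), "to", field.max().round(2))
```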

The research, supported by the National Science Foundation, the DoE’s SLAC National Accelerator Laboratory, the Stanford Institute for Materials and Energy Science, the Office of Naval Research, and the Stanford-IBM Center for Probing the Nanoscale, was published online in Nature Nanotechnology.


Gold nanoparticles could ‘cook’ cancer cells

Researchers presenting at the American Chemical Society’s 237th National Meeting in March, and in Clinical Cancer Research in February, described an advance in the nanotech-enabled fight against cancer: the first hollow gold nanospheres that can search out and “cook” cancer cells, showing particular promise as a minimally invasive future treatment for malignant melanoma, the most serious form of skin cancer.

The nanospheres are equipped with a special “peptide” that targets a protein receptor abundant in melanoma cells, which draws the spheres to the cancer cells while avoiding healthy skin cells. After collecting inside the cancer, the nanospheres heat up when exposed to near-infrared light. Studies in mice showed the hollow gold nanospheres did 8× more damage to skin tumors than the same nanospheres without the targeting peptides.

“It’s basically like putting a cancer cell in hot water and boiling it to death. The more heat the metal nanospheres generate, the better,” explained study co-author Jin Zhang, a professor of chemistry and biochemistry at the University of California in Santa Cruz, in a statement. Zhang’s team worked with Chun Li at the University of Texas M.D. Anderson Cancer Center in Houston.


Transmission electron microscope images, showing (A) hollow gold nanospheres with average size of ~30nm, and (B) an individual hollow gold nanosphere with diameter 29.1nm and wall thickness ~5nm. (Image courtesy of Jin Zhang)

This form of cancer therapy is a variation of photothermal ablation (photoablation therapy, or PAT), a technique that uses light to burn tumors–but it can also destroy healthy skin cells, so it requires careful control of duration and intensity. Applying a light-absorbing material such as metal nanoparticles to the tumor greatly enhances the PAT treatment, but ideal candidates must have both good penetration into the cells and limited heat-carrying capacity. Solid gold nanoparticles and nanorods don’t possess both qualities; Zhang’s 2006 creation, gold nanoshells (30-50nm in size), did, and they were safer than other metal nanoparticles, he noted.

Li emphasized, though, that the next step is human trials, which will require “extensive preclinical toxicity studies,” and that “there is a long way to go before it can be put into clinical practice.”

Nanoscale properties enable improved thermoelectrics, NEMS gas sensors

By Dr. Paula Doe, contributing editor

The much-hyped properties of materials at the nanoscale are finally starting to be applied in real electronics applications, ranging from near-ideal thermoelectric material based on spray-on semiconductor nanocrystals, to transparent conductive films made from carbon nanotubes and self-assembled silver nanoparticles, to ultrasensitive nanoscale MEMS gas sensors.

Nanoscale materials properties are enabling efficient, low-cost thermoelectric materials. Though long studied, thermoelectric conversion of heat to electricity has never been efficient enough to be practical for most applications. “They were stuck with natural materials,” says Evident Technologies CEO Clinton Ballinger. “But with nano-structured materials you have a lot more materials to work with. You can change the thermal properties.” That means it is possible to design something that approximates the ideal thermoelectric material, conducting electricity well but not heat. Modeling suggests that the most efficient structure would be point sources of excited electrons distributed evenly in a matrix–and that this ideal material can be approximated quite well by a low-cost solution, using a colloidal ink containing semiconductor nanocrystals to create a bulk material while retaining the nano properties.
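The trade-off Ballinger describes is conventionally captured by the dimensionless thermoelectric figure of merit, ZT = S²σT/κ, which the article does not quote; the sketch below uses generic textbook-scale numbers, not Evident’s material data, to show how suppressing heat conduction raises ZT.

```python
# The standard thermoelectric figure of merit, ZT = S^2 * sigma * T / kappa,
# captures the "conduct electricity well but not heat" trade-off described
# above. The article does not quote these quantities; the values below are
# generic illustrative numbers, not Evident Technologies' material data.
def zt(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K=300.0):
    return (seebeck_V_per_K ** 2) * sigma_S_per_m * T_K / kappa_W_per_mK

bulk = zt(200e-6, 1e5, 2.0)   # bulk-like material
nano = zt(200e-6, 1e5, 0.5)   # same, but with phonon (heat) transport suppressed
print(f"ZT bulk ~ {bulk:.2f}, ZT nanostructured ~ {nano:.2f}")
```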

Ballinger suggests that some of the first markets for this technology will be in the semiconductor industry, where it could enable efficient, flexible, solid-state cooling for integrated circuits and LEDs. This could greatly reduce the size or need for a heat sink, he argues, and potentially improve performance. “Right now we’re just spraying it on with an airbrush,” he notes. “So it could likely be coated right on the chip for thermoelectric cooling.”

First application is likely to be for less sophisticated solid-state cooling, though, such as spot cooling for things like wine coolers. But eventual markets for low-cost roll-to-roll coated thermoelectric films could also include waste heat recovery in automobiles and central power stations, general heating and cooling, and even power generation.

Likely closer to market are transparent conductive films using innovative nanomaterials to potentially challenge ITO. Unidym is sampling a transparent conductive film based on carbon nanotubes for touch panel displays and readying production capacity. Cima NanoTech is similarly sampling its flexible film product based on self-assembly of silver nanoparticles, for which Toray Advanced Films is the production coating partner.

Pushing MEMS to the nanoscale opens up new potential as well. “The advantages go beyond scaling,” says Caltech professor Michael Roukes, whose lab has been driving developments for the last 15 years. “The physics scales in a profound way.” This means MEMS-based detectors in an electronic nose can be made significantly more sensitive, as well as scaled down in size by about a million fold, compared to the existing state-of-the-art–and made with efficient wafer-scale processes.

Roukes’ lab and CEA-Leti are now routinely mass producing arrays of these nano MEMS sensors on 8-in. wafers, and recruiting corporate partners to their Alliance for Nanosystems VLSI for the final stage of developing the MEMS and CMOS processes to integrate them into practical low-cost gas-phase chemical sensors, to monitor toxic industrial gases and gas phase processes, or to analyze human breath to detect diseases.

The detectors are essentially arrays of nanoscale MEMS resonators–fancy versions of guitar strings–set within MEMS flow channels. The resonators are coated with a kind of chemical sponge that absorbs the target material, which changes the mass of the resonator. The gas is first sent through a chip-scale version of a gas chromatograph, to simplify the identification problem.
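The sensing principle can be summarized by the standard mass-loading relation for a resonator, Δf ≈ -(Δm/2m)·f0; the device mass, frequency, and absorbed mass in the sketch below are illustrative assumptions, not figures from the Roukes lab or CEA-Leti devices.

```python
# A sketch of the standard mass-loading relation for a resonant sensor:
# an absorbed mass dm shifts the resonance by df ~ -(dm / 2m) * f0.
# The device mass, frequency, and absorbed mass below are illustrative
# assumptions, not figures from the Roukes lab / CEA-Leti devices.
def frequency_shift_hz(f0_hz, device_mass_kg, absorbed_mass_kg):
    return -0.5 * (absorbed_mass_kg / device_mass_kg) * f0_hz

f0 = 25e6            # 25 MHz resonator (assumption)
m_device = 1e-15     # ~1 picogram NEMS beam (assumption)
dm = 1e-21           # ~1 attogram of absorbed analyte (assumption)
print(f"frequency shift ~ {frequency_shift_hz(f0, m_device, dm):.1f} Hz")
```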

Though first markets will likely be military and industrial, the most interesting potential may be in medical diagnostics. “There are a few validated tests for detecting lung cancer and other diseases from the gases in the breath, enough to suggest this is a fertile area,” says Roukes, even though studies so far require large-scale lab instrumentation, so are hard to do. “The more easily and routinely this could be deployed, the more deeply it could be studied,” he notes.

All these structures can be made at 90nm, though 45nm would be preferable, says Roukes. He notes that with the device arrays successfully being produced at wafer-scale, current efforts are directed towards precursor systems including surface chemical functionalization and integration en masse with both MEMS flow channels and CMOS circuits for data post processing.

These companies will be among those discussing their latest developments in the program on Emerging Commercial Applications of Nanoelectronics at SEMICON West, July 14-16 in San Francisco. SRC Nano Electronics Initiative director Jeff Welser will also give a mini keynote on the interesting properties of graphene and spin wave transistors with potential to impact the semiconductor industry further out. The program is part of the Extreme Electronics series on emerging technology opportunities for the semiconductor manufacturing supply chain. For details, see www.semiconwest.org.

Countless benefits can be gained for dozens of industries with the ability to observe invisible elements, especially contaminants, at the nanoscale. Finding a way to manipulate and cure imperfections should propel us toward longer, healthier lives for both our semiconductors and our citizens. Comparable to how x-ray technology, MRI, and sonography transformed the practice of medicine, a new approach for seeing the unseen promises great potential for finding new ways to improve the state of human and microelectronic patients alike.

By Vinayak P. Dravid and G. Shekhawat, Northwestern University

Seeing the invisible through non-destructive, real-space imaging of buried or embedded structures and features below 100nm is a formidable challenge. Non-invasive radiation such as light and acoustic waves suffers from the classical diffraction limit, which prevents sub-100nm resolution. High-energy probes, such as electrons, are invasive and require extensive and laborious specimen preparation. X-ray or neutron probes are difficult to focus down to the sub-100nm scale and can also be invasive, especially for soft structures. Scanning probe microscopy (SPM) offers superb resolution but is sensitive only to surface features and phenomena.


Table 1. Various microscopy techniques, and their relevance to important metrology criteria. The question mark (?) implies limited conditions under which the technique satisfies the criterion.

To address these barriers, with the support of SRC and its members, we have developed a new approach in non-invasive, sub-surface, nanoscale metrology: scanning near-field ultrasound holography (SNFUH), which combines the non-destructive nature of acoustic waves, a high-spatial-resolution scanning-probe platform, and a phase-sensitive holography paradigm. Its capabilities and results have not previously been possible without slicing the sample, which changes both composition and structure and sacrifices characteristics of the studied subject.

In SNFUH, a high-frequency (≥1MHz) acoustic wave is launched from below the specimen, while another acoustic wave at a slightly different frequency is launched on the SPM cantilever. The resultant “surface standing ultrasound wave” is monitored by the SPM tip acting as an acoustic antenna, such that the scattered phase and amplitude of the specimen acoustic wave are registered point-by-point. The resultant spatial map provides viscoelastic contrast offered by the phase sensitivity of the acoustic wave.

SNFUH offers non-destructive nanoscale imaging of embedded and buried structures at unprecedented spatial resolution, down to 10-20nm, for a wide variety of materials: hard, soft, and hybrid. We believe SNFUH ushers in a new era in non-destructive nanoscale metrology and opens new vistas for a multitude of applications. It offers an in-line, non-invasive metrology toolset not only for defect analysis and quality control in current-generation microelectronics, but also for emerging and future nanoscale structures and devices.

History of intrigue

Microscopy has come a long way since the first observations were made through homemade optical microscopes in the mid-seventeenth century: snowflakes by Robert Hooke, an Englishman, and spermatozoa by Dutch scientist Anthony van Leeuwenhoek. The past two decades in particular have witnessed remarkable developments in real-space imaging techniques, ranging from atomic-scale SPM of surfaces to sub-microscale confocal imaging of biological structures. Remarkably, however, there is a notable absence of techniques for non-destructive imaging of embedded or buried sub-surface features at nanoscale resolution.


Figure 1. A schematic illustration of SNFUH approach. A high frequency acoustic wave is launched from below the specimen, while another high frequency acoustic wave (but at slightly different frequency) is launched on the SPM cantilever. SNFUH electronic module is used to spatially monitor the phase perturbation to the standing surface acoustic wave, which results from scattered specimen acoustic wave. The resonant frequency of the typical cantilever, f0, is in the 10-100 kHz range.

There is a clear void between the two ranges of length-scales offered by non-invasive imaging techniques, such as confocal/multi-photon or acoustic/sonography techniques, and x-ray/neutron imaging. As materials, structures, and phenomena continue to shrink and the micro/nanofabrication paradigm moves from planar to 3D/stacked platforms, there is an acute need to image and analyze surface/sub-surface features and phenomena, non-invasively, at ultrahigh resolution and sensitivity, coupled with usual ergonomic/economic considerations.

Non-destructive imaging is obviously critical in the microelectronics industry, given ever-reducing profit margins and concomitant need for improved yield. Reduced time-to-market and imperative quality control in an ever-complex multitude of processes provide both the challenge and reward for this research.

The entirely new, “out-of-the-box” SNFUH imaging approach is the only option that meets all the necessary criteria for adoption by industry. Various current characterization tools for sub-surface imaging, such as force modulation microscopy, nanoindentation, and picosecond ultrasonic or photoacoustic probes, address some aspects of the sample. Each, however, fails to meet one or more key criteria regarding spatial resolution, quantitative capability, or non-destructive nature (Table 1). This is particularly true if the features of interest are buried deeper in the material, beyond the interaction range of proximal probes.

Several SPM-based techniques have been introduced in recent years, with mixed results in terms of sensitivity to surface nanomechanical variations, sensitivity to embedded or buried features, and quantitative extraction of nanomechanical contrast. Force modulation microscopy (FMM), ultrasonic force microscopy (UFM), and heterodyne phase microscopy (HPM) are notable techniques that have enjoyed some success for nanomechanical mapping of the elastic and viscoelastic properties of soft and hard surfaces. Wider deployment of these techniques, however, is generally inhibited by lack of reproducibility, environmental effects in the usual contact mode of imaging, lack of compelling evidence for sensitivity to buried and embedded structures, and limited ease of use.

Basic concept of SNFUH

The SNFUH development integrates three major approaches: the SPM platform, which enjoys excellent lateral and vertical resolution; a microscale ultrasound source and detection scheme, which facilitates “looking” deeper into structures, slice-by-slice; and a holography approach to extract and enhance phase resolution and phase coupling in imaging.


Figure 2. Schematic illustration of the origin of the perturbation to the surface acoustic standing wave in SNFUH.

SNFUH involves launching a high-frequency (a few MHz) ultrasound wave from the bottom of the specimen, while another wave is launched on the AFM cantilever at a slightly different frequency (Figure 1). The interference of these two waves nominally forms a so-called “surface-standing ultrasound wave,” whose amplitude and phase components are monitored by the AFM tip via a lock-in approach.


Figure 3. Conceptualization of the SNFUH module.

As the specimen acoustic wave, and especially its phase, is perturbed by sub-surface and surface features, the local surface acoustic interference is effectively monitored by the AFM tip. Thus, within the near-field regime, the acoustic wave, which is non-destructive and sensitive to mechanical/elastic variation in its path, is fully analyzed, point-by-point, by the AFM acoustic antenna in terms of phase and amplitude. As the specimen is scanned, a pictorial representation of the perturbation to the surface standing acoustic wave is recorded and displayed, offering a quantitative account of the internal microstructure of the specimen.
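To make the heterodyne detection idea concrete, the following minimal numerical sketch (not the authors' implementation; the frequencies and the buried-feature phase shift are hypothetical values) mixes two waves at slightly different megahertz frequencies and uses a software lock-in referenced at the difference frequency to recover the phase perturbation carried by the specimen wave:

```python
import numpy as np

# Illustrative sketch of heterodyne mixing and lock-in phase extraction.
fs   = 50e6                       # sampling rate, Hz
t    = np.arange(0, 2e-3, 1/fs)   # 2 ms record
f1   = 2.40e6                     # specimen acoustic frequency, Hz (hypothetical)
f2   = 2.15e6                     # cantilever acoustic frequency, Hz (hypothetical)
dphi = 0.30                       # phase lag added by a buried feature, rad

specimen   = np.cos(2*np.pi*f1*t + dphi)   # wave arriving at the surface
cantilever = np.cos(2*np.pi*f2*t)

# A non-linear tip-sample interaction mixes the two waves; the product
# contains a component at the difference frequency f1 - f2.
mixed = specimen * cantilever

# Lock-in detection referenced at the difference frequency recovers dphi.
fref = f1 - f2
I = 2 * np.mean(mixed * np.cos(2*np.pi*fref*t))
Q = 2 * np.mean(mixed * np.sin(2*np.pi*fref*t))
print(f"recovered phase: {np.arctan2(-Q, I):.3f} rad (expected {dphi})")
```

Scanning the tip and repeating this phase extraction at every pixel is, in essence, what produces the SNFUH phase image.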

Image formation

The mechanism for the formation of the acoustic standing wave, and the origin of the contrast and high sub-surface sensitivity in SNFUH, can be conceptually understood with reference to Figure 2. In SNFUH mode, the perturbation to the surface acoustic standing wave, resulting from specimen acoustic wave scattering, is monitored by the SPM acoustic antenna. The resulting cantilever deflection simply follows the perturbation to the surface standing acoustic wave, which represents the dissipative lag/lead in the surface response with respect to the tip reference frequency, i.e., the time-of-flight delay of the specimen acoustic waves reaching the sample surface. Extracting the spatial dependence of this phase term provides image contrast indicative of the relative elastic response of the buried structures, interfaces, and embedded defects to the specimen acoustic wave.

In a homogeneous specimen (Fig. 2, left panel), the surface-standing acoustic wave is merely the interference of specimen and cantilever acoustic waves without any local perturbation. On the other hand, if a scattering feature is present below the specimen surface (Fig. 2, right panel), the scattering of the specimen acoustic wave results in local perturbation to the surface acoustic standing wave, which is registered by the SPM cantilever antenna.

The contrast, or sensitivity, in SNFUH arises from the acoustic phase difference between the matrix and the feature; acoustic wave propagation is directly related to the difference in elastic modulus. The lateral spatial resolution, governed by the SPM probe's interaction with the acoustic standing wave, is 10-15nm. Beyond the near-field regime, spatial resolution should degrade commensurately with depth because of far-field scattering and diffraction.

Because the acoustic phase information is spatially recorded, it is possible to obtain, via modeling, depth distribution of phase and to convert the data into a 3D tomography map of the embedded features.

Research tools

Our research employed a conventional JEOL SPM 5200 scanning probe microscope with a modified stage and cantilever holder. The system's feedback electronics were modified, and an electronic module developed in-house was implemented along with an RF lock-in approach to extract the measurable sub-surface phase component of the acoustic wave.

It’s important to note that SNFUH can be readily adapted to any other commercial SPM platform, with minor modifications.

Commercial piezoelectric ceramics provide the ultrasonic vibrations to the sample and the cantilever; the two oscillators have out-of-plane resonances at approximately 2.1–2.5MHz and 2.3–2.8MHz, respectively.


Figure 4: Buried copper interconnects below the surface, unseen with a typical AFM/SEM (left), but revealed in a SNFUH phase image (right).

The SNFUH electronic module (Figure 3) uses the difference frequency as the reference input to an RF lock-in amplifier. Closely matched piezocrystals keep the frequency difference below the cut-off frequency of the SPM photodiode (<1MHz); the difference frequency is chosen with this optical detection limit of the photodiode in mind. Images are acquired using soft contact mode for hard structures and near-contact mode for biological structures.

The SPM differential photodiode signal constitutes an input to the SNFUH electronic module, which enables simultaneous extraction of topography, acoustic amplitude and the acoustic phase, to form respective images. The topography images are obtained using normal optical feedback of the system, while the SNFUH electronic module provides the phase component of the acoustic wave for the sub-surface contrast.

Implications for microelectronics metrology

Having firmly established the proof-of-concept behind SNFUH, we have developed an extensive portfolio of SNFUH applications to diverse problems in physical sciences, engineered systems, and life sciences. In collaboration with SRC member companies, we are particularly focused on the potential applications of SNFUH in critical metrology needs for microelectronics.

In addition, biomedicine is moving toward use of nano-bio-structures to interrogate cells and deliver therapeutic cargo. As this requires a non-invasive view inside the cells to monitor what happens under physiologically viable conditions, SNFUH can play a key role.

Our model experiments have helped to quantify SNFUH parameters, which are directly relevant to several critical metrology challenges. Those include:

  • Identification of buried defects in multi-layer thin-film stacks and nanopatterned structures;
  • Identification of buried defects in multilayer photoresist films;
  • Stress migration in 3D MEMS structures and devices, and cracks in bonded wafers;
  • Quantitative modulus mapping of multilayer films;
  • Identification of buried voids in copper vias and interconnects; and
  • Nanomechanical properties of low-k dielectric materials and porous structures.

The next nano-metrology toolset

Further refinement of SNFUH, and its quantitative understanding, provide considerable promise for an entirely new nano-metrology toolset. Next steps in development of the technology include system integration, material handling, faster scanning, and high throughput of results. In-line tools and methods for addressing these needs will be created by a new spin-off company, NanoSonix.

In the next 12 months, the spin-off will develop an add-on module for existing commercial SPM equipment to meet the associated off-line metrology requirements. Availability of such an add-on module will make this technology accessible to a wider community, not only in semiconductor metrology but also in bio-applications, allowing both academia and industry to look deep below surfaces non-destructively with nanoscale resolution.


Vinayak P. Dravid is professor of materials science and engineering and the director of the NUANCE Center at Northwestern University. E-mail: [email protected].
G. Shekhawat is Research Assistant Professor at the Northwestern University Institute of Nanotechnology.

by Gerhard Lammel and Julia Patzelt, Bosch SensorTech

Inside buildings, even the newest GPS navigation units quickly hit their limits; their positioning is so inexact that a few floors can stand between the goal they indicate and the actual one. These are extremely poor qualifications for location-based services. An experimental study recently demonstrated that the BMP085 pressure sensor can not only resolve the imprecision of GPS indoor navigation cost-efficiently and without additional infrastructure, but can also substantially improve positioning in urban canyons.

In order for a navigation unit with GPS to provide exact positioning information within a few meters, the device has to receive data simultaneously from as many GPS satellites as possible. This requirement is fulfilled most easily on flat land under the open sky. However, as soon as obstacles block the unit’s view of the satellites, and it receives data from fewer than four satellites, the 3D positioning quickly becomes a game of chance.

Older GPS units are practically blind inside of buildings. A single wall within a structure generally damps a GPS signal in the 1.5GHz range by 20–30dB (factor of 100 to 1000), while reinforced concrete has the greatest dampening impact. The weak, useful signal is then drowned in the static of broadband HF receiving modules in these situations; a signal-to-noise ratio sufficient for determining position can no longer be reached using correction algorithms, and the navigation unit fails.

Indoor navigation: More than comfort

Unerring navigation within buildings is as advantageous for pedestrians as outdoor navigation is for drivers. In large, unfamiliar airports, for example, passengers could effortlessly find the correct check-in counter and, from there, their departure gate. In large shopping centers and malls, they could be guided directly to businesses or restaurants without detours. New forms of location-based services, linked to GPS-equipped cell phones, could be offered for these environments: imagine a note about a special sale at a certain store, or a restaurant's reasonably priced lunch menu; once a person expresses interest, the GPS-equipped cell phone could immediately provide navigation to the new goal. For professional use, examples include various types of support: a maintenance technician unfamiliar with a plant could traverse an extensive industrial complex, or merchandise could be delivered unerringly even by someone not well acquainted with the locale. The possibilities for use by first responders are of incalculable value; no longer would they have to fumble through complex corridors, instead they could be guided directly to the location where they are needed.

Small errors have embarrassing effects without floor accuracy

All of the indoor navigation uses listed above rest on the fact that navigation within buildings decisively requires vertical “floor accuracy” (along the z-axis). If one assumes that building floors are 4m in height, then a vertical positional accuracy of only 10m for a GPS unit can easily steer one to the wrong floor. The people so affected would only notice the error when they arrived at the putative goal, and would then have to retrace their steps, perhaps along both floors in question, to finally arrive at their destination. This type of indoor navigation is both worthless and counterproductive. A positioning error of 10m has far less impact when it occurs horizontally (along the x- or y-axis); in that case, floors and ceilings do not bar one's view of the goal, which can generally be reached quickly by traveling a few more steps.


Figure 1. Indoor altitude measurement values recorded using a modern GPS navigation device in an office building. The fluctuations from the actual value (30m) are substantial over time.

A limited type of GPS navigation within buildings has only become possible with the most recent navigation devices, equipped with new, highly sensitive receivers. However, multipath reception is omnipresent in buildings, caused by repeated signal reflections off walls, ceilings, and floors; the resulting distortions of signal propagation time significantly reduce the accuracy of positioning within buildings, especially in comparison with navigation under the open sky. Figure 1 quantifies the resulting positioning error along the z-axis. The stationary measuring point was located at a height of 30m on the third floor of a three-story office building. The navigation unit used for the test could determine the elevation within the building, but without great accuracy: over the observation time of ~13min, during which the actual position of the device at 30m did not change, the position measured along the z-axis fluctuated between 10m and 50m. Despite the most modern reception technology, the indoor positioning error of ±20m greatly exceeded what is acceptable for floor-accurate navigation (≤4m).

Pressure sensor for floor-accurate GPS indoor navigation

With its BMP085, Bosch SensorTech already has the solution to the “floor-accurate GPS navigation” problem: a very small (5mm × 5mm × 1.2mm) yet highly exact digital barometric pressure sensor, constructed using MEMS (microelectromechanical systems) technology. At every point in the earth's atmosphere, air pressure and elevation (relative to sea level) have a fixed relationship; by measuring the air pressure, the altitude of a measuring point can be calculated. The BMP085 has an extremely high pressure resolution of max. ±0.03hPa (RMS), which when converted to altitude corresponds to a resolution of ±0.25m (at sea level). At this level of accuracy, the sensor can essentially recognize the difference in altitude that a person undergoes moving from one step to the next on a flight of stairs.
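For reference, the pressure-to-altitude conversion can be illustrated with the standard international barometric formula; the sketch below assumes a known sea-level reference pressure p0, and the function name is illustrative:

```python
# Minimal sketch: converting barometric pressure to altitude with the
# international barometric formula (p0 is the assumed sea-level reference).
def pressure_to_altitude(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Return altitude in metres above the reference pressure level p0."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# Near sea level, a 0.03 hPa pressure step maps to roughly 0.25 m of
# altitude, consistent with the resolution figure quoted for the BMP085.
print(pressure_to_altitude(1013.25 - 0.03))   # ~0.25
print(pressure_to_altitude(1013.25))          # 0.0
```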


Figure 2. The SiRFstarIII chip set offers a user input function for storing altitude information gained using a separate pressure sensor.

A prerequisite for transmitting exact knowledge of the momentary elevation to a chip set is excluding the less exact altitude information generated in parallel via GPS. The SiRFstarIII chip set used by Bosch SensorTech (Figure 2) supports the usual NMEA protocol, plus a proprietary binary protocol that enables deeper-reaching settings on the chip set, making it possible to influence the type of positional calculation. In normal operation, the chip set automatically decides on the best type of calculation for the situation (four possibilities are available), depending among other things on the number of satellites the receiver can currently see. The “2D fix” type of calculation is the most favorable for inputting barometrically measured altitude; the chip set normally uses it when it has reception from only three satellites. In this mode it sets the elevation to a fixed value in order to reduce the number of unknowns in its position calculation.

Results of an experimental study using a BMP085/GPS coupling

An “Alt Hold Mode” feature with the SiRF technology enables storage of the navigating engine in the SiRFstarIII chip with the measured values of the BMP085. Bosch SensorTech conducted an experimental study using practical tests to examine which results this BMP-GPS coupling would involve. The goal was to determine if the coupling of the GPS chip set with the pressure sensor provided not only a better level of accuracy for vertical navigation, but also whether this positively affected horizontal navigation.


The barometric pressure measurement increases the accuracy of GPS navigation: vertically in every condition, and horizontally only in urban canyons.

After setting the Alt Hold Mode parameter to “always use input altitude,” the SiRF chip set adopted the barometrically measured elevation as the new input variable. In addition, a software interface was implemented between the pressure sensor and the GPS chip set to filter out several negative influences on positioning accuracy:

Recognition of climatic and artificial pressure fluctuations. Weather can cause strong fluctuations in air pressure (up to ±40hPa) that should not be interpreted as changes in elevation. An existing algorithm analyzed the course of the pressure fluctuations and excluded the rather slow changes typical of weather influences (<2.5hPa/h) from the measured values. It similarly removed abrupt, artificial pressure fluctuations caused by air conditioning, ventilation systems, or a strong wind through open windows.

Statistical reduction of measurement errors. A BMP085 can perform up to 128 pressure measurements per second. Averaging over several measured values eliminates statistical outliers.


Figure 3. Principle of dynamic zero compensation. During indoor navigation, the high-resolution BMP085 signal replaces the weak, inexact GPS signal (left). During periods of strong reception, the highly accurate GPS signal calibrates the pressure sensor (middle). In urban canyons, the pressure sensor again replaces the GPS altitude signal, which has become unreliable due to multipath reception.

Calibration compensates for absolute measurement errors. Independent of its height resolution, the BMP085 has an absolute measurement error of ±2.5hPa (±20m). This error proved easy to compensate for: at good GPS reception, i.e. at maximum GPS positional accuracy, the height values determined via GPS by the navigation device were automatically used as calibration values for the pressure sensor. If GPS reception deteriorated, the high-resolution barometric altitude values, based on the most recent calibration, took over (Figure 3). In this example, a Kalman filter determined whether or not to apply the situational calibration.
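A minimal sketch of this dynamic zero compensation logic is given below. It replaces the Kalman-filter decision used in the study with a simple hypothetical GPS-quality threshold; all names and values are illustrative:

```python
# Illustrative sketch of dynamic zero compensation: calibrate the barometric
# altitude against GPS when reception is good, and fall back to the
# offset-corrected barometric altitude when it is not.
class AltitudeFuser:
    def __init__(self):
        self.offset_m = 0.0          # last known baro-vs-GPS offset

    def update(self, baro_alt_m, gps_alt_m, gps_quality):
        if gps_quality > 0.8:        # hypothetical "good fix" criterion
            # GPS is trusted: use it, and re-zero the pressure sensor.
            self.offset_m = gps_alt_m - baro_alt_m
            return gps_alt_m
        # GPS unreliable (indoors, urban canyon): rely on calibrated baro.
        return baro_alt_m + self.offset_m

fuser = AltitudeFuser()
print(fuser.update(baro_alt_m=8.0, gps_alt_m=30.0, gps_quality=0.95))  # 30.0
print(fuser.update(baro_alt_m=12.0, gps_alt_m=45.0, gps_quality=0.2))  # 34.0
```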

The results of the experimental study of a SiRFstarIII with a BMP085 as external user altitude input are compiled in the table on p.22. The pressure sensor guaranteed indoor navigation at a floor-accurate level, allowing the user to reach his or her destination; it also significantly increased vertical positioning accuracy outdoors. In addition to the drastic reduction in altitude error (visible only in the bar graph), the horizontal error is also significantly smaller with sensor support, with a standard deviation reduced by ~60%. This unexpectedly positive effect on horizontal positioning could, however, only be observed in certain situations, such as in “urban canyons.”

BMP085 pressure sensor

Introduced in 2008, the BMP085 in an LCC-8 housing has a measuring range of 300–1100hPa and high over-pressure resistance up to 10,000hPa. Developed specifically for consumer mobile devices, the sensor draws only 3µA (in ultra-low-power mode), with a low stand-by current consumption of 0.1µA. The minimum supply voltage has also been reduced to its current 1.8V. Measurement values with up to 19-bit resolution, together with the calibration data for temperature compensation, are transferred serially via an I2C two-wire interface, which simplifies integration of the BMP085 into existing applications and eliminates the need for additional external wiring components. In standard mode, a new measured value is ready for collection every 7.5ms.


Dr. Gerhard Lammel is manager of engineering at Bosch SensorTech GmbH, a wholly owned subsidiary of Robert Bosch GmbH.
Julia Patzelt is responsible for marketing communication and public relations at Bosch SensorTech GmbH.


No infrastructure required

At present, alternatives to satellite navigation are being developed and tested globally for indoor and urban navigation. Locating takes place by cross bearing, determining position from the distance-dependent intensity of HF signals emitted by terrestrial transmitters at exactly known locations. The specific advantage is that existing cellular transmission networks can be used for this form of navigation, without requiring a completely new system of transmitters.

One such system, which works with currently available WLAN islands, was developed by the Fraunhofer Institute for Integrated Circuits (IIS). The navigation unit constantly analyzes the received field strength of nearby WLAN access points and, by means of their SSIDs, obtains the exact location of each WLAN transmitter from a central database, which it then uses to navigate. This alone, however, is not sufficient to navigate within buildings with an adequate degree of accuracy; for indoor navigation to work, the navigation unit has to know the additional field strength distribution in every building, which in turn requires preliminary mapping measurements. Related solutions apply the same navigational principle but use different radio systems, such as GSM or DECT.

A communications network infrastructure is imperative for each of these solutions, which in turn requires a not-insignificant (at least periodic) administrative expenditure, e.g. for mapping. The combination of the BMP085 pressure sensor and GPS is completely different: it does not require any outside infrastructure, it functions everywhere on the planet, and it is completely self-sufficient. Moreover, the pressure sensor can fundamentally increase the vertical resolution of each of the alternative navigation systems.

Quicker 3D simulation, more practical dry etch, and automated die inspection

By Dr. Paula Doe, contributing editor

Though the MEMS market has remained essentially flat in 2008 and 2009, even as plenty of new consumer and medical applications continued healthy growth (see page 6), equipment demand is another story. The MEMS equipment market (new specialty MEMS tools) likely reached less than $200 million in 2008, down from $330 million in 2007, says Yole Développement CEO Jean-Christophe Eloy, and 2009 looks about the same, though visibility is very limited. However, “several specialty MEMS tool suppliers are adding capacity in light of strong order intake so far this year,” he notes. “This growth is driven by the diffusion of MEMS technologies into other markets like 3D ICs, image sensors, and new applications for nanoimprint.”


Virtual fabrication of a MEMS accelerometer by the X-Fab SOI process.

Suppliers also see potential in offering new approaches that improve tricky MEMS manufacturing yields faster and at lower cost, with solutions for quicker simulation, more practical dry etch, and smarter package inspection.

One option for getting those MEMS devices to yield would be to find design problems by simulation. Some fabs are using Coventor Inc.’s 3D virtual fabrication software to find design flaws or process problems before fabrication, reports Stephen Breit, Coventor VP of product development. Using voxels (cubic 3D equivalents of pixels) instead of traditional compute-intensive CAD-like solid modeling techniques–plus some elegant compression algorithms–allows fast modeling of the entire process sequence. The virtual prototype that results shows real engineering information on how the 2D layout will translate to a 3D device, revealing things like gaps in film coverage or cavities in underlying layers.

Users specify the parameters that impact the MEMS device geometry, like layer thickness, snowfall or conformal deposition, or etch rate ratios between different materials, for each step in the sequence. The modeler then applies a series of strictly geometric operations to generate a realistic virtual prototype of the device. The parameters must be experimentally calibrated, but the simpler modeling process can then simulate the entire fabrication sequence over a large part of the die within a few hours on a desktop computer.
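As a toy illustration of the voxel idea (not Coventor's actual algorithms; the grid, thickness, and function name are illustrative), the sketch below represents one 2D slice of material as a boolean occupancy grid and applies a purely geometric “conformal deposition” step that grows a film of uniform thickness over every exposed surface:

```python
import numpy as np

def conformal_deposit(occupied: np.ndarray, thickness_vox: int) -> np.ndarray:
    """Return a boolean mask of the voxels added by a conformal film."""
    film = occupied.copy()
    for _ in range(thickness_vox):           # grow the surface one voxel per pass
        grown = film.copy()
        grown[1:, :]  |= film[:-1, :]        # dilate downward
        grown[:-1, :] |= film[1:, :]         # dilate upward
        grown[:, 1:]  |= film[:, :-1]        # dilate right
        grown[:, :-1] |= film[:, 1:]         # dilate left
        film = grown
    return film & ~occupied                   # keep only the newly added material

substrate = np.zeros((20, 20), dtype=bool)
substrate[10:, :] = True                      # flat substrate
substrate[6:10, 8:12] = True                  # a patterned feature
film = conformal_deposit(substrate, thickness_vox=2)
print(film.sum(), "voxels of film deposited")
```

Etching can be treated the same way, as a geometric removal of voxels, which is what keeps this style of modeling fast compared with physics-based solvers.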

X-Fab now regularly uses the tool throughout design and process development to validate MEMS designs before tape out, saving test wafers, and speeding time to yield.

More practical dry etch

Though wet etch remains the workhorse technology for etching away sacrificial layers to release the functional MEMS structures, at smaller geometries the surface tension effects of trapped moisture tend to stick down the released structures. Dry etch processes prevent this stiction, but they’ve yet to see wide adoption in production. Vapor phase HF etchers still require careful control of the condensation from the water used as a catalyst, and have been relatively low throughput, and the option of using XeF2 is extremely expensive.

Primaxx proposes to avoid stiction at lower cost with a batch vapor HF system that better keeps water out of the process. It uses low cost anhydrous HF, with low-water electronic grade alcohol for the catalyst, to minimize the H2O content of reagents going in, control H2O byproducts and keep them in gas phase, and draw the H2O away from the etch interface with the alcohol properties. The process also runs at minimal power at 45°C. CEO Paul Hammond says etch rates range from 0.05-5µm/minute, depending on oxide type, and with <10% variability within wafer, wafer to wafer, and batch to batch, on 25-wafer batches of 200mm wafers.

Air Products and Chemicals, meanwhile, is bringing down the cost of using XeF2 by reclaiming the xenon. Increasing demand for xenon for new applications in flat-panel displays and other electronics is pushing prices up, but the rare gas is also simply costly to extract and distill. It occurs naturally in air only in minute quantities, about 87 parts per billion, so extraction typically requires some 220 watt-hours per liter of Xe, followed by further purification via cryogenic distillation. This energy-intensive extraction process only makes sense at very large air plants, mostly associated with steel mills, and with the economic downturn they have curtailed production, further tightening supplies.

Air Products’ system for the fab reclaims the xenon by-product of the XeF2 etch process, then sends it back to an Air Products facility for manufacturing into more XeF2, explains commercial development manager Eugene Karawacki.

Automated package die inspection

Printed circuit board inspection tool supplier Vi Technology is entering the MEMS market by applying its automated optical inspection expertise to inspecting MEMS dies before encapsulation. The first customer for the new product started full-volume production in mid-May.

The tool replaces traditional inspection by an operator with a microscope with a quick, two-step automated process. The first pass uses a laser to measure the tilt of each die in the package, to make sure products such as inertial sensors are correctly seated so they work. It also measures the exact focal distance of the die in the package, adjusting to make sure the optical system can see from the top to the bottom of the three-dimensional MEMS device. The second pass comes back with a high-end camera and telecentric lens to take pictures of the die with different fields of view. After stitching the pictures together to reconstruct an image, the tool compares it to a reference image to identify any differences, and flags those that are actual defects, down to 3µm in size. Both passes take about 2-5 seconds per die, depending on size and the types of defects.
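A schematic version of the compare-to-reference step might look like the sketch below; this is not Vi Technology's algorithm, and the thresholds and names are illustrative. It subtracts a golden reference image from the stitched die image, thresholds the difference, and reports only blobs larger than a minimum area so that noise-level pixels are not flagged as defects:

```python
import numpy as np
from scipy import ndimage

def find_defects(stitched, reference, diff_thresh=30, min_area_px=4):
    """Return (row, col, area) for each difference blob large enough to matter."""
    diff = np.abs(stitched.astype(int) - reference.astype(int)) > diff_thresh
    labels, n = ndimage.label(diff)              # connected components
    defects = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area_px:               # ignore tiny, noise-level blobs
            defects.append((ys.mean(), xs.mean(), ys.size))
    return defects

ref = np.full((64, 64), 120, dtype=np.uint8)     # synthetic golden reference
img = ref.copy()
img[30:33, 40:43] = 200                          # a 3x3-pixel anomaly
print(find_defects(img, ref))                    # one defect reported
```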

“This ensures that only the known good dies go on to the next step, usually encapsulation, therefore saving costs,” says product line manager David Richard. “It also enables for the first time a real tilt measurement, which is a key functional criterion for accelerometers and gyros, and no electrical test can measure this.”

These ideas are among those that will be discussed in the MEMS programs at SEMICON West in San Francisco, July 14-16. Yole will present its latest market forecast for the supply chain, and leading European development foundries IMEC, CEA Leti, and Silex Microsystems will discuss their progress in using standard processes to cut development time and costs. The sessions are part of a series on key developments in disruptive semiconductor technologies featured this year in the Extreme Electronics program. See www.semiconwest.org for details.

Vesselin Shanov and Mark Schulz, U. of Cincinnati

Most natural fibers and nanofibers are produced only in relatively short lengths, and most applications require a bulk or continuous material, but there is no effective method for using short fibers or carbon nanotube (CNT) powders to achieve breakthrough properties in bulk materials. The most promising approach to using nanofibers in bulk material is to form an intermediate material by spinning CNT fibers into yarn, which can displace conventional fiber in composite materials and other applications. This article examines recent advances that allow smaller CNT fibers to be spun, and their use in new applications.

Prior to discussing the recent advances, it is important to give due credit to antecedent work. The length and diameter of the fiber play critical roles in the success of spinning; the diameters of fibers such as cotton, used for spinning since the 16th century, are in the micron range, whereas carbon nanotube diameters are much smaller, in the 10nm range. Spinning small-diameter fibers also increases the twist by about the same factor. Short and long CNTs can be compared to cotton fibers [1]. In traditional spinning technologies such as rotor and ring spinning, a separate process called combing removes short cotton fibers (less than ½ inch long) from the raw material mass before spinning. Typically, as much as 16% of the cotton raw material mass is short fiber and is removed; premium cottons such as Pima or Egyptian have less short fiber but still require combing, which typically removes 8%-10% of the raw material.

These factors and subsequent analyses indicate that long CNTs will improve strength and electrical properties. Our research at the University of Cincinnati [2-15] therefore focuses on producing long, strong CNTs and spinning carbon nanotube yarns that have superior properties, are economically and commercially viable, and will meet the long-range needs of the defense and commercial sectors.

Approaches to spinning CNTs

Spinning CNTs into thread is a relatively new topic of research. There are two main approaches: spinning thread from substrate-grown forests of CNTs, and direct spinning of CNTs into thread from a vertical reactor that uses the floating catalyst method of synthesis. Spinning from the array is done by a handful of research groups around the world [2-4, 5-9], while direct spinning from a vertical reactor using the floating catalyst method is done by just a few groups [10, 15]. Mechanical measurements indicate that yarn produced using both methods has a uniform strength of about 0.5N/tex, which is equivalent to about 1.0GPa; short sections of yarn and special cases have shown higher strength. The electrical resistivity of thread is about 1×10⁻⁴ ohm·cm and the current density is about 1×10⁹ A/m². If the properties are divided by the density of the yarn, the specific properties are competitive with existing materials, and the combination of properties can exceed that of existing materials. The mechanical and electrical properties are improving as the number of defects in the thread is reduced through improved synthesis and spinning processes. The goal is to produce yarns that are strong, creep resistant, highly conducting, and reversibly deformable over relatively large strains to absorb energy.

Fiber properties for spinning. Understanding fiber spinning is important to move CNTs out of the lab. Fibers must have certain properties to be spun into thread, including strength, stiffness, and pliability; openness and ease of fiber separation; toughness; and appropriate bending and radial stiffness. These aspects, along with quality and reproducibility, are of extreme importance in producing yarn. Spinning can be done using different approaches, the details of which are partly confidential; dry spinning from an array is discussed here. The relative size of the yarn being made commercially and the twist uniformity of a strand are important; our initial target CNT yarn is a 10tex yarn (10g per 1000m of yarn). CNT length is also important in spinning yarn. The most important property of a CNT forest required for solid-state processing is that, whenever the CNTs at the edge of the array are pulled away from the forest, the CNTs cling together (due to van der Waals forces) to form a continuous strand [5].

Properties of CNT yarn. The mechanical and electrical properties of CNT yarn depend on the number of defects in the CNTs and in the yarn. Each gap or junction at the end of a nanotube can be considered a defect in the yarn; each CNT can be regarded as contributing one such defect, namely the gap or junction with the next nanotube. Thus for N nanotubes there are N gap defects in the thread, in addition to defects in the walls that can cause the CNTs to become wavy and weak. The gap interrupts load transfer from CNT to CNT and requires that friction between CNTs carry the load. The effect of CNT length on thread strength can be predicted from a conventional thread-spinning model discussed by R. Baughman [5, 9]. For CNT thread, a shorter migration length gives better strength, i.e. more fibers must run from the surface to the inside of the yarn over a short length interval to make the fiber strong, and a higher friction coefficient also gives better fiber strength. The number of turns per unit length, i.e. the helix angle, plays an important role as well: when the number of turns increases, fiber strength decreases drastically. As with mechanical load transfer, electrical conduction is interrupted by the gaps between nanotubes, so conduction must occur laterally from nanotube to nanotube, probably by electron hopping. The resistance of the thread is therefore equal to the longitudinal resistance of the nanotubes plus the lateral resistance between nanotubes. CNT yarn also exhibits resistance, super-inductance, and super-capacitance properties, which are being studied to develop carbon electronics, or “carbotronics,” that outperform conventional copper components in certain applications.
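Reading this series picture schematically (illustrative notation, not a fitted model), the resistance of one conduction path through n end-to-end nanotubes can be written as

$$
R_{\text{path}} \approx \sum_{i=1}^{n}\left(R_{\text{long},i} + R_{\text{lat},i}\right) \approx n\left(R_{\text{long}} + R_{\text{lat}}\right),
$$

where $R_{\text{long}}$ is the resistance along a single nanotube and $R_{\text{lat}}$ is the tube-to-tube junction resistance bridged by hopping at each gap; longer CNTs reduce the number of junctions per unit length and hence the contribution of the second term.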

Direct spinning from the array. Centimeter-long “Black Cotton” [a type of CNT trademarked by UC spinoff General Nano] can be spun into thread for electrical wire and as fiber to reinforce composite materials supplementing or replacing carbon fiber. The long CNTs allow dry spinning, which is an advantage in terms of strength, cost, electrical conductivity, and scale-up to manufacturing commodity levels. The U. of Cincinnati’s spinning machine, specially designed to spin Black Cotton into thread, has two DC motors to independently control the twisting and winding while drawing thread directly from the CNT array. The spinner has independent orthogonal control of winding and twisting using a yoke assembly. The thread is twisted and wound onto a spindle. A post-treatment stage allows further processing of the thread, such as thermal annealing or coating with an insulating material.

Catalyst and substrates for growing spinnable CNT arrays

It is observed that dense, aligned arrays are more spinnable, and that the thread obtained from such arrays is stronger. Continuous thread can be drawn from dense, aligned arrays of nanotubes; achieving this requires increased catalyst particle density on the substrate. There is consensus that double-wall carbon nanotubes (DWCNTs) are very appropriate for spinning into threads. A procedure developed for CVD of CNTs was modified for synthesis of well-aligned, high-purity DWCNT arrays. A new catalyst based on an iron alloy, with increased catalyst particle density, was introduced and deposited on a Si/SiO2/Al2O3 substrate by e-beam deposition. After thermal annealing, a uniform distribution of high-density catalyst particles was achieved, as confirmed by AFM. Growth was performed at 750°C in a First Nano EasyTube 3000 reactor using a gas system consisting of ethylene (C2H4), water vapor, hydrogen, and argon, with optimized concentrations, deposition temperature, and flow rates. Critical for this study was maintaining a low carbon partial pressure in the reaction zone. Two-hour growth with this catalyst produced a 1.1mm long array with excellent properties for spinning. Extremely long (up to 18mm) CNT arrays have also been made; in one example, an 11mm long CNT array was peeled completely off the substrate, which was then reused, with no additional processing, to grow an 8mm long CNT array.


By using multiple spools, University of Cincinnati doctoral student Chaminda Jayasinghe is able to spin bigger-diameter threads from long (4-5.6mm) CNT arrays.

Magnetron sputtering, a further improvement in substrate preparation being evaluated at North Carolina A&T, produces very uniform catalyst deposition and highly spinnable arrays.

Device-quality CNT thread, yarn, and ribbon

Our team has developed techniques for spinning long CNT directly from the array into thread, yarn, and ribbons (figure 1). This technique has produced CNT thread with strength of 1 GPa and electrical conductivity of 0.8×10⁴ (ohm·cm)⁻¹. This strength and electrical conductivity are at least 10X lower than the corresponding properties of perfect individual nanotubes. We assume that the lower properties of thread are due to: (i) defects in the walls of the nanotubes; (ii) large number of gaps in the thread at the ends of the CNT; and (iii) considerable open volume between the nanotubes in the threads. We are developing techniques that we believe will reduce these problems. One technique is to use electric current to fuse the ends of nanotubes together while the nanotubes are being spun into thread. This technique has been shown to work under a microscope and the challenge is to scale it up for mass production. Another technique is to perform secondary annealing/welding of the nanotube thread to reduce the number of defects. Other post-spinning treatments of the nanotube thread, such as ion irradiation and ozone or UV exposure [16-19], also have potential to improve the thread’s electrical and mechanical properties. The strategic importance of the research is to produce CNT thread that surpasses the properties of any existing material in terms of strength, weight, and electrical current carrying capability.


Figure 1. Four types of CNT materials fabricated by the UC team: (a) as-grown CNT bundles, (b) single thread, (c) two-strand yarn, and (d) ribbon.

Preparing thread and ribbon from CNT arrays

At the present time CNTs cannot be grown beyond about 2cm in length [20-37]. At the U. of Cincinnati, DWCNTs several millimeters in length are being spun into thread (with micron-range diameter) to produce a strong and tough bulk material with novel properties. Multiple threads have been woven together to form a yarn, which can be used to form tows and unidirectional plies; they also can be woven into a “smart fabric” with two-directional properties, which can then be used to fabricate strong, electrically conductive composite materials, or used as a wearable sensor embedded in clothing. Nanotube thread can also be used to form carbon electronic components (“carbotronics”), or electromagnetic devices such as an antenna to communicate with sensors inside the body. Thin narrow sheets of nanotubes called ribbon have also been drawn from the array [8, 11].

Spinning thread from DWCNT arrays

Closely aligned CNTs (spaced <100nm apart) are weakly held together by van der Waals intermolecular forces. In this forest configuration, they can be harvested by pulling a small bundle of CNTs away from an edge of the array in the direction that keeps the “centerlines” parallel and maintains the close spacing. The CNTs next to the first bundle will be pulled along also by van der Waals forces. As these CNT bundles are pulled away from the “forest” they form a long line in which all of the CNT centerlines are aligned in parallel. In the spinning process (pulling and twisting), CNTs are pulled from the array and held together by twisting around neighboring nanotubes [38-42], which prevents the CNTs from slipping along the lengths of their neighbors when axial force is applied. As the CNTs are twisted and pulled, more nanotubes are added to form a long, strong thread. CNT arrays that we have used for spinning range in length from 1mm to 0.5cm–and threads with lengths of 100m have been spun in our facility. The diameter of the thread can be controlled by the length of the CNT array and by the spinning parameters [43-53]. Figure 2a shows CNT thread being wound onto a spool.


Figure 2. Manufacturing of CNT thread and ribbon at UC: (a) thread being wound onto a spool, (b) pulling and winding ribbon.

As the quality of the CNTs improves, progressively longer CNTs will be used to spin thread. Long CNTs can improve thread properties by reducing the number of gaps in the thread at the nanotube ends, and by providing a longer length for mechanically interlocking each nanotube with its neighbors. If the quality and spinnability of the 1.5-2cm long CNTs that we currently produce are improved, thread properties could improve by an order of magnitude, which would open up many applications. The excitement of this research is that the properties of thread are being continuously improved and are getting closer to the properties of individual nanotubes. If CNT thread reaches a strength of 10GPa (20% of the strength of individual nanotubes), this would be a major breakthrough in nanotechnology. Electrical conductivity would also be expected to increase roughly in proportion to the strength increase.

Pulling ribbon from CNT arrays

A material called carbon nanotube ribbon (~200nm thick, 5mm wide) was also produced from our arrays, possessing a different morphology from threads. Winding CNT ribbon is shown in Figure 2b. The width of the ribbon is limited only by the lateral size of the array [11].

Specific properties of CNTs and thread will be important for weight-critical applications; they are calculated by dividing the property by the density of the material. However, since each wall of a nanotube is one atom thick, defects can greatly reduce the strength and electrical conductivity of CNTs. Reducing intrinsic defects will greatly improve the properties of future CNT threads, ribbons, and yarns. Re-spinning, treating with solvent, and other post-processing are being done to improve the properties of thread. Several techniques to post-treat CNT yarn are being evaluated, mostly consisting of applying energy to the spun yarn in the form of electron flow or heat, in a controlled atmosphere to prevent oxidation. The intent is to “fuse” the twisted CNTs together so they resist slipping when a lateral force is applied to the yarn.
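For reference, this weight-normalized figure is simply the property divided by the mass density; for strength it is the tenacity familiar from the textile world:

$$
\text{specific strength} = \frac{\sigma}{\rho}, \qquad \sigma\,[\mathrm{GPa}] \,/\, \rho\,[\mathrm{g/cm^3}] = \text{tenacity}\,[\mathrm{N/tex}].
$$

Taken at face value, the 0.5N/tex and ~1.0GPa figures quoted earlier would correspond to a yarn density of roughly 2g/cm³.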

Applications of CNT thread

CNTs’ high specific strength and stiffness, electrical and thermal conductivity, and compatibility with electronics and sensing applications make them a key enabling medium for the convergence of textiles with fully integrated functions. Smart fabrics or interactive textiles have many potential applications, such as physiological monitoring, power bus systems and communications, medical care, multifunctional exteriors, harvesting of energy and water, and passive and active thermal management. CNT yarns may eventually find applications in composite materials, electrically conductive wire, bulletproof vests, incandescent light emitters, antennas, and materials that block electromagnetic waves. Macroscopic CNT yarns may also find application as mechanical actuators for artificial muscles, flexible conductors for textile sensors, power buses for communication, flexible batteries, and solar cells.

One example of a new application is signal communications. Researchers at the U. of Cincinnati have used a 25µm spun carbon nanotube thread to create a dipole cell-phone antenna [54], with transmission close to that of copper but at a fraction of the weight [44]. CNT yarns also could be woven into cloth for use as a very lightweight reflector or dish antenna, one that could be deformed to change the antenna’s focus, or molded directly into electronic device casings or aircraft structures. Nanocomp Inc. [15] has demonstrated fabrication of coaxial cables using CNT threads as center conductors and CNT sheets as outer conductors or shields, which would be substantially lighter than similarly sized copper coaxial cables; USB cables using only CNT threads and yarns also have been made.


David Mast, U. of Cincinnati associate professor of physics, demonstrates new wireless applications of the spun carbon nanotubes.

The surface of CNT materials also can impart different functionalities–e.g., absorption of gas molecules to make ultrasensitive sensors for toxic gases or biological agents. Fabrication of such functionalized CNT sensors with integral CNT antenna for wireless sensor applications is currently being investigated. These threads and yarns can also be wound into small loops or spirals for micro-miniature antenna for possible bio-medical applications.

The CNT yarns are good electrical conductors and can carry enough current to act as an incandescent filament or to emit electrons to produce light from phosphorescent screens. Electrons field emitted from the side of a cold, negative nanotube yarn electrode hit a transparent fluorescence screen to provide light emission. Low voltages are possible for field emission because of both the field enhancing effect of the yarn shape and the high aspect ratio of nanotubes that protrude from the sides of the yarn. The CNT yarns can be used as electron field emitters for light sources (lighting and displays) and X-ray sources that could be in a micro-catheter used for medical applications. The individual nanotubes are anchored into the yarn by twist, which should enhance electron emission stability and device lifetime.

Conclusions

Steady progress in CNT synthesis and spinning by a small number of groups around the world is moving nanotube yarn technology ever closer to the point where it can become a disruptive material offering multi-functional properties that cannot be achieved by any other material. CNT yarn may supplement or displace carbon, glass, and aramid fibers and copper wire in high-performance critical applications.


Dr. Vesselin Shanov is associate professor of chemical and materials engineering at the University of Cincinnati.

Dr. Mark Schulz is an associate professor of mechanical engineering at the University of Cincinnati, and deputy director of the National Science Foundation’s Engineering Research Center for Revolutionizing Metallic Biomaterials located at North Carolina A&T.

Shanov and Schulz are co-directors of the UC Nanoworld and Smart Materials and Devices Laboratories at the University of Cincinnati, and are affiliated with the start-up company General Nano LLC in Cincinnati that is commercializing the Black Cotton material.


References for this story are available online, at www.smalltimes.com.

by Gerhard Lammel and Julia Patzelt, Bosch SensorTech

Inside of buildings, even the newest GPS navigation units quickly hit their limits–their navigational abilities are so inexact that a few floors can stand between the goal they indicate and the actual one. These are extremely poor qualifications for location-based services. An experimental study recently proved that the BMP085 pressure sensor can resolve not only the imprecision of GPS indoor navigation in a cost-efficient manner and without requiring additional infrastructure, it can also substantially improve locating in urban canyons.

In order for a navigation unit with GPS to provide exact positioning information within a few meters, the device has to receive data simultaneously from as many GPS satellites as possible. This requirement is fulfilled most easily on flat land under the open sky. However, as soon as obstacles block the unit’s view of the satellites, and it receives data from fewer than four satellites, the 3D positioning quickly becomes a game of chance.

Older GPS units are practically blind inside of buildings. A single wall within a structure generally damps a GPS signal in the 1.5GHz range by 20–30dB (factor of 100 to 1000), while reinforced concrete has the greatest dampening impact. The weak, useful signal is then drowned in the static of broadband HF receiving modules in these situations; a signal-to-noise ratio sufficient for determining position can no longer be reached using correction algorithms, and the navigation unit fails.

Indoor navigation: More than comfort

Unerring navigation within buildings is as advantageous for pedestrians as outdoor navigation is for drivers. In large foreign airports, for example, passengers could effortlessly find the correct check-in counter and from thence their departure gate. In large shopping centers and malls, they could be guided directly to businesses or restaurant without detours. New forms of location-based services, linked to GPS-equipped cell phones, could be offered for these environments. Imagine a note about a special sale at a certain store, or a restaurant’s reasonably priced lunch menu; once a person expresses interest, a GPS-equipped cell phone could immediately provide navigation to the new goal. For professional usage, examples include various types of support–a maintenance technician unfamiliar with the plant could traverse an extensive industrial complex, or merchandise could be unerringly delivered, even by someone not well acquainted with the locale. The possibilities for use by first responders are of incalculable value–no longer would they have to fumble through complex corridors, instead they could be directly guided to the location where they were needed.

Small errors have embarrassing effects without floor accuracy

All of the uses for indoor navigation listed are based on the concept that navigation within buildings decisively requires vertical “floor accuracy” (along the z-axis). If one assumes that building floors are 4m in height, then a positional accuracy of only 10 vertical meters for a GPS unit can easily steer one to the wrong floor. The people so affected would, however, only notice the error when they had arrived at the putative goal, then have to retrace their steps, perhaps along both floors in question, in order to finally arrive at their destination. This type of indoor navigation is both worthless and counterproductive. A positioning error of 10m has far less impact when it occurs horizontally (along the x- or y-axis); in the former case, floors and ceilings do not bar one’s view of the goal, which can generally be quickly reached by traveling a few more steps.


Figure 1. Indoor altitude measurement values comprised using a modern GPS navigation device in an office building. The fluctuations from the actual value (30m) are substantial over time.
Click here to enlarge image

A limited type of GPS navigation within buildings has only become possible with the most recent navigation devices, equipped with new, highly sensitive receivers. However, multipath reception is omnipresent in buildings, caused by repeated signal reflections off walls, ceilings and floors; distortions of the signal propagation time significantly reduce the accuracy of positioning within buildings, especially in comparison with navigation under the open sky. Figure 1 quantifies the quintessential positioning error along the z-axis. The statistical measuring point was located at a height of 30m in the third floor of a three-story office building. The navigational unit used for the test could determine the elevation within the building, but without great accuracy. Over the course of the observation time of ~13min, during which time the actual position of the device at 30m did not change, the position measured along the z-axis fluctuated between 10–50m. Despite the most modern reception technology, the indoor positioning error of ±20m greatly exceeded that which is acceptable for floor-accurate navigation (=4m).

Pressure sensor for floor-accurate GPS indoor navigation

With its BMP085 pressure sensor, Bosch SensorTech already has the solution to the “floor-accurate GPS navigation” problem, with a very small (5mm × 1.2mm), yet highly exact digital barometric pressure sensor, constructed using MEMS (microelectromechanical systems) technology. At every point in the earth’s atmosphere, air pressure and elevation (as it relates to sea level) have a fixed relationship; by measuring the air pressure, the exact altitude of a measuring point can be calculated. The BMP085 has extremely high-pressure resolution of max. ±0.03 hPa (RMS), which when converted to altitude corresponds to a resolution of ±0.25m (at sea level). At this level of accuracy, the sensor can essentially recognize the difference in altitude that a person undergoes as they move from one step to the next on a flight of stairs.


Figure 2. The SiRFstarIII chip set offers a user input function for storing altitude information gained using a separate pressure sensor.
Click here to enlarge image

A prerequisite for transmitting exact knowledge of momentary elevation to a chip set is excluding the less exact altitude information, generated in parallel via GPS. Bosch SensorTech’s SiRFstarIII chip set (Figure 2) incorporates the usual NMEA protocol, plus a proprietary binary protocol that enables settings that reach deeper on the chip set, making it possible to influence the type of positional calculation. In normal operation, the chip set decides automatically about the best type of calculation (four possibilities are available) with regard to the situation, independent from the number of satellites the receiver can see at the moment. The “2D fix” type of calculation is the most favorable for inputting barometrically measured altitude; the chip set normally uses this when it only has reception from three satellites. By this means it sets the elevation to a fixed value in order to reduce the number of unknowns in its formula for calculating position.

Results of an experimental study using a BPM085/GPS coupling

An “Alt Hold Mode” feature with the SiRF technology enables storage of the navigating engine in the SiRFstarIII chip with the measured values of the BMP085. Bosch SensorTech conducted an experimental study using practical tests to examine which results this BMP-GPS coupling would involve. The goal was to determine if the coupling of the GPS chip set with the pressure sensor provided not only a better level of accuracy for vertical navigation, but also whether this positively affected horizontal navigation.


The barometric pressure measurement increases the accuracy of GPS navigation: vertically, in every condition horizontally, and only in urban canyons.
Click here to enlarge image

After setting the Alt Hold Mode parameter to “always use input altitude,” the SiRF chip set adopted the barometrically measured elevation as the new input variable. Also, a software interface was installed between the pressure sensor and the GPS chip set to turn off several negative influences on the positioning accuracy:

Recognition of climatic and artificial pressure fluctuations. Weather can cause strong fluctuations in air pressure (up to ±40hPa) that should not be interpreted as changes in elevation. An already extant algorithm analyzed the course of pressure fluctuations, and excluded rather slow changes typical of weather influences (<2.5hPa/h) from measured values. It similarly resolved abrupt, artificial pressure fluctuations, caused by air conditioning, ventilation systems, or a strong wind through open windows.

Statistical reduction of measurement errors. A BMP085 can perform up to 128 pressure measurements per second. Averaging over several measured values eliminates statistical outliers.


Figure 3. Principle of dynamic zero compensation. During indoor navigation, the high-resolution BMP085 signal replaces the weak, inexact GPS signal (left). During periods of strong reception, the highly accurate GPS signal calibrates the pressure sensor (middle). In urban canyons, the pressure sensor again replaces the GPS altitude signal, which has become unreliable due to multipath reception.
Click here to enlarge image

Calibration compensates for absolute measurement errors. Independent of its height resolution, the BMP085 has an absolute measurement error of ±2.5hPa (±20m). This error was demonstrated to be easily compensated for–at good GPS reception, i.e. at the maximum GPS positional accuracy, height values determined via GPS by the navigation device were used automatically as calibration values for the pressure sensor. If the GPS reception deteriorated, then the high-resolution barometric altitude values–based on the most recently calibrated values–came into play (Figure 3). A Kalman filter determined whether or not to use situational calibration in this example.

The results of the experimental study of a SiRFstarIII with a BMP085 as external altitude input are compiled in the table on p. 22. The pressure sensor guaranteed floor-accurate indoor navigation, allowing the user to reach his or her destination; it also significantly increased the vertical positioning accuracy outdoors. In addition to a drastic reduction in the altitude error (shown only in the bar graph), the error in the horizontal direction was also significantly smaller with sensor support, with the standard deviation reduced by ~60%. This unexpectedly positive effect on horizontal positioning could, however, only be observed in particular situations, such as in “urban canyons”.

BMP085 pressure sensor

Introduced in 2008, the BMP085 in an LCC-8 housing has a measuring range of 300–1100hPa and withstands over-pressure up to 10,000hPa. Developed specifically for mobile consumer electronics, the sensor draws only 3µA (in ultra-low-power mode), with a standby current consumption of just 0.1µA. The minimum supply voltage has also been reduced to its current 1.8V. The 19-bit measurement, together with the calibration data required for temperature compensation, is transferred serially via an I2C two-wire interface, which simplifies the integration of the BMP085 into existing applications and eliminates the need for additional external wiring components. A new measured value is ready for collection at the latest every 7.5ms.
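
The sketch below shows how a host microcontroller could trigger and read one raw pressure conversion from the BMP085 over the I2C interface described above, following the register map in the publicly available BMP085 datasheet. The i2c_write_reg(), i2c_read_regs(), and delay_ms() helpers are hypothetical HAL functions, and the register values should be verified against the datasheet; the temperature-compensation arithmetic using the factory calibration coefficients is omitted for brevity.

/* Sketch: trigger and read one raw BMP085 pressure conversion over I2C.
 * i2c_write_reg(), i2c_read_regs() and delay_ms() are hypothetical HAL
 * functions; register addresses follow the public BMP085 datasheet.    */
#include <stdint.h>

#define BMP085_ADDR       0x77   /* 7-bit I2C device address      */
#define BMP085_CTRL_REG   0xF4   /* measurement control register  */
#define BMP085_DATA_REG   0xF6   /* MSB of the conversion result  */
#define BMP085_CMD_PRESS  0x34   /* start a pressure conversion   */

extern void i2c_write_reg(uint8_t addr, uint8_t reg, uint8_t val);
extern void i2c_read_regs(uint8_t addr, uint8_t reg, uint8_t *buf, uint8_t len);
extern void delay_ms(uint32_t ms);

/* oss = oversampling setting 0..3; 0 corresponds to ultra-low-power mode. */
uint32_t bmp085_read_raw_pressure(uint8_t oss)
{
    uint8_t buf[3];

    /* Start a pressure conversion with the requested oversampling. */
    i2c_write_reg(BMP085_ADDR, BMP085_CTRL_REG,
                  (uint8_t)(BMP085_CMD_PRESS + (oss << 6)));

    /* Conversion time depends on oss (roughly 4.5-25.5 ms per datasheet);
     * waiting the worst case keeps the sketch simple.                    */
    delay_ms(26);

    /* Read the up-to-19-bit result: MSB, LSB and XLSB. */
    i2c_read_regs(BMP085_ADDR, BMP085_DATA_REG, buf, 3);

    uint32_t raw = ((uint32_t)buf[0] << 16) |
                   ((uint32_t)buf[1] << 8)  |
                    (uint32_t)buf[2];
    return raw >> (8 - oss);     /* right-align per the datasheet */
}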


Dr. Gerhard Lammel is manager of engineering at Bosch Sensortec GmbH, a wholly owned subsidiary of Robert Bosch GmbH.
Julia Patzelt is responsible for marketing communication and public relations at Bosch Sensortec GmbH.


No infrastructure required

At present, alternatives to satellite navigation for indoor and urban use are being developed and tested globally. Positioning takes place by cross bearing: the position is determined from the distance-dependent intensity of RF signals emitted by terrestrial transmitters at exactly known locations. The specific advantage is that existing cellular transmission networks can be used for this form of navigation, without requiring a completely new system of transmitters.

One such system, which works with currently available WLAN hotspots, was developed by the Fraunhofer Institute for Integrated Circuits (IIS). The navigation unit constantly analyzes the received field strength of nearby WLAN access points and, by means of their SSIDs, obtains the exact locations of the WLAN transmitters from a central database, which it then uses to navigate. This alone, however, is not sufficient to navigate within buildings with adequate accuracy. For indoor navigation to function correctly, the navigation unit must also know the field-strength distribution within each building, which in turn requires preliminary mapping measurements. Related solutions implement the same navigation principle but use other radio systems, such as GSM or DECT.
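
As a generic, textbook-style illustration of this principle (not the Fraunhofer IIS implementation), the sketch below converts received field strength to an estimated distance with a log-distance path-loss model and combines the known transmitter positions, looked up via their SSIDs, into a distance-weighted centroid. All constants, type names, and functions are assumptions for illustration only.

/* Sketch of RSSI-based positioning with known transmitter locations.
 * Generic illustration (log-distance path loss + weighted centroid);
 * not the Fraunhofer IIS algorithm. All constants are assumptions.   */
#include <math.h>

typedef struct { double x, y; } point_t;
typedef struct { point_t pos; double rssi_dbm; } ap_obs_t; /* one observed AP */

#define RSSI_AT_1M   (-40.0)   /* assumed received strength at 1 m, dBm */
#define PATH_LOSS_N    3.0     /* assumed indoor path-loss exponent     */

/* Convert received signal strength to an estimated distance in meters. */
double rssi_to_distance(double rssi_dbm)
{
    return pow(10.0, (RSSI_AT_1M - rssi_dbm) / (10.0 * PATH_LOSS_N));
}

/* Weighted centroid: nearer (stronger) access points get more weight. */
point_t estimate_position(const ap_obs_t *aps, int n)
{
    point_t est = { 0.0, 0.0 };
    double weight_sum = 0.0;

    for (int i = 0; i < n; i++) {
        double d = rssi_to_distance(aps[i].rssi_dbm);
        double w = 1.0 / (d + 1e-6);      /* weight ~ inverse distance */
        est.x += w * aps[i].pos.x;
        est.y += w * aps[i].pos.y;
        weight_sum += w;
    }
    est.x /= weight_sum;
    est.y /= weight_sum;
    return est;
}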

A communications network infrastructure is imperative for each of these solutions, which in turn entails a not insignificant (at least periodic) administrative expenditure, e.g. for mapping. The combination of the BMP085 pressure sensor and GPS is completely different: it does not require any external infrastructure, it functions everywhere on the planet, and it is completely self-sufficient. Moreover, the pressure sensor can fundamentally increase the vertical resolution of each of the alternative navigation systems.