
Researchers from the Georgia Institute of Technology have developed a novel cellular sensing platform that promises to expand the use of semiconductor technology in the development of next-generation bioscience and biotech applications.

The research is part of the Semiconductor Synthetic Biology (SSB) program sponsored and managed by Semiconductor Research Corporation (SRC). Launched in 2013, the SSB program concentrates on synergies between synthetic biology and semiconductor technology that can foster exploratory, multi-disciplinary, longer-term university research leading to novel, breakthrough solutions for a wide range of industries.

The Georgia Tech research proposes and demonstrates the world’s first multi-modality cellular sensor array implemented in a standard low-cost CMOS process. Each sensor pixel can concurrently monitor multiple physiological parameters of the same cell and tissue samples to achieve holistic, real-time physiological characterization.

“Our research is intended to fundamentally revolutionize how biologists and bioengineers can interface with living cells and tissues and obtain useful information,” said Hua Wang, an assistant professor in the School of Electrical and Computer Engineering (ECE) at Georgia Tech. “Fully understanding the physiological behaviors of living cells or tissues is a prerequisite to further advance the frontiers of bioscience and biotechnology.”

Wang explains that the Georgia Tech research can have a positive impact on semiconductors used in healthcare applications, including more cost-effective development of pharmaceuticals, point-of-care devices, and low-cost home-based diagnostics and drug-testing systems. The research could also benefit defense and environmental monitoring applications through low-cost, field-deployable sensors for hazard detection.

Specifically, in the case of more cost-effective pharmaceutical development, the increasing cost of new medicine is largely due to the high risks involved in drug development. As a major sector of the healthcare market, the global pharmaceutical industry is expected to reach more than $1.2 trillion this year. However, on average, only one out of every ten thousand tested chemical compounds eventually becomes an approved drug product.

In the early phases of drug development (when thousands of chemical candidates are screened), in vitro cultured cells and tissues are widely used to identify and quantify the efficacy and potency of drug candidates by recording their cellular physiology responses to the tested compounds, according to the research.

Moreover, patient-to-patient variations often exist even under the administration of the same type of drugs at the same dosage. If the cell samples are derived from a particular patient, patient-specific drug responses then can be tested, which opens the door to future personalized medicine.

“Therefore, there is a tremendous need for low-cost sensing platforms to perform fast, efficient and massively parallel screening of in vitro cells and tissues, so that the promising chemical candidates can be selected efficiently,” said Wang, who also holds the Demetrius T. Paris Junior Professorship in the Georgia Tech School of ECE. “This existing need can be addressed directly by our CMOS multi-modality cellular sensor array research.”

Among the benefits of the CMOS sensor array chips is that they provide built-in computation circuits for in-situ signal processing and sensor fusion of multi-modality sensor data. The chips also eliminate the need for external electronic equipment, allowing their use in general biology labs without dedicated electronic or optical setups.

Additionally, thousands of sensor array chips can operate in parallel to achieve high-throughput screening of chemicals or drug candidates and real-time monitoring of their efficacy and toxicity. Compared with sequential scanning using a limited number of fluorescent scanners, this parallel approach can achieve a more than 1,000-fold throughput enhancement.

The Georgia Tech research team just wrapped up the first year of its three-year project, with the sensor array demonstrated at the close of 2014 and presented at the IEEE International Solid-State Circuits Conference (ISSCC) in February 2015. In the coming year, the team plans to further increase the sensor array pixel density while improving packaging to be compatible with existing drug-testing workflows.

“Georgia Tech’s research combines semiconductor integrated circuits and living cells to create an electronics-biology hybrid platform, which has tremendous societal and technological implications that can potentially lead to better and cheaper healthcare solutions,” said Victor Zhirnov, director of Cross-Disciplinary Research and Special Projects at SRC.

From mobile phones and computers to television, cinema and wearable devices, the display of full color, wide-angle, 3D holographic images is moving ever closer to fruition, thanks to international research featuring Griffith University.

Led by Melbourne’s Swinburne University of Technology and including Dr Qin Li, from the Queensland Micro- and Nanotechnology Centre within Griffith’s School of Engineering, scientists have capitalised on the exceptional properties of graphene and are confident of applications in fields such as optical data storage, information processing and imaging.

“While there is still work to be done, the prospect is of 3D images seemingly leaping out of the screens, thus promising a total immersion of real and virtual worlds without the need for cumbersome accessories such as 3D glasses,” says Dr Li.

First isolated in the laboratory about a decade ago, graphene is pure carbon and one of the thinnest, lightest and strongest materials known to humankind. A supreme conductor of electricity and heat, much has been written about its mechanical, electronic, thermal and optical properties.

“Graphene offers unprecedented prospects for developing flat display systems based on intensity imitation within screens,” says Dr Li, who conducted carbon structure analysis for the research.

“Our consortium, which also includes China’s Beijing Institute of Technology and Tsinghua University, has shown that patterns of photo-reduced graphene oxide (rGO) that are directly written by laser beam can produce wide-angle and full-colour 3D images.

“This was achieved through the discovery that a single femtosecond (fs) laser pulse can reduce graphene oxide to rGO with a sub-wavelength-scale feature size and a significantly different refractive index.

“Furthermore, the spectrally flat optical index modulation in rGOs enables wavelength-multiplexed holograms for full colour images.”

Researchers say the sub-wavelength feature size is particularly important because it allows for static holographic 3D images with a wide viewing angle of up to 52 degrees.
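The link between feature size and viewing angle follows from diffraction. Below is a hedged sketch using the standard grating relation sin θ = λ/(2p), where p is the feature (pixel) size; the wavelength and pixel sizes are illustrative assumptions, not values taken from the paper:

```python
import math

# Diffraction limits a pixelated hologram's viewing angle: the half-angle
# theta obeys sin(theta) = wavelength / (2 * pixel_size), so the full
# viewing angle is 2 * theta.  Values below are illustrative only.
def viewing_angle_deg(wavelength_nm, pixel_nm):
    return 2 * math.degrees(math.asin(wavelength_nm / (2 * pixel_nm)))

print(viewing_angle_deg(550, 2000))  # coarse 2 um pixels: ~16 degrees
print(viewing_angle_deg(550, 630))   # fine ~630 nm pixels: ~52 degrees
```

Under these assumed numbers, shrinking the written features from micron scale toward the wavelength of visible light widens the viewing angle to roughly the 52 degrees reported.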

Such laser-direct writing of sub-wavelength rGO featured in dots and lines could revolutionise capabilities across a range of optical and electronic devices, formats and industry sectors.

“The generation of multi-level modulations in the refractive index of GOs, which requires no solvents or post-processing, holds the potential for in-situ fabrication of rGO-based electro-optic devices,” says Dr Li.

“The use of graphene also relieves pressure on the world’s dwindling supplies of indium, the metallic element that has been commonly used for electronic devices.

“Other technologies are being developed in this area, but rGO looks by far the most promising and most practical, particularly for wearable devices. The prospects are quite thrilling.”

In 2013 James Hone, Wang Fong-Jen Professor of Mechanical Engineering at Columbia Engineering, and colleagues at Columbia demonstrated that they could dramatically improve the performance of graphene–highly conducting two-dimensional (2D) carbon–by encapsulating it in boron nitride (BN), an insulating material with a similar layered structure. In work published this week in the Advance Online Publication on Nature Nanotechnology‘s website, researchers at Columbia Engineering, Harvard, Cornell, University of Minnesota, Yonsei University in Korea, Danish Technical University, and the Japanese National Institute of Materials Science have shown that the performance of another 2D material–molybdenum disulfide (MoS2)–can be similarly improved by BN-encapsulation.

“These findings provide a demonstration of how to study all 2D materials,” says Hone, leader of this new study and director of Columbia’s NSF-funded Materials Research Science and Engineering Center. “Our combination of BN and graphene electrodes is like a ‘socket’ into which we can place many other materials and study them in an extremely clean environment to understand their true properties and potential. This holds great promise for a broad range of applications including high-performance electronics, detection and emission of light, and chemical/bio-sensing.”

Two-dimensional (2D) materials created by “peeling” atomically thin layers from bulk crystals are extremely stretchable, optically transparent, and can be combined with each other and with conventional electronics in entirely new ways. But these materials–in which all atoms are at the surface–are by their nature extremely sensitive to their environment, and their performance often falls far short of theoretical limits due to contamination and trapped charges in surrounding insulating layers. The BN-encapsulated graphene that Hone’s group produced last year has 50× improved electronic mobility–an important measure of electronic performance–and lower disorder that enables the study of rich new phenomena at low temperature and high magnetic fields.

“We wanted to see what we could do with MoS2–it’s the best-studied 2D semiconductor, and, unlike graphene, it can form a transistor that can be switched fully ‘off’, a property crucial for digital circuits,” notes Gwan-Hyoung Lee, co-lead author on the paper and assistant professor of materials science at Yonsei. In the past, MoS2 devices made on common insulating substrates such as silicon dioxide have shown mobility that falls below theoretical predictions, varies from sample to sample, and remains low upon cooling to low temperatures, all indications of a disordered material. Researchers have not known whether the disorder was due to the substrate, as in the case of graphene, or due to imperfections in the material itself.

In the new work, Hone’s team created heterostructures, or layered stacks, of MoS2 encapsulated in BN, with small flakes of graphene overlapping the edge of the MoS2 to act as electrical contacts. They found that the room-temperature mobility was improved by a factor of about 2, approaching the intrinsic limit. Upon cooling to low temperature, the mobility increased dramatically, reaching values 5-50× those measured previously (depending on the number of atomic layers). As a further sign of low disorder, these high-mobility samples also showed strong oscillations in resistance with magnetic field, which had not been previously seen in any 2D semiconductor.

“This new device structure enables us to study quantum transport behavior in this material at low temperature for the first time,” added Columbia Engineering PhD student Xu Cui, the first author of the paper.

By analyzing the low-temperature resistance and quantum oscillations, the team was able to conclude that the main source of disorder remains contamination at the interfaces, indicating that further improvements are possible.

“This work motivates us to further improve our device assembly techniques, since we have not yet reached the intrinsic limit for this material,” Hone says. “With further progress, we hope to establish 2D semiconductors as a new family of electronic materials that rival the performance of conventional semiconductor heterostructures–but are created using scotch tape on a lab-bench instead of expensive high-vacuum systems.”

The ability of materials to conduct heat is a concept that we are all familiar with from everyday life. The modern story of thermal transport dates back to 1822, when the brilliant French physicist Jean-Baptiste Joseph Fourier published his book “Théorie analytique de la chaleur” (The Analytic Theory of Heat), which became a cornerstone of heat transport. He pointed out that the thermal conductivity, i.e., the ratio of the heat flux to the temperature gradient, is an intrinsic property of the material itself.
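In one dimension, Fourier’s law reads q = −k·dT/dx, with q the heat flux and k the thermal conductivity. A minimal sketch with a textbook approximation for bulk silicon (the value is illustrative, not from this work):

```python
# One-dimensional Fourier's law: q = -k * dT/dx
def heat_flux(k_w_per_m_k, d_temp_k, thickness_m):
    """Heat flux in W/m^2 through a slab of the given thickness."""
    return -k_w_per_m_k * d_temp_k / thickness_m

K_BULK_SI = 150.0  # W/(m*K), approximate room-temperature bulk silicon
# A 10 K drop across a 1 mm slab drives a flux of 1.5e6 W/m^2:
print(heat_flux(K_BULK_SI, d_temp_k=-10.0, thickness_m=1e-3))
```

In Fourier’s picture k is a fixed material property; the nanoscale results described below show that for ultrathin membranes it also depends strongly on the surface.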

The advent of nanotechnology, where the rules of classical physics gradually fail as the dimensions shrink, is challenging Fourier’s theory of heat in several ways. A paper published in ACS Nano and led by researchers from the Max Planck Institute for Polymer Research (Germany), the Catalan Institute of Nanoscience and Nanotechnology (ICN2) at the campus of the Universitat Autònoma de Barcelona (UAB) (Spain) and the VTT Technical Research Centre of Finland (Finland) describes how the nanometre-scale topology and the chemical composition of the surface control the thermal conductivity of ultrathin silicon membranes. The work was funded by the European Project Membrane-based phonon engineering for energy harvesting (MERGING).

The results show that the thermal conductivity of silicon membranes thinner than 10 nm is 25 times lower than that of bulk crystalline silicon and is controlled to a large extent by the structure and the chemical composition of their surface. Combining sophisticated fabrication techniques, new measurement approaches and state-of-the-art, parameter-free atomistic modelling, researchers unravelled the role of surface oxidation in determining the scattering of quantized lattice vibrations (phonons), which are the main heat carriers in silicon.

Both experiments and modelling showed that removing the native oxide improves the thermal conductivity of silicon nanostructures by almost a factor of two, while successive partial re-oxidation lowers it again. Large-scale molecular dynamics simulations with up to 1,000,000 atoms allowed the researchers to quantify the relative contributions to the reduction of the thermal conductivity arising from the presence of native SiO2 and from the dimensionality reduction evaluated for a model with perfectly specular surfaces.

Silicon is the material of choice for almost all electronics-related applications, where characteristic dimensions below 10 nm have been reached, e.g. in FinFET transistors, and heat dissipation control becomes essential for optimum performance. While the lowering of thermal conductivity induced by oxide layers is detrimental to heat spreading in nanoelectronic devices, it turns out to be useful for thermoelectric energy harvesting, where efficiency relies on avoiding heat exchange across the active part of the device.

The chemical nature of surfaces, therefore, emerges as a new key parameter for improving the performance of Si-based electronic and thermoelectric nanodevices, as well as of that of nanomechanical resonators (NEMS). This work opens new possibilities for novel thermal experiments and designs directed to manipulate heat at such scales.

New work from Carnegie’s Russell Hemley and Ivan Naumov homes in on the physics underlying the recently discovered fact that some metals stop being metallic under pressure. Their work is published in Physical Review Letters.

Metals are materials capable of conducting the flow of electrons that makes up an electric current. Other materials, called insulators, cannot conduct an electric current. At low temperatures, all materials can be classified as either insulators or metals.

Insulators can be pushed across the divide from insulator to metal by tuning their surrounding conditions, particularly by placing them under pressure. It was long believed that once such a material was converted into a metal under pressure, it would stay that way forever as the pressure was increased. This idea goes back to the birth of quantum mechanics in the early decades of the last century.

But it was recently discovered that certain groups of metals become insulating under pressure, a remarkable finding that was not previously thought possible.

For example, lithium goes from being a metallic conductor to a somewhat resistant semiconductor under around 790,000 times normal atmospheric pressure (80 gigapascals) and then becomes fully metallic again under around 1.2 million times normal atmospheric pressure (120 gigapascals). Sodium enters an insulating state at pressures of around 1.8 million times normal atmospheric pressure (180 gigapascals). Calcium and nickel are predicted to have similar insulating states before reverting to being metallic.
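The pressure figures quoted above are straightforward unit conversions; a quick sketch (1 standard atmosphere = 101,325 Pa):

```python
ATM_IN_PA = 101_325  # one standard atmosphere, in pascals

def gpa_to_atm(gigapascals):
    """Convert a pressure in gigapascals to multiples of atmospheric pressure."""
    return gigapascals * 1e9 / ATM_IN_PA

# The three pressures mentioned in the text:
for gpa in (80, 120, 180):
    print(f"{gpa} GPa is about {gpa_to_atm(gpa):,.0f} atm")
```

This reproduces the article’s figures: 80 GPa is roughly 790,000 atmospheres, 120 GPa roughly 1.2 million, and 180 GPa roughly 1.8 million.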

Hemley and Naumov wanted to determine the unifying physics framework underlying these unexpected metal-to-insulator-to-metal transitions.

“The principles we developed will allow for predictions of when metals will become insulators under pressure, as well as the reverse transition, when insulators can become metals,” Naumov said.

The onsets of these transitions can be determined by the positions of electrons within the basic structure of the material. Insulators typically become metallic by a reduction in the spacing between atoms in the material. Hemley and Naumov demonstrated that for a metal to become an insulator, these reduced-spacing overlaps must be organized in a specific kind of asymmetry that was not previously recognized. Under these conditions, electrons localize between the atoms and do not freely flow as they do in the metallic form.

“This is yet another example of how extreme pressure is an important tool for advancing our understanding of the nature of materials at a fundamental level. The work will have implications for the search for new energy materials,” Hemley said.

The key to better cellphones and other rechargeable electronics may be in tiny “sandwiches” made of nanosheets, according to mechanical engineering research from Kansas State University.

Gurpreet Singh, assistant professor of mechanical and nuclear engineering, and his research team are improving rechargeable lithium-ion batteries. The team has focused on the lithium cycling of molybdenum disulfide, or MoS2, sheets, which Singh describes as a “sandwich” of one molybdenum atom between two sulfur atoms.

In the latest research, the team has found that silicon carbonitride-wrapped molybdenum disulfide sheets show improved stability as a battery electrode with little capacity fading.

The findings appear in Nature’s Scientific Reports in the article “Polymer-Derived Ceramic Functionalized MoS2 Composite Paper as a Stable Lithium-Ion Battery Electrode.” Other Kansas State University researchers involved include Lamuel David, doctoral student in mechanical engineering, India; Uriel Barrera, senior in mechanical engineering, Olathe; and Romil Bhandavat, 2013 doctoral graduate in mechanical engineering.

In this latest publication, Singh’s team observed that molybdenum disulfide sheets store more than twice as much lithium — or charge — as bulk molybdenum disulfide reported in previous studies. The researchers also found that the high lithium capacity of these sheets does not last long and drops after five charging cycles.

“This kind of behavior is similar to a lithium-sulfur type of battery, which uses sulfur as one of its electrodes,” Singh said. “Sulfur is notoriously famous for forming intermediate polysulfides that dissolve in the organic electrolyte of the battery, which leads to capacity fading. We believe that the capacity drop observed in molybdenum disulfide sheets is also due to loss of sulfur into the electrolyte.”
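The capacity drop over repeated cycles can be pictured with a toy retention model; the capacities and retention fractions here are hypothetical illustrations, not measurements from the study:

```python
# Toy capacity-fade model: each charge cycle retains a fixed fraction
# of the previous cycle's capacity (all numbers are hypothetical).
def capacity_after(initial_mah_per_g, retention_per_cycle, cycles):
    return initial_mah_per_g * retention_per_cycle ** cycles

fading = capacity_after(1000, 0.80, 5)    # rapid fade, e.g. sulfur loss
stable = capacity_after(1000, 0.999, 5)   # near-stable cycling
print(round(fading), round(stable))  # prints: 328 995
```

Even a modest per-cycle loss compounds quickly, which is why suppressing sulfur dissolution matters so much for electrode lifetime.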

To reduce the dissolution of sulfur-based products into the electrolyte, the researchers wrapped the molybdenum disulfide sheets with a few layers of a ceramic called silicon carbonitride, or SiCN. The ceramic is a high-temperature, glassy material prepared by heating liquid silicon-based polymers and has much higher chemical resistance toward the liquid electrolyte, Singh said.

“The silicon carbonitride-wrapped molybdenum disulfide sheets show stable cycling of lithium-ions irrespective of whether the battery electrode is on copper foil, the traditional method, or is a self-supporting flexible paper, as in bendable batteries,” Singh said.

After the reactions, the research team also disassembled and observed the cells under the electron microscope, which provided evidence that the silicon carbonitride protected the electrode against mechanical and chemical degradation by the liquid organic electrolyte.

Singh and his team now want to better understand how the molybdenum disulfide cells might behave in an everyday electronic device — such as a cellphone — that is recharged hundreds of times. The researchers will continue to test the molybdenum disulfide cells during recharging cycles to have more data to analyze and to better understand how to improve rechargeable batteries.

Other research by Singh’s team may help improve high-temperature coatings for aerospace and defense. The engineers are developing a coating material to protect components subjected to harsh conditions, such as turbine blades and other metals exposed to intense heat.

The research appears in the Journal of Physical Chemistry. The researchers showed that when silicon carbonitride and boron nitride nanosheets are combined, they have high temperature stability and improved electrical conductivity. Additionally, these silicon carbonitride/boron nitride nanosheets are better battery electrodes, Singh said.

“This was quite surprising because both silicon carbonitride and boron nitride are insulators and have little reversible capacity for lithium-ions,” Singh said. “Further analysis showed that the electrical conductivity improved because of the formation of a percolation network of carbon atoms known as ‘free carbon’ that is present in the silicon carbonitride ceramic phase. This occurs only when boron nitride sheets are added to silicon carbonitride precursor in its liquid polymeric phase before curing is achieved.”

Take a material that is a focus of interest in the quest for advanced solar cells. Discover a “freshman chemistry level” technique for growing that material into high-efficiency, ultra-small lasers. The result, disclosed Monday, April 13 in Nature Materials, is a shortcut to lasers that are extremely efficient and able to create many colors of light.

That makes these tiny lasers suitable for miniature optoelectronics, computers and sensors.

“We are working with a class of fascinating materials called organic-inorganic hybrid perovskites that are the focus of attention right now for high-efficiency solar cells that can be made from solution processes,” says Song Jin, a professor of chemistry at the University of Wisconsin-Madison.

“While most researchers make these perovskite compounds into thin films for the fabrication of solar cells, we have developed an extremely simple method to grow them into elongated crystals that make extremely promising lasers,” Jin says. The tiny rectangular crystals grown in Jin’s lab are about 10 to 100 millionths of a meter long by about 400 billionths of a meter (nanometers) across. Because their cross-section is measured in nanometers, these crystals are called nanowires.

The new growth technique skips the costly, complicated equipment needed to make conventional lasers, says Jin, an expert on crystal growth and nanomaterial synthesis.

Jin says the nanowires grow in about 20 hours once a glass plate coated with a solid reactant is submerged in a solution of the second reactant. “There’s no heat, no vacuum, no special equipment needed,” says Jin. “They grow in a beaker on the lab bench.”

“The single-crystal perovskite nanowires grown from solutions at room temperature are high quality, almost free of defects, and they have the nice reflective parallel facets that a laser needs,” Jin explains. “Most importantly, according to the conventional measures of lasing quality and efficiency, they are real standouts.”

When tested in the lab of Jin’s collaborator, Xiaoyang Zhu of Columbia University, the lasers were nearly 100 percent efficient. Essentially every photon absorbed produced a photon of laser light. “The advantage of these nanowire lasers is the much higher efficiency, by at least one order of magnitude, over existing ones,” says Zhu.

Lasers are devices that make coherent, pure-color light when stimulated with energy. “Coherent” means the light waves are moving synchronously, with their high and low points occurring at the same place. Coherence and the single-wavelength, pure color give lasers their most valuable properties. Lasers are used everywhere from DVD players, optical communications and surgery to cutting metal.

Nanowire lasers have the potential to enhance efficiency and miniaturize devices, and could be used in devices that merge optical and electronic technology for computing, communication and sensors.

“These are simply the best nanowire lasers by all performance criteria,” says Jin, “even when compared to materials grown in high temperature and high vacuum. Perovskites are intrinsically good materials for lasing, but when they are grown into high-quality crystals with the proper size and shape, they really shine.”

What is also exciting is that simply tweaking the recipe for growing the nanowires could create a series of lasers that emit a specific wavelength of light in many areas of the visible spectrum.

Before these nanowire lasers can be used in practical applications, Jin says their chemical stability must be improved. Also important is finding a way to stimulate the laser with electricity rather than light, which is how lasing was demonstrated in this study.

A team of researchers from the University of Cambridge have unravelled one of the mysteries of electromagnetism, which could enable the design of antennas small enough to be integrated into an electronic chip. These ultra-small antennas – the so-called ‘last frontier’ of semiconductor design – would be a massive leap forward for wireless communications.

In new results published in the journal Physical Review Letters, the researchers have proposed that electromagnetic waves are generated not only from the acceleration of electrons, but also from a phenomenon known as symmetry breaking. In addition to the implications for wireless communications, the discovery could help identify the points where theories of classical electromagnetism and quantum mechanics overlap.

The phenomenon of radiation due to electron acceleration, first identified more than a century ago, has no counterpart in quantum mechanics, where electrons are assumed to jump from higher to lower energy states. These new observations of radiation resulting from broken symmetry of the electric field may provide some link between the two fields.

The purpose of any antenna, whether in a communications tower or a mobile phone, is to launch energy into free space in the form of electromagnetic or radio waves, and to collect energy from free space to feed into the device. One of the biggest problems in modern electronics, however, is that antennas are still quite big and incompatible with electronic circuits – which are ultra-small and getting smaller all the time.

“Antennas, or aerials, are one of the limiting factors when trying to make smaller and smaller systems, since below a certain size, the losses become too great,” said Professor Gehan Amaratunga of Cambridge’s Department of Engineering, who led the research. “An aerial’s size is determined by the wavelength associated with the transmission frequency of the application, and in most cases it’s a matter of finding a compromise between aerial size and the characteristics required for that application.”
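The size-wavelength link Amaratunga describes can be sketched with the textbook quarter-wave rule: a resonant monopole aerial is roughly a quarter of the free-space wavelength λ = c/f. The frequencies below are common illustrative bands, not values from the paper:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def quarter_wave_m(freq_hz):
    """Approximate length of a resonant quarter-wave monopole aerial."""
    return C_M_PER_S / freq_hz / 4

print(quarter_wave_m(900e6))  # ~0.083 m at 900 MHz (mobile band)
print(quarter_wave_m(2.4e9))  # ~0.031 m at 2.4 GHz (Wi-Fi)
```

Centimetre-scale lengths like these are why conventional aerials cannot simply be shrunk onto a chip without the losses the article mentions.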

Another challenge with aerials is that certain physical variables associated with radiation of energy are not well understood. For example, there is still no well-defined mathematical model related to the operation of a practical aerial. Most of what we know about electromagnetic radiation comes from theories first proposed by James Clerk Maxwell in the 19th century, which state that electromagnetic radiation is generated by accelerating electrons.

However, this theory becomes problematic when dealing with radio wave emission from a dielectric solid, a material which normally acts as an insulator, meaning that electrons are not free to move around. Despite this, dielectric resonators are already used as antennas in mobile phones, for example.

“In dielectric aerials, the medium has high permittivity, meaning that the velocity of the radio wave decreases as it enters the medium,” said Dr Dhiraj Sinha, the paper’s lead author. “What hasn’t been known is how the dielectric medium results in emission of electromagnetic waves. This mystery has puzzled scientists and engineers for more than 60 years.”

Working with researchers from the National Physical Laboratory and Cambridge-based dielectric antenna company Antenova Ltd, the Cambridge team used thin films of piezoelectric materials, a type of insulator which is deformed or vibrated when voltage is applied. They found that at a certain frequency, these materials become not only efficient resonators, but efficient radiators as well, meaning that they can be used as aerials.

The researchers determined that the reason for this phenomenon is due to symmetry breaking of the electric field associated with the electron acceleration. In physics, symmetry is an indication of a constant feature of a particular aspect in a given system. When electronic charges are not in motion, there is symmetry of the electric field.

Symmetry breaking can also apply in cases such as a pair of parallel wires in which electrons can be accelerated by applying an oscillating electric field. “In aerials, the symmetry of the electric field is broken ‘explicitly’ which leads to a pattern of electric field lines radiating out from a transmitter, such as a two wire system in which the parallel geometry is ‘broken’,” said Sinha.

The researchers found that by subjecting the piezoelectric thin films to an asymmetric excitation, the symmetry of the system is similarly broken, resulting in a corresponding symmetry breaking of the electric field, and the generation of electromagnetic radiation.

The electromagnetic radiation emitted from dielectric materials is due to accelerating electrons on the metallic electrodes attached to them, as Maxwell predicted, coupled with explicit symmetry breaking of the electric field.

“If you want to use these materials to transmit energy, you have to break the symmetry as well as have accelerating electrons – this is the missing piece of the puzzle of electromagnetic theory,” said Amaratunga. “I’m not suggesting we’ve come up with some grand unified theory, but these results will aid understanding of how electromagnetism and quantum mechanics cross over and join up. It opens up a whole set of possibilities to explore.”

The future applications of this discovery are important not just for the mobile technology we use every day; the work will also aid in the development and implementation of the Internet of Things: ubiquitous computing in which almost everything in our homes and offices, from toasters to thermostats, is connected to the internet. For these applications, billions of devices are required, and the ability to fit an ultra-small aerial on an electronic chip would be a massive leap forward.

Piezoelectric materials can be made in thin film forms using materials such as lithium niobate, gallium nitride and gallium arsenide. Gallium arsenide-based amplifiers and filters are already available on the market and this new discovery opens up new ways of integrating antennas on a chip along with other components.

“It’s actually a very simple thing, when you boil it down,” said Sinha. “We’ve achieved a real application breakthrough, having gained an understanding of how these devices work.”

MIT researchers have developed a new, ultrasensitive magnetic-field detector that is 1,000 times more energy-efficient than its predecessors. It could lead to miniaturized, battery-powered devices for medical and materials imaging, contraband detection, and even geological exploration.

Magnetic-field detectors, or magnetometers, are already used for all those applications. But existing technologies have drawbacks: Some rely on gas-filled chambers; others work only in narrow frequency bands, limiting their utility.

Synthetic diamonds with nitrogen vacancies (NVs) — defects that are extremely sensitive to magnetic fields — have long held promise as the basis for efficient, portable magnetometers. A diamond chip about one-twentieth the size of a thumbnail could contain trillions of nitrogen vacancies, each capable of performing its own magnetic-field measurement.

The problem has been aggregating all those measurements. Probing a nitrogen vacancy requires zapping it with laser light, which it absorbs and re-emits. The intensity of the emitted light carries information about the vacancy’s magnetic state.

“In the past, only a small fraction of the pump light was used to excite a small fraction of the NVs,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science and one of the designers of the new device. “We make use of almost all the pump light to measure almost all of the NVs.”

The MIT researchers report their new device in the latest issue of Nature Physics. First author on the paper is Hannah Clevenson, a graduate student in electrical engineering who is advised by senior authors Englund and Danielle Braje, a physicist at MIT Lincoln Laboratory. They’re joined by Englund’s students Matthew Trusheim and Carson Teale (who’s also at Lincoln Lab) and by Tim Schröder, a postdoc in MIT’s Research Laboratory of Electronics.

Telling absence

A pure diamond is a lattice of carbon atoms, which don’t interact with magnetic fields. A nitrogen vacancy is a missing atom in the lattice, adjacent to a nitrogen atom. Electrons in the vacancy do interact with magnetic fields, which is why they’re useful for sensing.

When a light particle — a photon — strikes an electron in a nitrogen vacancy, it kicks it into a higher energy state. When the electron falls back down into its original energy state, it may release its excess energy as another photon. A magnetic field, however, can flip the electron’s magnetic orientation, or spin, increasing the difference between its two energy states. The stronger the field, the more spins it will flip, changing the brightness of the light emitted by the vacancies.
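The field dependence described here can be sketched quantitatively. In an NV center, a magnetic field splits the two spin resonances linearly around the zero-field splitting. The constants below are textbook NV values (zero-field splitting of about 2.87 GHz, gyromagnetic ratio of about 28 GHz/T), not figures from the article, so treat this as an illustrative sketch of how a measured splitting maps back to field strength:

```python
# Illustrative sketch using textbook NV-center constants (not from the article):
# the two spin resonances shift linearly with the magnetic field component
# along the defect axis, f_plus/minus = D +/- gamma * B.

D_HZ = 2.87e9            # zero-field splitting, ~2.87 GHz
GAMMA_HZ_PER_T = 28.0e9  # NV gyromagnetic ratio, ~28 GHz/T

def nv_resonances(b_tesla):
    """Return the (lower, upper) spin-resonance frequencies in Hz."""
    return (D_HZ - GAMMA_HZ_PER_T * b_tesla,
            D_HZ + GAMMA_HZ_PER_T * b_tesla)

# A 1 mT field splits the resonances by ~56 MHz; reading that splitting
# off the emitted-light spectrum is what turns the diamond into a sensor.
lo, hi = nv_resonances(1e-3)
print((hi - lo) / 1e6)   # ~56 MHz
```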

Making accurate measurements with this type of chip requires collecting as many of those photons as possible. In previous experiments, Clevenson says, researchers often excited the nitrogen vacancies by directing laser light at the surface of the chip.

“Only a small fraction of the light is absorbed,” she says. “Most of it just goes straight through the diamond. We gain an enormous advantage by adding this prism facet to the corner of the diamond and coupling the laser into the side. All of the light that we put into the diamond can be absorbed and is useful.”

Covering the bases

The researchers calculated the angle at which the laser beam should enter the crystal so that it will remain confined, bouncing off the sides — like a tireless cue ball ricocheting around a pool table — in a pattern that spans the length and breadth of the crystal before all of its energy is absorbed.

“You can get close to a meter in path length,” Englund says. “It’s as if you had a meter-long diamond sensor wrapped into a few millimeters.” As a consequence, the chip uses the pump laser’s energy 1,000 times as efficiently as its predecessors did.

Because of the geometry of the nitrogen vacancies, the re-emitted photons emerge at four distinct angles. A lens at one end of the crystal can collect 20 percent of them and focus them onto a light detector, which is enough to yield a reliable measurement.

Consider these eight issues where the packaging team should be closely involved with the circuit design team.

BY JOHN T. MACKAY, Semi-Pac, Inc., Sunnyvale, CA

Today’s integrated circuit designs are driven by size, performance, cost, reliability, and time-to-market. In order to optimize these design drivers, the requirements of the entire system should be considered at the beginning of the design cycle—from the end system product down to the chips and their packages. Failure to include packaging in this holistic view can result in missing market windows or getting to market with a product that is more costly and problematic to build than an optimized product.

Chip design

As a starting consideration, chip packaging strategies should be developed before chip design is complete. System timing budgets, power management, and thermal behavior can be defined at the beginning of the design cycle, eliminating the sometimes impossible constraints handed to the package engineering team at the end of the design. In many instances, chip designs end up unnecessarily difficult to manufacture, with higher-than-necessary assembly costs and reduced manufacturing yields, because the chip design team used minimum design rules where looser rules would have sufficed.

Examples include using minimum pad-to-pad spacing when the pads could have been spread out, or using an unnecessarily tight metal-to-pad clearance (FIGURE 1). These hard-won lessons are well understood by the large chip manufacturers, yet they often resurface with newer companies and design teams that have not experienced them. Using design-rule minimums puts unnecessary pressure on the manufacturing process, resulting in lower overall manufacturing yields.

FIGURE 1. In this image, the bonding pads are grouped in tight clusters rather than evenly distributed across the edge of the chip. This makes it harder to bond to the pads and requires more-precise equipment to do the bonding, thus unnecessarily increasing the assembly cost and potentially impacting device reliability.

Packaging

Semiconductor packaging has often been seen as a necessary evil, with most chip designers relying on existing packages rather than customizing a package for optimal performance. Wafer-level and chip-scale packaging methods have further perpetuated the belief that the package is less important and can be eliminated, saving cost and improving performance. In fact, the semiconductor package provides six essential functions: power in, heat out, signal I/O, environmental protection, fan-out/compatibility with surface mounting (SMD), and managing reliability. These functions do not disappear with the implementation of chip-scale packaging; they simply transfer to the printed circuit board (PCB) designer. Passing the buck does not solve the problem, since PCB designers and their tools are not usually expected to give optimal consideration to the essential requirements of the semiconductor die.

Packages

Packaging technology has evolved considerably over the past 40 years. The evolution has kept pace with Moore’s Law, increasing density while at the same time reducing cost and size. Hermetic pin grid arrays (PGAs) and side-brazed packages have mostly been replaced by lead-frame-based plastic quad flat packs (QFPs). Following those developments, laminate-based ball grid arrays (BGAs), quad flat pack no-leads (QFNs), chip-scale and flip-chip direct attach became the dominant package choices.

The next generation of packages will employ through-silicon vias to allow 3D packaging with chip-on-chip or chip-on-interposer stacking. Such approaches promise to solve many of the packaging problems and usher in a new era. The reality is that each package type has its benefits and drawbacks, and no package type ever seems to become completely extinct. The designer needs an in-depth understanding of all of the packaging options to determine how each die design might benefit or suffer from any particular package type. If the designer does not have this expertise, it is wise to call in a packaging team that does.

Miniaturization

The push to put more and more electronics into a smaller space can inadvertently lead to unnecessary packaging complications. The ever-increasing push to produce thinner packages is a compromise against reliability and manufacturability. Putting unpackaged die on the board definitely saves space and can produce thinner assemblies, as in smart card applications. But this chip-on-board (COB) approach often runs into problems: the die can be difficult to bond because of their tight proximity to other components, or end up with unnecessarily long bond wires, or wires at acute angles that can cause shorts, as PCB designers attempt to reconcile board line-and-space realities with wire-bond requirements.

Additionally, the use of minimum PCB design rules can complicate the assembly process, since PCB etch-process variations must be accommodated. Picking the right PCB manufacturer matters too: many users treat laminate substrate manufacturers and standard PCB shops as interchangeable, when they are not. Often, designers select materials and metal systems that were designed for surface mounting but turn out to be difficult to wire bond. Picking a supplier that makes the right metallization tradeoffs and maintains process discipline is important in order to maximize manufacturing yields.

Power

Power distribution, including decoupling capacitance and copper ground and power planes, has mostly been a job for the PCB designer. Many users wonder why decoupling is so rarely embedded in the package as a complete unit; cost or package-size limitations are the reasons typically cited. The reality is that semiconductor component suppliers usually don’t know the system requirements, power-fluctuation tolerance, or switching-noise mitigation of any particular installation. Therefore power management is left to the system designer at the board level.

Thermal Management

Miniaturization leaves less volume and less heat-spreading surface to dissipate heat. Often there is neither room nor project budget for heat sinks. Managing junction temperature has always been the job of the packaging engineer, who must balance operating and ambient temperatures against the package’s heat flow.

Once again, it is important to develop a thermal strategy early in the design cycle that includes die specifics, die attachment material specification, heat spreading die attachment pad, thermal balls on BGA and direct thermal pad attachment during surface mount.
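The junction-temperature budget behind that strategy can be sketched with the standard first-order relation Tj = Ta + P · θja, where θja is the total junction-to-ambient thermal resistance of the chosen package. The numbers below are illustrative assumptions, not values from the article:

```python
# Minimal sketch of the standard junction-temperature budget:
# Tj = Ta + P * theta_ja, where theta_ja is the junction-to-ambient
# thermal resistance (C/W) of the chosen package and mounting.

def junction_temp(t_ambient_c, power_w, theta_ja_c_per_w):
    """First-order junction temperature estimate in degrees C."""
    return t_ambient_c + power_w * theta_ja_c_per_w

# Illustrative numbers only: a 2 W die in a package with theta_ja = 30 C/W
# at 50 C ambient runs a 110 C junction. Thermal balls on a BGA or a
# heat-spreading die-attach pad lower theta_ja and buy back margin.
print(junction_temp(50.0, 2.0, 30.0))   # 110.0
```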

Signal input/output

Managing signal integrity has always been the primary concern of the packaging engineer. Minimizing parasitics, crosstalk, impedance mismatch, transmission line effects and signal attenuation are all challenges that must be addressed. The package must handle the input/output signal requirements at the desired operating frequencies without a significant decrease in signal integrity. All packages have signal characteristics specific to the materials and package designs.
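One way to see why package parasitics matter: a common rule of thumb puts bond-wire inductance near 1 nH per millimetre of wire length, and its reactance 2πfL grows with frequency. The rule-of-thumb value and the example dimensions below are assumptions for illustration, not figures from the article:

```python
import math

# Rule-of-thumb bond-wire inductance (~1 nH/mm); an assumed value for
# illustration, not a measured figure from any specific package.
NH_PER_MM = 1.0

def bondwire_reactance_ohms(length_mm, freq_hz):
    """Series reactance 2*pi*f*L of a bond wire of the given length."""
    l_henry = length_mm * NH_PER_MM * 1e-9
    return 2 * math.pi * freq_hz * l_henry

# A 3 mm wire at 5 GHz presents ~94 ohms of series reactance, comparable
# to the trace impedance it sits in line with -- one reason long bond
# wires erode signal integrity at high operating frequencies.
print(round(bondwire_reactance_ohms(3.0, 5e9)))   # ~94
```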

Performance

There are a number of factors that impact performance, including on-chip drivers, impedance matching, crosstalk, power supply shielding, noise and PCB materials, to name a few. The performance goals must be defined at the beginning of the design cycle and tradeoffs made throughout the design process.

Environmental protection

The designer must also be aware that packaging choices have an impact on protecting the die from environmental contamination and/or damage. Next-generation chip-scale packaging (CSP) and flip-chip technologies can expose the die to contamination. While the fab, packaging and manufacturing engineers are responsible for coming up with solutions that protect the die, the design engineer needs to understand the impact that these packaging technologies have on manufacturing yields and long-term reliability.

Involve your packaging team

Hopefully, these points have provided some insights on how packaging impacts many aspects of design and should not be relegated to just picking the right package at the end of the chip design. It is important that your packaging team be involved in the design process from initial specification through the final design review.

In today’s fast-moving markets, market windows are shrinking, so time to market is often the decisive differentiator between success and failure. Not involving your packaging team early in the design cycle can result in costly rework at the end of the project, manufacturing issues that delay the product introduction or, even worse, impossible problems that could have been eliminated had packaging been considered at the beginning of the design cycle.

System design incorporates many different design disciplines. Most designers are proficient in their domain specialty and not all domains. An important byproduct of these cross-functional teams is the spreading of design knowledge throughout the teams, resulting in more robust and cost effective designs.