
Rice University scientists are counting on films of carbon nanotubes to make high-powered, fast-charging lithium metal batteries a logical replacement for common lithium-ion batteries.

Rice University graduate student Gladys López-Silva holds a lithium metal anode with a film of carbon nanotubes. Once the film is attached, it becomes infiltrated by lithium ions and turns red. Credit: Jeff Fitlow/Rice University

The Rice lab of chemist James Tour showed thin nanotube films effectively stop dendrites that grow naturally from unprotected lithium metal anodes in batteries. Over time, these tentacle-like dendrites can pierce the battery’s electrolyte core and reach the cathode, causing the battery to fail.

That problem has both dampened the use of lithium metal in commercial applications and encouraged researchers worldwide to solve it.

Lithium metal charges much faster and holds about 10 times more energy by volume than the lithium-ion electrodes found in just about every electronic device, including cellphones and electric cars.

“One of the ways to slow dendrites in lithium-ion batteries is to limit how fast they charge,” Tour said. “People don’t like that. They want to be able to charge their batteries quickly.”

The Rice team’s answer, detailed in Advanced Materials, is simple, inexpensive and highly effective at stopping dendrite growth, Tour said.

“What we’ve done turns out to be really easy,” he said. “You just coat a lithium metal foil with a multiwalled carbon nanotube film. The lithium dopes the nanotube film, which turns from black to red, and the film in turn diffuses the lithium ions.”

“Physical contact with lithium metal reduces the nanotube film, but balances it by adding lithium ions,” said Rice postdoctoral researcher Rodrigo Salvatierra, co-lead author of the paper with graduate student Gladys López-Silva. “The ions distribute themselves throughout the nanotube film.”

When the battery is in use, the film discharges stored ions and the underlying lithium anode refills it, maintaining the film’s ability to stop dendrite growth.

The tangled-nanotube film effectively quenched dendrites over 580 charge/discharge cycles of a test battery with a sulfurized-carbon cathode the lab developed in previous experiments. The researchers reported that the full lithium metal cells retained a coulombic efficiency of 99.8 percent, meaning nearly all of the charge put into the cells during charging was recovered on discharge.

Gyroscopes are devices that help vehicles, drones, and wearable and handheld electronic devices know their orientation in three-dimensional space. They are commonplace in just about every bit of technology we rely on every day. Originally, gyroscopes were sets of nested wheels, each spinning on a different axis. But open up a cell phone today, and you will find a microelectromechanical system (MEMS) sensor, the modern-day equivalent, which measures changes in the forces acting on two identical masses that oscillate and move in opposite directions. These MEMS gyroscopes are limited in their sensitivity, so optical gyroscopes have been developed to perform the same function with no moving parts and a greater degree of accuracy, using a phenomenon called the Sagnac effect.

This is the optical gyroscope developed in Ali Hajimiri’s lab, resting on grains of rice. Credit: Ali Hajimiri/Caltech

What is the Sagnac Effect?

The Sagnac effect, named after French physicist Georges Sagnac, is an optical phenomenon rooted in Einstein’s theory of general relativity. To create it, a beam of light is split into two, and the twin beams travel in opposite directions along a circular pathway, then meet at the same light detector. Light travels at a constant speed, so rotating the device, and with it the pathway that the light travels, causes one of the two beams to arrive at the detector before the other. With a loop on each axis of orientation, this phase shift, known as the Sagnac effect, can be used to calculate orientation.
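
For a sense of scale, the sketch below (Python) uses the textbook single-loop Sagnac relation, Δφ = 8πNAΩ/(λc), to estimate the phase shift for Earth-rate rotation. The loop areas and wavelength are illustrative values, not the dimensions of any particular device.

```python
import math

def sagnac_phase_shift(area_m2, omega_rad_s, wavelength_m=1.55e-6, turns=1):
    """Phase difference between counter-propagating beams in a rotating loop.

    Textbook single-loop Sagnac relation: delta_phi = 8*pi*N*A*Omega / (lambda*c),
    with A the enclosed area, Omega the rotation rate and N the number of turns.
    """
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return 8 * math.pi * turns * area_m2 * omega_rad_s / (wavelength_m * c)

# Earth's rotation (~7.3e-5 rad/s) seen by a 1 cm^2 loop versus a 1 mm^2 loop:
earth_rate = 7.292e-5  # rad/s
for area in (1e-4, 1e-6):  # 1 cm^2 and 1 mm^2, expressed in m^2
    print(f"area = {area:g} m^2 -> Sagnac phase shift = "
          f"{sagnac_phase_shift(area, earth_rate):.2e} rad")
```

Shrinking the loop area shrinks the phase shift proportionally, which is exactly the miniaturization problem described below.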

The Problem

The smallest high-performance optical gyroscopes available today are bigger than a golf ball and are not suitable for many portable applications. As optical gyroscopes are built smaller and smaller, the signal that captures the Sagnac effect shrinks as well, making it more and more difficult for the gyroscope to detect movement. Up to now, this has prevented the miniaturization of optical gyroscopes.

The Invention

Caltech engineers led by Ali Hajimiri, Bren Professor of Electrical Engineering and Medical Engineering in the Division of Engineering and Applied Science, developed a new optical gyroscope that is 500 times smaller than the current state-of-the-art device, yet can detect phase shifts that are 30 times smaller than those systems can. The new device is described in a paper published in the November issue of Nature Photonics.

How it works

The new gyroscope from Hajimiri’s lab achieves this improved performance by using a new technique called “reciprocal sensitivity enhancement.” In this case, “reciprocal” means that it affects both beams of light inside the gyroscope in the same way. Since the Sagnac effect relies on detecting a difference between the two beams as they travel in opposite directions, it is considered nonreciprocal. Inside the gyroscope, light travels through miniaturized optical waveguides (small conduits that carry light, performing the same function for light that wires do for electricity). Imperfections in the optical path that might affect the beams (for example, thermal fluctuations or light scattering), as well as any outside interference, affect both beams similarly.
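
As a toy illustration of why reciprocal disturbances can be cancelled while the rotation signal survives, the short Python sketch below models both beams as accumulating the same common-mode noise plus equal and opposite halves of a Sagnac term. All numbers are invented, and this is only a schematic of common-mode rejection, not the optical architecture described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sagnac = 1e-3                                   # tiny nonreciprocal phase from rotation (arbitrary units)
common_noise = 0.5 * rng.standard_normal(n)     # thermal drift, scattering: hits both beams equally

# Phase accumulated by the clockwise and counter-clockwise beams:
phi_cw  = common_noise + sagnac / 2
phi_ccw = common_noise - sagnac / 2

# Differencing the two directions cancels the reciprocal (common-mode) part
# and leaves only the rotation-induced signal:
recovered = np.mean(phi_cw - phi_ccw)
print(f"true Sagnac term: {sagnac:.2e}, recovered: {recovered:.2e}")
print(f"common-mode noise level: {np.std(common_noise):.2f}")
```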

Hajimiri’s team found a way to weed out this reciprocal noise while leaving signals from the Sagnac effect intact. Reciprocal sensitivity enhancement thus improves the signal-to-noise ratio in the system and enables the integration of the optical gyro onto a chip smaller than a grain of rice.

Spectrometers — devices that distinguish different wavelengths of light and are used to determine the chemical composition of everything from laboratory materials to distant stars — are typically large instruments with six-figure price tags, and tend to be found in large university and industry labs or observatories.

A collection of mini-spectrometer chips are arrayed on a tray after being made through conventional chip-making processes. Credit: Felice Frankel

A new advance by researchers at MIT could make it possible to produce tiny spectrometers that are just as accurate and powerful but could be mass produced using standard chip-making processes. This approach could open up new uses for spectrometry that previously would have been physically and financially impossible.

The invention is described today in the journal Nature Communications, in a paper by MIT associate professor of materials science and engineering Juejun Hu, doctoral student Derek Kita, research assistant Brando Miranda, and five others.

The researchers say this new approach to making spectrometers on a chip could provide major advantages in performance, size, weight, and power consumption, compared to current instruments.

Other groups have tried to make chip-based spectrometers, but there is a built-in challenge: A device’s ability to spread out light based on its wavelength, using any conventional optical system, is highly dependent on the device’s size. “If you make it smaller, the performance degrades,” Hu says.

Another type of spectrometer uses a mathematical approach called a Fourier transform. But these devices are still limited by the same size constraint: long, tunable optical paths are essential to high performance, so miniaturized spectrometers have traditionally been inferior to their benchtop counterparts.

Instead, “we used a different technique,” says Kita. Their system is based on optical switches, which can instantly flip a beam of light between optical pathways of different lengths. These all-electronic optical switches eliminate the need for the movable mirrors required in current versions, and can easily be fabricated using standard chip-making technology.

By eliminating the moving parts, Kita says, “there’s a huge benefit in terms of robustness. You could drop it off the table without causing any damage.”

By using path lengths in power-of-two increments, these lengths can be combined in different ways to produce an exponential number of discrete total lengths, leading to a potential spectral resolution that increases exponentially with the number of on-chip optical switches. It’s the same principle that allows a balance scale to accurately measure a broad range of weights by combining just a small number of standard weights.
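
As a sketch of that counting argument (in Python, with an arbitrary unit path length rather than the chip's real dimensions), N binary switches choosing between a short arm and an arm longer by a power-of-two increment reach 2^N distinct total path lengths:

```python
from itertools import product

def switch_path_lengths(n_switches, unit=1.0):
    """Total optical path lengths reachable with n binary switches.

    Each switch i either adds 0 or unit * 2**i of extra path, so the
    reachable set has 2**n_switches evenly spaced values -- the same trick
    that lets a balance scale cover a wide range with few standard weights.
    """
    lengths = set()
    for bits in product((0, 1), repeat=n_switches):
        lengths.add(sum(b * unit * 2**i for i, b in enumerate(bits)))
    return sorted(lengths)

print(len(switch_path_lengths(6)))   # 64 distinct lengths -> 64 spectral channels
print(len(switch_path_lengths(10)))  # 1024 distinct lengths
```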

As a proof of concept, the researchers contracted an industry-standard semiconductor manufacturing service to build a device with six sequential switches, producing 64 spectral channels, with built-in processing capability to control the device and process its output. By expanding to 10 switches, the resolution would jump to 1,024 channels. They designed the device as a plug-and-play unit that could be easily integrated with existing optical networks.

The team also used new machine-learning techniques to reconstruct detailed spectra from a limited number of channels. The method they developed works well to detect both broad and narrow spectral peaks, Kita says. The team was able to demonstrate that the device’s performance did indeed match their calculations, which opens up a wide range of potential further development for various applications.
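
The article does not spell out the reconstruction algorithm, so the following sketch only illustrates the general idea with a plain ridge-regularized least-squares fit on synthetic data: a spectrum sampled at many wavelengths is recovered from a smaller number of channel readings through a known (here random, made-up) response matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_wavelengths = 64, 256

# Synthetic instrument response (each channel mixes many wavelengths) and a
# synthetic "true" spectrum with one broad and one narrow peak.
A = rng.random((n_channels, n_wavelengths))
wl = np.linspace(0, 1, n_wavelengths)
true_spectrum = np.exp(-((wl - 0.3) / 0.08) ** 2) + 0.5 * np.exp(-((wl - 0.7) / 0.01) ** 2)

measurements = A @ true_spectrum + 0.01 * rng.standard_normal(n_channels)

# Ridge-regularized least squares: minimize ||A x - y||^2 + alpha * ||x||^2
alpha = 1e-2
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n_wavelengths), A.T @ measurements)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - true_spectrum) / np.linalg.norm(true_spectrum))
```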

The researchers say such spectrometers could find applications in sensing devices, materials analysis systems, optical coherent tomography in medical imaging, and monitoring the performance of optical networks, upon which most of today’s digital networks rely. Already, the team has been contacted by some companies interested in possible uses for such microchip spectrometers, with their promise of huge advantages in size, weight, and power consumption, Kita says. There is also interest in applications for real-time monitoring of industrial processes, Hu adds, as well as for environmental sensing for industries such as oil and gas.

Computer bits are binary, with a value of 0 or 1. By contrast, neurons in the brain can have all kinds of different internal states, depending on the input they receive. This allows the brain to process information in a more energy-efficient manner than a computer. University of Groningen (UG) physicists are working on memristors, resistors with a memory, made from niobium-doped strontium titanate, which mimic how neurons work. Their results were published in the Journal of Applied Physics on 21 October.

The brain is superior to traditional computers in many ways. Brain cells use less energy, process information faster and are more adaptable. The way that brain cells respond to a stimulus depends on the information that they have received, which potentiates or inhibits the neurons. Scientists are working on new types of devices which can mimic this behavior, called memristors.

Memory

UG researcher Anouk Goossens, the first author of the paper, tested memristors made from niobium-doped strontium titanate. The conductivity of the memristors is controlled by an electric field in an analog fashion: ‘We use the system’s ability to switch resistance: by applying voltage pulses, we can control the resistance, and using a low voltage we read out the current in different states. The strength of the pulse determines the resistance in the device. We have shown a resistance ratio of at least 1000 to be realizable. We then measured what happened over time.’ Goossens was especially interested in the time dynamics of the resistance states.

She observed that the duration of the pulse with which the resistance was set determined how long the ‘memory’ lasted. This could be between one and four hours for pulses lasting between a second and two minutes. Furthermore, she found that after 100 switching cycles, the material showed no signs of fatigue.
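
To make the programming and read-out protocol concrete, here is a deliberately crude toy model in Python. The functional forms, voltages and time constants are invented for illustration and are not fitted to the Groningen devices: a write pulse sets the resistance somewhere inside a roughly 1000:1 window, a small read voltage probes the current, and the programmed state gradually relaxes, with longer write pulses giving longer retention.

```python
import numpy as np

class ToyMemristor:
    """Phenomenological analog memristor: illustration only, not a device model."""

    def __init__(self, r_high=1e6, r_low=1e3):
        self.r_high, self.r_low = r_high, r_low   # ~1000x resistance window
        self.resistance = r_high
        self.retention_s = 0.0

    def write(self, pulse_voltage, pulse_duration_s):
        # Stronger pulses push the resistance further toward r_low;
        # longer pulses (assumed) extend how long the state is retained.
        depth = min(abs(pulse_voltage) / 5.0, 1.0)
        self.resistance = self.r_high * (self.r_low / self.r_high) ** depth
        self.retention_s = 3600.0 * (1 + np.log1p(pulse_duration_s))

    def read(self, read_voltage=0.1, elapsed_s=0.0):
        # The programmed state gradually "forgets", relaxing toward r_high.
        decay = np.exp(-elapsed_s / max(self.retention_s, 1e-9))
        r_now = self.r_high + (self.resistance - self.r_high) * decay
        return read_voltage / r_now   # read current at low voltage

m = ToyMemristor()
m.write(pulse_voltage=4.0, pulse_duration_s=10.0)
for t in (0, 1800, 7200):   # read immediately, after 30 min, after 2 h
    print(f"t = {t:>5d} s  read current = {m.read(elapsed_s=t):.3e} A")
```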

Forgetting

‘There are different things you could do with this’, says Goossens. ‘By “teaching” the device in different ways, using different pulses, we can change its behavior.’ The fact that the resistance changes over time can also be useful: ‘These systems can forget, just like the brain. It allows me to use time as a variable parameter.’ In addition, the devices that Goossens made combine both memory and processing in one device, which is more efficient than traditional computer architecture in which storage (on magnetic hard discs) and processing (in the CPU) are separated.

Goossens conducted the experiments described in the paper during a research project as part of the Master in Nanoscience degree programme at the University of Groningen. The project took place in the Spintronics of Functional Materials group, supervised by Dr. Tamalika Banerjee. Goossens is now a Ph.D. student in the same group.

Questions

Before building brain-like circuits with her device, Goossens plans to conduct experiments to really understand what happens within the material. ‘If we don’t know exactly how it works, we can’t solve any problems that might occur in these circuits. So, we have to understand the physical properties of the material: what does it do, and why?’

Questions that Goossens wants to answer include which parameters influence the states that are achieved. ‘And if we manufacture 100 of these devices, do they all work the same? If they don’t, and there is device-to-device variation, that doesn’t have to be a problem. After all, not all elements in the brain are the same.’

STMicroelectronics (NYSE: STM) and Fidesmo, the contactless-services developer and Mastercard Approved Global Vendor, have created a turnkey active solution for secure contactless payments on smart watches and other wearable technology.

The complete payment system-on-chip (SoC) is based on ST’s STPay-Boost IC, which combines a hardware secure element to protect transactions and a contactless controller featuring proprietary active-boost technology that maintains reliable NFC connections even in devices made with metallic materials. Its single-chip footprint fits easily within wearable form factors.

Fidesmo’s MasterCard MDES tokenization platform completes the solution by allowing the user to load the personal data needed for payment transactions. Convenient Over-The-Air (OTA) technology makes personalization a simple step for the user without any special equipment.

“STPay-Boost, featuring our secure element and performance-boosting active contactless technology, is a unique single-chip payment solution that fits the design constraints of wearable devices,” said Laurent Degauque, Marketing Director, Secure Microcontroller Division, STMicroelectronics. “Fidesmo’s personalization platform provides the vital ingredient to create a turnkey payment solution that device makers can simply take and use with minimal engineering and certification effort.”

“Our cooperation with STMicroelectronics opens up a new market for us with support for lightweight boosted secure elements, and broadens choices for our customers,” said Mattias Eld, CEO of Fidesmo. “Together, we have created a unique offering that is sure to impact the hybrid watch market and drive the emergence of innovative new products such as wristbands, bracelets, key fobs, and connected jewelry.”

Kronaby, the Malmö, Sweden-based hybrid smart-watch maker, has embedded the STPay-Boost chip in its portfolio of men’s and women’s smart watches that offer differentiated features such as freedom from charging and filtered notifications. The SoC with Fidesmo tokenization enables Kronaby watches to support a variety of services such as payments, access control, transportation, and loyalty rewards.

Jonas Morän, Global Product Manager at Kronaby, said, “We have achieved an intuitive and seamless user experience leveraging the ability to communicate with the Secure Element in the watch using Bluetooth. In addition, the ST/Fidesmo chip’s boosted wireless performance and compact size gives us extra freedom to style our watches to maximize their appeal in our target markets.”

STPay-Boost chips with Fidesmo OTA personalization are sampling now to lead customers and are scheduled to enter production in November 2018, priced from $3.50 for orders of 1000 pieces (excluding Fidesmo license).

Just like their biological counterparts, hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse, with some connections strengthening at the expense of others. One such approach uses memristors, devices that store this information in their electrical resistance. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level.

A group of researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. In work published in the Journal of Applied Physics, from AIP Publishing, the group describes a compound synapse constructed with atomically thin boron nitride memristors running in parallel to ensure efficiency and accuracy.

This image shows a conceptual schematic of the 3D implementation of compound synapses constructed with boron nitride oxide (BNOx) binary memristors, and the crossbar array with compound BNOx synapses for neuromorphic computing applications. Credit: Ivan Sanchez Esqueda

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of today’s semiconductors.

“There’s a lot of interest in using new types of materials for memristors,” said Ivan Sanchez Esqueda, an author on the paper. “What we’re showing is that filamentary devices can work well for neuromorphic computing applications, when constructed in new clever ways.”

Current memristor technology suffers from a wide variation in how signals are stored and read across devices, both for different types of memristors as well as different runs of the same memristor. To overcome this, the researchers ran several memristors in parallel. The combined output can achieve accuracies up to five times those of conventional devices, an advantage that compounds as devices become more complex.
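
A small numerical sketch (Python, with an invented device-to-device spread) shows why parallel devices help: each synaptic weight is realized as several memristive conductances summed in parallel, the spread averages down as more devices are added, and the crossbar's output currents implement the vector-matrix multiplication mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)

def program_conductance(target, n_parallel, spread=0.3):
    """Realize a target conductance as n_parallel devices summed in parallel.

    Each device lands within +/- spread (relative) of target/n_parallel;
    summing over n_parallel devices averages the spread down.
    """
    per_device = target / n_parallel
    devices = per_device * (1 + spread * rng.uniform(-1, 1, size=n_parallel))
    return devices.sum()

targets = rng.uniform(0.2, 1.0, size=(16, 16))   # desired synaptic weight matrix
x = rng.uniform(0, 1, size=16)                   # input voltage vector

for n_parallel in (1, 4, 16):
    G = np.vectorize(lambda w: program_conductance(w, n_parallel))(targets)
    # Crossbar vector-matrix multiply: output currents are I = G @ V (Kirchhoff's laws)
    err = np.linalg.norm(G @ x - targets @ x) / np.linalg.norm(targets @ x)
    print(f"{n_parallel:>2d} device(s) per synapse -> relative VMM error {err:.3f}")
```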

The choice to go to the subnanometer level, Sanchez said, was born out of an interest to keep all of these parallel memristors energy-efficient. An array of the group’s memristors was found to be 10,000 times more energy-efficient than memristors currently available.

“It turns out if you start to increase the number of devices in parallel, you can see large benefits in accuracy while still conserving power,” Sanchez said. Sanchez said the team next looks to further showcase the potential of the compound synapses by demonstrating their use in completing increasingly complex tasks, such as image and pattern recognition.

As artificial intelligence has become increasingly sophisticated, it has inspired renewed efforts to develop computers whose physical architecture mimics the human brain. One approach, called reservoir computing, allows hardware devices to achieve the higher-dimension calculations required by emerging artificial intelligence. One new device highlights the potential of extremely small mechanical systems to achieve these calculations.

A group of researchers at the Université de Sherbrooke in Québec, Canada, reports the construction of the first reservoir computing device built with a microelectromechanical system (MEMS). In work published in the Journal of Applied Physics, from AIP Publishing, the group describes a neural network that exploits the nonlinear dynamics of a microscale silicon beam to perform its calculations. The group’s work looks to create devices that can act simultaneously as a sensor and a computer using a fraction of the energy a normal computer would use.

A single silicon beam (red), along with its drive (yellow) and readout (green and blue) electrodes, implements a MEMS capable of nontrivial computations. Credit: Guillaume Dion

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of today’s semiconductors.

“These kinds of calculations are normally only done in software, and computers can be inefficient,” said Guillaume Dion, an author on the paper. “Many of the sensors today are built with MEMS, so devices like ours would be ideal technology to blur the boundary between sensors and computers.”

The device relies on the nonlinear dynamics of how the silicon beam, about 20 times thinner than a human hair, oscillates in space. The results from this oscillation are used to construct a virtual neural network that projects the input signal into the higher-dimensional space required for neural network computing.
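
The following Python sketch is a generic single-node reservoir computer, offered only as a software stand-in for the MEMS beam: a single nonlinear element (a tanh map in place of the beam's mechanics) is driven by masked copies of the input plus its own delayed response, its states serve as virtual neurons, and only a linear readout is trained. The task, mask and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(inputs, n_virtual=50, feedback=0.6, scale=1.5):
    """Expand each input sample into n_virtual nonlinear features.

    A single nonlinear node (tanh) is driven by masked copies of the input
    plus its own delayed response -- a software analogue of reading out a
    driven nonlinear oscillator at many points in time.
    """
    mask = rng.uniform(-1, 1, n_virtual)
    states = np.zeros((len(inputs), n_virtual))
    x = np.zeros(n_virtual)
    for t, u in enumerate(inputs):
        x = np.tanh(scale * mask * u + feedback * np.roll(x, 1))
        states[t] = x
    return states

# Toy task: reproduce a nonlinear function of the recent input history.
u = rng.uniform(0, 0.5, 2000)
y = np.roll(u, 1) * u + 0.3 * np.roll(u, 2)

X = reservoir_states(u)
w, *_ = np.linalg.lstsq(X[10:], y[10:], rcond=None)  # train the linear readout only
pred = X[10:] @ w
print("relative readout error:", np.linalg.norm(pred - y[10:]) / np.linalg.norm(y[10:]))
```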

In demonstrations, the system was able to switch between different common benchmark tasks for neural networks with relative ease, Dion said, including classifying spoken sounds and processing binary patterns with accuracies of 78.2 percent and 99.9 percent respectively.

“This tiny beam of silicon can do very different tasks,” said Julien Sylvestre, another author on the paper. “It’s surprisingly easy to adjust it to make it perform well at recognizing words.”

Sylvestre said he and his colleagues are looking to explore increasingly complicated computations using the silicon beam device, with the hopes of developing small and energy-efficient sensors and robot controllers.

A new approach in Fault Detection and Classification (FDC) allows engineers to uncover issues more thoroughly and accurately by taking advantage of full sensor traces.

By Tom Ho and Stewart Chalmers, BISTel, Santa Clara, CA

Traditional FDC systems collect data from production equipment, summarize it, and compare it to control limits that were previously set up by engineers. Software alarms are triggered when any of the summarized data fall outside of the control limits. While this method has been effective and widely deployed, it does create a few challenges for the engineers:

  • The use of summary data means that (1) subtle changes in the process may not be noticed and (2) unmonitored sections of the process will be overlooked by a typical FDC system. These subtle changes, or anomalies missed in unmonitored sections, may result in critical problems.
  • Modeling control limits for fault detection is a manual process, prone to human error and process drift. With hundreds of thousands of sensors in a complex manufacturing process, the task of modeling control limits is extremely time consuming and requires a deep understanding of the particular manufacturing process on the part of the engineer. Non-optimized control limits result in misdetection: false alarms or missed alarms.
  • As equipment ages, processes change. Meticulously set control limit ranges must be adjusted, requiring engineers to constantly monitor equipment and sensor data to avoid false alarms or missed real alarms.

Full sensor trace detection

A new approach, Dynamic Fault Detection (DFD) was developed to address the shortcomings of traditional FDC systems and save both production time and engineer time. DFD takes advantage of the full trace from each and every sensor to detect any issues during a manufacturing process. By analyzing each trace in its entirety, and running them through intelligent software, the system is able to comprehensively identify potential issues and errors as they occur. As the Adaptive Intelligence behind Dynamic Fault Detection learns each unique production environment, it will be able to identify process anomalies in real time without the need for manual adjustment from engineers. Great savings can be realized by early detection, increased engineer productivity, and containment of malfunctions.

DFD’s strength is its ability to analyze full trace data. As shown in FIGURE 1, there are many subtle details on a trace, such as spikes, shifts, and ramp-rate changes, which are typically ignored or go undetected by traditional FDC systems because they examine only summary data from a segment of the trace. By analyzing the full trace using DFD, these details can easily be identified to provide a more thorough analysis than ever before.

Figure 1

Dynamic referencing

Unlike traditional FDC deployments, DFD does not require control limit modeling. The novel solution adapts machine learning techniques to take advantage of neighboring traces as references, so control limits are dynamically defined in real time.  Not only does this substantially reduce set up and deployment time of a fault detection system, it also eliminates the need for an engineer to continuously maintain the model. Since the analysis is done in real time, the model evolves and adapts to any process shifts as new reference traces are added.

DFD has multiple reference configurations available for engineers to choose from to fine tune detection accuracy. For example, DFD can 1) use traces within a wafer lot as reference, 2) use traces from the last N wafers as reference, 3) use “golden” traces as reference, or 4) a combination of the above.
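
The sketch below illustrates the referencing idea in simplified form; it is not BISTel's algorithm, and the trace data, band width and threshold are invented. Each wafer's full trace is compared point by point against a reference built from the other wafers in the lot, and at most one alarm is raised per wafer.

```python
import numpy as np

rng = np.random.default_rng(7)
n_wafers, n_points = 25, 400

# Synthetic full sensor traces (e.g. a temperature profile) for one wafer lot.
t = np.linspace(0, 1, n_points)
nominal = 20 + 60 * np.exp(-((t - 0.5) / 0.2) ** 2)
traces = nominal + rng.normal(0, 0.5, (n_wafers, n_points))
traces[13, 100:140] += 6.0          # inject a subtle shift on one wafer

def dfd_like_check(traces, k=8.0):
    """Flag wafers whose full trace deviates from their neighbours' traces.

    For each wafer, the reference is the point-wise median of the other
    wafers in the lot; the allowed band scales with the point-wise spread.
    At most one alarm is returned per wafer.
    """
    alarms = []
    for i, trace in enumerate(traces):
        others = np.delete(traces, i, axis=0)
        ref = np.median(others, axis=0)
        spread = np.median(np.abs(others - ref), axis=0) + 1e-9
        if np.any(np.abs(trace - ref) > k * spread):
            alarms.append(i)
    return alarms

print("wafers flagged:", dfd_like_check(traces))   # expect [13]
```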

As more sensors are added to the Internet of Things network of a production plant, DFD can integrate their data into its decision-making process.

Optimized alarming

Thousands of process alarms inundate engineers each day, only a small percentage of which are valid. In today’s FDC systems, one of the main causes of false alarms is improperly configured Statistical Process Control (SPC) limits. Also, a typical FDC system may generate one alarm for each limit violation, resulting in many alarms for each wafer processed. DFD implementations require no control limits, greatly reducing the potential for false alarms. In addition, DFD is designed to issue only one alarm per wafer, further streamlining the alarming system and providing better focus for the engineers.

Dynamic fault detection use cases

The following examples illustrate actual use cases to show the benefits of utilizing DFD for fault detection.

Use case #1 – End Point Abnormal Etching

In this example, both the upper and lower control limits in SPC were not set at the optimum levels, preventing the traditional FDC system from detecting several abnormally etched wafers (FIGURE 2).  No SPC alarms were issued to notify the engineer.

Figure 2

On the other hand, DFD full trace comparison easily detects the abnormality by comparing to neighboring traces (FIGURE 3).  This was accomplished without having to set up any control limits.

Figure 3

Use case #2 – Resist Bake Plate Temperature

The SPC chart in FIGURE 4 clearly shows that the resist bake plate temperature pattern changed significantly; however, since the temperature range during the process never exceeded the control limits, SPC did not issue any alarms.

Figure 4

When the same parameter was analyzed using DFD, the temperature profile abnormality was easily identified, and the software notified an engineer (FIGURE 5).

Figure 5

Use case #3 – Full Trace Coverage

Engineers select only a segment of sensor trace data to monitor because setting up SPC limits is so arduous. In this specific case, the SPC system was set up to monitor only the He_Flow parameter in recipe step 3 and step 4.  Since no unusual events occurred during those steps in the process, no SPC alarms were triggered.

However, in that same production run, a DFD alarm was issued for one of the wafers. Upon examination of the trace summary chart shown in FIGURE 6, it is clear that while the parameter behaved normally during recipe step 3 and step 4, there was a noticeable issue from one of the wafers during recipe step 1 and step 2.  The trace in red represents the offending trace versus the rest of the (normal) population in blue. DFD full trace analysis caught the abnormality.

Figure 6

Use case #4 – DFD Alarm Accuracy

When setting up SPC limits in a conventional FDC system, the method of calculation chosen by an engineer can yield vastly different results. In this example, the engineer used multiple SPC approaches to monitor the parameter Match_LoadCap in an etcher. When the control limits were set using Standard Deviation (FIGURE 7), a large number of false alarms were triggered. On the other hand, zero alarms were triggered using the Mean approach (FIGURE 8).

Figure 7

Figure 8

Using DFD full trace detection eliminates the discrepancy between calculation methods. In the above example, DFD was able to identify an issue with one of the wafers in recipe step 3 and trigger only one alarm.

Dynamic fault detection scope of use

DFD is designed to be used in production environments of many types, ranging from semiconductor manufacturing to automotive plants and everything in between. As long as the manufacturing equipment being monitored generates systematic and consistent trace patterns, such as gas flow, temperature, pressure, power etc., proper referencing can be established by the Adaptive Intelligence (AI) to identify abnormalities. Sensor traces from Process of Record (POR) runs may be used as starting references.

Conclusion

The DFD solution reduces risk in manufacturing by protecting against events that impact yield.  It also provides engineers with an innovative new tool that addresses several limitations of today’s traditional FDC systems.  As shown in TABLE 1, the solution greatly reduces the time required for deployment and maintenance, while providing a more thorough and accurate detection of issues.

TABLE 1. Comparison of traditional FDC and DFD (per recipe/tool type)

                                          FDC                      DFD
FDC model creation                        1 – 2 weeks              < 1 day
FDC model validation and fine tuning      2 – 3 weeks              < 1 week
Model Maintenance                         Ongoing                  Minimal
Typical Alarm Rate                        100 – 500/chamber-day    < 50/chamber-day
% Coverage of Number of Sensors           50 – 60%                 100% as default
Trace Segment Coverage                    20 – 40%                 100%
Adaptive to Systematic Behavior Changes   No                       Yes

TOM HO is President of BISTel America, where he leads global product engineering and development efforts for BISTel. [email protected]. STEWART CHALMERS is President & CEO of Hill + Kincaid, a technical marketing firm. [email protected]

The vast majority of computing devices today are made from silicon, the second most abundant element in Earth’s crust, after oxygen. Silicon can be found in various forms in rocks, clay, sand, and soil. And while it is not the best semiconducting material that exists on the planet, it is by far the most readily available. As such, silicon is the dominant material used in most electronic devices, including sensors, solar cells, and the integrated circuits within our computers and smartphones.

Now MIT engineers have developed a technique to fabricate ultrathin semiconducting films made from a host of exotic materials other than silicon. To demonstrate their technique, the researchers fabricated flexible films made from gallium arsenide, gallium nitride, and lithium fluoride — materials that exhibit better performance than silicon but until now have been prohibitively expensive to produce in functional devices.

MIT researchers have devised a way to grow single crystal GaN thin film on a GaN substrate through two-dimensional materials. The GaN thin film is then exfoliated by a flexible substrate, showing the rainbow color that comes from thin film interference. This technology will pave the way to flexible electronics and the reuse of the wafers. Credit: Wei Kong and Kuan Qiao

The new technique, researchers say, provides a cost-effective method to fabricate flexible electronics made from any combination of semiconducting elements that could perform better than current silicon-based devices.

“We’ve opened up a way to make flexible electronics with so many different material systems, other than silicon,” says Jeehwan Kim, the Class of 1947 Career Development Associate Professor in the departments of Mechanical Engineering and Materials Science and Engineering. Kim envisions the technique can be used to manufacture low-cost, high-performance devices such as flexible solar cells, and wearable computers and sensors.

Details of the new technique are reported today in Nature Materials. In addition to Kim, the paper’s MIT co-authors include Wei Kong, Huashan Li, Kuan Qiao, Yunjo Kim, Kyusang Lee, Doyoon Lee, Tom Osadchy, Richard Molnar, Yang Yu, Sang-hoon Bae, Yang Shao-Horn, and Jeffrey Grossman, along with researchers from Sun Yat-Sen University, the University of Virginia, the University of Texas at Dallas, the U.S. Naval Research Laboratory, Ohio State University, and Georgia Tech.

Now you see it, now you don’t

In 2017, Kim and his colleagues devised a method to produce “copies” of expensive semiconducting materials using graphene — an atomically thin sheet of carbon atoms arranged in a hexagonal, chicken-wire pattern. They found that when they stacked graphene on top of a pure, expensive wafer of semiconducting material such as gallium arsenide, then flowed atoms of gallium and arsenic over the stack, the atoms appeared to interact in some way with the underlying atomic layer, as if the intermediate graphene were invisible or transparent. As a result, the atoms assembled into the precise, single-crystalline pattern of the underlying semiconducting wafer, forming an exact copy that could then easily be peeled away from the graphene layer.

The technique, which they call “remote epitaxy,” provided an affordable way to fabricate multiple films of gallium arsenide, using just one expensive underlying wafer.

Soon after they reported their first results, the team wondered whether their technique could be used to copy other semiconducting materials. They tried applying remote epitaxy to silicon, and also germanium — two inexpensive semiconductors — but found that when they flowed these atoms over graphene they failed to interact with their respective underlying layers. It was as if graphene, previously transparent, became suddenly opaque, preventing atoms of silicon and germanium from “seeing” the atoms on the other side.

As it happens, silicon and germanium are two elements that exist within the same group of the periodic table of elements. Specifically, the two elements belong in group four, a class of materials that are ionically neutral, meaning they have no polarity.

“This gave us a hint,” says Kim.

Perhaps, the team reasoned, atoms can only interact with each other through graphene if they have some ionic charge. For instance, in the case of gallium arsenide, gallium carries a slight positive charge at the interface, compared with arsenic’s negative charge. This charge difference, or polarity, may have helped the atoms to interact through graphene as if it were transparent, and to copy the underlying atomic pattern.

“We found that the interaction through graphene is determined by the polarity of the atoms. For the strongest ionically bonded materials, they interact even through three layers of graphene,” Kim says. “It’s similar to the way two magnets can attract, even through a thin sheet of paper.”

Opposites attract

The researchers tested their hypothesis by using remote epitaxy to copy semiconducting materials with various degrees of polarity, from neutral silicon and germanium, to slightly polarized gallium arsenide, and finally, highly polarized lithium fluoride — a better, more expensive semiconductor than silicon.

They found that the greater the degree of polarity, the stronger the atomic interaction, even, in some cases, through multiple sheets of graphene. Each film they were able to produce was flexible and merely tens to hundreds of nanometers thick.

The material through which the atoms interact also matters, the team found. In addition to graphene, they experimented with an intermediate layer of hexagonal boron nitride (hBN), a material that resembles graphene’s atomic pattern and has a similar Teflon-like quality, enabling overlying materials to easily peel off once they are copied.

However, hBN is made of oppositely charged boron and nitrogen atoms, which generate a polarity within the material itself. In their experiments, the researchers found that any atoms flowing over hBN, even if they were highly polarized themselves, were unable to interact with their underlying wafers completely, suggesting that the polarity of both the atoms of interest and the intermediate material determines whether the atoms will interact and form a copy of the original semiconducting wafer.

“Now we really understand there are rules of atomic interaction through graphene,” Kim says.

With this new understanding, he says, researchers can now simply look at the periodic table and pick two elements of opposite charge. Once they acquire or fabricate a main wafer made from the same elements, they can then apply the team’s remote epitaxy techniques to fabricate multiple, exact copies of the original wafer.

“People have mostly used silicon wafers because they’re cheap,” Kim says. “Now our method opens up a way to use higher-performing, nonsilicon materials. You can just purchase one expensive wafer and copy it over and over again, and keep reusing the wafer. And now the material library for this technique is totally expanded.”

Kim envisions that remote epitaxy can now be used to fabricate ultrathin, flexible films from a wide variety of previously exotic, semiconducting materials — as long as the materials are made from atoms with a degree of polarity. Such ultrathin films could potentially be stacked, one on top of the other, to produce tiny, flexible, multifunctional devices, such as wearable sensors, flexible solar cells, and even, in the distant future, “cellphones that attach to your skin.”

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

The world is edging closer to a reality where smart devices are able to use their owners as an energy resource, say experts from the University of Surrey.

In a study published by the Advanced Energy Materials journal, scientists from Surrey’s Advanced Technology Institute (ATI) detail an innovative solution for powering the next generation of electronic devices by using Triboelectric Nanogenerators (TENGs). Along with human movements, TENGs can capture energy from common sources such as wind, waves, and machine vibration.

A TENG is an energy harvesting device that uses the contact between two or more (hybrid, organic or inorganic) materials to produce an electric current.

Researchers from the ATI have provided a step-by-step guide on how to construct the most efficient energy harvesters. The study introduces a “TENG power transfer equation” and “TENG impedance plots”, tools which can help improve the design for power output of TENGs.
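
The paper's specific "TENG power transfer equation" is not reproduced here, so the sketch below only illustrates the generic load-matching behaviour such design tools address: for a source with a finite internal impedance, the power delivered to an external load peaks when the load matches the source. All values are arbitrary placeholders, not parameters from the study.

```python
import numpy as np

# Generic load-matching illustration (not the TENG-specific equation from the paper):
# a source with open-circuit voltage V_oc and internal resistance R_s delivers
# P = V_oc^2 * R_L / (R_s + R_L)^2 to a load R_L, which peaks at R_L = R_s.
v_oc = 100.0          # arbitrary open-circuit voltage, volts
r_source = 1e7        # arbitrary internal resistance, ohms (TENGs are high-impedance sources)

r_load = np.logspace(5, 9, 200)                      # sweep the load resistance
power = v_oc**2 * r_load / (r_source + r_load) ** 2

best = r_load[np.argmax(power)]
print(f"power peaks near R_L = {best:.2e} ohm (source impedance {r_source:.2e} ohm)")
```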

Professor Ravi Silva, Director of the ATI, said: “A world where energy is free and renewable is a cause that we are extremely passionate about here at the ATI (and the University of Surrey) – TENGs could play a major role in making this dream a reality. TENGs are ideal for powering wearables, internet of things devices and self-powered electronic applications. This research puts the ATI in a world leading position for designing optimized energy harvesters.”

Ishara Dharmasena, PhD student and lead scientist on the project, said: “I am extremely excited with this new study which redefines the way we understand energy harvesting. The new tools developed here will help researchers all over the world to exploit the true potential of triboelectric nanogenerators, and to design optimised energy harvesting units for custom applications.”