
A team of University of Alberta engineers developed a new way to produce electrical power that can charge handheld devices or sensors that monitor anything from pipelines to medical implants. The discovery sets a new world standard in devices called triboelectric nanogenerators by producing a high-density DC current, a vast improvement over the low-quality AC currents produced by other research teams.

Jun Liu, a PhD student working under the supervision of chemical engineering professor Thomas Thundat, was conducting research unrelated to these tiny generators, using a device called an atomic force microscope. It provides images at the atomic level using a tiny cantilever to “feel” an object, the same way you might learn about an object by running a finger over it. Liu forgot to press a button that would apply electricity to the sample, but he still saw a current coming from the material.

“I didn’t know why I was seeing a current,” he recalled.

One theory was that it was an anomaly, a technical problem, or interference. But Liu wanted to get to the bottom of it. He eventually pinned the cause on the friction of the microscope’s probe on the material. It’s like shuffling across a carpet and then touching someone, giving them a shock.

It turns out that the mechanical energy of the microscope’s cantilever moving across a surface can generate a flow of electricity. But instead of releasing all the energy in one burst, the U of A team generated a steady current.

“Many other researchers are trying to generate power at the prototype stage, but their performance is limited by the current density they’re getting; that is the problem we solved,” said Liu.

“This is big,” said Thundat. “So far, what other teams have been able to do is to generate very high voltages, but not the current. What Jun has discovered is a new way to get a continuous flow of high current.”

The discovery means that nanoscale generators have the potential to harvest power for electrical devices from nanoscale movement and vibration: an engine, traffic on a roadway, even a heartbeat. It could lead to technology with applications in everything from sensors that monitor the structural health of bridges and pipelines, to monitors of engine performance, to wearable electronic devices.

Liu said the applications are limited only by imagination.

IBM’s Khare on A.I.


December 7, 2017

BY PETE SINGER, Editor-in-Chief

Mukesh Khare, VP of IBM Research, talked about the impact artificial intelligence (AI) is going to have on the semiconductor industry during a recent panel session hosted by Applied Materials. He said that today most artificial intelligence is too complex: it requires training, building models, and then doing inferencing using those models. “The reason there is growth in artificial intelligence is the exponential increase in data and cheap compute. But keep in mind that the compute we are using right now is the old compute. That compute was built to do spreadsheets, databases, the traditional compute.

“Since that compute is cheap and available, we are making use of it. Even with the cheap and available compute in the cloud, it takes months to generate those models. So right now, most of the training is still being done in the cloud, whereas inferencing, making use of that model, is done at the edge. However, going forward, this will not be possible, because the devices at the edge are continuously generating so much data that you cannot send all the data back to the cloud, generate models, and come back to the edge.

“Eventually, a lot of training needs to move to the edge as well,” Khare said. This will require some innovation so that the compute being done right now in the cloud can be transferred over to the edge with low-power, cheap devices. Applied Materials’ CIO Jay Kerley added that innovation has to happen not only at the edge, but in the data center and at the network layer, as well as in the software frameworks. “Not only the AI frameworks, but what’s driving compression and de-duplication at the storage layer is absolutely critical as well,” he said.

Khare also weighed in on how transistors and memory will need to evolve to meet the demands of new AI computer architectures: “For artificial intelligence in our world, we have to think very differently. This is an inflection, but this is the kind of inflection that the world has not seen for the last 60 years.” He said the world has gone from the tabulating system era (1900 to 1940) to the programmable system era of the 1950s, which we are still using. “We are entering the era of what we call cognitive computing, which we believe started in 2011, when IBM first demonstrated artificial intelligence through our Watson system, which played Jeopardy,” he said.

Khare said “we are still using the technology of programmable systems, such as logic and memory, the traditional way of thinking, and applying it to AI, because that’s the best we’ve got.”

AI needs more innovation at all levels, Khare said. “You have to think about systems-level optimization, chip-design-level optimization, device-level optimization, and eventually materials-level optimization,” he said. “The artificial intelligence workloads that are coming out are very different. They do not require the traditional way of thinking — they require the way the brain thinks. These are the brain-inspired systems that will start to evolve.”

Khare believes analog compute might hold the answer. “Analog compute is where compute started many, many years ago. It was never adopted because the precision was not high enough, so there were a lot of errors. But the brain doesn’t think in 32 bits; our brain thinks analog, right? So we have to bring those technologies to the forefront,” he said. “In research at IBM we can see that there could be several orders of magnitude reduction in power, or improvement in efficiency, that’s possible by introducing some of those concepts, which are more brain-inspired.”

Christos Georgiopoulos (a former Intel VP and professor who was also on the panel) said a new compute model is required for A.I. “It’s important to understand that the traditional workloads that we all knew and loved for the last forty years don’t apply with A.I. They are completely new workloads that require a very different type of capability from the machines that you build,” he said. “With these new kinds of workloads, you’re going to require not only new architectures, you’re going to require new system-level design. And you’re going to require new capabilities like frameworks.” He said TensorFlow, an open-source software library for machine intelligence originally developed by researchers and engineers working on the Google Brain Team, seems to be the biggest framework right now. “Google made it public for only one very good reason. The TPU that they have created runs TensorFlow better than any other hardware around. Well, guess what? If you write something on TensorFlow, you want to go to the Google backend to run it, because you know you’re going to get great results. These kinds of architectures are being created right now, and we’re going to see a lot more of them,” he said.

By Inna Skvortsova, SEMI

Electromagnetic interference (EMI) is an increasingly important topic across the global electronics manufacturing supply chain. Progressively smaller IC geometries, lower supply voltages, and higher data rates all make devices and processes more vulnerable to EMI. Electrical noise, EMI generated by equipment, and factors such as power line transients affect manufacturing processes, from wafer handling to wire bonding to PCB assembly and test, causing millions of dollars in losses to the industry. Furthermore, conducted emission capable of causing electrical overstress (EOS) can damage sensitive semiconductor devices. Intel consistently names EOS as the “number one source of damage to IC components” (Intel® Manufacturing Enabling Guide 2001, 2010, 2016).

While EMC (Electromagnetic Compatibility) standards, such as the European EMC Directive and FCC testing and certification, provide limits on allowed equipment emission levels, once the equipment is installed along with other tools, the EMI levels in the actual operating environment can be substantially different and can therefore affect equipment operation, performance, and reliability. For example: (i) occasional transients induce “extra” pulses in the rotary feedback of a servo motor, which over time contributes to erroneous robotic-arm positioning, eventually damaging a wafer; (ii) a combination of high-frequency noise from servo motors and switched-mode power supplies in a tool creates a voltage difference between the bonding wire/funnel and the device, which causes high current and eventual electrical overstress to the devices; (iii) wafer probe tests give inconsistent results due to high EMI levels on the wafer chuck, caused by a combination of several servo motors in the wafer handler. Field cases like these illustrate the gap between EMC test requirements and real-life EMI tolerance levels, and its impact on semiconductor manufacturing and handling.

EMI on AC power lines

The new standard SEMI E176-1017, Guide to Assess and Minimize Electromagnetic Interference (EMI) in a Semiconductor Manufacturing Environment, developed by the NA Chapter of the Global Metrics Technical Committee, bridges this gap. Targeted at IC manufacturers and anyone handling semiconductor devices, such as in PCB assembly and the integration of electronic devices, SEMI E176 is a practical guide as well as an educational document. SEMI E176 provides a concise summary of EMI origins, EMI propagation, and measurement techniques, along with recommendations on mitigating undesirable electromagnetic emission to enable equipment co-existence and proper operation, as well as reduction of EOS, in the intended usage environment. Specifically, E176 provides recommended levels for different types of EMI based on IC geometries.

“SEMI E176 is likely the only active standard in the entire industry providing recommendations on both acceptable levels of EMI in manufacturing environments and the means of achieving and maintaining these numbers,” said Vladimir Kraz, co-chair of the NA Metrics Technical Committee and president of OnFILTER, Inc. “E176 is also unique because it is not limited just to semiconductor manufacturing, but has application across other industries. Back-end assembly and test, as well as PCB assembly, are just as affected by EMI and can benefit from SEMI E176 implementation, as there are strong similarities between the handling of semiconductor devices in IC manufacturing and in PCB assembly, and the prevention of defects is often shared between IC and PCBA manufacturers.”

The newly published SEMI E176 and the recently updated SEMI E33-0217, Guide for Semiconductor Manufacturing Equipment Electromagnetic Compatibility (EMC), provide complete documentation for establishing and maintaining low EMI levels in the manufacturing environment.

Undesirable emission has operational, liability, and regulatory consequences. Taming it is a challenging task and requires a comprehensive approach that starts with proper system design practices and ends with developing EMI expertise in the field. The new SEMI E176 provides practical guidance on reducing EMI to the levels necessary for effective, high-yield semiconductor manufacturing today and in the future.

SEMI Standards development activities take place throughout the year in all major manufacturing regions. To get involved, join the SEMI International Standards Program at: www.semi.org/standardsmembership.


Electrical physicists from Czech Technical University have provided additional evidence that new current sensors introduce errors when assessing the current through iron conductors. It is crucial to correct this flaw in the new sensors so that operators of the electrical grid can respond correctly to threats to the system. The researchers show how a difference in a conductor’s magnetic permeability, the degree of a material’s magnetization response to a magnetic field, affects the precision of the new sensors. They also provide recommendations for improving sensor accuracy. The results are published this week in AIP Advances, from AIP Publishing.

With the addition of new renewable energy sources and smart homes demanding more information, the electrical grid is becoming more complex. Author Pavel Ripka said, “If you have [a] grid at the edge of capacity, you have to be careful to monitor all the transients (power surges).” Surges are overloads or failures of the system, which can be caused by something as simple as a broken power line, or by more dramatic events like lightning strikes or geomagnetic storms.

Ripka explained the importance of monitoring electrical currents: “Every day you get a lot of these small events (surges) within a big power grid, and sometimes it is difficult to interpret them. If it is something really serious, you should switch off parts of the grid to prevent catastrophic damage, but if it’s a short transient which will finish fast, there is no need to disconnect the grid. It’s a risky business to distinguish between these events, because if you underestimate the danger, then parts of the distribution installations can be damaged, causing serious blackouts. But if you overestimate and disconnect, it is a problem because connecting these grids back together is quite complicated.”

To address the increasing complexity of the grid and the threat of power outages, the use of ground current sensors has increased in the past couple of years. New yokeless current sensors are popular because of their low cost and compact size. These sensors are good for assessing currents in nonmagnetic conductors such as copper and aluminum. However, ground conductors are usually iron, owing to its mechanical strength, and iron has a high magnetic permeability.
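The measurement principle behind such sensors can be sketched with a toy model. Yokeless designs typically approximate Ampère’s law with several point field sensors placed around the conductor. The Python sketch below is illustrative only; the geometry and numbers are assumptions, not taken from the paper, and it models a straight, nonmagnetic conductor. A ferromagnetic iron conductor distorts the surrounding field distribution, which is precisely the error source the Czech team studied and which this simple model does not capture.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def estimate_current(n_sensors, radius, wire_xy, true_current):
    """Approximate Ampere's law, I = (1/mu0) * closed line integral of B.dl,
    using n_sensors point field sensors evenly spaced on a circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_sensors, endpoint=False)
    sensors = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    tangents = np.stack([-np.sin(angles), np.cos(angles)], axis=1)

    # Field of an infinite straight wire: magnitude mu0*I / (2*pi*r),
    # directed perpendicular to the vector from the wire to the sensor.
    r_vec = sensors - np.asarray(wire_xy, dtype=float)
    r2 = np.sum(r_vec**2, axis=1, keepdims=True)
    b_field = MU0 * true_current / (2.0 * np.pi * r2) \
        * np.stack([-r_vec[:, 1], r_vec[:, 0]], axis=1)

    # Discrete approximation of the line integral of the tangential component.
    b_tan = np.sum(b_field * tangents, axis=1)
    return np.sum(b_tan) * (2.0 * np.pi * radius / n_sensors) / MU0

I_TRUE = 100.0  # amperes
print(estimate_current(8, 0.05, (0.0, 0.0), I_TRUE))    # centered wire: ~100.0 A
print(estimate_current(8, 0.05, (0.02, 0.01), I_TRUE))  # off-center: small error
```

Even this idealized model shows a position-dependent error for an off-center conductor; a high-permeability iron conductor adds a further, material-dependent distortion on top of that.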

Using these new sensors to measure ground currents when iron is present is a bit like using a thermometer to decide whether the heating needs to be switched on without taking into account where the thermometer is placed. Near a door or window, the thermometer’s reading can differ from the reading elsewhere in the room. In the same way, this study has shown that not taking into account the magnetic permeability of a conductor distorts the accuracy of a yokeless sensor’s reading.

Ripka and his team matched experimental measurements with theoretical simulations to highlight the difference in yokeless sensor readings between nonmagnetic and magnetic conductors.

“We can show how to design (yokeless) current sensors so that they are not so susceptible to this type of error,” Ripka said. “[This study is] just a small reminder to make [engineers] design sensors safely.”

To further prove the point, Ripka’s group is starting to take long-term readings at power stations, comparing results to commercial uncalibrated sensors. In the future, Ripka envisions cooperating with geophysicists to correlate ground currents and geomagnetic activity, to better understand how these currents are distributed within the earth and even predict future disruptions to the grid.

STMicroelectronics (NYSE: STM) is powering up wireless charging for mobile devices by introducing one of the world’s first chips to support the latest industry standard for faster charging.

Nowadays, people use their smartphones and tablets so intensively that many need to top up battery power several times a day. With wireless charging, users don’t need to carry a charger or a bulky power bank, and they can charge their electronic devices as fast as with a cable. Major mobile manufacturers are committing to wireless charging by joining industry alliances and launching compatible products.

Users on the move, who put their mobiles down to charge for a few minutes, say during a break or in a meeting, need the device to be ready to go again when they are. To enable this, the Wireless Power Consortium (WPC), which manages the Qi specification (a widely adopted industry standard), has introduced the Extended Power profile for faster charging. By raising the maximum charging power from 5W to 15W, this new profile enables devices to be charged up to three times more quickly.

One of the market’s first wireless-charging controllers to support Qi Extended Power, ST’s STWBC-EP combines best-in-class energy efficiency (it consumes just 16mW in standby and can wirelessly transfer more than 80% of the total input power) with unique features created by ST to enhance the user experience. These include a patented solution for active presence detection that wakes the system quickly when a compatible object is presented for charging. The patented technology also enhances the performance of Foreign Object Detection (FOD), cutting power to prevent overheating if objects containing metals are brought too close to the charger. Other innovations enhance power control and energy transfer to maximize efficiency and ease of use.

“ST’s Advanced Wireless-Charging chip enables manufacturers to create new, high-power products that offer superior features and efficiency,” said Domenico Arrigo, General Manager, Industrial and Power Conversion Division, STMicroelectronics. “The Qi Extended Power support dramatically shortens charging time and our patented detection and safety innovations greatly improve safety and ease of use.”

The STWBC-EP provides a level of integration that simplifies charger design while providing the flexibility to work with supply voltages ranging from 5V USB power up to 12V.

To help product developers accelerate time to market, ST has created an associated reference design with a ready-built Qi 15W transmitter board and documentation for getting started. ST also offers a 15W receiver chip (STWLC33) for use in high-speed chargeable devices, which developers can use to complete their applications.

ST’s new wireless-charging chip will be showcased at the Qi Wireless Power Developers Conference and Tradeshow held in San Francisco on November 16-17.

The STWBC-EP is available now, as a 32-lead QFN (5mm x 5mm) device, priced from $3.175 for 1000 pieces.


Enabling the A.I. era


November 8, 2017

BY PETE SINGER, Editor-in-Chief

There’s a strongly held belief now that the way in which semiconductors will be designed and manufactured in the future will be largely determined by a variety of rapidly growing applications, including artificial intelligence/deep learning, virtual and augmented reality, 5G, automotive, the IoT and many other uses, such as bioelectronics and drones.

The key question for most semiconductor manufacturers is how they can benefit from these trends. One of the goals of a recent panel assembled by Applied Materials for an investor day in New York was to answer that question.

The panel, focused on “enabling the A.I. era,” was moderated by Sundeep Bajikar (former Sellside Analyst, ASIC Design Engineer). The panelists were: Christos Georgiopoulos (former Intel VP, professor), Matt Johnson (SVP in Automotive at NXP), Jay Kerley (CIO of Applied Materials), Mukesh Khare (VP of IBM Research) and Praful Krishna (CEO of Coseer). The panel discussion included three debates: the first one was “Data: Use or Discard”; the second was “Cloud versus Edge”; and the third was “Logic versus Memory.”

“There’s a consensus view that there will be an explosion of data generation across multiple new categories of devices,” said Bajikar, noting that the most important one is the self-driving car. NXP’s Johnson responded that “when it comes to data generation, automotive is seeing amazing growth.” He noted the megatrends in this space: autonomy, connectivity, the driver experience, and electrification of the vehicle. “These are changing automotive in huge ways. But if you look underneath that, AI is tied to all of these,” he said.

He said that estimates of data generation range from 25 gigabytes per hour on the low end up to 250 gigabytes or more per hour on the high end.

“It’s going to be, by the second, the largest data generator that we’ve seen ever, and it’s really going to have a huge impact on all of us.”

Georgiopoulos, formerly of Intel, agreed that there’s an enormous amount of infrastructure being built right now. “That infrastructure consists of both the ability to generate the data and the ability to process the data, on the edge as well as on the cloud,” he said. The good news is that sorting that data may be getting a little easier. “One of the more important things over the last four or five years has been the quality of the data that’s getting generated, which diminishes the need for extreme algorithmic development,” he said. “The better data we get, the more reasonable the AI neural networks can be, and the simpler the AI networks can be, for us to extract the information that we need and turn that information into dollars.” Check out our website at www.solid-state.com for a full report on the panel.

FlexTech, a SEMI Strategic Association Partner, announced a new development project with PARC, a Xerox company, to develop a hybrid, highly bendable, paper-like smart tag incorporating a thin audio speaker. The product is aimed at applications in packaging, wearables, prosthetics, soft robotics, smart tags, and smart cities and homes.

PARC will use inkjet printing to build prototypes of the paper-like smart tags, capable of producing audio signals, on a silver-printed polyethylene naphthalate (PEN) or polyimide (PI) substrate. The team will develop and demonstrate a process for bonding chips and printing active and passive components, as well as interconnects, on the flexible substrate, essential to meeting the project’s goals for ruggedness and form factor. PARC will also focus on printing actuators to create thin-film audio speakers. The technology will enable custom systems to be built on demand.

“Over the last 15 years, PARC has been a pioneer in the exciting field of printed electronics. We are pleased to continue our collaboration with SEMI-FlexTech on a project which takes advantage of the wide range of expertise on the PARC staff,” said Bob Street, project technical lead at PARC. “This new project is technically challenging because it combines a number of novel technologies needed to achieve stringent requirements, including the capability for a thin, paper-like film to produce clear speech audio. We are looking forward to the challenge and to the implications for commercial products.”

In 2014, FlexTech awarded PARC a project grant to develop printed sensors. Partly because of this work, it is now possible to print transistor circuits in a fully additive fashion and to combine these with sensors, actuators, and other electronic components.

“We have had a long, fruitful relationship with PARC and look forward to excellent results from this project which clearly advances innovation in flexible, printable electronics, enabling solutions that lead to safer, healthier lives,” said Melissa Grupen-Shemansky, CTO at SEMI-FlexTech. “In addition to pushing the boundaries in electronics, PARC pays attention to manufacturability and affordability, ensuring developments are scalable from R&D to production.”

PARC and SEMI-FlexTech staff envisage additive manufacturing delivering intelligence into electronics fabricated on demand, including smart packaging and wearable devices in conformal shapes. At the heart of this development are materials science, novel printing technologies, and process-driven design that will deliver libraries of smart components and systems. The constituent “inks” of this technology are nanomaterials, molecular semiconductors, inorganic composites, and silicon chiplets that together form circuits, sensors, light emitters, batteries, and more, integrated directly into products of all shapes, sizes, and textures.

FlexTech’s R&D program is supported by the U.S. Army Research Laboratory (ARL), based in Adelphi, MD.

NXP Semiconductors N.V. (NASDAQ:NXPI) today debuted two significant technology breakthroughs at Money 20/20, the largest fintech innovation event, held October 22-25, 2017, in Las Vegas. The company will showcase its new contactless fingerprint-on-card solution while also demonstrating a new world benchmark for payment card transaction speeds.

Fingerprint sensors on payment cards

The fingerprint-on-card solution gives payment network operators and banks a secure, convenient, and fast payment card option to offer consumers. Coupling dual-interface cards with an integrated fingerprint sensor enables faster transactions without the need for end-users to enter a PIN.

“The result provides a secure and dramatically more convenient way for consumers to make payments. The convenience provided by mobile payment in today’s NFC-based mobile wallets can now be replicated with cards. It is also ideal for use in other form factors and applications such as electronic passports,” said Rafael Sotomayor, senior vice president and general manager of NXP’s secure transactions and identification business. “The breakthrough reinforces NXP’s commitment to the payment and secure identification space by helping our customers deliver next-generation applications and solutions to the market.”

To lower the barrier to entry for card makers, the company’s secure fingerprint authentication solution does not require a battery and fits easily into standard card-making equipment as part of the broader payment ecosystem. Cards with fingerprint authentication are fully compliant with existing EMVCo point-of-sale (POS) systems.

New Benchmark for Blazing Transaction Speeds

Demonstrating seamless, fast, and smart card transaction experiences, the NXP high-performance platform makes it possible to achieve M/Chip transaction speeds of less than 200 ms, surpassing the industry requirement of 300 ms.

“This increased level of performance offers flexibility to add new features or stronger cryptographic countermeasures and still meet current industry transaction requirements,” said Sotomayor. “The demand for faster payment transactions will continue, and NXP is committed to providing the performance to meet these needs and make contactless transactions faster and flawless.”

NXP Demonstrations at Money 20/20 Las Vegas 2017

NXP will demonstrate these technology breakthroughs at its exclusive reception on October 24, 2017, in The Venetian.

The process of extracting natural gas from the earth or transporting it through pipelines can release methane into the atmosphere. Methane, the primary component of natural gas, is a greenhouse gas with a warming potential approximately 25 times larger than that of carbon dioxide, making it very efficient at trapping atmospheric heat. A new chip-based methane spectrometer that is smaller than a dime could one day make it easier to monitor for efficiency and leaks over large areas.

Scientists from the IBM Thomas J. Watson Research Center in Yorktown Heights, NY, developed the new methane spectrometer, which is smaller than today’s standard spectrometers and more economical to manufacture. In Optica, The Optical Society’s journal for high-impact research, the researchers detail the new spectrometer and show that it can detect methane in concentrations as low as 100 parts per million.

Low maintenance, high impact

The spectrometer is based on silicon photonics technology, which means it is an optical device made of silicon, the material used to make computer chips. Because the same high-volume manufacturing methods used for computer chips can be applied to make the chip-based methane spectrometer, the spectrometer along with a housing and a battery or solar power source might cost as little as a few hundred dollars if produced in large quantities.

“Compared with a cost of tens of thousands of dollars for today’s commercially available methane-detecting optical sensors, volume-manufacturing would translate to a significant value proposition for the chip spectrometer,” said William Green, leader of the IBM Research team. “Moreover, with no moving parts and no fundamental requirement for precise temperature control, this type of sensor could operate for years with almost no maintenance.”

Such low-cost, robust spectrometers could lead to exciting new applications. For example, the IBM team is working with partners in the oil and gas industry on a project that would use the spectrometers to detect methane leaks, saving companies the time and money involved in trying to find and fix leaks using in-person inspection of thousands of sites.

“During natural gas extraction and distribution, methane can leak into the air when equipment on the well malfunctions, valves get stuck, or there’s a crack in the pipeline,” said Green. “We’re developing a way to use this spectrometer-on-a-chip to create a network of sensors that could be distributed over a well pad, for example. Data from these sensors would be processed with IBM’s physical analytics software to automatically pinpoint the location of a leak as well as quantify the leak magnitude.”
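IBM’s analytics software is proprietary, but the basic idea of estimating a source location from distributed concentration readings can be illustrated with a deliberately simple sketch. In the Python toy below, the sensor layout and readings are invented for illustration, and a concentration-weighted centroid stands in for what a real system would do with an atmospheric dispersion model and wind data.

```python
import numpy as np

# Hypothetical sensor positions on a well pad (meters) and the excess
# methane concentrations they report (ppm above background). All numbers
# here are invented for illustration.
positions = np.array([[0, 0], [50, 0], [0, 50], [50, 50], [25, 80]], dtype=float)
readings = np.array([120.0, 410.0, 95.0, 230.0, 60.0])  # ppm

# Concentration-weighted centroid: a crude first guess at the leak location.
weights = readings / readings.sum()
estimate = weights @ positions
print(f"Estimated leak location: x = {estimate[0]:.1f} m, y = {estimate[1]:.1f} m")

# Equally crude magnitude proxy: the strongest excess reading.
idx = readings.argmax()
print(f"Strongest signal: {readings[idx]:.0f} ppm at sensor {idx}")
```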

Methane is a trace gas, the classification given to gases that make up less than 1 percent of the volume of Earth’s atmosphere. Although the researchers demonstrated methane detection, the same approach could be used for sensing the presence of other individual trace gases. It could also be used to detect multiple gases simultaneously.

“Our long-term vision is to incorporate these types of sensors into the home and things people use every day such as their cell phones or vehicles. They could be useful for detecting pollution, dangerous carbon monoxide levels or other molecules of interest,” said Eric Zhang, a member of the research team. “Because this spectrometer offers a platform for multispecies detection, it could also one day be used for health monitoring through breath analysis.”

Shrinking the spectrometer

The new device uses an approach known as absorption spectroscopy, which requires laser light at a wavelength uniquely absorbed by the molecule being measured. In a traditional absorption spectroscopy setup, the laser travels through the air, or free space, until it reaches a detector. Measuring the light that reaches the detector reveals how much light was absorbed by the molecules of interest in the air, and this can be used to calculate their concentration.

The new system uses a similar approach, but instead of a free-space setup, the laser travels through a narrow silicon waveguide that follows a 10-centimeter-long serpentine pattern on top of a chip measuring 16 square millimeters. Most of the light is confined inside the waveguide, while about 25 percent of it extends outside the silicon into the ambient air, where it can interact with trace gas molecules passing near the sensor waveguide. The researchers used near-infrared laser light (1650-nanometer wavelength) for methane detection.
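In both the free-space and waveguide cases, the underlying relation is the Beer-Lambert law, with the waveguide’s roughly 25 percent evanescent overlap entering as a confinement factor. The short Python sketch below inverts that law for concentration; the absorption coefficient value is a placeholder, not a number from the paper.

```python
import math

def methane_concentration(transmission, alpha, path_cm, gamma=1.0):
    """Invert the Beer-Lambert law, I/I0 = exp(-gamma * alpha * C * L), for C.

    transmission -- measured ratio I/I0 (dimensionless)
    alpha        -- absorption coefficient per unit concentration, 1/(cm*ppm)
    path_cm      -- interaction length L in cm (e.g. the 10 cm serpentine)
    gamma        -- fraction of optical power overlapping the gas
                    (about 0.25 for the waveguide's evanescent field,
                    1.0 for a free-space beam)
    """
    return -math.log(transmission) / (gamma * alpha * path_cm)

# Placeholder absorption coefficient near the 1650 nm methane line (assumed).
ALPHA = 4e-7  # 1/(cm*ppm)

c = methane_concentration(transmission=0.9999, alpha=ALPHA,
                          path_cm=10.0, gamma=0.25)
print(f"Estimated methane concentration: {c:.0f} ppm")  # about 100 ppm
```

The same function with gamma=1.0 describes a free-space beam, which is why a short on-chip path with only partial overlap needs careful noise control to match free-space sensitivity.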

To increase the sensitivity of the device, the investigators carefully measured and controlled factors that contribute to noise and false absorption signals, fine-tuned the spectrometer’s design and determined the waveguide geometrical parameters that would produce favorable results.

Side-by-side comparison

To compare the new spectrometer’s performance with that of a standard free-space spectrometer, the researchers placed both devices in an environmental chamber and released controlled concentrations of methane. They found that the chip-based spectrometer provided accuracy on par with the free-space sensor, despite having 75 percent less light interacting with the air than the free-space design. Furthermore, they quantified the fundamental sensitivity of the chip sensor by measuring the smallest discernible change in methane concentration, showing performance comparable to free-space spectrometers developed in other laboratories.

“Although silicon photonics systems — especially those that use refractive index changes for sensing — have been explored previously, the innovative part of our work was to use this type of system to detect very weak absorption signals from small concentrations of methane, and our comprehensive analysis of the noise and minimum detection limits of our sensor chip,” said Zhang.

The current version of the spectrometer requires light to enter and exit the chip via optical fibers. However, the researchers are working to incorporate the light source and detectors onto the chip, which would create an essentially electrical device with no fiber connections required. Unlike current free-space sensors, the chip then does not require special sample or optical preparation. Next year, they plan to start field testing the spectrometers by placing them into a larger network that includes other off-the-shelf sensors.

“Our work shows that all of the knowledge behind silicon photonics manufacturing, packaging, and component design can be brought into the optical sensor space, to build high-volume manufactured and, in principle, low cost sensors, ultimately enabling an entirely new set of applications for this technology,” said Green.

Piezoelectric materials are used for applications ranging from the spark igniter in barbecue grills to the transducers needed for medical ultrasound imaging. Thin-film piezoelectrics, with dimensions on the scale of micrometers or smaller, offer potential for new applications where smaller dimensions or lower-voltage operation are required.

Researchers at Pennsylvania State University have demonstrated a new technique for making piezoelectric microelectromechanical systems (MEMS) by transferring lead zirconate titanate (PZT) piezoelectric thin films onto flexible polymer substrates. Doctoral candidate Tianning Liu and her co-authors report their results this week in the Journal of Applied Physics, from AIP Publishing.

Electroded thin-film PZT on a flexible polyimide substrate of relatively large area. Credit: Tianning Liu

“There’s a rich history of work on piezoelectric thin films, but films on rigid substrates have limitations that come from the substrate,” said Thomas N. Jackson, a professor at Penn State and one of the paper’s authors. “This work opens up new areas for thin-film piezoelectrics that reduce the dependence on the substrate.”

The researchers grew polycrystalline PZT thin films on a silicon substrate with a zinc oxide release layer, to which they added a thin layer of polyimide. They then used acetic acid to etch away the zinc oxide, releasing the 1-micrometer-thick PZT film with the polyimide layer from the silicon substrate. The PZT film on polyimide is flexible while possessing enhanced material properties compared to films grown on rigid substrates.

Piezoelectric devices rely on the ability of materials like PZT to generate electric charge when physically deformed or, inversely, to deform when an electric field is applied to them. Growing high-quality PZT films, however, typically requires temperatures in excess of 650 degrees Celsius, almost 300 degrees hotter than what polyimide can withstand without degrading.
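The two effects described here, the direct and converse piezoelectric effects, are captured by the standard strain-charge form of the piezoelectric constitutive equations (textbook notation, not specific to this paper):

```latex
% Direct effect: stress T produces electric displacement D (charge).
% Converse effect: electric field E produces strain S.
\begin{aligned}
  D &= d\,T + \varepsilon^{T} E \\
  S &= s^{E} T + d^{t} E
\end{aligned}
```

Here d is the matrix of piezoelectric coefficients, s^E is the elastic compliance at constant electric field, and ε^T is the permittivity at constant stress; substrate clamping, discussed below, effectively reduces the usable piezoelectric response of a thin film.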

Most current piezoelectric device applications use bulk materials, which hampers miniaturization, precludes significant flexibility, and necessitates high-voltage operation.

“For example, if you’re looking at putting an ultrasound transducer in a catheter, a PZT film on a polymer substrate would allow you to wrap the transducer around the circumference of the catheter,” Liu said. “This could allow for significant miniaturization, and should provide more information for the clinician.”

The performance of many piezoelectric thin films has been limited by substrate clamping, a phenomenon in which the rigid substrate constrains the movement of the piezoelectric material’s domain walls and degrades its properties. Some work has been done on crystallizing PZT at temperatures compatible with polymeric materials, for example using laser crystallization, but the results thus far have been porous thin films with inferior material properties.

The released thin films on polyimide that the researchers developed showed a 45 percent increase in remanent polarization over silicon-substrate controls, indicating a substantial mitigation of substrate clamping and improved performance. Even so, Liu said, much work remains before thin-film MEMS devices can compete with bulk piezoelectric systems.

“There’s still a big gap between putting PZT on thin film and bulk,” she said. “It’s not as big as between bulk and substrate, but there are also things like more defects that contribute to the lower response of the thin-film materials.”