Category Archives: MEMS

As artificial intelligence has become increasingly sophisticated, it has inspired renewed efforts to develop computers whose physical architecture mimics the human brain. One approach, called reservoir computing, allows hardware devices to achieve the higher-dimensional calculations required by emerging artificial intelligence. One new device highlights the potential of extremely small mechanical systems to achieve these calculations.

A group of researchers at the Université de Sherbrooke in Québec, Canada, reports the construction of the first reservoir computing device built with a microelectromechanical system (MEMS). Published in the Journal of Applied Physics, from AIP Publishing, the neural network exploits the nonlinear dynamics of a microscale silicon beam to perform its calculations. The group’s work looks to create devices that can act simultaneously as a sensor and a computer using a fraction of the energy a normal computer would use.

A single silicon beam (red), along with its drive (yellow) and readout (green and blue) electrodes, implements a MEMS capable of nontrivial computations. Credit: Guillaume Dion

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of current semiconductors.

“These kinds of calculations are normally only done in software, and computers can be inefficient,” said Guillaume Dion, an author on the paper. “Many of the sensors today are built with MEMS, so devices like ours would be ideal technology to blur the boundary between sensors and computers.”

The device relies on the nonlinear dynamics of how the silicon beam, roughly 20 times thinner than a human hair, oscillates in space. The results from this oscillation are used to construct a virtual neural network that projects the input signal into the higher-dimensional space required for neural network computing.
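The paper itself does not include code, but the general reservoir-computing recipe the passage describes can be sketched in a few lines: a fixed nonlinear dynamical system (here a simulated nonlinear node standing in for the vibrating beam, time-multiplexed into virtual nodes) expands each input into a high-dimensional state, and only a linear readout is trained. The parameters and the toy task below are illustrative assumptions, not values from the Sherbrooke device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict a nonlinear function of recent inputs from a scalar input stream.
T = 2000
u = rng.uniform(0, 0.5, T)
y_target = np.roll(u, 1) * np.roll(u, 2)    # depends on input history, hence needs memory

# A single nonlinear node (a stand-in for the beam) time-multiplexed into N virtual nodes.
N = 50
mask = rng.uniform(-1, 1, N)                # fixed random input mask
states = np.zeros((T, N))
x = 0.0
for t in range(T):
    for i in range(N):
        # Leaky nonlinear update; tanh stands in for the beam's nonlinear response.
        x = 0.7 * x + 0.3 * np.tanh(2.0 * mask[i] * u[t] + 0.9 * x)
        states[t, i] = x

# The only trained part: a linear readout fitted by ridge regression.
washout = 100                                # discard the initial transient
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

nrmse = np.sqrt(np.mean((X @ W_out - Y) ** 2)) / np.std(Y)
print(f"NRMSE on the toy task: {nrmse:.3f}")
```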

In demonstrations, the system was able to switch between different common benchmark tasks for neural networks with relative ease, Dion said, including classifying spoken sounds and processing binary patterns, with accuracies of 78.2 percent and 99.9 percent, respectively.

“This tiny beam of silicon can do very different tasks,” said Julien Sylvestre, another author on the paper. “It’s surprisingly easy to adjust it to make it perform well at recognizing words.”

Sylvestre said he and his colleagues are looking to explore increasingly complicated computations using the silicon beam device, with the hopes of developing small and energy-efficient sensors and robot controllers.

Researchers at the universities in Linköping and Shenzhen have shown how an inorganic perovskite can be made into a cheap and efficient photodetector that transfers both text and music. “It’s a promising material for future rapid optical communication”, says Feng Gao, researcher at Linköping University.

The film in the new perovskite, which contains only inorganic elements (caesium, lead, iodine and bromine), has been tested in a system for optical communication, which confirmed its ability to transfer both text and images, rapidly and reliably. Credit: Thor Balkhed

“Perovskites of inorganic materials have a huge potential to influence the development of optical communication. These materials have rapid response times, are simple to manufacture, and are extremely stable.” So says Feng Gao, senior lecturer at LiU who, together with colleagues who include Chunxiong Bao, postdoc at LiU, and scientists at Shenzhen University, has published the results in the prestigious journal Advanced Materials.

All optical communication requires rapid and reliable photodetectors – materials that capture a light signal and convert it into an electrical signal. Current optical communication systems use photodetectors made from materials such as silicon and indium gallium arsenide. But these are expensive, partly because they are complicated to manufacture. Moreover, these materials cannot be used in some new devices, such as mechanically flexible, lightweight or large-area devices.

Researchers have been seeking cheap replacement, or at least supplementary, materials for many years, and have looked at, for example, organic semiconductors. However, the charge transport of these has proved to be too slow. A photodetector must be rapid.

The new perovskite materials have attracted intense research interest since 2009, but the focus has been on their use in solar cells and efficient light-emitting diodes. Feng Gao, researcher in Biomolecular and Organic Electronics at LiU, was awarded a Starting Grant of EUR 1.5 million from the European Research Council (ERC) in the autumn of 2016, intended for research into using perovskites in light-emitting diodes.

Perovskites form a completely new family of semi-conducting materials that are defined by their crystal structures. They can consist of both organic and inorganic substances. They have good light-emitting properties and are easy to manufacture. For applications such as light-emitting diodes and efficient solar cells, most interest has been placed on perovskites that consist of an organic substance (containing carbon and hydrogen), metal, and halogen (fluorine, chlorine, bromine or iodine) ions. However, when this composition was used in photodetectors, it proved to be too unstable.

The results changed, however, when Chunxiong Bao used the right materials, and managed to optimise the manufacturing process and the structure of the film. The film in the new perovskite, which contains only inorganic elements (caesium, lead, iodine and bromine), has been tested in a system for optical communication, which confirmed its ability to transfer both text and images, rapidly and reliably. The quality didn’t deteriorate, even after 2,000 hours at room temperature.

“It’s very gratifying that we have already achieved results that are very close to application,” says Feng Gao, who leads the research, together with Professor Wenjing Zhang at Shenzhen University.

Gartner, Inc. today highlighted the top strategic technology trends that organizations need to explore in 2019. Analysts presented their findings during Gartner Symposium/ITxpo, which is taking place through Thursday.

Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or as a rapidly growing trend with a high degree of volatility that is expected to reach a tipping point over the next five years.

“The Intelligent Digital Mesh has been a consistent theme for the past two years and continues as a major driver through 2019. Trends under each of these three themes are a key ingredient in driving a continuous innovation process as part of a ContinuousNEXT strategy,” said David Cearley, vice president and Gartner Fellow. “For example, artificial intelligence (AI) in the form of automated things and augmented intelligence is being used together with IoT, edge computing and digital twins to deliver highly integrated smart spaces. This combinatorial effect of multiple trends coalescing to produce new opportunities and drive new disruption is a hallmark of the Gartner top 10 strategic technology trends for 2019.”

The top 10 strategic technology trends for 2019 are:

Autonomous Things

Autonomous things, such as robots, drones and autonomous vehicles, use AI to automate functions previously performed by humans. Their automation goes beyond the automation provided by rigid programming models, and they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people.

“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things, with multiple devices working together, either independently of people or with human input,” said Mr. Cearley. “For example, if a drone examined a large field and found that it was ready for harvesting, it could dispatch an ‘autonomous harvester.’ Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package.”

Augmented Analytics

Augmented analytics focuses on a specific area of augmented intelligence, using machine learning (ML) to transform how analytics content is developed, consumed and shared. Augmented analytics capabilities will advance rapidly to mainstream adoption, as a key feature of data preparation, data management, modern analytics, business process management, process mining and data science platforms. Automated insights from augmented analytics will also be embedded in enterprise applications — for example, those of the HR, finance, sales, marketing, customer service, procurement and asset management departments — to optimize the decisions and actions of all employees within their context, not just those of analysts and data scientists. Augmented analytics automates the process of data preparation, insight generation and insight visualization, eliminating the need for professional data scientists in many situations.

“This will lead to citizen data science, an emerging set of capabilities and practices that enables users whose main job is outside the field of statistics and analytics to extract predictive and prescriptive insights from data,” said Mr. Cearley. “Through 2020, the number of citizen data scientists will grow five times faster than the number of expert data scientists. Organizations can use citizen data scientists to fill the data science and machine learning talent gap caused by the shortage and high cost of data scientists.”

AI-Driven Development

The market is rapidly shifting from an approach in which professional data scientists must partner with application developers to create most AI-enhanced solutions to a model in which the professional developer can operate alone using predefined models delivered as a service. This provides the developer with an ecosystem of AI algorithms and models, as well as development tools tailored to integrating AI capabilities and models into a solution. Another level of opportunity for professional application development arises as AI is applied to the development process itself to automate various data science, application development and testing functions. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team.

“Ultimately, highly advanced AI-powered development environments automating both functional and nonfunctional aspects of applications will give rise to a new age of the ‘citizen application developer’ where nonprofessionals will be able to use AI-driven tools to automatically generate new solutions. Tools that enable nonprofessionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility,” said Mr. Cearley.

Digital Twins

A digital twin refers to the digital representation of a real-world entity or system. By 2020, Gartner estimates there will be more than 20 billion connected sensors and endpoints and digital twins will exist for potentially billions of things. Organizations will implement digital twins simply at first. They will evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.

“One aspect of the digital twin evolution that moves beyond IoT will be enterprises implementing digital twins of their organizations (DTOs). A DTO is a dynamic software model that relies on operational or other data to understand how an organization operationalizes its business model, connects with its current state, deploys resources and responds to changes to deliver expected customer value,” said Mr. Cearley. “DTOs help drive efficiencies in business processes, as well as create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically.”

Empowered Edge

The edge refers to endpoint devices used by people or embedded in the world around us. Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to these endpoints. It tries to keep the traffic and processing local, with the goal being to reduce traffic and latency.

In the near term, edge computing is being driven by IoT and the need to keep processing close to the endpoint rather than on a centralized cloud server. However, rather than create a new architecture, cloud computing and edge computing will evolve as complementary models, with cloud services being managed as a centralized service that executes not only on centralized servers, but also on distributed servers on-premises and on the edge devices themselves.

Over the next five years, specialized AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices. The extreme heterogeneity of this embedded IoT world and the long life cycles of assets such as industrial systems will create significant management challenges. Longer term, as 5G matures, the expanding edge computing environment will have more robust communication back to centralized services. 5G provides lower latency, higher bandwidth, and (very importantly for edge) a dramatic increase in the number of nodes (edge endpoints) per square km.

Immersive Experience

Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.

“Over time, we will shift from thinking about individual devices and fragmented user interface (UI) technologies to a multichannel and multimodal experience. The multimodal experience will connect people with the digital world across hundreds of edge devices that surround them, including traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances,” said Mr. Cearley. “The multichannel experience will use all human senses as well as advanced computer senses (such as heat, humidity and radar) across these multimodal devices. This multiexperience environment will create an ambient experience in which the spaces that surround us define ‘the computer’ rather than the individual devices. In effect, the environment is the computer.”

Blockchain

Blockchain, a type of distributed ledger, promises to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Today, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities with the “single version of the truth” maintained securely in their databases. The centralized trust model adds delays and friction costs (commissions, fees and the time value of money) to transactions. Blockchain provides an alternative trust model and removes the need for central authorities in arbitrating transactions.

“Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations. This is particularly so with the complex elements that support more sophisticated scenarios,” said Mr. Cearley. “Despite the challenges, the significant potential for disruption means CIOs and IT leaders should begin evaluating blockchain, even if they don’t aggressively adopt the technologies in the next few years.”

Many blockchain initiatives today do not implement all of the attributes of blockchain — for example, a highly distributed database. These blockchain-inspired solutions are positioned as a means to achieve operational efficiency by automating business processes, or by digitizing records. They have the potential to enhance sharing of information among known entities, as well as improving opportunities for tracking and tracing physical and digital assets. However, these approaches miss the value of true blockchain disruption and may increase vendor lock-in. Organizations choosing this option should understand the limitations, be prepared to move to complete blockchain solutions over time, and recognize that the same outcomes may be achieved with more efficient and tuned use of existing nonblockchain technologies.

Smart Spaces

A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience for a target set of people and industry scenarios.

“This trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes and connected factories. We believe the market is entering a period of accelerated delivery of robust smart spaces with technology becoming an integral part of our daily lives, whether as employees, customers, consumers, community members or citizens,” said Mr. Cearley.

Digital Ethics and Privacy

Digital ethics and privacy is a growing concern for individuals, organizations and governments. People are increasingly concerned about how their personal information is being used by organizations in both the public and private sectors, and the backlash will only increase for organizations that are not proactively addressing these concerns.

“Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components,” said Mr. Cearley. “Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organization’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing.’”

Quantum Computing

Quantum computing (QC) is a type of nonclassical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The parallel execution and exponential scalability of quantum computers means they excel with problems too complex for a traditional approach, or where traditional algorithms would take too long to find a solution. Industries such as automotive, financial, insurance, pharmaceuticals, military and research organizations have the most to gain from advancements in QC. In the pharmaceutical industry, for example, QC could be used to model molecular interactions at atomic levels to accelerate time to market for new cancer-treating drugs, or it could accelerate and more accurately predict the interaction of proteins, leading to new pharmaceutical methodologies.

“CIOs and IT leaders should start planning for QC by increasing their understanding of how it can apply to real-world business problems. Learn while the technology is still in the emerging state. Identify real-world problems where QC has potential and consider the possible impact on security,” said Mr. Cearley. “But don’t believe the hype that it will revolutionize things in the next few years. Most organizations should learn about and monitor QC through 2022 and perhaps exploit it from 2023 or 2025.”

Gartner clients can learn more in the Gartner Special Report “Top 10 Strategic Technology Trends for 2019.” Additional detailed analysis on each tech trend can be found in the Gartner YouTube video “Gartner Top 10 Strategic Technology Trends 2019.”

A new approach in Fault Detection and Classification (FDC) allows engineers to uncover issues more thoroughly and accurately by taking advantage of full sensor traces.

By Tom Ho and Stewart Chalmers, BISTel, Santa Clara, CA

Traditional FDC systems collect data from production equipment, summarize it, and compare it to control limits that were previously set up by engineers. Software alarms are triggered when any of the summarized data fall outside of the control limits. While this method has been effective and widely deployed, it does create a few challenges for the engineers:

  • The use of summary data means that (1) subtle changes in the process may not be noticed and (2) the unmonitored sections of the process will be overlooked by a typical FDC system. These subtle changes or missed anomalies in unmonitored sections may result in critical problems (a minimal sketch of this limitation appears just after this list).
  • Modeling control limits for fault detection is a manual process, prone to human error and process drift. With hundreds of thousands of sensors in a complex manufacturing process, the task of modeling control limits is extremely time consuming and requires a deep understanding of the particular manufacturing process on the part of the engineer. Non-optimized control limits result in misdetection: false alarms or missed alarms.
  • As equipment ages, processes change. Meticulously set control limit ranges must be adjusted, requiring engineers to constantly monitor equipment and sensor data to avoid false alarms or missed real alarms.
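To make the first limitation concrete, here is a minimal sketch of the summary-data approach: the full sensor trace is reduced to a single statistic, which is then compared against fixed control limits. The parameter values and limits are invented for illustration and are not taken from any particular FDC product.

```python
import numpy as np

def spc_check(trace, lcl, ucl, stat=np.mean):
    """Traditional-style check: reduce the full trace to one summary value
    and alarm only if that value falls outside preset control limits."""
    value = stat(trace)
    return value < lcl or value > ucl          # True means "alarm"

rng = np.random.default_rng(1)
normal_trace = rng.normal(100.0, 0.5, 300)     # e.g. 300 temperature samples in one step
faulty_trace = normal_trace.copy()
faulty_trace[120:140] += 4.0                   # a short spike in the middle of the step

# The short spike barely moves the mean, so a summary-data check misses it.
print(spc_check(normal_trace, lcl=98.0, ucl=102.0))   # False -> no alarm (correct)
print(spc_check(faulty_trace, lcl=98.0, ucl=102.0))   # False -> no alarm (fault missed)
```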

Full sensor trace detection

A new approach, Dynamic Fault Detection (DFD) was developed to address the shortcomings of traditional FDC systems and save both production time and engineer time. DFD takes advantage of the full trace from each and every sensor to detect any issues during a manufacturing process. By analyzing each trace in its entirety, and running them through intelligent software, the system is able to comprehensively identify potential issues and errors as they occur. As the Adaptive Intelligence behind Dynamic Fault Detection learns each unique production environment, it will be able to identify process anomalies in real time without the need for manual adjustment from engineers. Great savings can be realized by early detection, increased engineer productivity, and containment of malfunctions.

DFD’s strength is its ability to analyze full trace data. As shown in FIGURE 1, there are many subtle details on a trace, such as spikes, shifts, and ramp rate changes, which are typically ignored or go undetected by traditional FDC systems because they only examine a segment of the trace: the summary data. By analyzing the full trace using DFD, these details can easily be identified to provide a more thorough analysis than ever before.

Figure 1

Dynamic referencing

Unlike traditional FDC deployments, DFD does not require control limit modeling. The novel solution adapts machine learning techniques to take advantage of neighboring traces as references, so control limits are dynamically defined in real time.  Not only does this substantially reduce set up and deployment time of a fault detection system, it also eliminates the need for an engineer to continuously maintain the model. Since the analysis is done in real time, the model evolves and adapts to any process shifts as new reference traces are added.
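BISTel does not publish the details of its algorithm, but the idea of deriving limits on the fly from neighboring traces can be sketched roughly as follows. The point-wise median/MAD reference band and the thresholds below are illustrative assumptions, not the actual DFD method.

```python
import numpy as np

def dynamic_reference_check(traces, k=6.0, min_points=5):
    """Flag wafers whose full trace strays from a reference built from the
    other traces in the same group (e.g. the wafers of one lot).

    traces: array of shape (n_wafers, n_samples), time-aligned sensor traces.
    Returns the indices of flagged wafers (at most one alarm per wafer).
    """
    median = np.median(traces, axis=0)                        # point-wise reference trace
    mad = np.median(np.abs(traces - median), axis=0) + 1e-6   # robust point-wise spread
    score = np.abs(traces - median) / mad                     # deviation in MAD units
    # Alarm only if a wafer deviates strongly for several samples,
    # so isolated noisy samples do not trigger false alarms.
    sustained = np.sum(score > k, axis=1) >= min_points
    return np.where(sustained)[0]

rng = np.random.default_rng(2)
lot = 100 + rng.normal(0, 0.3, size=(24, 500))   # 24 wafers, 500 samples per trace
lot[7, 200:260] += 3.0                           # one wafer with a shifted segment
print(dynamic_reference_check(lot))              # -> [7]
```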

DFD has multiple reference configurations available for engineers to choose from to fine tune detection accuracy. For example, DFD can 1) use traces within a wafer lot as reference, 2) use traces from the last N wafers as reference, 3) use “golden” traces as reference, or 4) a combination of the above.

As more sensors are added to the Internet of Things network of a production plant, DFD can integrate their data into its decision-making process.

Optimized alarming

Thousands of process alarms inundate engineers each day, only a small percentage of which are valid. In today’s FDC systems, one of the main causes of false alarms is improperly configured Statistical Process Control (SPC) limits. Also, a typical FDC system may generate one alarm for each limit violation, resulting in many alarms for each wafer process. DFD implementations require no control limits, greatly reducing the potential for false alarms. In addition, DFD is designed to issue only one alarm per wafer, further streamlining the alarming system and providing better focus for the engineers.

Dynamic fault detection use cases

The following examples illustrate actual use cases to show the benefits of utilizing DFD for fault detection.

Use case #1 – End Point Abnormal Etching

In this example, both the upper and lower control limits in SPC were not set at the optimum levels, preventing the traditional FDC system from detecting several abnormally etched wafers (FIGURE 2).  No SPC alarms were issued to notify the engineer.

Figure 2

On the other hand, DFD full trace comparison easily detects the abnormality by comparing to neighboring traces (FIGURE 3).  This was accomplished without having to set up any control limits.

Figure 3

Use case #2 – Resist Bake Plate Temperature

The SPC chart in Figure 4 clearly shows that the Resist bake plate temperature pattern changed significantly; however, since the temperature range during the process never exceeded the control limits, SPC did not issue any alarms.

Figure 4

When the same parameter was analyzed using DFD, the temperature profile abnormality was easily identified, and the software notified an engineer (FIGURE 5).

Figure 5

Use case #3 – Full Trace Coverage

Engineers select only a segment of sensor trace data to monitor because setting up SPC limits is so arduous. In this specific case, the SPC system was set up to monitor only the He_Flow parameter in recipe step 3 and step 4.  Since no unusual events occurred during those steps in the process, no SPC alarms were triggered.

However, in that same production run, a DFD alarm was issued for one of the wafers. Upon examination of the trace summary chart shown in FIGURE 6, it is clear that while the parameter behaved normally during recipe step 3 and step 4, there was a noticeable issue from one of the wafers during recipe step 1 and step 2.  The trace in red represents the offending trace versus the rest of the (normal) population in blue. DFD full trace analysis caught the abnormality.

Figure 6

Use case #4 – DFD Alarm Accuracy

When setting up SPC limits in a conventional FDC system, the method of calculation chosen by an engineer can yield vastly different results. In this example, the engineer used multiple SPC approaches to monitor the parameter Match_LoadCap in an etcher. When the control limits were set using Standard Deviation (FIGURE 7), a large number of false alarms were triggered. On the other hand, zero alarms were triggered using the Mean approach (FIGURE 8).

Figure 7

Figure 8

Using DFD full trace detection eliminates the discrepancy between calculation methods. In the above example, DFD was able to identify an issue with one of the wafers in recipe step 3 and trigger only one alarm.
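The article does not give the exact limit formulas behind the two SPC charts, but the underlying effect is easy to reproduce: on identical data, the choice of limit calculation alone can swing the alarm count from hundreds to zero. The two formulas below (3-sigma limits from a qualification window versus a fixed percentage band around the mean) are stand-ins chosen for illustration, not the methods used in the case study.

```python
import numpy as np

rng = np.random.default_rng(3)
# One summary value per wafer (e.g. mean Match_LoadCap) for 500 wafers,
# with a slow, harmless drift typical of an aging chamber.
values = 50 + np.linspace(0, 1.0, 500) + rng.normal(0, 0.05, 500)

def alarm_count(values, lcl, ucl):
    return int(np.sum((values < lcl) | (values > ucl)))

# Method 1: mean +/- 3 sigma computed from the first 50 "qualification" wafers.
mu0, sd0 = values[:50].mean(), values[:50].std()
print("3-sigma limits:", alarm_count(values, mu0 - 3 * sd0, mu0 + 3 * sd0))  # hundreds of alarms

# Method 2: a fixed +/-10% band around the same baseline mean.
print("10% band:", alarm_count(values, 0.9 * mu0, 1.1 * mu0))                # zero alarms
```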

Dynamic fault detection scope of use

DFD is designed to be used in production environments of many types, ranging from semiconductor manufacturing to automotive plants and everything in between. As long as the manufacturing equipment being monitored generates systematic and consistent trace patterns, such as gas flow, temperature, pressure, power etc., proper referencing can be established by the Adaptive Intelligence (AI) to identify abnormalities. Sensor traces from Process of Record (POR) runs may be used as starting references.

Conclusion

The DFD solution reduces risk in manufacturing by protecting against events that impact yield.  It also provides engineers with an innovative new tool that addresses several limitations of today’s traditional FDC systems.  As shown in TABLE 1, the solution greatly reduces the time required for deployment and maintenance, while providing a more thorough and accurate detection of issues.

 

TABLE 1. Traditional FDC vs. DFD (per recipe/tool type)

                                          FDC                     DFD
FDC model creation                        1 – 2 weeks             < 1 day
FDC model validation and fine tuning      2 – 3 weeks             < 1 week
Model maintenance                         Ongoing                 Minimal
Typical alarm rate                        100 – 500/chamber-day   < 50/chamber-day
% coverage of number of sensors           50 – 60%                100% as default
Trace segment coverage                    20 – 40%                100%
Adaptive to systematic behavior changes   No                      Yes

 

 

TOM HO is President of BISTel America, where he leads global product engineering and development efforts for BISTel. [email protected]. STEWART CHALMERS is President & CEO of Hill + Kincaid, a technical marketing firm. [email protected]

The Micron Foundation (Nasdaq:MU) announced a $1 million grant for universities and nonprofit organizations to conduct research into how artificial intelligence (AI) can improve lives while ensuring safety, security and privacy. The grant was announced at the inaugural Micron Insight 2018 conference where the technology industry’s top minds gathered in San Francisco to discuss the future of AI, machine learning and data science, and how memory technology is essential in bringing intelligence to life.

“Artificial intelligence is one of the frontiers where science and engineering education can best be applied,” said Micron Foundation Executive Director Dee Mooney. “We want to accelerate advances in AI by investing in education and making sure that pioneers of this technology reflect the diversity and richness of the world we live in, and build a future where AI benefits everyone.”

Micron awarded a total of $500,000 to three initial recipients at Micron Insight 2018.

  • AI4All, a nonprofit organization, works to increase diversity and inclusion in AI education, research, development and policy. AI4All supports the next generation of diverse AI talent through its AI Summer Camp. Open to 9th-11th grade students, the camp gives special consideration to young women, underrepresented groups and families of lower socioeconomic status.
  • Berkeley Artificial Intelligence Research (BAIR) Lab supports researchers and graduate students developing fundamental advances in computer vision, machine learning, natural-language processing, planning and robotics. BAIR is based at UC Berkeley’s College of Engineering.
  • In a related announcement, the Micron Foundation launched a $1 million grant for universities and non-profit organizations to conduct research on AI. For more details, visit http://bit.ly/MicronFoundation.

The $1 million fund is available to select research universities focused on the future implications of AI in life, healthcare and business, with a portion specifically allocated to support women and underrepresented groups. The Micron Foundation supports researchers tackling some of AI’s greatest challenges – from building highly reliable software and hardware programs to finding solutions that address the business and consumer impacts of AI.

In August 2018, the Micron Foundation announced a $1 million fund for Virginia colleges and universities to advance STEM and STEM-related diversity programs in connection with Micron’s expansion of its memory production facilities in Manassas, Virginia.

The vast majority of computing devices today are made from silicon, the second most abundant element on Earth, after oxygen. Silicon can be found in various forms in rocks, clay, sand, and soil. And while it is not the best semiconducting material that exists on the planet, it is by far the most readily available. As such, silicon is the dominant material used in most electronic devices, including sensors, solar cells, and the integrated circuits within our computers and smartphones.

Now MIT engineers have developed a technique to fabricate ultrathin semiconducting films made from a host of exotic materials other than silicon. To demonstrate their technique, the researchers fabricated flexible films made from gallium arsenide, gallium nitride, and lithium fluoride — materials that exhibit better performance than silicon but until now have been prohibitively expensive to produce in functional devices.

MIT researchers have devised a way to grow single crystal GaN thin film on a GaN substrate through two-dimensional materials. The GaN thin film is then exfoliated by a flexible substrate, showing the rainbow color that comes from thin film interference. This technology will pave the way to flexible electronics and the reuse of the wafers. Credit: Wei Kong and Kuan Qiao

The new technique, researchers say, provides a cost-effective method to fabricate flexible electronics made from any combination of semiconducting elements that could perform better than current silicon-based devices.

“We’ve opened up a way to make flexible electronics with so many different material systems, other than silicon,” says Jeehwan Kim, the Class of 1947 Career Development Associate Professor in the departments of Mechanical Engineering and Materials Science and Engineering. Kim envisions the technique can be used to manufacture low-cost, high-performance devices such as flexible solar cells, and wearable computers and sensors.

Details of the new technique are reported today in Nature Materials. In addition to Kim, the paper’s MIT co-authors include Wei Kong, Huashan Li, Kuan Qiao, Yunjo Kim, Kyusang Lee, Doyoon Lee, Tom Osadchy, Richard Molnar, Yang Yu, Sang-hoon Bae, Yang Shao-Horn, and Jeffrey Grossman, along with researchers from Sun Yat-Sen University, the University of Virginia, the University of Texas at Dallas, the U.S. Naval Research Laboratory, Ohio State University, and Georgia Tech.

Now you see it, now you don’t

In 2017, Kim and his colleagues devised a method to produce “copies” of expensive semiconducting materials using graphene — an atomically thin sheet of carbon atoms arranged in a hexagonal, chicken-wire pattern. They found that when they stacked graphene on top of a pure, expensive wafer of semiconducting material such as gallium arsenide, then flowed atoms of gallium and arsenic over the stack, the atoms appeared to interact in some way with the underlying atomic layer, as if the intermediate graphene were invisible or transparent. As a result, the atoms assembled into the precise, single-crystalline pattern of the underlying semiconducting wafer, forming an exact copy that could then easily be peeled away from the graphene layer.

The technique, which they call “remote epitaxy,” provided an affordable way to fabricate multiple films of gallium arsenide, using just one expensive underlying wafer.

Soon after they reported their first results, the team wondered whether their technique could be used to copy other semiconducting materials. They tried applying remote epitaxy to silicon, and also germanium — two inexpensive semiconductors — but found that when they flowed these atoms over graphene they failed to interact with their respective underlying layers. It was as if graphene, previously transparent, became suddenly opaque, preventing atoms of silicon and germanium from “seeing” the atoms on the other side.

As it happens, silicon and germanium are two elements that exist within the same group of the periodic table of elements. Specifically, the two elements belong in group four, a class of materials that are ionically neutral, meaning they have no polarity.

“This gave us a hint,” says Kim.

Perhaps, the team reasoned, atoms can only interact with each other through graphene if they have some ionic charge. For instance, in the case of gallium arsenide, gallium carries a slight positive charge at the interface, compared with arsenic’s negative charge. This charge difference, or polarity, may have helped the atoms to interact through graphene as if it were transparent, and to copy the underlying atomic pattern.

“We found that the interaction through graphene is determined by the polarity of the atoms. For the strongest ionically bonded materials, they interact even through three layers of graphene,” Kim says. “It’s similar to the way two magnets can attract, even through a thin sheet of paper.”

Opposites attract

The researchers tested their hypothesis by using remote epitaxy to copy semiconducting materials with various degrees of polarity, from neutral silicon and germanium, to slightly polarized gallium arsenide, and finally, highly polarized lithium fluoride — a better, more expensive semiconductor than silicon.

They found that the greater the degree of polarity, the stronger the atomic interaction, even, in some cases, through multiple sheets of graphene. Each film they were able to produce was flexible and merely tens to hundreds of nanometers thick.

The material through which the atoms interact also matters, the team found. In addition to graphene, they experimented with an intermediate layer of hexagonal boron nitride (hBN), a material that resembles graphene’s atomic pattern and has a similar Teflon-like quality, enabling overlying materials to easily peel off once they are copied.

However, hBN is made of oppositely charged boron and nitrogen atoms, which generate a polarity within the material itself. In their experiments, the researchers found that any atoms flowing over hBN, even if they were highly polarized themselves, were unable to interact with their underlying wafers completely, suggesting that the polarity of both the atoms of interest and the intermediate material determines whether the atoms will interact and form a copy of the original semiconducting wafer.

“Now we really understand there are rules of atomic interaction through graphene,” Kim says.

With this new understanding, he says, researchers can now simply look at the periodic table and pick two elements of opposite charge. Once they acquire or fabricate a main wafer made from the same elements, they can then apply the team’s remote epitaxy techniques to fabricate multiple, exact copies of the original wafer.

“People have mostly used silicon wafers because they’re cheap,” Kim says. “Now our method opens up a way to use higher-performing, nonsilicon materials. You can just purchase one expensive wafer and copy it over and over again, and keep reusing the wafer. And now the material library for this technique is totally expanded.”

Kim envisions that remote epitaxy can now be used to fabricate ultrathin, flexible films from a wide variety of previously exotic, semiconducting materials — as long as the materials are made from atoms with a degree of polarity. Such ultrathin films could potentially be stacked, one on top of the other, to produce tiny, flexible, multifunctional devices, such as wearable sensors, flexible solar cells, and even, in the distant future, “cellphones that attach to your skin.”

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

By Serena Brischetto

SEMI met with Heinz Martin Esser, managing director at Fabmatics GmbH, to discuss how existing 200mm semiconductor fabs can master the challenges of 24×7 production under intense cost and quality pressure by implementing intralogistics automation solutions. The two spoke ahead of his presentation at the Fab Management Forum at SEMICON Europa 2018, November 13-16, 2018, in Munich, Germany.

SEMI: Looking at the latest production capacity data for 2018 – it is a 200mm fab boom. Growing demand for analog, MEMS and RF chips continues to cause acute shortages of both 200mm fab capacity and equipment. Do you think this trend will continue in the coming years, or is it only a short-term run on 200mm fabs?

Esser: We at Fabmatics believe in a long-term trend. The emergence of the Internet of Things and growing digitalization in all areas of life will continue to increase demand for application-specific integrated circuits (ASICs), analog ICs, high-performance components and micro-mechanical sensors (MEMS) in the coming years. Many of these semiconductor devices are expected to be produced in 200mm fabs.

SEMI: How does fab automation contribute to increasing the capacity of existing, mature 200mm fabs?

Esser: We are convinced that fab automation offers one of the greatest opportunities for older 200mm factories to effectively master increased demand while improving efficiency, quality assurance and flexibility at the same time. In particular, material flow automation, which is often the missing link between existing equipment in different production areas, can help increase productivity in an elementary way.

If you analyze how long valuable tools typically wait for loading and unloading, you can see a direct effect of the intralogistics automation system, which leads to significantly higher utilization of process equipment by making the material flow independent of human performance. Additional side effects such as reduced cycle time, a stable fab flow factor and flattened WIP peaks further increase the contribution of material flow automation to getting the most out of existing mature factories. Older does not mean obsolete.

SEMI: What are the biggest challenges for a successful implementation?

Esser: There is no single challenge when you automate an existing mature fab. Instead, you face a whole variety of challenges, ranging from historically grown, non-aligned fab layouts, through non-linear material flows and older non-standardized equipment, to an “automation-unfriendly” fab environment. You should also not underestimate the effort needed to move people in the cleanroom away from the manual fab operation practices they have been familiar with for many years. Before doing automation you have to think automation, i.e. you have to question all processes to make them ready for automation.

SEMI: What are the key drivers to automate a mature fab today: costs, process stability, quality or a combination of them?

Esser: This question would be better asked of our customers, but we believe it is a mix of many factors. Most likely everybody sees the cost reduction first, but process and performance stability, as well as quality requirements – and here our customers play the most important role – are becoming more and more of a focus.

SEMI: What do you expect from SEMICON Europa 2018 and why do you recommend attending the Fab Management Forum?

Esser: This year SEMICON Europa will co-locate with electronica, so it’s going to be the greatest trade fair for electronics manufacturing in Europe. We will meet innovators and decision-makers across the whole electronics supply chain.

The Fab Management Forum addresses a highly topical question that concerns all semiconductor manufacturers not only in Europe – how to handle complexity and enable the necessary flexibility to cope with customers’ needs. High-ranking speakers will give an insight into the latest technologies and best practices. I am looking forward to the lively exchange with the participants and taking away new impulses for our business.

Heinz Martin Esser is managing director at Fabmatics GmbH, responsible for sales and marketing, customer service and administration. He studied supply engineering at the University of Applied Sciences in Cologne and later earned a university degree in business administration.

Originally published on the SEMI blog.

Technion, Israel’s technological institute, announced this week that Intel is collaborating with the institute on its new artificial intelligence (AI) research center. The announcement was made at the center’s inauguration attended by Dr. Michael Mayberry, Intel’s chief technology officer, and Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

“AI is not a one-size-fits-all approach, and Intel has been working closely with a range of industry leaders to deploy AI capabilities and create new experiences. Our collaboration with Technion not only reinforces Intel Israel’s AI operations, but we are also seeing advancements to the field of AI from the joint research that is under way and in the pipeline,” said Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

The center features Technion’s computer science, electrical engineering, industrial engineering and management departments, among others, all collaborating to drive a closer relationship between academia and industry in the race to AI. Intel, which invested undisclosed funds in the center, will represent the industry in leading AI-dedicated computing research.

Intel is committed to accelerating the promise of AI across many industries and driving the next wave of computing. Research exploring novel architectural and algorithmic approaches is a critical component of Intel’s overall AI program. The company is working with customers across verticals – including healthcare, autonomous driving, sports/entertainment, government, enterprise, retail and more – to implement AI solutions and demonstrate real value. Along with Technion, Intel is also involved in AI research with other universities and organizations worldwide.

Intel and Technion have enjoyed a strong relationship through the years, as generations of Technion graduates have joined Intel’s development center in Haifa, Israel, as engineers. Intel has also previously collaborated with Technion on AI as part of the Intel Collaborative Research Institute for Computational Intelligence program.

“2017 was an excellent year for CIS, with growth observed in all segments except computing,” commented Pierre Cambou, Principal Analyst, Technology & Market, Imaging at Yole Développement (Yole). Driven by new applications, the industry’s future remains on strong footing.

Yole announces its annual technology & market analysis focused on the CIS industry, from 2017 to 2023, titled: Status of the CMOS Image Sensor Industry. In 2017 the CIS market reached US$13.9 billion. The market research & strategy consulting company forecasts a 9.4% CAGR between 2017 and 2023, driven mainly by smartphones integrating additional cameras to support functionalities like optical zoom, biometry, and 3D interactions.

This year Yole again offers a comprehensive technology & market analysis of the CMOS image sensor industry. In addition to a clear understanding of the CIS ecosystem, the new edition details 2017-2023 forecasts, a description of M&A activities, and an overview of dual and 3D camera trends for mobile. Mobile and consumer applications are also covered in depth in this 2018 edition, with an added-value section focused on technology evolution.
In collaboration with Jean-Luc Jaffard, formerly at STMicroelectronics and now part of Red Belt Conseil, Pierre Cambou has pursued his investigation throughout the year and today reveals the status of the CIS industry.

2017 saw aggregated CIS industry revenue of US$13.9 billion, and for 2023 the consulting company forecasts more than US$23 billion. Year-over-year growth hit a peak of 20% due to the exceptional increase in image sensor value across almost all markets, but primarily in the mobile sector. “CIS keeps its momentum,” confirms Pierre Cambou from Yole.
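As a quick sanity check of those two figures, applying the forecast 9.4% CAGR to the US$13.9 billion 2017 base over the six years to 2023 gives a value consistent with the statement:

```python
base_2017 = 13.9                         # 2017 CIS revenue, US$ billion (from the report)
cagr = 0.094                             # Yole's forecast CAGR, 2017-2023
forecast_2023 = base_2017 * (1 + cagr) ** 6
print(round(forecast_2023, 1))           # ~23.8, i.e. "more than US$23 billion"
```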

Revenue is dominated by mobile, consumer, and computing, which together represent 85% of total 2017 CIS revenue; mobile alone represents 69%. Security is the next-largest segment, ahead of automotive.

The CIS ecosystem is currently dominated by the three Asian heavyweights: Sony, Samsung, and Omnivision. Europe made a noticeable comeback. Meanwhile, the US maintains a presence in the high-end sector.

The market has benefited from the operational recovery of leading CIS player Sony, which captured 42% market share. “…Apple iPhone has had a tremendous effect on the semiconductor industry, and on imaging in particular. It offered an opportunity for its main supplier, Sony, to reach new highs in the CIS process, building on its early advances in high-end digital photography…”, explains Pierre Cambou in his article “Image sensors have hugely benefited from Apple’s avant-garde strategy,” posted on i-micronews.com.

The CIS industry is able to grow at the speed of the global semiconductor industry, which also had a record year, mainly due to DRAM revenue growth. CIS have become a key segment of the broader semiconductor industry, featuring in the strategy of most key players, and particularly the newly-crowned industry leader Samsung. Mobile, security and automotive markets are all in the middle of booming expansion, mostly benefiting ON Semiconductor and Omnivision.

These markets are boosting most players that are able to keep up with technology and capacity development through capital expenditure. The opportunities are all across the board, with new players able to climb the rankings, such as STMicroelectronics and Smartsense. Technology advancement and the switch from imaging to sensing is fostering innovation at multiple levels: pixel, chip, wafer, all the way to the system.

CIS sensors are also at the forefront of 3D semiconductor approaches. They are a main driver in the development of artificial intelligence. Yole’s analysts foresee new techniques and new applications all ready to keep up the market growth momentum… A detailed description of this report is available on i-micronews.com, imaging reports section.

MEMS & Sensors Industry Group (MSIG), a SEMI Strategic Association Partner, today announced four Technology Showcase finalists for the 14th annual MEMS & Sensors Executive Congress (MSEC), October 28-30, 2018, at the Silverado Resort and Spa in Napa, Calif. The MEMS & Sensors Executive Congress is the premier event for industry executives to gain insights on emerging MEMS and sensors opportunities and network with partners, customers and competitors. An early bird registration discount is available until Oct. 8.

The Technology Showcase highlights the latest applications enabled by MEMS and sensors as finalists demonstrate their innovations and vie for attendee votes. The finalists were selected by a committee of industry experts.

Technology Showcase Finalists

N5 Sensors’ Micro-Scale Gas Sensors on a Chip enable low-power, high-reliability microscale gas and chemical sensing technologies in small-footprint devices. The chip promises to broaden the implementation of gas and chemical sensing for industrial detection, first response, smart cities, demand-controlled ventilation, wearables and other consumer electronics.
NXP Semiconductor’s Asset Tracking Technology uses motion sensors, GPS and edge computing for precision tracking of a package’s journey from origin to delivery point. The technology enables logistics companies to quickly pinpoint and resolve transportation issues.
Scorched Ice Inc.’s Smart Skates leverage STMicroelectronics’ inertial measurement unit (IMU) sensors to facilitate real-time diagnostics of a hockey player’s skating technique, condition and performance. The device provides actionable insights to players, coaches, trainers and scouts.
SportFitz’s Concussion-Monitoring Device combines real-time measurements of location, position, direction and force of impact as well as big data analytics and embedded protocols to stream data that can help assess potentially concussive brain impacts. The one-inch wearable device is hypoallergenic, waterproof, recyclable, reusable and rechargeable.