Category Archives: Device Architecture

Market shares of top semiconductor equipment manufacturers for the full year 2017 indicate large gains by Tokyo Electron and Lam Research while top supplier Applied Materials dropped, according to the report “Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts,” recently published by The Information Network, a New Tripoli-based market research company.

The chart below shows shares for the full years 2016 and 2017. Market shares are for equipment only, excluding service and spare parts; revenues of foreign companies have been converted to U.S. dollars at quarterly exchange rates.

Market shares of top semiconductor equipment suppliers, 2016 vs. 2017

Market leader Applied Materials lost 1.8 share points among the top seven companies, dropping from 28.8% in 2016 to 27.0% in 2017. Gaining share were Tokyo Electron Ltd., which added 1.7 share points in rising from 17.4% in 2016 to 19.1% in 2017, and Lam Research, which gained 1.5 share points, growing from a 19.4% share in 2016 to a 20.9% share in 2017.

In third place ASML gained 0.6 share points, growing from an 18.8% share in 2016 to a 19.4% share in 2017.
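The share-point changes above are simple differences between the annual percentages. As an illustrative check (using only the figures quoted in this article, not the underlying report data):

```python
# Year-over-year market-share changes (in share points) for the top
# equipment suppliers, from the percentages quoted above.
shares = {
    # company: (2016 share %, 2017 share %)
    "Applied Materials": (28.8, 27.0),
    "Lam Research": (19.4, 20.9),
    "ASML": (18.8, 19.4),
    "Tokyo Electron": (17.4, 19.1),
}

for company, (s2016, s2017) in shares.items():
    delta = round(s2017 - s2016, 1)  # change in share points
    print(f"{company}: {s2016}% -> {s2017}% ({delta:+} pts)")
```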

Fifth place KLA-Tencor is the dominant supplier in the process control sector (inspection and metrology) and competes against Applied Materials and Hitachi High-Technologies, as well as several other companies including Nanometrics, Nova Measuring Instruments, and Rudolph Technologies. KLA-Tencor gained market share against each of its competitors in this sector in 2017.

Much of the equipment revenue growth was attributed to strong growth in the DRAM and NAND sectors, as equipment was installed at memory manufacturers Intel, Micron Technology, Samsung Electronics, SK Hynix, Toshiba, and Western Digital. According to the industry consortium WSTS (World Semiconductor Trade Statistics), the memory sector is estimated to have grown 60.1% in 2017 and is expected to grow another 9.3% in 2018.

Following the strong growth in the semiconductor equipment market, The Information Network projects another 11% growth for semiconductor equipment in 2018.

Silicon has long been the go-to material in the world of microelectronics and semiconductor technology. But silicon still faces limitations, particularly with scalability for power applications. Pushing semiconductor technology to its full potential requires smaller designs at higher energy density.

“One of the largest shortcomings in the world of microelectronics is always good use of power: Designers are always looking to reduce excess power consumption and unnecessary heat generation,” said Gregg Jessen, principal electronics engineer at the Air Force Research Laboratory. “Usually, you would do this by scaling the devices. But the technologies in use today are already scaled close to their limits for the operating voltage desired in many applications. They are limited by their critical electric field strength.”

This is a false-color, plan-view SEM image of a lateral gallium oxide field effect transistor with an optically defined gate. From near (bottom) to far (top): the source, gate, and drain electrodes. Metal is shown in yellow and orange, dark blue represents dielectric material, and lighter blue denotes the gallium oxide substrate. Credit: AFRL Sensors Directorate at WPAFB, Ohio, US


Transparent conductive oxides are a key emerging material in semiconductor technology, offering the unlikely combination of conductivity and transparency over the visual spectrum. One conductive oxide in particular has unique properties that allow it to function well in power switching: Ga2O3, or gallium oxide, a material with an incredibly large bandgap.

In their article published this week in Applied Physics Letters, from AIP Publishing, authors Masataka Higashiwaki and Jessen outline a case for producing microelectronics using gallium oxide. The authors focus on field effect transistors (FETs), devices that could greatly benefit from gallium oxide’s large critical electric field strength, a quality which Jessen said could enable the design of FETs with smaller geometries and aggressive doping profiles that would destroy any other FET material.

The material’s flexibility for various applications is due to its broad range of possible conductivities — from highly conductive to very insulating — and high-breakdown-voltage capabilities due to its electric field strength. Consequently, gallium oxide can be scaled to an extreme degree. Large-area gallium oxide wafers can also be grown from the melt, lowering manufacturing costs.

“The next application for gallium oxide will be unipolar FETs for power supplies,” Jessen said. “Critical field strength is the key metric here, and it results in superior energy density capabilities. The critical field strength of gallium oxide is more than 20 times that of silicon and more than twice that of silicon carbide and gallium nitride.”
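For rough context, commonly cited critical field strengths are about 0.3 MV/cm for silicon, 2.5 MV/cm for 4H-SiC, 3.3 MV/cm for GaN, and roughly 8 MV/cm for Ga2O3. These are approximate literature values, not figures from the paper, but they let us check the multiples Jessen quotes:

```python
# Approximate critical (breakdown) field strengths in MV/cm -- commonly
# cited literature values, used here only to illustrate the comparison.
E_crit = {"Si": 0.3, "4H-SiC": 2.5, "GaN": 3.3, "Ga2O3": 8.0}

for material, field in E_crit.items():
    if material != "Ga2O3":
        ratio = E_crit["Ga2O3"] / field
        print(f"Ga2O3 vs {material}: {ratio:.1f}x")
# Ga2O3 is ~27x silicon (more than 20x) and more than 2x SiC and GaN.
```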

The authors discuss manufacturing methods for Ga2O3 wafers, the ability to control electron density, and the challenges with hole transport. Their research suggests that unipolar Ga2O3 devices will dominate. Their paper also details Ga2O3 applications in different types of FETs and how the material can be of service in high-voltage, high-power and power-switching applications.

“From a research perspective, gallium oxide is really exciting,” Jessen said. “We are just beginning to understand the full potential of these devices for several applications, and it’s a great time to be involved in the field.”

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing, design, and research, today announced the global semiconductor industry posted sales totaling $412.2 billion in 2017, the industry’s highest-ever annual sales and an increase of 21.6 percent compared to the 2016 total. Global sales for the month of December 2017 reached $38.0 billion, an increase of 22.5 percent over the December 2016 total and 0.8 percent more than the previous month’s total. Fourth-quarter sales of $114.0 billion were 22.5 percent higher than the total from the fourth quarter of 2016 and 5.7 percent more than the third quarter of 2017. Global sales during the fourth quarter of 2017 and during December 2017 were the industry’s highest-ever quarterly and monthly sales, respectively. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.
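The WSTS monthly figures cited above are three-month moving averages. A minimal sketch of that smoothing, using hypothetical monthly totals rather than actual WSTS data:

```python
def three_month_moving_average(monthly_sales):
    """Return the trailing three-month moving average for each month,
    starting with the third month (the convention WSTS uses for its
    reported monthly sales figures)."""
    return [
        round(sum(monthly_sales[i - 2 : i + 1]) / 3, 2)
        for i in range(2, len(monthly_sales))
    ]

# Hypothetical monthly sales in $ billions -- for illustration only.
sales = [36.5, 37.2, 38.3, 37.6, 38.5]
print(three_month_moving_average(sales))  # [37.33, 37.7, 38.13]
```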

Worldwide semiconductor revenues, year-to-year percent change


“As semiconductors have become more heavily embedded in an ever-increasing number of products – from cars to coffee makers – and nascent technologies like artificial intelligence, virtual reality, and the Internet of Things have emerged, global demand for semiconductors has increased, leading to landmark sales in 2017 and a bright outlook for the long term,” said John Neuffer, SIA president and CEO. “The global market experienced across-the-board growth in 2017, with double-digit sales increases in every regional market and nearly all major product categories. We expect the market to grow more modestly in 2018.”

Several semiconductor product segments stood out in 2017. Memory was the largest semiconductor category by sales with $124.0 billion in 2017, and the fastest growing, with sales increasing 61.5 percent. Within the memory category, sales of DRAM products increased 76.8 percent and sales of NAND flash products increased 47.5 percent. Logic ($102.2 billion) and micro-ICs ($63.9 billion) – a category that includes microprocessors – rounded out the top three product categories in terms of total sales. Other fast-growing product categories in 2017 included rectifiers (18.3 percent), diodes (16.4 percent), and sensors and actuators (16.2 percent). Even without sales of memory products, sales of all other products combined increased by nearly 10 percent in 2017.

Annual sales increased substantially across all regions: the Americas (35.0 percent), China (22.2 percent), Europe (17.1 percent), Asia Pacific/All Other (16.4 percent), and Japan (13.3 percent). The Americas market also led the way in growth for the month of December 2017, with sales up 41.4 percent year-to-year and 2.1 percent month-to-month. Next were Europe (20.2 percent/-1.6 percent), China (18.1 percent/1.0 percent), Asia Pacific/All Other (17.4 percent/0.2 percent), and Japan (14.0 percent/0.9 percent).

“A strong semiconductor industry is foundational to America’s economic strength, national security, and global technology leadership,” said Neuffer. “We urge Congress and the Trump Administration to enact polices in 2018 that promote U.S. innovation and allow American businesses to compete on a more level playing field with our counterparts overseas. We look forward to working with policymakers in the year ahead to further strengthen the semiconductor industry, the broader tech sector, and our economy.”

First came the switch. Then the transistor. Now another innovation stands to revolutionize the way we control the flow of electrons through a circuit: vanadium dioxide (VO2). A key characteristic of this compound is that it behaves as an insulator at room temperature but as a conductor at temperatures above 68°C. This behavior – also known as metal-insulator transition – is being studied in an ambitious EU Horizon 2020 project called Phase-Change Switch. EPFL was chosen to coordinate the project following a challenging selection process.

The project will last until 2020 and has been granted €3.9 million of EU funding. Due to the array of high-potential applications that could come out of this new technology, the project has attracted two major companies – Thales of France and the Swiss branch of IBM Research – as well as other universities, including Max-Planck-Gesellschaft in Germany and Cambridge University in the UK. Gesellschaft für Angewandte Mikro- und Optoelektronik (AMO GmbH), a spin-off of Aachen University in Germany, is also taking part in the research.

Scientists have long known about the electronic properties of VO2 but haven’t been able to explain them until now. It turns out that its atomic structure changes as the temperature rises, transitioning from an insulating crystalline structure at room temperature to a metallic one at temperatures above 68°C. And this transition happens in less than a nanosecond – a real advantage for electronics applications. “VO2 is also sensitive to other factors that could cause it to change phases, such as by injecting electrical power, optically, or by applying a THz radiation pulse,” says Adrian Ionescu, the EPFL professor who heads the school’s Nanoelectronic Devices Laboratory (Nanolab) and also serves as the Phase-Change Switch project coordinator.

The challenge: reaching higher temperatures

However, unlocking the full potential of VO2 has always been tricky because its transition temperature of 68°C is too low for modern electronic devices, where circuits must be able to run flawlessly at 100°C. But two EPFL researchers – Ionescu from the School of Engineering (STI) and Andreas Schüler from the School of Architecture, Civil and Environmental Engineering (ENAC) – may have found a solution to this problem, according to their joint research published in Applied Physics Letters in July 2017. They found that adding germanium to VO2 film can lift the material’s phase change temperature to over 100°C.

Even more interesting findings from the Nanolab – especially for radiofrequency applications – were published in IEEE Access on 2 February 2018. For the first time ever, scientists were able to make ultra-compact, modulable frequency filters. Their technology also uses VO2 and phase-change switches, and is particularly effective in the frequency range crucial for space communication systems (the Ka band, with programmable frequency modulation between 28.2 and 35 GHz).

Neuromorphic processors and autonomous vehicles

These promising discoveries are likely to spur further research into applications for VO2 in ultra-low-power electronic devices. In addition to space communications, other fields could include neuromorphic computing and high-frequency radars for self-driving cars.

Researchers at the University of Illinois at Chicago describe a new technique for precisely measuring the temperature and behavior of new two-dimensional materials that will allow engineers to design smaller and faster microprocessors. Their findings are reported in the journal Physical Review Letters.

Newly developed two-dimensional materials, such as graphene — which consists of a single layer of carbon atoms — have the potential to replace traditional microprocessing chips based on silicon, which have reached the limit of how small they can get. But engineers have been stymied by the inability to measure how temperature will affect these new materials, collectively known as transition metal dichalcogenides, or TMDs.

Using scanning transmission electron microscopy combined with spectroscopy, researchers at UIC were able to measure the temperature of several two-dimensional materials at the atomic level, paving the way for much smaller and faster microprocessors. They were also able to use their technique to measure how the two-dimensional materials would expand when heated.

“Microprocessing chips in computers and other electronics get very hot, and we need to be able to measure not only how hot they can get, but how much the material will expand when heated,” said Robert Klie, professor of physics at UIC and corresponding author of the paper. “Knowing how a material will expand is important because if a material expands too much, connections with other materials, such as metal wires, can break and the chip is useless.”

Traditional ways to measure temperature don’t work on tiny flakes of two-dimensional materials that would be used in microprocessors because they are just too small. Optical temperature measurements, which use a reflected laser light to measure temperature, can’t be used on TMD chips because they don’t have enough surface area to accommodate the laser beam.

“We need to understand how heat builds up and how it is transmitted at the interface between two materials in order to build efficient microprocessors that work,” said Klie.

Klie and his colleagues devised a way to take temperature measurements of TMDs at the atomic level using scanning transmission electron microscopy, which uses a beam of electrons transmitted through a specimen to form an image.

“Using this technique, we can zero in on and measure the vibration of atoms and electrons, which is essentially the temperature of a single atom in a two-dimensional material,” said Klie. Temperature is a measure of the average kinetic energy of the random motions of the particles, or atoms that make up a material. As a material gets hotter, the frequency of the atomic vibration gets higher. At absolute zero, the lowest theoretical temperature, all atomic motion stops.
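The textbook relation behind that statement (a standard result for classical random motion, not a formula from the paper) is that the average kinetic energy per atom is (3/2)·k_B·T:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_kinetic_energy(temperature_k):
    """Average kinetic energy per particle for classical random thermal
    motion: E = (3/2) * k_B * T."""
    return 1.5 * K_B * temperature_k

# At room temperature (300 K) this is about 6.2e-21 J, or ~39 meV.
print(mean_kinetic_energy(300.0))
```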

Klie and his colleagues heated microscopic “flakes” of various TMDs inside the chamber of a scanning transmission electron microscope to different temperatures and then aimed the microscope’s electron beam at the material. Using a technique called electron energy-loss spectroscopy, they were able to measure the scattering of electrons off the two-dimensional materials caused by the electron beam. The scattering patterns were entered into a computer model that translated them into measurements of the vibrations of the atoms in the material – in other words, the temperature of the material at the atomic level.
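The physics underlying such a measurement can be sketched with the detailed-balance relation between phonon energy-gain and energy-loss peaks in an EELS spectrum: their intensity ratio depends only on the phonon energy and temperature, I_gain/I_loss = exp(−E/k_BT), which can be inverted for T. This is an illustrative sketch of the principle, not the UIC team's actual analysis code, and the 60 meV phonon energy and intensity values below are hypothetical:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def temperature_from_eels(phonon_energy_ev, i_gain, i_loss):
    """Estimate local temperature from the phonon energy-gain / energy-loss
    intensity ratio in an EELS spectrum, via detailed balance:
    I_gain / I_loss = exp(-E / (k_B * T))."""
    ratio = i_gain / i_loss
    return -phonon_energy_ev / (K_B * math.log(ratio))

# Hypothetical example: a 60 meV phonon whose gain peak is 11% as intense
# as its loss peak implies a temperature of roughly 315 K.
t = temperature_from_eels(0.060, i_gain=0.11, i_loss=1.0)
print(f"{t:.0f} K")
```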

“With this new technique, we can measure the temperature of a material with a resolution that is nearly 10 times better than conventional methods,” said Klie. “With this new approach, we can design better electronic devices that will be less prone to overheating and consume less power.”

The technique can also be used to predict how much materials will expand when heated and contract when cooled, which will help engineers build chips that are less prone to breaking at points where one material touches another, such as when a two-dimensional material chip makes contact with a wire.

“No other method can measure this effect at the spatial resolution we report,” said Klie. “This will allow engineers to design devices that can manage temperature changes between two different materials at the nano-scale level.”

Air Products (NYSE: APD) today announced it has been awarded the industrial gases supply for Samsung Electronics’ second semiconductor fab in Xi’an, Shaanxi Province, western China.

The Xi’an fabrication line, within the Xi’an High-tech Zone (XHTZ), represents one of Samsung’s largest overseas investments and one of the most advanced fabs in China. It produces three-dimensional (3D) vertical NAND (V-NAND) flash memory chips for a wide range of applications, including embedded NAND storage, solid state drives, mobile devices, and other consumer electronics products.

Air Products has been supporting this project since 2014 from a large site housing two large air separation units (ASUs), a hydrogen plant and a bulk specialty gas delivery system. Under the new award, Air Products will expand its site by building several large ASUs, hydrogen and compressed dry air plants, and a bulk specialty gas supply yard to supply ultra-high purity nitrogen, oxygen, argon, hydrogen and compressed dry air to the new fab, which is scheduled to be operational in 2019.

“Samsung is a strategic and longstanding customer for Air Products. It is our honor to have their continued confidence and again be selected to support their business growth and this important project in western China,” said Kyo-Yung Kim, president of Air Products Korea, who also oversees the company’s electronics investment in the XHTZ. “We have been supplying the project with proven safety, reliability and operational excellence. This latest investment further reinforces our global leading position and commitment to serving our valued customer, as well as the broader semiconductor and electronics industries.”

Continuing to build its strong relationship with Samsung Electronics, Air Products also recently announced the next phases of expansion to build two more nitrogen plants serving the customer’s giga fab in Pyeongtaek City, Gyeonggi Province, South Korea.

A leading integrated gases supplier, Air Products has been serving the global electronics industry for more than 40 years, supplying industrial gases safely and reliably to most of the world’s largest technology companies. Air Products is working with these industry leaders to develop the next generation of semiconductors and displays for tablets, computers and mobile devices.

Engineers at the University of California, Riverside, have reported advances in so-called “spintronic” devices that will help lead to a new technology for computing and data storage. They have developed methods to detect signals from spintronic components made of low-cost metals and silicon, which overcomes a major barrier to wide application of spintronics. Previously such devices depended on complex structures that used rare and expensive metals such as platinum. The researchers were led by Sandeep Kumar, an assistant professor of mechanical engineering.

UCR researchers have developed methods to detect signals from spintronic components made of low-cost metals and silicon. Credit: UC Riverside


Spintronic devices promise to solve major problems in today’s electronic computers, in that the computers use massive amounts of electricity and generate heat that requires expending even more energy for cooling. By contrast, spintronic devices generate little heat and use relatively minuscule amounts of electricity. Spintronic computers would require no energy to maintain data in memory. They would also start instantly and have the potential to be far more powerful than today’s computers.

While electronics depends on the charge of electrons to generate the binary ones or zeroes of computer data, spintronics depends on the property of electrons called spin. Spintronic materials register binary data via the “up” or “down” spin orientation of electrons–like the north and south of bar magnets–in the materials. A major barrier to development of spintronics devices is generating and detecting the infinitesimal electric spin signals in spintronic materials.

In one paper published in the January issue of the scientific journal Applied Physics Letters, Kumar and colleagues reported an efficient technique of detecting the spin currents in a simple two-layer sandwich of silicon and a nickel-iron alloy called Permalloy. All three of the components are both inexpensive and abundant and could provide the basis for commercial spintronic devices. They also operate at room temperature. The layers were created with the widely used electronics manufacturing processes called sputtering. Co-authors of the paper were graduate students Ravindra Bhardwaj and Paul Lou.

In their experiments, the researchers heated one side of the Permalloy-silicon bi-layer sandwich to create a temperature gradient, which generated an electrical voltage in the bi-layer. The voltage was due to a phenomenon known as the spin-Seebeck effect. The engineers found that they could detect the resulting “spin current” in the bi-layer due to another phenomenon known as the “inverse spin-Hall effect.”

The researchers said their findings will have application to efficient magnetic switching in computer memories, and “these scientific breakthroughs may give impetus” to development of such devices. More broadly, they concluded, “These results bring the ubiquitous Si (silicon) to forefront of spintronics research and will lay the foundation of energy efficient Si spintronics and Si spin caloritronics devices.”

In two other scientific papers, the researchers demonstrated that they could generate a key property for spintronics materials, called antiferromagnetism, in silicon. The achievement opens an important pathway to commercial spintronics, said the researchers, given that silicon is inexpensive and can be manufactured using a mature technology with a long history of application in electronics.

Ferromagnetism is the property of magnetic materials in which the magnetic poles of the atoms are aligned in the same direction. In contrast, antiferromagnetism is a property in which the neighboring atoms are magnetically oriented in opposite directions. These “magnetic moments” are due to the spin of electrons in the atoms and are central to the application of the materials in spintronics.

In the two papers, Kumar and Lou reported detecting antiferromagnetism in the two types of silicon–called n-type and p-type–used in transistors and other electronic components. N-type semiconductor silicon is “doped” with substances that cause it to have an abundance of negatively-charged electrons; and p-type silicon is doped to have a large concentration of positively charged “holes.” Combining the two types enables switching of current in such devices as transistors used in computer memories and other electronics.

In the paper in the Journal of Magnetism and Magnetic Materials, Lou and Kumar reported detecting the spin-Hall effect and antiferromagnetism in n-silicon. Their experiments used a multilayer thin film comprising palladium, nickel-iron Permalloy, manganese oxide and n-silicon.

And in the second paper, in the scientific journal physica status solidi, they reported detecting in p-silicon spin-driven antiferromagnetism and a transition of silicon between metal and insulator properties. Those experiments used a thin film similar to those with the n-silicon.

The researchers wrote in the latter paper that “The observed emergent antiferromagnetic behavior may lay the foundation of Si (silicon) spintronics and may change every field involving Si thin films. These experiments also present potential electric control of magnetic behavior using simple semiconductor electronics physics. The observed large change in resistance and doping dependence of phase transformation encourages the development of antiferromagnetic and phase change spintronics devices.”

In further studies, Kumar and his colleagues are developing technology to switch spin currents on and off in the materials, with the ultimate goal of creating a spin transistor. They are also working to generate larger, higher-voltage spintronic chips. The result of their work could be extremely low-power, compact transmitters and sensors, as well as energy-efficient data storage and computer memories, said Kumar.

Microprocessors, which first appeared in the early 1970s as 4-bit computing devices for calculators, are among the most complex integrated circuits on the market today.  During the past four decades, powerful microprocessors have evolved into highly parallel multi-core 64-bit designs that contain all the functions of a computer’s central processing unit (CPU) as well as a growing number of system-level functions and accelerator blocks for graphics, video, and emerging artificial intelligence (AI) applications.  MPUs are the “brains” of personal computers, servers, and large mainframes, but they can also be used for embedded processing in a wide range of systems, such as networking gear, computer peripherals, medical and industrial equipment, cars, televisions, set-top boxes, video-game consoles, wearable products and Internet of Things applications.  The recently released 2018 edition of IC Insights’ McClean Report shows that the fastest growing types of microprocessors in the last five years have been mobile system-on-chip (SoC) designs for tablets and data-handling cellphones and MPUs used in embedded-processing applications (Figure 1).

Figure 1


The McClean Report also forecasts that 52% of 2018 MPU sales will come from microprocessors used as CPUs in standard PCs, servers, and large computers.  As shown in Figure 2, only about 16% of MPU sales are expected from embedded applications in 2018, with the rest coming from mobile application processors used in tablets (4%) and cellphones (28%).  Cellphone and tablet MPUs exclude baseband processors, which handle modem transmissions in cellular networks and are counted in the wireless communications segment of the special-purpose logic IC product category. A little over half of 2018 microprocessor sales are expected to come from x86 MPUs for computer CPUs sold by Intel and rival Advanced Micro Devices.

Figure 2


Cellphone and tablet SoC processors were the main growth drivers in microprocessors during the first half of this decade, but slowdowns have hit both of these MPU categories since 2015.  Market saturation and the maturing of the smartphone segment have stalled unit growth in cellular handsets.  Cellphone application processor shipments were flat in 2016 and 2017 and are forecast to rise just 0.3% in 2018 to reach a record high of nearly 1.8 billion units in the year.

The microprocessor business continues to be dominated by the world’s largest IC maker, Intel (Samsung was the world’s largest semiconductor supplier in 2017). Intel’s share of total MPU sales had been more than 75% during most of the last decade, but that percentage is now slightly less than 60% because of stronger growth in cellphones and tablets that contain ARM-based SoC processors.  For nearly 20 years, Intel’s huge MPU business for personal computers has primarily competed with just one other major x86 processor supplier—AMD—but increases in the use of smartphones and tablets to access the Internet for a variety of applications has caused a paradigm shift in personal computing, which is often characterized as the “Post-PC era.”

This year, AMD looks to continue its aggressive comeback effort in x86-based server processors that it started in 2017 with the rollout of highly parallel MPUs built with the company’s new Zen microarchitecture. Intel has responded by increasing the number of 64-bit x86 CPUs in its Xeon processors. Intel, AMD, Nvidia, Qualcomm, and others are also increasing emphasis of processors and co-processor accelerators for machine-learning AI in servers, personal computing platforms, smartphones and embedded processing.

The 2018 McClean Report shows that the total MPU market is forecast to rise 4% to $74.5 billion in 2018, following market growth of 5% in 2017 and 9% in 2016.  Through 2022, total MPU sales are expected to increase at a compound annual growth rate of 3.4%.  Total microprocessor units are expected to rise 2% in 2018, the same growth rate as 2017, to 2.6 billion units.  Through the forecast period, total MPU units are forecast to rise by a CAGR of 2.1%.
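As a quick arithmetic check on those figures (a back-of-the-envelope projection using only the numbers quoted above, not IC Insights' model), compounding the $74.5 billion 2018 forecast at a 3.4% CAGR for four years gives the implied 2022 market size:

```python
def project_sales(base, cagr, years):
    """Compound a base-year sales figure forward at a given CAGR."""
    return base * (1 + cagr) ** years

# 2018 MPU forecast of $74.5B growing at a 3.4% CAGR through 2022.
mpu_2022 = project_sales(74.5, 0.034, 4)
print(f"${mpu_2022:.1f}B")  # roughly $85.2B
```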

By Emmy Yi, SEMI Taiwan 

Driven by emerging technologies like Artificial Intelligence (AI), Internet of Things (IoT), machine learning and big data, the digital transformation has become an irreversible trend for the electronics manufacturing industry. The global market for smart manufacturing and smart factory technologies is expected to reach US$250 billion in 2018.

“The semiconductor manufacturing process has reached its downscaling limit, making outstanding manufacturing capabilities indispensable for corporations to stay competitive,” said Ana Li, Director of Outreach and Member Service at SEMI. “Advances in cloud computing, data processing, and system integration technologies will be key to driving the broad adoption of smart manufacturing.”

Company representatives shared insights and successes in manufacturing digitalization.


To help semiconductor manufacturing companies navigate the digital transformation, SEMI recently held the AI and Smart Manufacturing Forum, a gathering of industry professionals from Microsoft, Stark Technology, Advantech, ISCOM, and Tectura to examine technology trends and smart manufacturing opportunities and challenges. The nearly 100 guests at the forum also included industry veterans from TSMC, ASE, Siliconware, Micron, and AUO. Following are key takeaways from the forum:

1)    Smart manufacturing is the key to digital transformation
Industry 4.0 is all about using automation to better understand customer needs and help drive efficiency improvements that enable better strategic manufacturing decisions. For electronics manufacturers, thriving in the digital transformation should begin with research and development focused on optimizing processes, developing innovative business models, and analyzing data in ways that support their customers’ business values and objectives. Digitization is also crucial for manufacturers to target the right client base, increase productivity, optimize operations and create new revenue opportunities.

2)    Powerful data analysis capabilities will enable manufacturing digitalization

As product development focuses more on smaller production volumes, companies need powerful data analysis software to accelerate decision-making and problem-solving, enhance integration across different types of equipment, and improve management efficiency across enterprise resources including business operations, marketing, and customer service.

3)    The digital transformation will fuel revenue growth
Connectivity and data analysis, the two essential concepts of smart manufacturing, are not only essential for companies to improve facility management efficiency and production line planning but also key for maintaining healthy revenue growth.

“With our more than 130 semiconductor manufacturers and long fab history, Taiwan is in a strong position to help the industry evolve manufacturing to support the explosion of new data-intensive technologies,” said Chen-Wei Chiang, Senior Specialist at the Taichung City Government’s Economic Development Bureau. “We look forward to working with SEMI to help manufacturers realize the full potential of smart manufacturing.”

With the advent of new data-intensive technologies including AI and IoT, advanced manufacturing processes that improve product yield rates and reduce production costs will become even more important for manufacturers to remain competitive. SEMI Taiwan will continue to assemble representatives from the industry, government, academia and research to examine critical topics in smart manufacturing. To learn more, please contact Emmy Yi, SEMI Taiwan, at
[email protected] or +886.3.560.1777 #205.


When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
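The contrast between binary signaling and graded, weight-based signaling can be sketched in a few lines of code. This is purely an illustration with made-up values, not anything from the chip design: a "digital" neuron passes only 0 or 1, while an "analog" neuron scales each input by a continuous weight and sums the results.

```python
def digital_neuron(inputs, threshold=2):
    # Binary signaling: every input is 0 or 1, and the output is 0 or 1.
    return 1 if sum(inputs) >= threshold else 0

def analog_neuron(inputs, weights):
    # Analog signaling: each connection carries a graded "weight," so the
    # output is a continuous value rather than a simple on/off decision.
    return sum(x * w for x, w in zip(inputs, weights))

print(digital_neuron([1, 0, 1]))                        # binary output: 1
print(analog_neuron([1.0, 0.3, 0.8], [0.5, 0.9, 0.2]))  # graded output: 0.93
```

In a neuromorphic chip, the weight corresponds to the conductance of the artificial synapse, so the "multiply and sum" happens physically rather than in software.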

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.
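For a rough sense of the scale of that mismatch, the lattice constant of a silicon-germanium alloy can be estimated with Vegard's law (a linear interpolation between the silicon and germanium lattice constants). The germanium fraction `x` below is a free parameter for illustration, not a value reported in the article.

```python
# Lattice constants in angstroms (standard room-temperature values).
A_SI, A_GE = 5.431, 5.658

def sige_lattice_constant(x):
    # Vegard's law: a(Si1-x Gex) is approximately (1 - x)*a_Si + x*a_Ge.
    return (1.0 - x) * A_SI + x * A_GE

def mismatch_percent(x):
    # Relative mismatch of the alloy lattice against pure silicon.
    return 100.0 * (sige_lattice_constant(x) - A_SI) / A_SI

for x in (0.2, 0.5, 1.0):
    print(f"x = {x:.1f}: mismatch ≈ {mismatch_percent(x):.2f}%")
```

Even pure germanium is only about 4 percent larger than silicon, which is why the two lattices almost line up, concentrating the strain into well-defined dislocations rather than disordering the whole film.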

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
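Spread figures like these are commonly reported as a coefficient of variation: the standard deviation of the measured currents as a percentage of their mean. A minimal sketch, using made-up current readings rather than the paper's data:

```python
import statistics

def percent_variation(currents):
    # Coefficient of variation: standard deviation as a percentage of the
    # mean -- a standard way to report device-to-device or cycle-to-cycle
    # spread in measured current.
    return 100.0 * statistics.stdev(currents) / statistics.mean(currents)

# Hypothetical current readings (arbitrary units), for illustration only.
device_to_device = [10.0, 10.4, 9.7, 10.2, 9.9]
cycle_to_cycle = [10.0, 10.1, 9.95, 10.05, 10.0]

print(f"{percent_variation(device_to_device):.1f}% across devices")
print(f"{percent_variation(cycle_to_cycle):.1f}% across cycles")
```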

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
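One common way such simulations fold in measured device behavior is to perturb each ideal synaptic weight with Gaussian noise whose relative spread matches the measured variation (here, about 4 percent). The toy layer and values below are hypothetical, not the authors' setup:

```python
import random

random.seed(0)

def noisy_weights(weights, variation=0.04):
    # Model device nonuniformity: each synapse's effective weight is the
    # ideal weight perturbed by Gaussian noise with the given relative
    # spread (0.04 corresponds to a 4 percent device-to-device variation).
    return [w * (1.0 + random.gauss(0.0, variation)) for w in weights]

# Toy "layer": a single set of weights scoring a flattened input pattern.
ideal = [0.2, -0.5, 0.8, 0.1]
pattern = [1, 0, 1, 1]

measured = noisy_weights(ideal, variation=0.04)
score_ideal = sum(w * x for w, x in zip(ideal, pattern))
score_noisy = sum(w * x for w, x in zip(measured, pattern))
print(score_ideal, score_noisy)
```

The smaller the device variation, the closer the noisy score tracks the ideal one, which is why the uniformity of the silicon-germanium synapses matters for keeping simulated accuracy close to the software baseline.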

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.