Category Archives: Semiconductors

The steady increase in PC capabilities that has justified the upgrade cycle and fueled the long-term growth of the PC market is undergoing a historic deceleration, as evidenced by the slowing growth of dynamic random access memory (DRAM) content in notebooks and desktops since 2007.

Annual growth in the average DRAM usage per shipped PC has been slowing dramatically since peaking in 2007, according to an IHS iSuppli DRAM Dynamics Market Brief from information and analytics provider IHS. Following a 21.4% increase in 2012, the average growth of DRAM content per PC will decline to a record low of 17.4% this year, as presented in the attached figure. This compares to the high point of 56.1% in 2007, and 49.9% in 2008.

“For a generation, PCs have steadily improved their hardware performance and capabilities every year, with faster microprocessors, rising storage capacities and major increases in DRAM content,” said Clifford Leimbach, memory analyst at IHS. “These improvements—largely driven by rising performance demands of new operating system software—have justified the replacement cycle for PCs, compelling consumers and businesses to buy new machines to keep pace. However, on the DRAM front, the velocity of the increase has slackened. This slowdown reflects the maturity of the PC platform as well as a change in the nature of notebook computers as OEMs adjust to the rise of alternative systems—namely smartphones and media tablets.”

The growth in DRAM loading in PCs is expected to remain in a low range in the coming years, rising by 21.3% in 2014 and then continuing in the 20.0% range until at least 2016.

Notebooks slim down on DRAM

Notebooks increasingly are adopting ultrathin form factors and striving to increase battery life in order to become more competitive with popular media tablets. Because of this, DRAM chips must share limited space on the PC motherboard with other semiconductors that control the notebook’s other functions. Incorporating more DRAM bits can limit other notebook capabilities.

Notebook makers have shown a willingness to limit increases in DRAM on their systems rather than sacrifice the thin form factor or forgo other features.

Desktops feel their age

For desktops, the slowing in DRAM bit growth reflects the maturity of PC hardware and operating system software.

DRAM has become less of a bottleneck in PC performance, tempering the need to increase DRAM bits in each system to ostensibly improve system speed.

Moreover, a change in PC operating system requirements has had the effect of limiting growth in DRAM loading. The latest version of Windows, in particular, has not required a step up in DRAM content, unlike previous Windows versions, where increased DRAM loading was explicitly required for desktops to achieve the optimal performance offered by a new OS.

Post-PC era realities

“All told, PCs no longer need to add DRAM content as much as they did in the previous times, when failure to increase memory content in either desktops or laptops could have resulted in a direct impediment to performance,” Leimbach said. “The new normal now calls for a different state of affairs, in which DRAM PC loading won’t be growing at the same rates seen in past years.”

PCs historically have dominated DRAM consumption. However, starting in the second quarter of 2012, PCs accounted for less than half of all DRAM shipments—the first time in a generation that they didn’t consume 50 percent or more of the leading type of semiconductor memory. This is partly due to slowing shipment growth for PCs, combined with the deceleration in DRAM loading growth.

The development also illustrates the diminishing dominion of PCs in the electronics supply chain, and represents another sign of the post-PC era.

“The arrival of the post-PC era doesn’t mean that people will stop using personal computers, or even necessarily that the PC market will stop expanding,” Leimbach said. “What the post-PC era does mean is that personal computers are not at the center of the technology universe anymore—and are seeing their hegemony over the electronics supply chain erode. PCs are no longer generating the kind of growth and overwhelming market size that can single-handedly drive demand, pricing and technology trends in DRAM and many other major technology businesses.”

STMicroelectronics yesterday filed a complaint with the United States International Trade Commission (ITC). The complaint requests that the ITC initiate an investigation into the alleged infringement of five ST patents covering all of InvenSense, Inc.’s MEMS device offerings, as well as products from two of InvenSense’s customers: Black & Decker, Inc. and Roku, Inc. ST has requested that the ITC issue an order excluding InvenSense’s infringing gyroscopes and accelerometers, as well as its customers’ products that include those InvenSense devices, from importation into the United States.

This is the second patent lawsuit that ST has brought against InvenSense. In May 2012, ST filed a patent infringement lawsuit against InvenSense in the Northern District of California, alleging infringement of nine ST patents and seeking injunctive relief and monetary damages. InvenSense requested a stay of litigation, which the district court granted on February 27, 2013. According to the court’s order, the case was stayed until the United States Patent Office completed its reexamination process and ST completed any appeals of the Patent Office’s findings, at which time the parties were to provide the court with a status report on the reexamination.

“Historically, InvenSense developed the first integrated dual-axis MEMS gyroscope for consumer electronics applications, and by 2006, its novel applications in consumer electronics products created very significant customer demand for similar products,” an InvenSense spokesperson said in an official press release. “ST did not enter the consumer MEMS gyroscope market until 2008, when it tried to catch up to InvenSense and target the growing consumer electronics market.”

"While we welcome fair competition, ST cannot tolerate continued infringement of our strong and unique patent portfolio, which is the result of more than 15 years of intensive R&D efforts and substantial investment, to bring competitive and innovative solutions to customers worldwide," said Bob Krysiak, President and Chief Executive Officer of STMicroelectronics.

Historically, over 89% of reexamined patents are confirmed upon ex parte reexamination, STMicroelectronics noted in its official press release.

STMicroelectronics itself came under fire in 2007, when SanDisk claimed infringement of three different NAND flash patents. The district court sided with STMicroelectronics, and the ruling stood even after SanDisk’s appeal.

Driven by the government’s focus on the futuristic Internet of Things – embedding connectivity and intelligence in everyday objects – and a surge in private sector growth, China’s RFID card market will nearly double in value and more than double in unit volume by 2017, according to Lux Research.

The RFID card/tag market volume will grow to 2.11 billion units, from 894 million in 2012, reflecting a compound annual growth rate (CAGR) of 19%. In revenue terms, the market will grow to $807 million in 2017, from $454 million in 2012, at a CAGR of 12%.
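
As a quick sanity check on those growth rates (a minimal sketch using only the figures quoted above and the standard compound-annual-growth-rate formula), the numbers work out as stated:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Unit volume: 894 million units (2012) -> 2.11 billion units (2017)
print(f"Volume CAGR:  {cagr(894e6, 2.11e9, 5):.1%}")   # ~18.7%, i.e. about 19%

# Revenue: $454 million (2012) -> $807 million (2017)
print(f"Revenue CAGR: {cagr(454e6, 807e6, 5):.1%}")    # ~12.2%, i.e. about 12%
```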

“So far, government applications account for 22% of the volume and 34% of the revenue, but that is about to change quickly,” said Richard Jun Li, Lux Research Director and the lead author of the report titled, “Identifying Growth and Threat in China’s Emerging RFID Ecosystem.”

“With the rise of market-driven applications, there are opportunities for multinationals to leverage China’s RFID growth – speed and identification of the best local partnerships will be critical,” he added.

Lux Research analysts studied the Chinese RFID market and government policy to evaluate growth prospects for the industry. Among their findings:

  • Consumer market is the strongest. Driven mainly by the adoption of RFID tags for anti-counterfeiting, consumer applications will grow the fastest in volume terms – at a CAGR of 38% until 2017. Industrial applications will grow at a 25% rate, while electronic toll collection will be a fast-growing subsector.
  • Local OEM players emerging. The rise of Chinese original equipment manufacturer (OEM) suppliers for RFID cards/tags is creating a new industry dynamic. Currently, the top 15 suppliers – led by China Card Group and Tatwah Smartech – account for 57% of the Chinese market and are poised for further gains.
  • Focus is on fast-growing UHF market. Chinese companies do not have as strong a position in superior ultra-high frequency (UHF) chips – which will grow dramatically to become a $236 million market in 2017. However, the clock is ticking for multinational suppliers, as the Chinese government is putting significant resources into developing homemade UHF chips.

The report, titled “Identifying Growth and Threat in China’s Emerging RFID Ecosystem,” is part of the Lux Research China Innovation Intelligence service.


Tokyo-based Asahi Glass Co., Ltd. and nMode Solutions Inc. of Tucson, Arizona, have invested $2.1 million to co-found a subsidiary business, Triton Micro Technologies, to develop via-fill technology for interposers, enabling next-generation semiconductor packaging solutions using ultra-thin glass. The new company, headquartered in Tucson with a manufacturing facility planned in California, will combine nMode’s interposer technology for electrically connecting semiconductor devices with AGC’s materials technology and micro-hole drilling techniques to produce 2.5-dimensional (2.5D) and three-dimensional (3D) through-glass-via (TGV) interposers needed for advanced semiconductor devices.

To achieve the next generation in high-density semiconductor packaging, interposer technologies are needed to form the high number of electrical connections between a silicon chip and a printed circuit board. Interposers allow high packaging integration in the smallest available form factors.

Triton Micro Technologies will manufacture ultra-thin glass interposers using a high-efficiency continuous process that lowers costs and helps to enable widespread commercial use of interposers. The company will draw upon nMode’s intellectual property and AGC’s proven carrier-glass technology and via-hole drilling methodologies to fabricate its interposers. Triton then will apply its proprietary technology to fill the high-aspect-ratio via holes with a copper paste that has the same coefficient of thermal expansion as glass. This reduces the potentially damaging effects of thermal stress during manufacturing and long-term use. Triton’s process creates high-quality electrodes within the interposer to provide the electrical interface capable of accommodating advanced, high-density ICs.

Triton’s interposers are compatible with wafers having diameters from 100mm to 300mm and thicknesses of 0.7mm and below. The company also can design and manufacture customized solutions for unique applications.

“The global semiconductor industry recognizes that silicon is approaching its performance limits as an interposer material, but the need remains to create smaller, more efficient packages for today’s and tomorrow’s high-performance ICs,” said Tim Mobley, CEO at Triton. “Our technology allows us to achieve known-good-die testing at the highest levels of packaging integration, faster cycle times and the lowest cost per unit in the market.”

STMicroelectronics announced today that Didier Lamouche, Chief Operating Officer, whose operational role was suspended when he took the assignment as President and Chief Executive Officer at ST-Ericsson in December 2011, has decided to resign from the company effective March 31, 2013 to pursue other opportunities.

"Over the past years Didier has brought his strong contribution to ST, initially as the Chief Operating Officer, and then taking the challenging task to lead ST-Ericsson" said Carlo Bozotti, President and CEO of ST. "We thank him for his outstanding contribution and wish him all the best for his future."

Prior to taking on this role, he was a member of ST’s Supervisory Board and Audit Committee until October 26, 2010. Dr. Lamouche is a graduate of Ecole Centrale de Lyon and holds a PhD in semiconductor technology. He has over 20 years of experience in the semiconductor industry. Dr. Lamouche started his career in 1984 in the R&D department of Philips before joining IBM Microelectronics, where he held several positions in France and the United States. In 1995, he became Director of Operations of Motorola’s Advanced Power IC unit in Toulouse, France. Three years later, in 1998, he joined IBM as General Manager of the largest European semiconductor site in Corbeil, France, to lead its turnaround and transformation into a joint venture between IBM and Infineon: Altis Semiconductor. He managed Altis Semiconductor as CEO for four years. In 2003, Dr. Lamouche rejoined IBM and was the Vice President for Worldwide Semiconductor Operations, based in New York (United States), until the end of 2004. From February 2005, Dr. Lamouche served as Chairman and CEO of Groupe Bull, a France-based global company operating in the IT sector. He has also been a member of the Board of Directors of SOITEC since 2005 and of Adecco since 2011. Dr. Lamouche suspended his operational responsibilities in the Company effective December 1, 2011 in view of his appointment as President and Chief Executive Officer of ST-Ericsson.

According to the latest analysis by Semicast Research, Renesas Electronics was again the leading vendor of semiconductors to the OE automotive sector in 2012, ahead of Infineon Technologies. STMicroelectronics retained its position as third largest supplier, with Freescale fourth and NXP fifth. Semicast calculates that revenues for OE automotive semiconductors grew by 12% to USD $25.5 billion in 2012, while the total semiconductor industry is judged to have declined by almost three percent to USD $292 billion.

Semicast’s OE automotive semiconductor vendor share analysis ranks Renesas Electronics as the leading supplier in 2012, with an estimated market share of 13.3%. Renesas continues to hold a substantial lead over the second placed supplier, Infineon Technologies, which in 2012 had an estimated market share of 8.3%. STMicroelectronics is judged to have been the third largest supplier last year with a market share of 7.4%, ahead of Freescale on 6.6% and NXP on 6.0%.

“The list of vendors making up the top five positions to the OE automotive semiconductor market has remained unchanged since 2006, despite the dramatic rises and falls in the market over this period,” said Colin Barnden, principal analyst at Semicast Research and study author.

Currency movements are likely to have a substantial impact on market shares in 2013, particularly for Renesas, which reports in yen. Newly elected Japanese Prime Minister Shinzo Abe has announced plans to depreciate the yen in the short term, to stimulate the Japanese economy and raise domestic inflation to a target of two percent. The progress of this policy can already be seen, with the US dollar/yen exchange rate weakening to 94 yen in early March, from 80 yen before Abe’s election in December 2012, a fall approaching twenty percent. Barnden summed up: “The yen has not traded below 100 since March 2009, reflecting its status as a safe haven currency, but this level looks certain to be breached in the months ahead.”

2012 OE Automotive Semiconductor Vendor Share Ranking

Renesas Electronics       13.3%
Infineon Technologies      8.3%
STMicroelectronics         7.4%
Freescale Semiconductor    6.6%
NXP Semiconductor          6.0%
Top 5 Total               41.6%
Others                    58.4%
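
For rough context, the shares above can be combined with Semicast’s $25.5 billion market estimate to give implied 2012 revenues per vendor (an illustrative back-of-the-envelope calculation; the dollar figures below are derived here, not quoted by Semicast):

```python
market_total = 25.5e9  # Semicast's 2012 OE automotive semiconductor revenue estimate (USD)

shares = {
    "Renesas Electronics":     0.133,
    "Infineon Technologies":   0.083,
    "STMicroelectronics":      0.074,
    "Freescale Semiconductor": 0.066,
    "NXP Semiconductor":       0.060,
}

for vendor, share in shares.items():
    # Implied 2012 OE automotive semiconductor revenue per vendor
    print(f"{vendor:<24} ~${share * market_total / 1e9:.2f}B")

print(f"Top 5 combined share: {sum(shares.values()):.1%}")  # 41.6%
```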

The first experimental observation of a quantum mechanical phenomenon that was predicted nearly 70 years ago holds important implications for the future of graphene-based electronic devices. Working with microscopic artificial atomic nuclei fabricated on graphene, a collaboration of researchers led by scientists with the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley has imaged the “atomic collapse” states theorized to occur around super-large atomic nuclei.

“Atomic collapse is one of the holy grails of graphene research, as well as a holy grail of atomic and nuclear physics,” says Michael Crommie, a physicist who holds joint appointments with Berkeley Lab’s Materials Sciences Division and UC Berkeley’s Physics Department. “While this work represents a very nice confirmation of basic relativistic quantum mechanics predictions made many decades ago, it is also highly relevant for future nanoscale devices where electrical charge is concentrated into very small areas.”

Crommie is the corresponding author of a paper describing this work in the journal Science. The paper is titled “Observing Atomic Collapse Resonances in Artificial Nuclei on Graphene.”  Co-authors are Yang Wang, Dillon Wong, Andrey Shytov, Victor Brar, Sangkook Choi, Qiong Wu, Hsin-Zon Tsai, William Regan, Alex Zettl, Roland Kawakami, Steven Louie, and Leonid Levitov.

Originating from the ideas of quantum mechanics pioneer Paul Dirac, atomic collapse theory holds that when the positive electrical charge of a super-heavy atomic nucleus surpasses a critical threshold, the resulting strong Coulomb field causes a negatively charged electron to populate a state where the electron spirals down to the nucleus and then spirals away again, emitting a positron (a positively charged electron) in the process. This highly unusual electronic state is a significant departure from what happens in a typical atom, where electrons occupy stable circular orbits around the nucleus.

 “Nuclear physicists have tried to observe atomic collapse for many decades, but they never unambiguously saw the effect because it is so hard to make and maintain the necessary super-large nuclei,” Crommie says. “Graphene has given us the opportunity to see a condensed matter analog of this behavior, since the extraordinary relativistic nature of electrons in graphene yields a much smaller nuclear charge threshold for creating the special supercritical nuclei that will exhibit atomic collapse behavior.”
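
For readers who want the order-of-magnitude reasoning behind that claim, here is a brief sketch based on standard results for Coulomb impurities in graphene (textbook estimates, not figures taken from the Science paper itself):

```latex
% Atomic collapse requires the dimensionless Coulomb coupling to exceed a
% critical value. In QED the coupling is \beta = Z\alpha, with
% \alpha = e^2/(\hbar c) \approx 1/137, so supercriticality requires
% Z \gtrsim 170 once the finite nuclear size is taken into account:
\beta_{\mathrm{QED}} \;=\; Z\,\frac{e^{2}}{\hbar c} \;\gtrsim\; 1 .

% In graphene, the speed of light is replaced by the much smaller Fermi
% velocity v_F \approx c/300, and screening enters through the dielectric
% constant \kappa. The effective coupling is therefore roughly 300 times
% larger, and the critical value for massless Dirac carriers is \beta_c = 1/2:
\beta_{\mathrm{graphene}} \;=\; Z\,\frac{e^{2}}{\hbar v_{F}\,\kappa}
\;\approx\; \frac{2.2\,Z}{\kappa} \;>\; \frac{1}{2}
\quad\Longrightarrow\quad Z_{c}\ \text{of order a few charges, rather than}\ \sim 170 .
```

This is why a small cluster of charged calcium dimers on graphene can play the role that an impossibly heavy nucleus would have to play in ordinary atomic physics.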

Perhaps no other material is currently generating as much excitement for new electronic technologies as graphene, sheets of pure carbon just one atom thick through which electrons can freely race 100 times faster than they move through silicon. Electrons moving through graphene’s two-dimensional layer of carbon atoms, which are arranged in a hexagonally patterned honeycomb lattice, perfectly mimic the behavior of highly relativistic charged particles with no mass. Superthin, superstrong, superflexible, and superfast as an electrical conductor, graphene has been touted as a potential wonder material for a host of electronic applications, starting with ultrafast transistors.

In recent years scientists predicted that highly-charged impurities in graphene should exhibit a unique electronic resonance – a build-up of electrons partially localized in space and energy – corresponding to the atomic collapse state of super-large atomic nuclei. Last summer Crommie’s team set the stage for experimentally verifying this prediction by confirming that graphene’s electrons in the vicinity of charged atoms follow the rules of relativistic quantum mechanics. However, the charge on the atoms in that study was not yet large enough to see the elusive atomic collapse.

“Those results, however, were encouraging and indicated that we should be able to see the same atomic physics with highly charged impurities in graphene as the atomic collapse physics predicted for isolated atoms with highly charged nuclei,” Crommie says. “That is to say, we should see an electron exhibiting a semiclassical inward spiral trajectory and a novel quantum mechanical state that is partially electron-like near the nucleus and partially hole-like far from the nucleus. For graphene we talk about ‘holes’ instead of the positrons discussed by nuclear physicists.”

Non-relativistic electrons orbiting a subcritical nucleus exhibit the traditional circular Bohr orbit of atomic physics. But when the charge on a nucleus exceeds the critical value, Zc, the semiclassical electron trajectory is predicted to spiral in toward the nucleus, then spiral away, a novel electronic state known as “atomic collapse.” Artificial nuclei composed of three or more calcium dimers on graphene exhibit this behavior as graphene’s electrons move in the supercritical Coulomb potential.

To test this idea, Crommie and his research group used a specially equipped scanning tunneling microscope (STM) in ultra-high vacuum to construct, via atomic manipulation, artificial  nuclei on the surface of a gated graphene device. The “nuclei” were actually clusters made up of pairs, or dimers, of calcium ions. With the STM, the researchers pushed calcium dimers together into a cluster, one by one, until the total charge in the cluster became supercritical. STM spectroscopy was then used to measure the spatial and energetic characteristics of the resulting atomic collapse electronic state around the supercritical impurity.

“The positively charged calcium dimers at the surface of graphene in our artificial nuclei played the same role that protons play in regular atomic nuclei,” Crommie says. “By squeezing enough positive charge into a sufficiently small area, we were able to directly image how electrons behave around a nucleus as the nuclear charge is methodically increased from below the supercritical charge limit, where there is no atomic collapse, to above the supercritical charge limit, where atomic collapse occurs.”

Observing atomic collapse physics in a condensed matter system is very different from observing it in a particle collider, Crommie says. Whereas in a particle collider the “smoking gun” evidence of atomic collapse is the emission of a positron from the supercritical nucleus, in a condensed matter system the smoking gun is the onset of a signature electronic state in the region near the supercritical nucleus. Crommie and his group observed this signature electronic state with artificial nuclei of three or more calcium dimers.

“The way in which we observe the atomic collapse state in condensed matter and think about it is quite different from how the nuclear and high-energy physicists think about it and how they have tried to observe it, but the heart of the physics is essentially the same,” says Crommie.

If the immense promise of graphene-based electronic devices is to be fully realized, scientists and engineers will need to achieve a better understanding of phenomena such as this that involve the interactions of electrons with each other and with impurities in the material.

“Just as donor and acceptor states play a crucial role in understanding the behavior of conventional semiconductors, so too should atomic collapse states play a similar role in understanding the properties of defects and dopants in future graphene devices,” Crommie says. “Because atomic collapse states are the most highly localized electronic states possible in pristine graphene, they also present completely new opportunities for directly exploring and understanding electronic behavior in graphene.”

In addition to Berkeley Lab and UC Berkeley, other institutions represented in this work include UC Riverside, MIT, and the University of Exeter.

Berkeley Lab’s work was supported by DOE’s Office of Science.  Other members of the research team received support from the Office of Naval Research and the National Science Foundation. Computational resources were provided by DOE at Berkeley Lab’s NERSC facility.

Lattice Semiconductor Corporation today announced the iCE40 LP384 FPGA, the smallest member of its iCE40 family of ultra-low density FPGAs. Enabling designers to rapidly add new features and differentiate cost-sensitive, space-constrained, low-power products, the new small footprint FPGA is ideal for applications such as portable medical monitors, smartphones, digital cameras, eReaders, and compact embedded systems.

The tiny, low-power, low-cost iCE40 LP384 FPGA has a capacity of 384 LUTs; consumes 25 microwatts of static core power; comes in packages as small as 2.5mm x 2.5mm, with a migration path to 2.0mm x 2.0mm; and costs less than 50 cents per unit in multi-million-unit quantities.

"While system footprints continue to shrink, designers must constantly search for new ways to add more functionality so they can process more information," said Brent Przybus, senior director of Corporate and Product Marketing at Lattice Semiconductor. "The iCE40 LP384 FPGA offers the perfect architecture for capturing and processing large amounts of data at hardware speeds while using very little power and board space. It deftly handles system tasks such as managing sensor interfaces, adapting to new interface standards, and offloading the CPU without requiring fully custom-designed chips."

New applications drive hardware innovation

The exponential growth of handheld applications is creating new challenges for hardware designers. Many new applications today connect end users with data collected from a growing number of sensors that measure natural phenomena such as temperature, moisture, light, and positioning. At the same time, the growing use of video is driving the deployment of new low-power display technology that not only enhances the visual experience, but does so without breaking stringent power budgets.

Moreover, small automated control units are now being used to maximize energy efficiency and security in buildings and homes by responding to light, infrared, and noise, and by adjusting fans, blinds, and temperature controls. Designers of these types of equipment must find ways to shrink the size of their systems while differentiating their products from competitive market offerings.

The iCE40 LP384

The iCE40 LP384 FPGA includes the programmable logic, flexible I/O, and on-chip memory necessary to process data at speeds greater than ASSPs or companion microprocessors can, while reducing power consumption at an equivalent cost. Lattice also provides reference designs and application notes to accelerate development and reduce time-to-market by several months.

Development software

Lattice’s iCEcube2™ development software is a feature-rich development platform for Lattice’s iCE40 FPGAs. It integrates a free synthesis tool with Lattice’s placement and routing tools. It also includes the Aldec Active-HDL™ simulation solution, with Waveform Viewer and an RTL/gate-level mixed-language simulator.

The iCEcube2 design environment also includes key features and functions that help facilitate the design process for mobile applications. These features and functions include a project navigator, constraint editor, floorplanner, package viewer, power estimator, and static timing analyzer. Please contact your local Lattice sales representative for information on how to download a free license for Lattice iCEcube2 software for use with iCE40 LP384 FPGAs.

MRAM: disruptive technology for storage applications

Everyone wants faster access to stored data, and the issue is becoming critical with Big Data and cloud initiatives. With the speed of DRAM and the non-volatility of storage, Magnetoresistive Random Access Memory (MRAM) encourages a new way of thinking about storage applications. Storage is traditionally associated with longer latencies, but with MRAM, storage can have latencies similar to memory. These capabilities and others make MRAM a catalyst for new thinking about how we design storage applications.

MRAM Overview

MRAM stores data using magnetic polarization rather than electric charge. As a result, MRAM stores data for decades while reading and writing at RAM speed without wearing out. MRAM products use an efficient cell with one transistor to deliver the highest density and best price/performance in the non-volatile RAM marketplace.

The first generation of commercial MRAM uses the magnetic field from current pulses in corresponding metal digit lines and bit lines. Toggle MRAM uses a unique sequence of pulses, bit orientation and proprietary layers in the magnetic tunnel junction. Products developed with Toggle MRAM are SRAM compatible in specification and package, filling a need where data persistence is critical.

Prior to MRAM, system designers had to provide a way to protect critical data in the event of power loss. In the case of SRAMs, a battery is required to keep the device powered up to retain critical data, but batteries present a host of issues such as replacement, frequent failures and disposal. Chipmakers have also resorted to integrating both SRAM and non-volatile memory such as EEPROM or Flash in a single chip, commonly called nvSRAM. The complexity of this approach drives up chip cost and adds to system complexity in order to ensure that critical data is backed up when power fails. With the inherent, automatic non-volatility of MRAM, system designers have been utilizing MRAM in a broad base of applications including enterprise storage, industrial automation, smart meters, transportation, and embedded computing. Whenever frequent writing of critical data that must be protected in the event of power loss is a requirement, Toggle MRAM based persistent SRAM is now the preferred choice because of the simplicity of implementation, compatibility with CPU memory busses, and elimination of less reliable, more complex methods to protect the critical data.

Advances in Spin Torque MRAM development expand the market

The introduction of ST-MRAM, the second generation of MRAM technology, with a high-bandwidth DDR3 DRAM interface brings MRAM into a category of the memory market called persistent DRAM: DRAM-like performance combined with non-volatility. Now MRAM can be utilized in the data path of applications that need extremely low latency, high endurance and, again, protection of data on power loss. Storage devices, appliances and servers will benefit from a persistent DRAM class of product.

Storage servers have resorted to attaching large, bulky supercapacitors to DRAM modules to provide enough residual energy to capture the last written data, or they have employed non-volatile DIMMs (modules), which include both DRAM and non-volatile memory at a significant cost adder. MRAM, with its relatively simple 1 transistor + 1 magnetic tunnel junction structure, eliminates the need for costly batteries, capacitors or complex mixed-technology RAMs to provide the best combination of non-volatile memory and RAM-like performance.

Scaling the MRAM bit cell to allow for higher density in more advanced lithography nodes will require a transition from field switching to spin torque switching. Figure 1 shows a comparison between the two. In spin torque, the free layer is flipped with the angular momentum from the electrons going from one magnetic layer to the other through the tunnel barrier. This approach eliminates the need to generate a magnetic field with current in metal lines as is done in the toggle write technique. The simpler structure has the potential to provide the path to higher densities and lower cost per bit, which are fundamental to becoming a mainstream memory technology.

Although the density of initial spin torque MRAM (ST-MRAM) products will not be as high as the aforementioned DRAM and NAND Flash products, the added benefit of non-volatility at RAM speeds will make ST-MRAM a valuable addition to those memory technologies. This breakthrough approach is leading to new thinking about memory hierarchy as system designers, both hardware and software, start utilizing ST-MRAM as a performance and reliability enhancement in systems such as enterprise storage.

For example, there is the potential to complement and extend the system life of NAND-based SSDs by providing a layer of persistent memory that does not have an endurance issue, or to extend the performance of high-end storage appliances that cannot tolerate the longer latency required to program NAND Flash memory. Loss of data on power outages can be addressed by adding a bank of ST-MRAM to a traditional DRAM cache in a server application to protect the last data being written. Making memory controllers, RAID controllers or SSD controllers both aware of and capable of talking to ST-MRAM is part of the ecosystem development in storage that is taking place now.

The longer-term promise of ST-MRAM is that it will rapidly scale down the semiconductor technology feature size roadmap and attain Gigabit densities in the coming years at feature sizes in the 20nm range. This opens up even further market opportunity as ST-MRAM can be thought of as either a DRAM replacement technology or an alternative mass storage technology. In the meantime, MRAM has quickly become the preferred choice for protecting critical data in a wide range of systems and will reach into storage systems as a performance and reliability enhancement as ST-MRAM products move to production.

A New Storage Tier

There is a gap between DRAM and NAND Flash when it comes to performance. MRAM makes it possible to disrupt computer design by adding a new tier of storage between the DRAM and NAND Flash. You have a microprocessor with one, two, or three levels of cache memory so the processor doesn’t wait for data to come to it over a memory bus. The DRAM keeps loading that microprocessor cache with updated information, trying to anticipate what the CPU will want next.

DRAM has a speed on the order of tens of nanoseconds, but DRAM is quite expensive in storage terms. Rather than putting in hundreds of gigabytes of DRAM, designers use data storage. The data still has to go over a storage bus like SATA or SAS, and even though these storage buses are quite fast there’s still a latency getting data from a spinning disk – milliseconds of time. NAND Flash has changed that tremendously, and this is why we see a tremendous adoption of NAND Flash SSDs.
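
To make that gap concrete, here is a small illustration using values chosen within the rough ranges mentioned above (assumed round numbers, not measured benchmarks):

```python
# Illustrative latencies within the orders of magnitude discussed above.
dram_access = 50e-9   # DRAM: "tens of nanoseconds" (assumed 50 ns for the example)
disk_access = 5e-3    # spinning disk over SATA/SAS: "milliseconds" (assumed 5 ms)

print(f"Disk access is roughly {disk_access / dram_access:,.0f}x slower than DRAM")
# -> ~100,000x, which is the gap NAND flash SSDs (and now MRAM) aim to narrow
```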

However, NAND Flash has asymmetrical performance. It is very fast when reading data, but the limitation is that it doesn’t write very fast. When it comes time to put data back into storage there’s a latency there that can be measured in microseconds. And what NAND offers in density and cost it lacks in endurance – it wears out quickly. DRAM and MRAM have virtually infinite endurance, on the order of 10^15 or more writes, but some of the NAND on the market now has only tens of thousands of wear cycles.
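
To put those endurance numbers in perspective, here is a simple illustration; the sustained rate of one million writes per second to a single location is an assumption chosen for the example, not a figure from the article:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

mram_endurance = 1e15   # ~10^15 write cycles (DRAM/MRAM class, per the article)
nand_endurance = 3e4    # "tens of thousands" of wear cycles for some NAND

write_rate = 1e6        # assumed sustained rate: 1 million writes/second to one location

print(f"MRAM/DRAM cell lifetime: {mram_endurance / write_rate / SECONDS_PER_YEAR:.1f} years")
# ~31.7 years of continuous writing

print(f"NAND block lifetime: {nand_endurance / write_rate:.2f} seconds")
# well under a second without wear leveling, hence the need for wear
# leveling and overprovisioning in NAND-based designs
```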

So even with NAND Flash, computer and storage systems are still limited by data storage in terms of performance. In order to increase IOPS, you have to break that bottleneck. That’s where MRAM comes in. MRAM can supplement the cache RAM in a microprocessor as well as buffer data storage.

Because MRAM is persistent and also has the speed of DRAM, system architects can start rethinking where the boundary lies between RAM to the processor and storage to the storage system. In an ideal world, you would have MRAM at high enough densities that it can act as a storage layer. Because it has infinite endurance, you no longer care about wear leveling or overprovisioning as you do with NAND Flash. This is not to say that MRAM will take the place of NAND Flash, but it will create a new storage tier that bridges the gap between DRAM and NAND Flash. MRAM could serve as a faster solid-state array for very performance-intensive applications, or as another caching tier where the NAND Flash is loading into and out of MRAM.

If the operating system is aware that there’s a tier of non-volatile memory out there, it can really begin to take advantage of that from a performance standpoint. The IOPS will go way up, and performance is greatly enhanced.

RAM Cache Applications

The other application for MRAM is in the RAM cache itself. While data is in RAM, it’s vulnerable. When there’s a power glitch, the data that’s in RAM may not be stored permanently anywhere yet. For high-reliability storage, system architects jump through hoops to mitigate that problem with supercapacitors or batteries. These provide enough power to the RAM to flush whatever’s in DRAM to NAND Flash in the event of a power disruption. But supercapacitors add tens to even a hundred dollars to the BOM for a DRAM tier, and batteries are notoriously unreliable.

If you use MRAM instead of DRAM, data written to the MRAM cache is permanent. Power losses don’t affect the storage of data in MRAM. So we have an opportunity to simplify system design, enhancing reliability and eliminating the need for these other ways to provide energy to DRAM. In this case, MRAM will replace DDR3 DRAM.

MRAM can sit on the same memory bus as the DDR3 DRAM, and you can have a couple of banks of DRAM and a couple of banks of MRAM. This allows designers to segment the cache between writes and reads. Typically you need a very large read buffer for the amount of data coming off the disk arrays to the CPU, but where you’re writing there’s a relatively small amount of data. The concept is to have the MRAM function as the write cache.
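
A minimal conceptual sketch of that read/write split is shown below (hypothetical class and method names, intended only to illustrate the idea, not to represent any real controller interface):

```python
class SegmentedCache:
    """Toy model of a storage cache split across DRAM (reads) and MRAM (writes).

    Hypothetical illustration: reads are served from a large volatile DRAM
    buffer, while writes land in a smaller non-volatile MRAM buffer, so
    acknowledged writes survive a power loss without batteries or supercaps.
    """

    def __init__(self, backing_store):
        self.backing_store = backing_store  # e.g. the HDD/SSD array (a dict here)
        self.dram_read_cache = {}           # large, volatile read buffer
        self.mram_write_cache = {}          # smaller, persistent write buffer

    def read(self, block_id):
        # The newest data may still be sitting in the write cache.
        if block_id in self.mram_write_cache:
            return self.mram_write_cache[block_id]
        if block_id not in self.dram_read_cache:
            self.dram_read_cache[block_id] = self.backing_store[block_id]
        return self.dram_read_cache[block_id]

    def write(self, block_id, data):
        # The write is durable as soon as it hits MRAM, so it can be
        # acknowledged immediately, unlike a DRAM-only cache.
        self.mram_write_cache[block_id] = data

    def destage(self):
        # Lazily push persisted writes down to the backing store.
        for block_id, data in self.mram_write_cache.items():
            self.backing_store[block_id] = data
        self.mram_write_cache.clear()

    def power_loss(self):
        # On a power glitch only the volatile DRAM contents are lost;
        # the MRAM write cache is still intact on the next boot.
        self.dram_read_cache.clear()


if __name__ == "__main__":
    cache = SegmentedCache(backing_store={"blk0": b"old"})
    cache.write("blk0", b"new")       # durable immediately (lands in MRAM)
    cache.power_loss()                # DRAM contents gone, MRAM survives
    assert cache.read("blk0") == b"new"
```

The point of the sketch is simply that a write acknowledged into the MRAM bank is already safe, so no flush, battery, or supercapacitor is needed before confirming it.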

We can also think about MRAM as a new storage tier. Today, storage is commonly accelerated by putting NAND arrays in front of HDDs on a serial ATA bus; what we are proposing is a smaller array, with even higher performance, built from MRAM that can talk to any kind of controller or processor.

As we can see, MRAM presents several different disruptive applications for storage and computer design. As MRAM densities improve and costs decline, it will become a standard part of storage infrastructure.

Joe O’Hare is the director of product marketing at Everspin Technologies.

 

Global electronic components distributor Digi-Key Corporation today announced the signing of a global distribution agreement with MEMSIC, a provider of MEMS sensor components, sophisticated inertial systems, and leading-edge wireless sensor networks.

“As technology tries to fit more and more functionality into smaller and smaller spaces, MEMS has grown exponentially in utilization,” said Mark Zack, Digi-Key Vice President, Global Semiconductor Product. “By integrating IC and MEMS functionality, MEMSIC offers our customers a unique product to fill a growing need in their designs. We are pleased to partner with MEMSIC.”

MEMSIC designs and manufactures integrated micro-electromechanical sensors (MEMS) using a standard integrated circuit (IC) manufacturing process. The company combines proprietary thermal-based MEMS technology and advanced analog mixed-signal processing circuitry into a single chip. This allows MEMSIC to produce high-performance accelerometers and other MEMS devices at substantially lower cost than most traditional processes.

"Digi-Key is recognized by design engineers worldwide for its excellent service, and for its access to readily available components they can count on for new designs,” noted John Newton, MEMSIC Vice President of Marketing. “We are excited to be partnering with Digi-Key, and believe this agreement will significantly expand MEMSIC’s global reach to design engineers looking for the latest in sensor technology."