Everyone wants faster access to stored data, and the issue is becoming critical with Big Data and cloud initiatives. By combining the speed of DRAM with the non-volatility of storage, Magnetoresistive Random Access Memory (MRAM) encourages a new way of thinking about storage applications. Storage has traditionally been associated with long latencies, but with MRAM, storage can have latencies similar to memory. These capabilities and others make MRAM a catalyst for rethinking how we design storage systems.
MRAM Overview
MRAM stores data using magnetic polarization rather than electric charge. As a result, MRAM retains data for decades while reading and writing at RAM speed, without wearing out. MRAM products use an efficient one-transistor cell to deliver the highest density and best price/performance in the non-volatile RAM marketplace.
The first generation of commercial MRAM switches bits with the magnetic field generated by current pulses in corresponding metal digit lines and bit lines. Toggle MRAM uses a unique sequence of pulses, a unique bit orientation, and proprietary layers in the magnetic tunnel junction. Products developed with Toggle MRAM are SRAM-compatible in specification and package, filling a need where data persistence is critical.
Prior to MRAM, system designers had to provide a way to protect critical data in the event of power loss. In the case of SRAMs, a battery is required to keep the device powered up to retain critical data, but batteries present a host of issues such as replacement, frequent failures and disposal. Chipmakers have also resorted to integrating both SRAM and non-volatile memory such as EEPROM or Flash on a single chip, commonly called nvSRAM. The complexity of this approach drives up chip cost and adds to system complexity in order to ensure that critical data is backed up when power fails. With the inherent, automatic non-volatility of MRAM, system designers have been using MRAM in a broad range of applications including enterprise storage, industrial automation, smart meters, transportation, and embedded computing. Wherever critical data is written frequently and must survive power loss, Toggle MRAM based persistent SRAM is now the preferred choice: it is simple to implement, compatible with CPU memory buses, and eliminates less reliable, more complex methods of protecting that data.
Advances in Spin Torque MRAM development expand the market
The introduction of spin torque MRAM (ST-MRAM), the second generation of MRAM technology, with a high-bandwidth DDR3 DRAM interface brings MRAM into a category of the memory market called persistent DRAM: DRAM-like performance combined with non-volatility. Now MRAM can be used in the data path of applications that need extremely low latency, high endurance and, again, protection of data on power loss. Storage devices, appliances and servers will all benefit from a persistent DRAM class of product.
Storage servers have resorted to attaching large, bulky supercapacitors to DRAM modules to provide enough residual energy to capture the last data written, or to employing non-volatile DIMMs (modules) that combine DRAM and non-volatile memory at a significant added cost. MRAM, with its relatively simple one-transistor, one-magnetic-tunnel-junction cell, eliminates the need for costly batteries, capacitors or complex mixed-technology RAMs while providing the best combination of non-volatile memory and RAM-like performance.
Scaling the MRAM bit cell to allow for higher density at more advanced lithography nodes will require a transition from field switching to spin torque switching. Figure 1 shows a comparison between the two. In spin torque switching, the free layer is flipped by the angular momentum transferred by electrons passing from one magnetic layer to the other through the tunnel barrier. This approach eliminates the need to generate a magnetic field with current in metal lines, as is done in the toggle write technique. The simpler structure has the potential to provide a path to higher densities and lower cost per bit, which are fundamental to becoming a mainstream memory technology.
Although initial ST-MRAM products will not match the density of mainstream DRAM and NAND Flash products, the added benefit of non-volatility at RAM speeds will make ST-MRAM a valuable complement to those memory technologies. This breakthrough approach is leading to new thinking about the memory hierarchy as system designers, both hardware and software, start using ST-MRAM as a performance and reliability enhancement in systems such as enterprise storage.
For example, there is potential to complement NAND-based SSDs and extend their system life by providing a layer of persistent memory that has no endurance issue, or to extend the performance of high-end storage appliances that cannot tolerate the longer latency required to program NAND Flash memory. Loss of data on power outages can be addressed by adding a bank of ST-MRAM to a traditional DRAM cache in a server application to protect the last data being written. Making memory controllers, RAID controllers and SSD controllers both aware of and capable of talking to ST-MRAM is part of the ecosystem development now taking place in storage.
The longer-term promise of ST-MRAM is that it will rapidly scale down the semiconductor feature-size roadmap and reach gigabit densities in the coming years at feature sizes in the 20nm range. This opens up an even broader market opportunity, as ST-MRAM can be thought of as either a DRAM replacement technology or an alternative mass-storage technology. In the meantime, MRAM has quickly become the preferred choice for protecting critical data in a wide range of systems, and it will reach into storage systems as a performance and reliability enhancement as ST-MRAM products move to production.
A New Storage Tier
There is a performance gap between DRAM and NAND Flash. MRAM makes it possible to disrupt computer design by adding a new tier of storage between the two. You have a microprocessor with one, two, or three levels of cache memory so the processor doesn't have to wait for data to come to it over a memory bus. The DRAM keeps loading that microprocessor cache with updated information, trying to anticipate what the CPU will want next.
DRAM has a speed on the order of tens of nanoseconds, but DRAM is quite expensive in storage terms. Rather than putting in hundreds of gigabytes of DRAM, designers use data storage. The data still has to go over a storage bus like SATA or SAS, and even though these storage buses are quite fast there’s still a latency getting data from a spinning disk – milliseconds of time. NAND Flash has changed that tremendously, and this is why we see a tremendous adoption of NAND Flash SSDs.
However, NAND Flash has asymmetrical performance: it reads data very fast, but it doesn't write very fast. When it comes time to put data back into storage, there is a latency that can be measured in microseconds. And what NAND offers in density and cost it lacks in endurance: it wears out relatively quickly. DRAM and MRAM have virtually unlimited endurance, on the order of 10¹⁵ or more writes, but some of the NAND on the market now has only tens of thousands of wear cycles.
So even with NAND Flash, computer and storage systems are still limited by data storage in terms of performance. In order to increase IOPS, you have to break that bottleneck. That’s where MRAM comes in. MRAM can supplement the cache RAM in a microprocessor as well as buffer data storage.
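As a rough, back-of-the-envelope illustration of that bottleneck, the sketch below estimates how average write latency (and the resulting IOPS) changes as an MRAM buffer sitting in front of NAND Flash absorbs a growing share of the writes. The latency figures are assumptions chosen only for illustration, not measurements of any particular device.

    /* Illustrative only: average write latency when a fraction of writes
     * is absorbed by an MRAM buffer in front of NAND Flash.
     * The latency numbers below are assumptions, not device specs. */
    #include <stdio.h>

    int main(void)
    {
        const double mram_write_ns   = 50.0;      /* assumed MRAM write latency */
        const double nand_program_ns = 200000.0;  /* assumed NAND page-program latency (~200 us) */

        for (double hit = 0.0; hit <= 1.0; hit += 0.25) {
            /* 'hit' is the fraction of writes absorbed by the MRAM buffer. */
            double avg_ns = hit * mram_write_ns + (1.0 - hit) * nand_program_ns;
            double iops   = 1e9 / avg_ns;         /* single stream, queue depth 1 */
            printf("MRAM absorbs %3.0f%% of writes: avg %9.1f ns -> ~%9.0f IOPS\n",
                   hit * 100.0, avg_ns, iops);
        }
        return 0;
    }

With none of the writes absorbed, latency is dominated by the NAND program time; as the MRAM share approaches 100 percent, per-write latency drops by orders of magnitude, which is the simple arithmetic behind the IOPS argument.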
Because MRAM is persistent and also has the speed of DRAM, system architects can start rethinking where the boundary lies between RAM serving the processor and storage serving the storage system. In an ideal world, you would have MRAM at densities high enough that it can act as a storage layer. Because it has virtually unlimited endurance, you no longer care about wear leveling or overprovisioning as you do with NAND Flash. This is not to say that MRAM will take the place of NAND Flash, but it will create a new storage tier that bridges the gap between DRAM and NAND Flash. MRAM could serve as a faster solid-state array for very performance-intensive applications, or as another caching tier, with data staging into and out of MRAM ahead of the NAND Flash.
If the operating system is aware that a tier of non-volatile memory is available, it can really begin to take advantage of it: IOPS go way up, and overall performance is greatly enhanced.
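How an application might use such a tier can be sketched in a few lines of C. The device path /dev/pmem0 below is purely hypothetical, standing in for whatever name the operating system gives an exposed persistent-memory region; the point is that the application updates durable state with ordinary loads and stores rather than going through a block I/O stack.

    /* Sketch: if the OS exposes a persistent-memory tier as a device
     * (the path /dev/pmem0 here is hypothetical), an application can map it
     * and update records in place instead of issuing block I/O. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct record {
        unsigned long sequence;
        char payload[56];
    };

    int main(void)
    {
        int fd = open("/dev/pmem0", O_RDWR);   /* hypothetical persistent-memory device */
        if (fd < 0) { perror("open"); return 1; }

        struct record *rec = mmap(NULL, sizeof *rec, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        if (rec == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* A store to the mapping is a store to persistent media. */
        rec->sequence += 1;
        snprintf(rec->payload, sizeof rec->payload, "last committed state");

        /* Flush the update; msync is the portable, if heavyweight, way to do it. */
        if (msync(rec, sizeof *rec, MS_SYNC) != 0) perror("msync");

        munmap(rec, sizeof *rec);
        close(fd);
        return 0;
    }

Because the mapping is backed by non-volatile memory, the record is durable once the flushed stores reach the media; there is no separate write-back step to a disk or SSD.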
RAM Cache Applications
The other application for MRAM is in the RAM cache itself. While data is in RAM, it’s vulnerable. When there’s a power glitch, the data that’s in RAM may not be stored permanently anywhere yet. For high-reliability storage, system architects jump through hoops to mitigate that problem with supercapacitors or batteries. These provide enough power to the RAM to flush whatever’s in DRAM to NAND Flash in the event of a power disruption. But supercapacitors add tens to even a hundred dollars to the BOM for a DRAM tier, and batteries are notoriously unreliable.
If you use MRAM instead of DRAM, data written to the MRAM cache is permanent. Power losses don’t affect the storage of data in MRAM. So we have an opportunity to simplify system design, enhancing reliability and eliminating the need for these other ways to provide energy to DRAM. In this case, MRAM will replace DDR3 DRAM.
MRAM can sit on the same memory bus as the DDR3 DRAM, so you can have a couple of banks of DRAM and a couple of banks of MRAM. This allows designers to segment the cache between writes and reads. Typically you need a very large read buffer for the amount of data coming off the disk arrays to the CPU, while the write stream carries a relatively small amount of data. The concept is to have the MRAM function as the write cache.
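A minimal sketch of that write-cache flow, assuming an MRAM bank mapped alongside the DRAM, follows. An ordinary static array stands in for the MRAM bank here, since the mechanics look the same from software's point of view: a write is acknowledged as soon as it lands in the non-volatile cache, and destaging to the slower backing store happens later, off the latency-critical path.

    /* Sketch of MRAM as a write cache: acknowledge a write once it is copied
     * into the non-volatile cache, destage to slower storage later.
     * The static array below stands in for the MRAM bank on the memory bus. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE   512
    #define CACHE_BLOCKS 8

    struct wc_entry {
        unsigned long lba;              /* logical block address of the cached write */
        char          data[BLOCK_SIZE];
        int           dirty;            /* 1 = not yet destaged to backing storage */
    };

    static struct wc_entry write_cache[CACHE_BLOCKS];   /* stand-in for the MRAM bank */

    /* Accept a write: copy into the non-volatile cache and acknowledge immediately. */
    static int wc_write(unsigned long lba, const char *buf)
    {
        struct wc_entry *e = &write_cache[lba % CACHE_BLOCKS];
        if (e->dirty && e->lba != lba)
            return -1;                   /* slot holds another block: destage first */
        e->lba = lba;
        memcpy(e->data, buf, BLOCK_SIZE);
        e->dirty = 1;                    /* data is persistent from this point on */
        return 0;
    }

    /* Destage dirty blocks to the backing store (represented here by a printout). */
    static void wc_destage(void)
    {
        for (int i = 0; i < CACHE_BLOCKS; i++) {
            if (write_cache[i].dirty) {
                printf("destage LBA %lu to disk/SSD\n", write_cache[i].lba);
                write_cache[i].dirty = 0;
            }
        }
    }

    int main(void)
    {
        char block[BLOCK_SIZE] = "critical data";
        if (wc_write(42, block) == 0)
            printf("write to LBA 42 acknowledged from the write cache\n");
        wc_destage();                    /* runs off the latency-critical path */
        return 0;
    }

The design point is simply that the cache is small but non-volatile: because acknowledged data already persists, a power loss between the acknowledgment and the destage costs nothing.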
We can also think of MRAM as a new storage tier. To accelerate storage today, designers have put NAND arrays in front of HDDs on a Serial ATA bus; what we're proposing is a smaller but even higher-performance MRAM array that can talk to any kind of controller or processor.
As we can see, MRAM presents several different disruptive applications for storage and computer design. As MRAM densities improve and costs decline, it will become a standard part of storage infrastructure.
Joe O’Hare is the director of product marketing at Everspin Technologies.