
Micron Technology, Inc. this week announced the production of an 8GB DDR4 NVDIMM, the company’s first commercially available solution in the persistent memory category. Persistent memory delivers a unique balance of latency, bandwidth, capacity and cost, providing ultra-fast, DRAM-like access to critical data and allowing system designers to better manage overall costs. With persistent memory, system architects are no longer forced to sacrifice latency and bandwidth when accessing critical data that must be preserved.

As data centers evolve to handle the massively growing influx of data, the cost of moving data and storing it away from the CPU becomes increasingly prohibitive, creating the need for a new generation of faster, more responsive solutions. Persistent memory, a new addition to the memory hierarchy, allows greater flexibility in data management by providing non-volatile, low latency memory closer to the processor. With NVDIMM technology, Micron delivers a persistent memory solution capable of meeting many of today’s biggest computing challenges.

Micron’s NVDIMM begins to address some of the difficult architectural challenges facing CIOs today, and is ideal for applications such as big data analytics, storage appliances, RAID cache, in-memory databases and online transaction processing (OLTP). Traditional memory architectures force system architects to sacrifice the latency or bandwidth needed to access the critical data for these applications, and as a result, performance is often limited by I/O bottlenecks. Micron’s NVDIMM solutions deliver architectures suited to the demands of applications that require high performance coupled with frequent access to large data sets while being sensitive to downtime. In the event of a power failure or system crash, Micron’s NVDIMM solution provides an onboard controller that safely transfers data stored in DRAM to the onboard non-volatile memory, preserving data that would otherwise be lost.

“Micron is delivering on the promise of persistent memory with a solution that gives system architects a new approach for designing systems with better performance, reduced energy usage and improved total cost of ownership,” said Tom Eby, vice president for Micron’s compute and networking business unit. “With NVDIMM, we have a powerful solution that is available today. We’re also leading the way on future persistent memory development by spearheading R&D efforts on promising new technologies such as 3D XPoint memory, which will be available in 2016 and beyond.”

Persistent memory: A new architecture for the new data age

Micron’s NVDIMM technology is a non-volatile solution that combines NAND flash reliability, DRAM performance and an optional power source into a single memory subsystem, delivering a powerful solution that ensures data stored in memory is protected against power loss. By placing non-volatile memory on the DRAM bus, this new architecture allows customers to store data close to the processor and significantly optimize data movement by delivering faster access to variables stored in DRAM.

“Persistent memory is a critical new technology to move computing forward. The amount of information that can be found in data produced by today’s organizations requires a platform with the performance abilities to more efficiently store, manage and analyze large data sets frequently and quickly,” said Greg Wong, founder and principal analyst at Forward Insights. “Micron’s NVDIMM technology is a positive step in this direction, delivering a solution that fills a gap in the current memory hierarchy right now.”


Security by design


November 13, 2015

By Chowdary Yanamadala, Senior Vice President of Business Development, ChaoLogix

The advent of Internet-connected devices, the so-called Internet of Things (IoT), offers myriad opportunities and significant risks. The pervasive collection and sharing of data by IoT devices constitutes the core value proposition for most IoT applications. However, it is our collective responsibility, as an industry, to secure the transport and storage of the data. Failing to properly secure the data risks turning the digital threat into a physical threat.  

Properly securing IoT systems requires layering security solutions. Data must be secured at both the network and the hardware level. As a hardware example, let’s concentrate on the embedded security implemented in semiconductor chips.

Authentication and encryption are the two main cryptographic functions used to ensure data security. While the mathematical security of the standardized algorithms (such as AES, ECDSA and SHA-512) remains intact, hackers often exploit implementation defects to compromise the security those algorithms are meant to provide.

One of the most dangerous and immediate threats to data security is a category of attacks called side-channel analysis (SCA). One form of SCA exploits the power consumption signature captured during execution of a crypto algorithm; this is called Differential Power Analysis (DPA). Another potent form exploits the electromagnetic emanations produced during execution of the crypto algorithm: Differential Electromagnetic Analysis (DEMA).

Both DPA and DEMA attacks rely on the fact that sensitive data, such as secret keys, leaks via the power signature (or EM signature) during execution of the crypto algorithm.
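The statistical core of these attacks is simple enough to show in a few lines of code. The sketch below is a toy simulation only: the 4-bit S-box, the Hamming-weight leakage model, the noise level and the correlation-based key ranking (a CPA-style refinement of DPA) are my illustrative assumptions, not a real device measurement or anything from ChaoLogix.

# Toy correlation power analysis (CPA), a common refinement of DPA.
# Everything is simulated: the S-box, leakage model and noise are assumptions.
import numpy as np

rng = np.random.default_rng(0)
SBOX4 = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                  0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])  # toy 4-bit S-box

def hamming_weight(values):
    return np.array([bin(int(v)).count("1") for v in values])

# Simulate a leaky device: "power" = Hamming weight of the S-box output + noise.
secret_key = 0xA
plaintexts = rng.integers(0, 16, 2000)
traces = hamming_weight(SBOX4[plaintexts ^ secret_key]) + rng.normal(0, 1.0, 2000)

# Attack: correlate measured power with the leakage predicted under each key guess.
scores = [abs(np.corrcoef(hamming_weight(SBOX4[plaintexts ^ guess]), traces)[0, 1])
          for guess in range(16)]
print("recovered key:", int(np.argmax(scores)), "actual key:", secret_key)

Even with this crude model, the correct guess produces a markedly higher correlation than the wrong ones, which is exactly the leverage a DPA or DEMA attacker exploits against an unprotected implementation.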

DPA and DEMA attacks are especially dangerous not only because of their effectiveness in exploiting security vulnerabilities but also because of the low cost of the equipment required. An attacker can carry out DPA attacks against most security chips using equipment costing less than $2,000.

There are two fundamental ways to counter the threat of DPA and DEMA. One approach is to address the symptoms of the problem by adding significant noise to the power signature in order to obfuscate the leakage of sensitive data. This is an effective technique. However, it is an ad hoc and temporary measure against a potent threat to data security. Chip manufacturers can also apply this technique as a security patch, or afterthought, once the design and architecture work is completed.

Another way (and arguably a much better way) to counter the threat of DPA is to address the problem at its source: the leakage of sensitive data in the form of power signature variations. The power signature captured during crypto execution depends on the secret key being processed, which makes the power signature indicative of the secret key.

What if we address the problem by minimizing the relationship between the power signature and the secret key used in the crypto computation? Wouldn’t this offer superior security? Doesn’t addressing the problem at the source provide more fundamental, and arguably more permanent, security?

Data security experts call this security by design. Solving a problem at the source is fundamentally better than providing symptomatic relief, and that holds for data security as well. To achieve a solution at the source against the threat of DPA and DEMA, chip designers and architects need to build security into the architecture.

Security needs to be a deliberate design specification and needs to be worked into the fabric of the design. Encouragingly, more and more chip designers are moving away from addressing security as an afterthought and embracing security by design.
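A familiar software analogy of the same principle (my analogy, not ChaoLogix’s hardware technique) is constant-time comparison: instead of masking a timing leak after the fact, the data-dependent behavior is removed from the operation itself, for example with Python’s standard-library hmac.compare_digest.

import hmac

# Leaky: returns at the first mismatching byte, so the running time depends
# on how much of the secret an attacker has already guessed correctly.
def leaky_equal(expected: bytes, provided: bytes) -> bool:
    if len(expected) != len(provided):
        return False
    for a, b in zip(expected, provided):
        if a != b:
            return False
    return True

# Designed-in fix: compare in constant time, removing the data dependence
# at the source instead of trying to hide it behind added noise.
def constant_time_equal(expected: bytes, provided: bytes) -> bool:
    return hmac.compare_digest(expected, provided)

print(constant_time_equal(b"secret-tag", b"secret-tag"))  # True
print(constant_time_equal(b"secret-tag", b"guess-tag!"))  # False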

As an industry, we design chips for performance, power, yield and testability. Now it is time to start designing for security. This is especially true for chips used in IoT applications, which tend to be small, have limited computational power and operate under tight cost constraints. It is therefore difficult, and in some cases impossible, to apply security patches as an afterthought. The sound approach is to weave security into the building blocks of these chips.

In sum, designing security into a chip is as much about methodology as it is about acquiring technologies and tools. As IoT applications expand and the corresponding demand for inherently secure chips grows, getting this methodology right will be key to the successful deployment of secure IoT systems.

Related data security articles: 

Security should not be hard to implement

ChaoLogix introduces ChaoSecure technology to boost semiconductor chip security

From laptops and televisions to smartphones and tablets, semiconductors have made advanced electronics possible. These types of devices are so pervasive, in fact, that Northwestern Engineering’s Matthew Grayson says we are living in the “Semiconductor Age.”

“You have all these great applications like computer chips, lasers, and camera imagers,” said Grayson, associate professor of electrical engineering and computer science in Northwestern’s McCormick School of Engineering. “There are so many applications for semiconductor materials, so it’s important that we can characterize these materials carefully and accurately. Non-uniform semiconductors lead to computer chips that fail, lasers that burn out, and imagers with dark spots.”

Grayson’s research team has created a new mathematical method that has made semiconductor characterization more efficient, more precise, and simpler. By flipping the magnetic field and repeating one measurement, the method can quantify whether or not electrical conductivity is uniform across the entire material – a quality required for high-performance semiconductors.

“Up until now, everyone would take separate pieces of the material, measure each piece, and compare differences to quantify non-uniformity,” Grayson said. “That means you need more time to make several different measurements and extra material dedicated for diagnostics. We have figured out how to measure a single piece of material in a magnetic field while flipping the polarity to deduce the average variation in the density of electrons across the sample.”
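The published analysis is more involved, but the core idea behind the field-reversal measurement can be shown with a toy calculation: for a uniform sample, a longitudinal resistance should be symmetric under reversal of the magnetic field, so any antisymmetric residue is a flag for non-uniformity. The function and numbers below are my simplified illustration, not the paper’s actual extraction method.

# Toy illustration of the field-reversal idea (not the published analysis):
# split a resistance measured at +B and -B into symmetric and antisymmetric
# parts and report the antisymmetric fraction as a non-uniformity indicator.
def field_reversal_asymmetry(r_plus_b: float, r_minus_b: float) -> float:
    symmetric = 0.5 * (r_plus_b + r_minus_b)
    antisymmetric = 0.5 * (r_plus_b - r_minus_b)
    return antisymmetric / symmetric

# Made-up example values: a 1% antisymmetric component hints at a density
# gradient across the sample.
print(f"asymmetry = {field_reversal_asymmetry(101.0, 99.0):.3%}")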

Remarkably, the contacts at the edge of the sample reveal information about the variations happening throughout the body of the sample.

Supported by funding from the Air Force’s Office of Scientific Research, Grayson’s research was published online October 28 in the journal Physical Review Letters. Graduate student Wang Zhou is first author of the paper.

One reason semiconductors have so many applications is because researchers and manufacturers can control their properties. By adding impurities to the material, researchers can modulate the semiconductor’s electrical properties. The trick is making sure that the material is uniformly modulated so that every part of the material performs equally well. Grayson’s technique allows researchers and manufacturers to directly quantify such non-uniformities.

“When people see non-uniform behavior, sometimes they just throw out the material to find a better piece,” Grayson said. “With our information, you can find a piece of the material that’s more uniform and can still be used. Or you can use the information to figure out how to balance out the next sample.”

Grayson’s method can be applied to samples as large as a 12-inch wafer or as small as an exfoliated 10-micron flake, allowing researchers to profile the subtleties in a wide range of semiconductor samples. The method is especially useful for 2-D materials, such as graphene, which are too small for researchers to make several measurements across the surface.

Grayson has filed a patent on the method, and he hopes the new technique will find use in academic laboratories and industry.

“There are companies that mass produce semiconductors and need to know if the material is uniform before they start making individual computer chips,” Grayson said. “Our method will give them better feedback during sample preparation. We believe this is a fundamental breakthrough with broad impact.”

By Dr. Lianfeng Yang, Vice President of Marketing, ProPlus Design Solutions, Inc.

The squeaky wheel gets the grease, or so it seems in the semiconductor industry, where the higher levels of the design process get most of the attention. Meanwhile, the transistor level appears to have been largely forgotten.

With the increasing complexity and scale of electronic systems, design and verification have moved up the abstraction level from the register transfer level (RTL) to the electronic system level (ESL), with help from high-level synthesis software and other new EDA technologies. Portable stimulus is available at ESL to test specifications, and virtual platforms enable early consideration of software, for example.

Throughout, transistor-level challenges continue but appear to be largely forgotten. New process technologies, such as FinFET, increasingly stress transistor-level verification tools, in particular SPICE and FastSPICE simulators, and designers’ needs keep growing. Highly accurate and reliable verification and signoff tools for large post-layout simulations are just one example.

When designers move to 16/14 nanometer and beyond with FinFETs, accuracy is a priority and essential for characterization, verification and signoff, due to reduced Vdd and the impact of process variations. Device characteristics and physical behavior are more complicated at these process nodes. Circuit sizes are increasing and design margins are shrinking. Every aspect that contributes to leakage and power must be measured and accurately modeled. The entire circuit, including all parasitic components, has to be simulated accurately.

While circuit designers may not be squeaky wheels, they do need to be confident in their designs, as they are under pressure from ever-increasing design and manufacturing complexity and cost. FastSPICE simulators used in final verification and signoff do not offer enough accuracy. This is especially true for the small currents critical to low-power design and for achieving sufficient noise margins. Often, FastSPICE simulations rely on special, fine-tuned options and start from a non-converged DC solution, further compromising accuracy.

Designers use FastSPICE to verify timing and power before tapeout. Unfortunately, for applications sensitive to small currents or noise, such as advanced memory designs, they cannot be sure of the results, risking expensive respins and missed market windows. This is an all-too-familiar scenario where wheels should be squeaking.

What sends the situation out of control is FastSPICE’s lack of a golden reference. FastSPICE provides many options for designers to tune to trade off accuracy and speed, which worked in past generations. Such an option-tuning strategy, however, becomes unreliable for advanced designs, where designers have much less margin than before. Designers now see more and more failures or inaccurate results due to the fundamental accuracy limitations of FastSPICE.

Traditional SPICE simulators were the “golden” reference used to validate FastSPICE, but only for small blocks, as no commercially available SPICE simulator could offer the simulation capacity that FastSPICE provides for verification and signoff. And such validation cannot automatically scale up. Circuit sizes continue to increase and giga-scale designs are common. At 16nm and beyond, 3D device structures add further capacity and accuracy challenges. FastSPICE simply does not offer enough confidence and may introduce unpredictably inaccurate or wrong verification results, a risk designers do not want to take at tapeout.

Well, circuit designers may not be squeaking, but help is on the way nevertheless. A new type of SPICE simulator, known as the giga-scale SPICE simulator or GigaSpice, is able to support giga-scale circuit simulation and verification with a pure SPICE engine. It combines SPICE accuracy with FastSPICE-like capacity and performance through advanced parallelization technology. It does not require option tuning and always converges on DC, making it easy to adopt and offering accurate and reliable results. GigaSpice can serve as a golden reference for FastSPICE and as a replacement for memory characterization, large-block simulation and full-chip verification.

The squeaky wheel may be noisy, but a few clever developers have been paying attention to the new demands of transistor-level verification and signoff and are responding. Giga-scale SPICE simulators are fast becoming part of circuit-level design flows, delivering squeaky-wheel results.

Dr. Lianfeng Yang currently serves as the ProPlus Design Solutions’ vice president of marketing and general manager of Beijing R&D Center. Previously, he was a senior product engineer leading the efforts on product engineering and technical support for the modeling product line to Asian customers at Cadence Design Systems, Inc. Dr. Yang holds a Ph.D. degree in Electrical Engineering from the University of Glasgow, UK.

SAN JOSE, Calif. — Nov. 11, 2015 — Ultratech, Inc., a supplier of lithography, laser-processing and inspection systems used to manufacture semiconductor devices and high-brightness LEDs (HB-LEDs), as well as atomic layer deposition (ALD) systems, today introduced the Superfast 4G+  in-line, 3D topography inspection system. Ultratech’s new 4G+ system builds on the field leadership of the Superfast 4G, providing the industry’s highest-productivity and lowest-cost solution for high-volume manufacturing. The Superfast 4G+ system’s patented coherent gradient sensing (CGS) technology enables Ultratech customers to use a single type of wafer inspection tool to measure patterned wafers across the entire fab line at the lowest cost. Ultratech plans to begin shipping the Superfast 4G+ systems in the first quarter of 2016.

Superfast 4G+ features include:

  • Direct, front-side 3D topography measurement for opaque and transparent stacks on patterned wafers
  • 150 wph, the industry’s highest 3D in-line inspection throughput, with the smallest footprint
  • 1-mm edge exclusion enabling full-wafer pattern inspection and thin-film 3D process control
  • Large bow option for in-line manufacturing control of highly bowed wafers without impacting throughput

Damon Tsai, Ultratech Asia Director for Inspection Systems, said, “Our current leadership position in in-line 3D inspection at advanced memory and foundry manufacturers with Superfast 4G has provided us with a tremendous learning environment. Our partners have helped us develop new hardware capabilities like the ‘Recipe Driven Range Control,’ an innovative high-throughput, large bow optical option on board the Superfast 4G+, as well as new fleet management performance metrics. The inherently simple design of the CGS technology is enabling us to rapidly deliver new capabilities and performance improvements over more complex optical solutions.”

Based on patented CGS technology, Ultratech’s Superfast 4G+ inspection system provides the industry’s highest throughput (150 wph) with the lowest cost-of-ownership compared to competing systems. The direct, front-side 3D topography measurement capability is well-suited for patterned wafer applications such as lithography feed-forward overlay distortion and edge-defocus control as well as thin-film deposition stress and planarization control. Delivering a 2X improvement in performance with fleet matching TMU (Total Measurement Uncertainty), along with the ability to measure opaque and transparent stacks on patterned wafers, the Superfast 4G+  provides cost-effective technology to address the critical needs of its global customers. In addition, leveraging the same breakthrough CGS optical module, the Superfast 4G+ is available as a field upgrade of the Superfast 4G.

Hillsboro, Ore. — November 2, 2015 — FEI today announced the new Helios™ G4 DualBeam series, which offers the highest throughput ultra-thin TEM lamella preparation for leading-edge semiconductor manufacturing and failure analysis applications. The new DualBeam series, which includes FX and HX models, takes a significant leap forward in both technological capability and ease-of-use.

The new Phoenix focused ion beam (FIB) makes finer cuts with higher precision and simplifies the creation of ultra-thin (sub 10nm) lamella for transmission electron microscopy (TEM) imaging. The FX is a flexible system that delivers dramatically improved STEM resolution – down to sub-three Ångströms – and significantly shortens the time to data for failure analysis. Images can now be obtained within minutes of completing the lamella, rather than the hours or days required previously to finalize the images on a stand-alone S/TEM system. The HX model is geared specifically for high-throughput TEM lamella production. It features an automated QuickFlip holder that reduces sample preparation times.

“FEI is the first to market with a TEM sample preparation solution capable of making 7nm thick lamella, addressing the needs of our customers who are developing next-generation devices,” states Rob Krueger, vice president and general manager of FEI’s semiconductor business. “In addition, by offering the ability to achieve sub-three Ångström image resolution in a DualBeam, failure analysis labs can now dramatically cut ‘time to data’ without compromising image quality. And, by combining high-resolution imaging and sample preparation on one system, we have reduced the amount of valuable lab real estate required.”

Today, in conjunction with the 41st International Symposium for Testing and Failure Analysis (ISTFA), DCG Systems® announced the release of EBIRCH™, a new, unique technology for localizing shorts and other low-resistance faults that may reside in the interconnect structures or the polysilicon base layer of integrated circuits. Named for Electron Beam Induced Resistance Change, EBIRCH offers fault analysis (FA) engineers and yield experts the ability to detect and isolate low-resistance electrical faults without resorting to brute-force binary search approaches that rely on successive focused ion beam (FIB) cuts. Its unparalleled ability to quickly isolate low-resistance faults enables EBIRCH to boost the success rate of physical failure analysis (PFA) imaging techniques to well above 90%, accelerating time-to-results and establishing the FA lab as a critical partner organization in solving yield and reliability problems.

“At foundries and IDM fabs, the process has become more difficult to control using traditional inline measures,” said Mike Berkmyre, business unit manager of the Nanoprobing Group at DCG Systems. “More yield issues are remaining undetected until they show up at final test — and land on the desk of the FA lab manager. The FA engineers must be equipped to localize the fault and supply images of the root cause to process or yield engineers in a timely manner. The ability to quickly and reliably localize low-resistance faults was missing before we developed EBIRCH. With the introduction of EBIRCH, we are helping to solve an FA problem that has been growing in prevalence and importance with each new device node.”

Available on DCG’s current SEM-based nanoprobing systems, EBIRCH offers the following capabilities:

  • Detects and isolates electrical faults with resistances from < 10 ohm to > 50 Mohm;
  • Finds faults at surface and several levels below concurrently, significantly accelerating the existing work flow; and
  • Can scan areas as large as 1mm by 1mm, and zoom in to areas as small as 50nm by 50nm, providing accurate and actionable fault localization within minutes.

To collect an EBIRCH image, the operator lands two nanoprobes on surface metal layers, straddling the suspected defect site. A bias is applied, and the electron beam rasters across the region of interest. As the e-beam interrogates the defect site, localized heating from the e-beam changes the resistance of the defect, thereby changing the current sensed by the nanoprobe. The EBIRCH map displays the change in current as a function of the e-beam position—typically showing a bright spot at the site of the resistance change. The simultaneously acquired SEM image, together with knowledge of the circuit layout, allows the engineer to determine the exact defect location. The depth at which the defect lies can be explored by optimizing the landing energy as a function of the EBIRCH signal.
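As a rough picture of how such a map might be assembled from raster data, here is a hypothetical Python sketch; the grid size, the simulated defect response and the peak-finding step are my assumptions, not DCG Systems’ software or data.

# Hypothetical sketch of assembling an EBIRCH-style map: record the probe
# current at each e-beam raster position, subtract the baseline, and report
# the position of the largest change as the candidate defect site.
# All values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 128, 128                 # raster grid of e-beam positions
baseline_current = 1.0e-6         # amps, with the bias applied, away from the defect

# Small noise everywhere, plus a localized current change where beam-induced
# heating alters the resistance of the defect.
current_map = baseline_current + rng.normal(0, 2e-9, (ny, nx))
yy, xx = np.mgrid[0:ny, 0:nx]
current_map += 5e-8 * np.exp(-((yy - 40) ** 2 + (xx - 90) ** 2) / 20.0)

# EBIRCH-style processing: change in current vs. beam position; the brightest
# spot marks the candidate defect location.
delta_i = current_map - baseline_current
peak = np.unravel_index(np.argmax(np.abs(delta_i)), delta_i.shape)
print("brightest spot (row, col):", peak)  # approximately (40, 90)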

Available exclusively on the flexProber™, nProber™ and nProber II™ nanoprobers from DCG Systems, EBIRCH is part of an integrated electron beam current (EBC) module that offers seamless switching from EBAC to EBIRCH, with no re-cabling needed.

SAN JOSE, Calif. — mCube, provider of MEMS motion sensors, today announced the industry’s first 3-axis accelerometer that is less than a cubic millimeter in total size (0.9mm³). The MC3571 is only 1.1×1.1×0.74mm in size, making it 75% smaller than the current 2x2mm accelerometers on the market today and enabling developers to design high-resolution 3-axis inertial solutions for products that require ultra-small sensor form factors.

The MC3571 features a Wafer Level Chip Scale Package (WLCSP), making it smaller than a grain of sand. This achievement marks a major innovation milestone in the MEMS sensor industry and opens up new design possibilities for the next generation of sleek new mobile phones, surgical devices, and consumer products.

“The new MC3571 truly represents mCube’s vision of delivering a high-performance motion sensor in less than a cubic millimeter size,” said Ben Lee, president and CEO, mCube. “This advancement demonstrates how our monolithic technology can unleash amazing possibilities for designers to create exciting new products that could never be possible with today’s standard 2x2mm sensors.”

“mCube is the first company we’ve seen with a 1.1×1.1mm integrated MEMS+CMOS accelerometer and stretches once again the limits of miniaturization establishing new standards for the industry,” said Guillaume Girardin, Technology & Market Analyst MEMS & Sensors at Yole Développement (Yole). And his colleague, Thibault Buisson, Technology & Market Analyst, Advanced Packaging added: “Clearly, there is a growing trend among consumer companies to transition to wafer-level CSP packaging designs and with the MC3571 inertial motion sensor, mCube is at the forefront of this market evolution and at Yole, we are curious to see how competition will react.”

The high-resolution 14-bit, 3-axis MC3571 accelerometer is built upon the company’s award-winning 3D monolithic single-chip MEMS technology platform, which is widely adopted in mobile handsets with over 100 million units shipped. With the mCube approach, the MEMS sensors are fabricated directly on top of IC electronics in a standard CMOS fabrication facility. Advantages of this monolithic approach include smaller size, higher performance, lower cost, and the ability to integrate multiple sensors onto a single chip.

About the MC3571 Accelerometer

MC3571 is a low-noise, integrated digital output 3-axis accelerometer, which features the following:

  • 8, 10, or 14-bit resolution;
  • Output Data Rates (ODR) up to 1024Hz;
  • Selectable interrupt modes via an I2C bus;
  • Requires only a single external passive component, compared to competitive offerings requiring 2 or more.

Samples of the world’s smallest 1.1×1.1mm WLCSP accelerometer are available to select lead customers now with volume production scheduled for the second quarter of 2016.

 

Graphene has generally been described as a two-dimensional structure — a single sheet of carbon atoms arranged in a regular lattice — but the reality is not so simple: graphene can form wrinkles that make the structure more complicated and that could potentially be exploited in devices. Graphene can also interact with the substrate upon which it is laid, adding further complexity. In research published in Nature Communications, RIKEN scientists have now discovered that wrinkles in graphene can restrict the motion of electrons to one dimension, forming a junction-like structure that changes from zero-gap conductor to semiconductor and back to zero-gap conductor. Moreover, they have used the tip of a scanning tunneling microscope to manipulate the formation of wrinkles, opening the way to the construction of graphene semiconductors not through chemical means — by adding other elements — but by manipulating the carbon structure itself in a form of “graphene engineering.”

The tip of the scanning tunneling microscope (in yellow-orange) is moved over the graphene and the nanowrinkle.


The discovery began when the group was experimenting with creating graphene films using chemical vapor deposition, which is considered the most reliable method. They were working to form graphene on a nickel substrate, but the success of this method depends heavily on the temperature and cooling speed.

According to Hyunseob Lim, the first author of the paper, “We were attempting to grow graphene on a single crystalline nickel substrate, but in many cases we ended up creating a compound of nickel and carbon, Ni2C, rather than graphene. In order to resolve the problem, we tried quickly cooling the sample after the dosing with acetylene, and during that process we accidentally found small nanowrinkles, just five nanometers wide, in the sample.”

They were able to image these tiny wrinkles using scanning tunneling microscopy, and discovered band gap openings within them, indicating that the wrinkles could act as semiconductors. In a conductor with no band gap, electrons and electron holes flow freely; in a semiconductor, there are band gaps between the permitted electron states, and electrons can cross them only under certain conditions. This indicates that the graphene could, depending on the wrinkles, become a semiconductor. The team initially considered two possible origins of the band gap: mechanical strain, which could induce a magnetic phenomenon, or the confinement of electrons in a single dimension. They ruled out the former and concluded that the gap arises from one-dimensional “quantum confinement.”
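As a back-of-the-envelope check on why a roughly five-nanometer-wide wrinkle can open a gap at all, a simple Dirac particle-in-a-box estimate (my own rough calculation, not the value reported in the paper) puts the confinement energy scale at a few tenths of an electronvolt.

# Rough order-of-magnitude estimate of the confinement energy for electrons
# in graphene restricted to a ~5 nm wide channel: E ~ hbar * v_F * pi / W.
# This is a back-of-the-envelope figure, not the band gap reported in the paper.
import math

HBAR = 1.0545718e-34      # J*s
V_FERMI = 1.0e6           # m/s, typical Fermi velocity in graphene
EV = 1.6021766e-19        # J per eV
WIDTH = 5e-9              # m, wrinkle width quoted in the article

energy_ev = HBAR * V_FERMI * math.pi / (WIDTH * EV)
print(f"confinement energy scale ~ {energy_ev:.2f} eV")  # roughly 0.4 eV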

According to Yousoo Kim, head of the Surface and Interface Science Laboratory, who led the team, “Up until now, efforts to manipulate the electronic properties of graphene have principally been done through chemical means, but the downside of this is that it can lead to degraded electronic properties due to chemical defects. Here we have shown that the electronic properties can be manipulated merely by changing the shape of the carbon structure. It will be exciting to see if this could lead to ways to find new uses for graphene.”

Reference

Hyunseob Lim, Jaehoon Jung, Rodney S. Ruoff & Yousoo Kim, “Structurally driven one-dimensional electron confinement in sub-5-nm graphene nanowrinkles”, Nature Communications (2015), 10.1038/ncomms9601

Electrons are so 20th century. In the 21st century, photonic devices, which use light to transport large amounts of information quickly, will enhance or even replace the electronic devices that are ubiquitous in our lives today. But there’s a step needed before optical connections can be integrated into telecommunications systems and computers: researchers need to make it easier to manipulate light at the nanoscale.

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that, designing the first on-chip metamaterial with a refractive index of zero, meaning that the phase of light can travel infinitely fast.

In this zero-index material there is no phase advance; instead, it creates a constant phase, stretching out in infinitely long wavelengths. (Credit: Peter Allen, Harvard SEAS)


This new metamaterial was developed in the lab of Eric Mazur, the Balkanski Professor of Physics and Applied Physics and Area Dean for Applied Physics at SEAS, and is described in the journal Nature Photonics.

“Light doesn’t typically like to be squeezed or manipulated but this metamaterial permits you to manipulate light from one chip to another, to squeeze, bend, twist and reduce diameter of a beam from the macroscale to the nanoscale,” said Mazur. “It’s a remarkable new way to manipulate light.”

Although this infinitely high velocity sounds like it breaks the rules of relativity, it doesn’t. Nothing in the universe travels faster than light carrying information — Einstein is still right about that. But light has another speed, measured by how fast the crests of a wavelength move, known as phase velocity. This speed increases or decreases depending on the material the light is moving through.

When light passes through water, for example, its phase velocity is reduced as its wavelengths get squished together. Once it exits the water, its phase velocity increases again as its wavelength elongates. How much the crests of a light wave slow down in a material is expressed as a ratio called the refraction index — the higher the index, the more the material interferes with the propagation of the wave crests of light. Water, for example, has a refraction index of about 1.3.

When the refraction index is reduced to zero, really weird and interesting things start to happen.

In a zero-index material, there is no phase advance, meaning light no longer behaves as a moving wave, traveling through space in a series of crests and troughs. Instead, the zero-index material creates a constant phase — all crests or all troughs — stretching out in infinitely long wavelengths.  The crests and troughs oscillate only as a variable of time, not space.
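Numerically, the behavior described above follows from the phase velocity v_phase = c/n and the in-medium wavelength λ = λ0/n, so the approach to a zero index is easy to tabulate. In the small illustration below, the telecom wavelength is my example, not a value from the paper.

# Phase velocity and in-medium wavelength as the refraction index shrinks.
# v_phase = c / n and wavelength = lambda0 / n; values are for illustration only.
C = 2.998e8          # speed of light in vacuum, m/s
LAMBDA_0 = 1550e-9   # a typical telecom wavelength in vacuum, m (my example)

for n in (1.3, 1.0, 0.1, 0.01):
    v_phase = C / n
    wavelength = LAMBDA_0 / n
    print(f"n = {n:5.2f}: v_phase = {v_phase:.3e} m/s, wavelength = {wavelength * 1e6:.2f} um")

At n = 1.3 the phase velocity matches the water example above; as n shrinks toward zero, both quantities diverge, which is the constant-phase, infinitely stretched-wavelength picture described in the preceding paragraph.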

This uniform phase allows the light to be stretched or squished, twisted or turned, without losing energy. A zero-index material that fits on a chip could have exciting applications, especially in the world of quantum computing.

“Integrated photonic circuits are hampered by weak and inefficient optical energy confinement in standard silicon waveguides,” said Yang Li, a postdoctoral fellow in the Mazur Group and first author on the paper. “This zero-index metamaterial offers a solution for the confinement of electromagnetic energy in different waveguide configurations because its high internal phase velocity produces full transmission, regardless of how the material is configured.”

The metamaterial consists of silicon pillar arrays embedded in a polymer matrix and clad in gold film. It can couple to silicon waveguides to interface with standard integrated photonic components and chips.

“In quantum optics, the lack of phase advance would allow quantum emitters in a zero-index cavity or waveguide to emit photons which are always in phase with one another,” said Philip Munoz, a graduate student in the Mazur lab and co-author on the paper.  “It could also improve entanglement between quantum bits, as incoming waves of light are effectively spread out and infinitely long, enabling even distant particles to be entangled.”

“This on-chip metamaterial opens the door to exploring the physics of zero index and its applications in integrated optics,” said Mazur.

The paper was co-authored by Shota Kita, Orad Reshef, Daryl I. Vulis, Mei Yin and Marko Loncar, the Tiantsai Lin Professor of Electrical Engineering.