
Common pulsed measurement challenges are defined.

In case you missed it, Part 1 is available here.

BY DAVID WYBAN, Keithley Instruments, a Tektronix Company, Solon, Ohio

For SMU and PMU users, an issue that sometimes arises when making transient pulse measurements is the presence of “humps” (FIGURE 1) in the captured current waveform at the rising and falling edges of the voltage pulse. These humps are caused by capacitances in the system originating from the cabling, the test fixture, the instrument, and even the device itself. When the voltage being output is changed, the stray capacitances in the system must be either charged or discharged and the charge current for this either flows out of or back into the instrument. SMUs and PMUs measure current at the instrument, not at the DUT, so the instrument measures these current flows while a scope probe at the device does not.

FIGURE 1. Humps in the captured current (red) waveform at the rising and falling edges of the voltage pulse.


This phenomenon is seen most often when the change in voltage is large or happens rapidly and the current through the device itself is low. The higher the voltage of the pulse or the faster the rising and falling edges, the larger the current humps will be. For SMUs with rise times in the tens of microseconds, these humps are usually only seen when the voltages are hundreds or even thousands of volts and the current through the device is only tens of microamps or less. However, for PMUs where the rise times are often less than 1μs, these humps can become noticeable on pulses of only a couple of volts, even when the current through the device is as high as several milliamps.
Although these humps in the current waveform may seem like a big problem, they are easy to eliminate. The humps are the result of the current being measured at the high side of the device, where the voltage is changing. Adding a second SMU or PMU at the low side of the device to measure current will make these humps go away: at the low side of the device the voltage does not change, so no charge or discharge currents flow there and the current measured at the instrument will match the current at the device. If this isn’t an option, the problem can be minimized by reducing the stray capacitance in the system, most simply by reducing the length of the cables. Shorter cables mean less stray capacitance, which reduces the size of the humps in the current waveform.
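
As a rough sanity check on how large these humps can be, the charging current is simply i = C × dv/dt. The short Python sketch below uses purely illustrative values for cable capacitance and rise time (assumptions, not figures from the article) to estimate the current a PMU-class edge would push into the cabling:

    # Rough estimate of the capacitive charging current that shows up as a
    # "hump" at a pulse edge: i = C * dv/dt.
    # The values below are illustrative assumptions, not figures from the article.
    cable_capacitance = 100e-12   # F; on the order of 1 m of coaxial cable
    pulse_amplitude = 2.0         # V
    rise_time = 100e-9            # s; a PMU-class edge

    charge_current = cable_capacitance * pulse_amplitude / rise_time
    print(f"Approximate charging current: {charge_current * 1e3:.1f} mA")  # ~2.0 mA

A charging current of a few milliamps is comparable to the device current in many low-power tests, which is why the hump is so visible on fast, low-current pulses.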

The next common pulse measurement issue is test lead resistance. As test currents get higher, the impact of this resistance becomes increasingly significant. FIGURE 2 shows an SMU that is performing a pulse I-V measurement at 2V across a 50mΩ load. Based on Ohm’s Law, one might expect to measure a current through the device of 40A, but when the test is actually performed, the level of current measured is only 20A. That “missing” 20A is the result of test lead resistance. In fact, we were not pulsing 2V into 50mΩ but into 100mΩ instead, with 25mΩ per test lead. With 50mΩ of lead resistance, half of the output voltage sourced was dropped in the test leads and only half of it ever reached the device.
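
The numbers in FIGURE 2 follow directly from Ohm’s law applied to the whole source loop; a minimal sketch reproducing them:

    # Ohm's law applied to the full source loop of FIGURE 2.
    v_source = 2.0     # V, programmed pulse level
    r_load = 50e-3     # ohm, device under test
    r_lead = 25e-3     # ohm, each test lead

    i_expected = v_source / r_load                 # 40 A if the leads were ideal
    i_actual = v_source / (r_load + 2 * r_lead)    # 20 A with 50 mohm of lead resistance
    v_at_dut = i_actual * r_load                   # only 1 V actually reaches the device
    print(i_expected, i_actual, v_at_dut)          # 40.0 20.0 1.0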

FIGURE 2. Impact of test lead resistance.


To characterize the device correctly, it’s essential to know not only the current through the device but the actual voltage at the device. On SMUs this is done by using remote voltage sensing. Using a second set of test leads allows the instrument to sense the voltage directly at the device; because almost no current flows through these leads, the voltage fed back to the instrument will match the voltage at the device. Also, because these leads feed the voltage at the device directly back into the SMU’s feedback loop, the SMU can compensate for the voltage drop across the test leads by outputting a higher voltage at its output terminals.

Although SMUs can use remote sensing to compensate for voltage drops in the test leads, there is a limit to how much drop they can compensate for. For most SMUs, this maximum drop is about 3V/lead. If the voltage drop per lead reaches or exceeds this limit, strange things can start happening. The first symptom is that the rise and fall times of the voltage pulse slow down, significantly increasing the time required to make a settled measurement. Given enough time for the pulse to settle, the voltage measurements may come back as the expected value, but the measured current will be lower than expected because the SMU is actually sourcing a lower voltage at the DUT than the level it is programmed to source.

If you exceed the source-sense lead drop while sourcing current, a slightly different set of strange behaviors may occur. The current measurement will come back as the expected value and will be correct because current is measured internally and this measurement is not affected by lead drop, but the voltage reading will be higher than expected. In transient pulse measurements, you may even see the point at which the source-sense lead drop limit was exceeded as the measured voltage suddenly starts increasing again after it appeared to be settling.

These strange behaviors can be difficult to detect in the measured data if you do not know what voltage to expect from your device. Therefore, inspecting your pulse waveforms fully when validating your test system is essential.

Minimizing test lead resistance is essential to ensuring quality pulse measurements. There are two ways to do this:

Minimize the length of the test leads. Wire resistance increases at a rate that’s directly proportional to the length of the wire; doubling the wire’s length doubles the resistance. Keeping lead lengths no greater than 3 meters is highly recommended for high current pulse applications.

Use wire of the appropriate diameter or gauge for the current being delivered. The resistance of a wire is inversely proportional to its cross-sectional area. Increasing the diameter, or reducing the gauge number, of the wire increases this area and reduces the resistance. For pulse applications up to 50A, a wire gauge of no greater than 12 AWG is recommended; for applications up to 100A, it’s best to use no greater than 10 gauge.
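
To put rough numbers on these two guidelines, wire resistance follows R = ρL/A. The sketch below uses the handbook resistivity of copper and nominal AWG cross sections (assumed values, not from the article) to show the drop a single lead would introduce:

    # Wire resistance R = rho * L / A and the resulting voltage drop per lead.
    # Copper resistivity and AWG cross sections are standard handbook values.
    RHO_CU = 1.68e-8                        # ohm*m, copper at room temperature
    AREA = {12: 3.31e-6, 10: 5.26e-6}       # m^2, nominal cross sections

    def lead_drop(awg, length_m, current_a):
        r = RHO_CU * length_m / AREA[awg]
        return r, r * current_a

    print(lead_drop(12, 3.0, 50.0))    # ~15 mohm and ~0.76 V dropped per 3 m lead at 50 A
    print(lead_drop(10, 3.0, 100.0))   # ~9.6 mohm and ~0.96 V dropped per 3 m lead at 100 A

Even with the recommended lead lengths and gauges, the drop per lead approaches the source-sense compensation limit of a typical SMU at these currents, which is why remote sensing and short, heavy leads go together.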

Excessive test lead inductance is another common issue. In DC measurements, test lead inductance is rarely considered because it has little effect on the measurements. However, in pulse measurements, lead inductance has a huge effect and can play havoc with a system’s ability to take quality measurements.

FIGURE 3. Humps in the voltage waveform of transient pulse measurements due to test system inductance.


Humps in the voltage waveform of transient pulse measurements (FIGURE 3) are a common problem when generating current pulses. Just as with humps in the current waveforms, these humps can be seen in the data from the instrument but are nowhere to be seen when measured at the device with an oscilloscope. These humps are the result of the additional voltage seen at the instrument due to inductance in the cabling between the instrument and the device.

Equation 1: V = L × di/dt

Equation 1 describes the relation between inductance and voltage. It shows that, for a given change in current over change in time (di/dt), the larger the inductance L, the larger the resulting voltage will be. It also tells us that, for a fixed inductance L, the larger the change in current or the smaller the change in time, the larger the resulting voltage will be. This means that the larger the pulse and/or the faster the rise and fall times, the bigger the voltage humps will be.
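
For a feel of the magnitudes involved, a short sketch applying Equation 1 with illustrative (assumed) values for lead inductance and edge speed:

    # Equation 1 applied to lead inductance: V = L * di/dt.
    # Values are illustrative assumptions for a fast current pulse.
    lead_inductance = 1e-6    # H; roughly 1 m of loosely spaced test leads
    delta_i = 1.0             # A, current step of the pulse
    rise_time = 1e-6          # s, rise time of the edge

    v_hump = lead_inductance * delta_i / rise_time
    print(f"Voltage hump at the edge: {v_hump:.1f} V")   # 1.0 V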

To remedy this problem, instruments like SMUs offer remote voltage sensing, allowing them to measure around this lead inductance and measure the voltage directly at the device. However, as with excessive lead resistance, excessive lead inductance can also cause a problem for SMUs. If the inductance is large enough and causes the source-sense lead drop to exceed the SMU’s limit, transient pulse measurement data will have voltage measurement errors on the rising and falling edges similar to the ones seen when lead resistance is too large. Pulse I-V measurements are generally unaffected by lead inductance because the measurements are taken during the flat portion of the pulse where the current is not changing. However, excessive lead inductance will slow the rising and falling edges of voltage pulses and may cause ringing on current pulses, thereby requiring larger pulse widths to make a good settled pulse I-V measurement.


The Anatomy of a Pulse
The amplitude and base describe the height of the pulse in the pulse waveform. Base describes the DC offset of the waveform from 0; this is the level the waveform will be at both before and after the pulse. Amplitude is the level of the waveform relative to the base level, so the pulse’s absolute high level is equal to the base plus the amplitude. For example, a pulse waveform with a base of 1V and an amplitude of 2V would have a low level of 1V and a high level of 3V.
Pulse width is the time that the pulse signal is applied. It is commonly defined as the width in time of the pulse at half maximum, also known as Full Width at Half Maximum (FWHM). This industry standard definition means the pulse width is measured where the pulse height is 50% of the amplitude.
Pulse period is the length in time of the entire pulse waveform before it is repeated and can easily be measured by measuring the time from the start of one pulse to the next.
The ratio of pulse width over pulse period is the duty cycle of the pulse waveform.
A pulse’s rise time and fall time are the times it takes for the waveform to transition from the low level to the high level and from the high level back down to the low level. The industry standard way to measure the rise time is to measure the time it takes the pulse waveform to go from 10% amplitude to 90% amplitude on the rising edge. Fall time is defined as the time it takes for the waveform to go from 90% amplitude to 10% amplitude on the falling edge.
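
Tying these definitions together, a minimal sketch computing the derived levels and duty cycle from the example values above (the width and period are assumed purely for illustration):

    # Derived pulse parameters from the definitions above,
    # using the example levels from the text (base 1 V, amplitude 2 V).
    base = 1.0          # V, level before and after the pulse
    amplitude = 2.0     # V, pulse height relative to the base
    width = 10e-6       # s, FWHM pulse width (illustrative)
    period = 100e-6     # s, start of one pulse to the start of the next (illustrative)

    low_level = base                  # 1 V
    high_level = base + amplitude     # 3 V
    duty_cycle = width / period       # 0.1, i.e. a 10 % duty cycle

    # Rise and fall times are measured between the 10 % and 90 % amplitude points:
    v_10 = base + 0.10 * amplitude    # 1.2 V
    v_90 = base + 0.90 * amplitude    # 2.8 V
    print(low_level, high_level, duty_cycle, v_10, v_90)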

Although SMUs are able to compensate for some lead inductance, PMUs have no compensation features, so the effects of inductance must be dealt with directly, such as by:

  • Reducing the size of the change in current by reducing the magnitude of the pulse.
  • Increasing the length of the transition times by increasing the rise and fall times.
  • Reducing the inductance in the test leads.

Depending on the application or even the instrument, the first two measures are usually infeasible, which leaves reducing the inductance in the test leads. The amount of inductance in a set of test leads is proportional to the loop area between the HI and LO leads. So, to reduce the inductance in the leads and therefore the size of the humps, we must reduce the loop area, which is easily done by twisting the leads together to create a twisted pair or by using coaxial cable. Loop area can be reduced further by simply reducing the length of the cable.
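
As a rough illustration of why loop area matters, the external inductance per metre of a pair of parallel leads grows with the ratio of lead spacing to wire radius. The sketch below uses the standard two-wire-line formula with assumed dimensions (not values from the article):

    # External inductance per metre of a pair of parallel leads:
    # L' = (mu0 / pi) * acosh(D / (2a)), with centre-to-centre spacing D
    # and wire radius a. Smaller spacing (smaller loop area) means less
    # inductance, which is why twisted pair or coax helps.
    from math import acosh, pi

    MU0 = 4e-7 * pi       # H/m
    radius = 1.0e-3       # m, 1 mm wire radius (assumed)

    def inductance_per_metre(spacing_m):
        return MU0 / pi * acosh(spacing_m / (2 * radius))

    print(inductance_per_metre(50e-3))    # ~1.6e-06 H/m with leads ~5 cm apart
    print(inductance_per_metre(2.5e-3))   # ~2.8e-07 H/m with leads nearly touching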

The exceptional properties of tiny molecular cylinders known as carbon nanotubes have tantalized researchers for years because of the possibility they could serve as successors to silicon in laying the logic for smaller, faster and cheaper electronic devices.

First of all they are tiny — on the atomic scale and perhaps near the physical limit of how small you can shrink a single electronic switch. Like silicon, they can be semiconducting in nature, a fact that is essential for circuit boards, and they can undergo fast and highly controllable electrical switching.

But a big barrier to building useful electronics with carbon nanotubes has always been the fact that when they’re arrayed into films, a certain portion of them will act more like metals than semiconductors — an unforgiving flaw that fouls the film, shorts the circuit and throws a wrench into the gears of any potential electronic device.

In fact, according to University of Illinois at Urbana-Champaign professor John Rogers, the purity needs to exceed 99.999 percent — meaning even one bad tube in 100,000 is enough to kill an electronic device. “If you have lower purity than that,” he said, “that class of materials will not work for semiconducting circuits.”

Now Rogers and a team of researchers have shown how to strip out the metallic carbon nanotubes from arrays using a relatively simple, scalable procedure that does not require expensive equipment. Their work is described this week in the Journal of Applied Physics, from AIP Publishing.

The Road to Purification

Though it has been a persistent problem for the last 10-15 years, the challenge of making uniform, aligned arrays of carbon nanotubes packed with good densities on thin films has largely been solved by several different groups of scientists in recent years, Rogers said.

That just left the second problem, which was to find a way to purify the material to make sure that none of the tubes were metallic in character — a thorny problem that had remained unsolved. There were some methods of purification that were easy to do but fell far short of the level of purification necessary to make useful electronic components. Very recent approaches offer the right level of purification but rely on expensive equipment, putting the process out of reach of most researchers.

As the team reports this week, they were able to deposit a thin coating of organic material directly on top of a sheet of arrayed nanotubes in contact with a sheet of metal. They then applied current across the sheet, which allowed the current to flow through the nanotubes that were metal conductors — but not the bulk of the tubes, which were semiconducting.

The current heated up the metal nanotubes a tiny amount — just enough to create a “thermal capillary flow” that opened up a trench in the organic topcoat above them. Unprotected, the metallic tubes could then be etched away using a standard benchtop instrument, and then the organic topcoat could be washed away. This left an electronic wafer coated with semiconducting nanotubes free of metallic contaminants, Rogers said. They tested it by building arrays of transistors, he said.

“You end up with a device that can switch on and off as expected, based on purely semiconducting character,” Rogers said.

Pibond Oy, a specialty chemical manufacturer of advanced semiconductor solutions, today introduced its new product line of liquid spin-on metal oxide hardmask materials. Targeting 10nm node semiconductor processing, 3D NAND, power ICs as well as MEMS applications, this technology enables advanced device manufacturing through reduced cost of ownership (COO) and simplified processing.

With the ever-increasing demand for increased functionality in applications from personal computing to mobile to cloud storage to wearables, the semiconductor industry is targeting smaller and smaller nodes and in so doing has lived up to Gordon Moore’s prediction. However, the limits of current lithography processes and the uncertainty surrounding next generation approaches, compounded by their costs, have cast doubt on whether Moore’s law has finally run “out of steam.”

Pibond’s materials are designed to bridge this gap, providing continuity for existing high-end fabs, while maintaining compatibility for future technology roadmaps. These novel polymers represent the next generation of liquid spin-on hard mask products and are suitable for advanced lithographic patterning, 2.5/3D-IC packaging, as well as MEMS processing.

Pibond’s SAP 100 product line is based on patent pending organo-siloxane modified spin-on metal oxide thin films that are compatible with advanced photoresist lithography and other semiconductor etch processes. The product line offers tunable optical (n&k) properties matching critical requirements of advanced lithography. Furthermore, it shows extraordinary etch resistance in plasma etching processes even at very low film thicknesses. Unlike most conventional hard masks, the Pibond SAP hard mask is applied with low cost spin-on track equipment, enabling high throughput and lowering the overall COO. Importantly, it can be applied with process equipment common in both state-of-the-art and legacy fabs, thus eliminating the need for new and potentially capital-intensive equipment. Future product releases in the SAP 100 family will be directly photopatternable, further decreasing process complexity and COO.

“As process throughput and the demand for ever increasing device performance continue to challenge the semiconductor industry, we are happy to announce this new class of products based on advanced metal oxide and siloxane polymers. Capable of extending the runway for existing lithography tools and processes, thereby lowering the operating costs of current and future fabs, they are also paving the way for the future as new technologies like EUV mature,” said Jonathan Glen, Chairman of Pibond. “As the industry demands new materials to meet the needs of EUV lithography, 3D memory, power ICs, image sensors, TSV and MEMS applications, Pibond is well placed to be a driving force in this transition.”

The promising new material molybdenum disulfide (MoS2) has an inherent issue that’s steeped in irony. The material’s greatest asset–its monolayer thickness–is also its biggest challenge.

Monolayer MoS2’s ultra-thin structure is strong, lightweight, and flexible, making it a good candidate for many applications, such as high-performance, flexible electronics. Such a thin semiconducting material, however, has very little interaction with light, limiting the material’s use in light emitting and absorbing applications.

“The problem with these materials is that they are just one monolayer thick,” said Koray Aydin, assistant professor of electrical engineering and computer science at Northwestern University’s McCormick School of Engineering. “So the amount of material that is available for light emission or light absorption is very limited. In order to use these materials for practical photonic and optoelectric applications, we needed to increase their interactions with light.”

Aydin and his team tackled this problem by combining nanotechnology, materials science, and plasmonics, the study of the interactions between light and metal. The team designed and fabricated a series of silver nanodiscs and arranged them in a periodic fashion on top of a sheet of MoS2. Not only did they find that the nanodiscs enhanced light emission, but they determined the specific diameter of the most successful disc, which is 130 nanometers.

“We have known that these plasmonic nanostructures have the ability to attract and trap light in a small volume,” said Serkan Butun, a postdoctoral researcher in Aydin’s lab. “Now we’ve shown that placing silver nanodiscs over the material results in twelve times more light emission.”

The use of the nanostructures–as opposed to using a continuous film to cover the MoS2–allows the material to retain its flexible nature and natural mechanical properties.

Supported by Northwestern’s Materials Research Science and Engineering Center and the Institute for Sustainability and Energy at Northwestern, the research is described in the March 2015 online issue of Nano Letters. Butun is first author of the paper. Sefaattin Tongay, assistant professor of materials science and engineering at Arizona State University, provided the large-area monolayer MoS2 material used in the study.

With enhanced light emission properties, MoS2 could be a good candidate for light emitting diode technologies. The team’s next step is to use the same strategy for increasing the material’s light absorption abilities to create a better material for solar cells and photodetectors.

“This is a huge step, but it’s not the end of the story,” Aydin said. “There might be ways to enhance light emission even further. But, so far, we have successfully shown that it’s indeed possible to increase light emission from a very thin material.”

Chemists from Brown University have come up with a way to make new nanomaterials from a silicon-based compound. The materials can be made in a variety of morphologies and could be used in semiconductor devices, optics or batteries.

In a paper published in the journal Nano Letters, the researchers describe methods for making nanoribbons and nanoplates from a compound called silicon telluride. The materials are pure, p-type semiconductors (positive charge carriers) that could be used in a variety of electronic and optical devices. Their layered structure can take up lithium and magnesium, meaning the materials could also be used to make electrodes in those types of batteries.


Credit: Koski lab / Brown University

“Silicon-based compounds are the backbone of modern electronics processing,” said Kristie Koski, assistant professor of chemistry at Brown, who led the work. “Silicon telluride is in that family of compounds, and we’ve shown a totally new method for using it to make layered, two-dimensional nanomaterials.”

Koski and her team synthesized the new materials through vapor deposition in a tube furnace. When heated in the tube, silicon and tellurium vaporize and react to make a precursor compound that is deposited on a substrate by an argon carrier gas. The silicon telluride then grows from the precursor compound.

Different structures can be made by varying the furnace temperature and using different treatments of the substrate. By tweaking the process, the researchers made nanoribbons that are about 50 to 1,000 nanometers in width and about 10 microns long. They also made nanoplates oriented either flat on the substrate or standing upright.

“We see the standing plates a lot,” Koski said. “They’re half hexagons sitting upright on the substrate. They look a little like a graveyard.”

Each of the different shapes has a different orientation of the material’s crystalline structure. As a result, they all have different properties and could be used in different applications.

The researchers also showed that the material can be “doped” through the use of different substrates. Doping is a process through which tiny impurities are introduced to change a material’s electrical properties. In this case, the researchers showed that silicon telluride can be doped with aluminum when grown on a sapphire substrate. That process could be used, for example, to change the material from a p-type semiconductor (one with positive charge carriers) to an n-type (one with negative charge carriers).

The materials are not particularly stable out in the environment, Koski says, but that’s easily remedied.

“What we can do is oxidize the silicon telluride and then bake off the tellurium, leaving a coating of silicon oxide,” she said. “That coating protects it and it stays pretty stable.”

From here, Koski and her team plan to continue testing the material’s electronic and optical properties. They’re encouraged by what they’ve seen so far.

“We think this is a good candidate for bringing the properties of 2-D materials into the realm of electronics,” Koski said.

Micron Technology, Inc. and Intel Corporation today revealed the availability of their 3D NAND technology, the world’s highest-density flash memory. Flash is the storage technology used inside the lightest laptops, fastest data centers, and nearly every cellphone, tablet and mobile device.


This new 3D NAND technology, which was jointly developed by Intel and Micron, stacks layers of data storage cells vertically with extraordinary precision to create storage devices with three times higher capacity than competing NAND technologies. This enables more storage in a smaller space, bringing significant cost savings, low power usage and high performance to a range of mobile consumer devices as well as the most demanding enterprise deployments.

Planar NAND flash memory is nearing its practical scaling limits, posing significant challenges for the memory industry. 3D NAND technology is poised to make a dramatic impact by keeping flash storage solutions aligned with Moore’s Law, the trajectory for continued performance gains and cost savings, driving more widespread use of flash storage.

“Micron and Intel’s collaboration has created an industry-leading solid-state storage technology that offers high density, performance and efficiency and is unmatched by any flash today,” said Brian Shirley, vice president of Memory Technology and Solutions at Micron Technology. “This 3D NAND technology has the potential to create fundamental market shifts. The depth of the impact that flash has had to date—from smartphones to flash-optimized supercomputing—is really just scratching the surface of what’s possible.”

“Intel’s development efforts with Micron reflect our continued commitment to offer leading and innovative non-volatile memory technologies to the marketplace,” said Rob Crooke, senior vice president and general manager, Non-Volatile Memory Solutions Group, Intel Corporation. “The significant improvements in density and cost enabled by our new 3D NAND technology innovation will accelerate solid-state storage in computing platforms.”

Innovative Process Architecture

One of the most significant aspects of this technology is in the foundational memory cell itself. Intel and Micron chose to use a floating gate cell, a universally utilized design refined through years of high-volume planar flash manufacturing. This is the first use of a floating gate cell in 3D NAND, which was a key design choice to enable greater performance and increase quality and reliability.

The new 3D NAND technology stacks flash cells vertically in 32 layers to achieve 256Gb multilevel cell (MLC) and 384Gb triple-level cell (TLC) die that fit within a standard package. These capacities can enable gum stick-sized SSDs with more than 3.5TB of storage and standard 2.5-inch SSDs with greater than 10TB. Because capacity is achieved by stacking cells vertically, the individual cell dimensions can be considerably larger. This is expected to increase both performance and endurance and make even the TLC designs well-suited for data center storage.

At this week’s OFC 2015, the largest global conference and exposition for optical communications, nanoelectronics research center imec, its associated lab at Ghent University (Intec), and Stanford University have demonstrated a compact germanium (Ge) waveguide electro-absorption modulator (EAM) with a modulation bandwidth beyond 50GHz. Combining state-of-the-art extinction ratio and low insertion loss with an ultra-low capacitance of just 10fF, the demonstrated EAM marks an important milestone for the realization of next-generation silicon integrated optical interconnects at 50Gb/s and beyond.

Future chip-level optical interconnects require integrated optical modulators with stringent requirements for modulation efficiency and bandwidth, as well as for footprint and thermal robustness. In the presented work, imec and its partners have improved the state-of-the-art for Ge EAMs on Si, realizing higher modulation speed, higher modulation efficiency and lower capacitance. This was obtained by fully leveraging the strong confinement of the optical and electrical fields in the Ge waveguides, as enabled in imec’s 200mm Silicon Photonics platform. The EAM was implemented along with various Si waveguide devices, highly efficient grating couplers, various active Si devices, and high speed Ge photodetectors, paving the way to industrial adoption of optical transceivers based on this device.

“This achievement is a milestone for realizing silicon optical transceivers for datacom applications at 50Gb/s and beyond,” stated Joris Van Campenhout, program director at imec. “We have developed a modulator that addresses the bandwidth and density requirements for future chip-level optical interconnects.”

Companies can benefit from imec’s Silicon Photonics platform (iSiPP25G) through established standard cells, or by exploring the functionality of their own designs in Multi-Project Wafer (MPW) runs. The iSiPP25G technology is available via ICLink services and MOSIS, a provider of low-cost prototyping and small volume production services for custom ICs.

Attempting to develop a novel type of permanent magnet, a team of researchers at Trinity College in Dublin, Ireland has discovered a new class of magnetic materials based on Mn-Ga alloys.

Described as a zero-moment half metal this week in the journal Applied Physics Letters, from AIP Publishing, the new Mn2RuxGa magnetic alloy has some unique properties that give it the potential to revolutionize data storage and significantly increase wireless data transmission speeds.

The discovery realizes a goal researchers have sought for several decades: to make a material with no net magnetic moment, but full spin polarization. Having no magnetic moment — essentially a measure of the net strength of a magnet — frees the material from its own demagnetizing forces and means that it creates no stray magnetic fields. Zero moment also means being immune to the influence of any external magnetic fields, unlike conventional ferromagnets. As a result, there would be no radiation losses during magnetic switching of the material, which occurs as data is read or written, for instance. This property, coupled with full spin polarization means that the material should be extremely efficient in spintronics – the electronics of magnetized electrons.

Furthermore, it promises to shift the ferromagnetic resonance frequency, the maximum speed at which data is written or retrieved, into the low terahertz range. This range is currently of great interest for fast data transmission, but it is unexploited since it is difficult to make effective, yet reasonably-priced emitters and detectors that operate at such extremely high frequencies.

Though scientists have long recognized the merits of such a ‘zero-moment half metal’, nobody has been able to synthesize one. Several have been proposed through the years, but none of them delivered this combination of properties.

Now the Trinity College team, led by Michael Coey, studying the spin-dependent transport properties of Mn2RuxGa (MRG) thin films as a function of Ru concentration, has developed a zero-moment half metal that is free from demagnetizing forces and creates no stray fields, essentially removing two of the obstacles to integrating magnetic elements in densely packed, nanometer-scale memory elements and millimeter-wave generators.

The secret was in combining the Manganese with the Ruthenium, said Karsten Rode, a co-author on the new paper.

“Mn is in the Goldilocks zone – the magnetic coupling of the electrons is neither too strong nor too weak – just right,” he said. “Ruthenium plays a critical role since without any Ru, even if one were able to crystallize the alloy in the right structure, the electronic bands contributing to the conduction would be only slightly spin polarized.”

Building a better magnet

The solution the Trinity College team came up with was to design a material such that the moments of two inequivalent, oppositely aligned magnetic Mn sublattices perfectly compensated for one another — essentially cancelling each other out and giving no net moment. But, in a simplified picture, only one of these sublattices actually carries current — so that the result was a 100 percent spin polarized current with no net magnetic moment.

The development of this new material required a delicate balance. Spin-polarized current is due to the coupling of electrons in localized magnetic states (d-states) with mobile electrons in current-carrying states (s-states). If this coupling is too strong in a two-sublattice system, the spin polarization of the mobile carriers in the material tends to average to zero, but on the other hand, if the coupling is too weak, only a small fraction of the s-like electrons are spin polarized, and this would result in a very low spontaneous Hall effect. It is the spontaneous Hall effect that provides one piece of evidence of the spin polarization at room temperature.

Rode explained that the Manganese in the material was key to achieving this breakthrough because it allowed them to create a highly spin-polarized band of s-like electrons, yet keeping the magnetic coupling weak enough to allow for one of the spin bands to be pushed away from the Fermi level where all the conduction takes place. The addition by Ruthenium of both electrons and extra electronic states was also key because that made it possible to achieve zero net moment.

“The most difficult part was to understand that our new material was truly special,” said Rode. “Our first experimental results could have been dismissed as a weakly-anisotropic ferrimagnet of no particular interest. Once we realized that there was a possibility that we could achieve full compensation of the magnetic moments, coupled with a large spin polarization, we started checking to see if the ‘zero-moment half metal’ hypothesis would stand intense scrutiny – and it did.”

Now that the first example of this new type of magnet has been developed, the team will work to realize its benefits. “We need to demonstrate the spintronic functionality in a practical device,” Rode said. “This is challenging for a Mn-based alloy. The manganese is easily oxidized and this has to be avoided in a fully-functional thin-film device stack. But now that we think we understand the conditions necessary to create a zero-moment half metal, it is likely that MRG will not long remain an only child.”

University of Washington scientists have built a new nanometer-sized laser — using the thinnest semiconductor available today — that is energy efficient, easy to build and compatible with existing electronics.

Lasers play essential roles in countless technologies, from medical therapies to metal cutters to electronic gadgets. But to meet modern needs in computation, communications, imaging and sensing, scientists are striving to create ever-smaller laser systems that also consume less energy.

The ultra-thin semiconductor, which is about 100,000 times thinner than a human hair, stretches across the top of the photonic cavity. Credit: University of Washington


The UW nanolaser, developed in collaboration with Stanford University, uses a tungsten-based semiconductor only three atoms thick as the “gain material” that emits light. The technology is described in a paper published in the March 16 online edition of Nature.

“This is a recently discovered, new type of semiconductor which is very thin and emits light efficiently,” said Sanfeng Wu, lead author and a UW doctoral candidate in physics. “Researchers are making transistors, light-emitting diodes, and solar cells based on this material because of its properties. And now, nanolasers.”

Nanolasers — which are so small they can’t be seen with the eye — have the potential to be used in a wide range of applications from next-generation computing to implantable microchips that monitor health problems. But nanolasers so far haven’t strayed far from the research lab.

Other nanolaser designs use gain materials that are either much thicker or that are embedded in the structure of the cavity that captures light. That makes them difficult to build and to integrate with modern electrical circuits and computing technologies.

The UW version, instead, uses a flat sheet that can be placed directly on top of a commonly used optical cavity, a tiny cave that confines and intensifies light. The ultrathin nature of the semiconductor — made from a single layer of a tungsten-based molecule — yields efficient coordination between the two key components of the laser.

The UW nanolaser requires only 27 nanowatts to kickstart its beam, which means it is very energy efficient.

Other advantages of the UW team’s nanolaser are that it can be easily fabricated, and it can potentially work with silicon components common in modern electronics. Using a separate atomic sheet as the gain material offers versatility and the opportunity to more easily manipulate its properties.

“You can think of it as the difference between a cell phone where the SIM card is embedded into the phone versus one that’s removable,” said co-author Arka Majumdar, UW assistant professor of electrical engineering and of physics.

“When you’re working with other materials, your gain medium is embedded and you can’t change it. In our nanolasers, you can take the monolayer out or put it back, and it’s much easier to change around,” he said.

The researchers hope this and other recent innovations will enable them to produce an electrically-driven nanolaser that could open the door to using light, rather than electrons, to transfer information between computer chips and boards.

The current process can cause systems to overheat and wastes power, so companies such as Facebook, Oracle, HP, Google and Intel with massive data centers are keenly interested in more energy-efficient solutions.

Using photons rather than electrons to transfer that information would consume less energy and could enable next-generation computing that breaks current bandwidth and power limitations. The recently proven UW nanolaser technology is one step toward making optical computing and short distance optical communication a reality.

“We all want to make devices run faster with less energy consumption, so we need new technologies,” said co-author Xiaodong Xu, UW associate professor of materials science and engineering and of physics. “The real innovation in this new approach of ours, compared to the old nanolasers, is that we’re able to have scalability and more controls.”

Still, there’s more work to be done in the near future, Xu said. Next steps include investigating photon statistics to establish the coherent properties of the laser’s light.

Creating large amounts of polymer nanofibers dispersed in liquid is a challenge that has vexed researchers for years. But engineers and researchers at North Carolina State University and one of its start-up companies have now reported a method that can produce unprecedented amounts of polymer nanofibers, which have potential applications in filtration, batteries and cell scaffolding.

In a paper published online in Advanced Materials, the NC State researchers and colleagues from industry, including NC State start-up company Xanofi, describe the method that allows them to fabricate polymer nanofibers on a massive scale.

The method – fine-tuned after nearly a decade of increasing success in producing micro- and nanoparticles of different shapes – works as simply as dropping a liquid solution of a polymer into a beaker containing a spinning cylinder. Glycerin – a common and safe liquid that has many uses – is used to shear the polymer solution inside the beaker along with an antisolvent like water. When you take out the rotating cylinder, says Dr. Orlin Velev, Invista Professor of Chemical and Biomolecular Engineering at NC State and the corresponding author of the paper describing the research, you find a mat of nanofibers wrapped around it.

When they first started investigating the liquid shearing process, the researchers created polymer microrods, which could have various useful applications in foams and consumer products.

“However, while investigating the shear process we came up with something strange. We discovered that these rods were really just pieces of ‘broken’ fibers,” Velev said. “We didn’t quite have the conditions set perfectly at that time. If you get the conditions right, the fibers don’t break.”

NC State patented the liquid shear process in 2006, with a series of subsequent patents following as Velev and his colleagues continued to work to perfect the process and its outcome. First, they created microfibers and nanoribbons as they investigated the process.

“Microfibers, nanorods and nanoribbons are interesting and potentially useful, but you really want nanofibers,” Velev said. “We achieved this during the scaling up and commercialization of the technology.”

Velev engaged with NC State’s Office of Technology Transfer and the university’s TEC (The Entrepreneurship Collaborative) program to commercialize the discoveries. They worked with the experienced entrepreneur Miles Wright to start a company called Xanofi to advance the quest for nanofibers and the most efficient way to make mass quantities of them.

“We can now create kilograms of nanofibers per hour using this simple continuous flow process, which when scaled up becomes a ‘nanofiber gusher,'” Velev said. “Depending on the concentrations of liquids, polymers and antisolvents, you can create multiple types of nanomaterials of different shapes and sizes.”

“Large quantities are paramount in nanomanufacturing, so anything scalable is important,” said Wright, the CEO of Xanofi and a co-author on the paper. “When we produce the nanofibers via continuous flow, we get exactly the same nanofibers you would get if you were producing small quantities of them. The fabrication of these materials in liquid is advantageous because you can create truly three-dimensional nanofiber substrates with very, very high overall surface area. This leads to many enhanced products ranging from filters to cell scaffolds, printable bioinks, battery separators, plus many more.”