Scientists have developed what they believe is the thinnest-possible semiconductor, a new class of nanoscale materials made in sheets only three atoms thick.

The University of Washington researchers have demonstrated that two of these single-layer semiconductor materials can be connected in an atomically seamless fashion known as a heterojunction. This result could be the basis for next-generation flexible and transparent computing, better light-emitting diodes, or LEDs, and solar technologies.

As seen under an optical microscope, the heterostructures have a triangular shape. The two different monolayer semiconductors can be recognized through their different colors. Photo credit: U of Washington

“Heterojunctions are fundamental elements of electronic and photonic devices,” said senior author Xiaodong Xu, a UW assistant professor of materials science and engineering and of physics. “Our experimental demonstration of such junctions between two-dimensional materials should enable new kinds of transistors, LEDs, nanolasers, and solar cells to be developed for highly integrated electronic and optical circuits within a single atomic plane.”

The research was published online this week in Nature Materials.

The researchers discovered that two flat semiconductor materials can be connected edge-to-edge with crystalline perfection. They worked with two single-layer, or monolayer, materials – molybdenum diselenide and tungsten diselenide – that have very similar structures, which was key to creating the composite two-dimensional semiconductor.

Collaborators from the electron microscopy center at the University of Warwick in England found that all the atoms in both materials formed a single honeycomb lattice structure, without any distortions or discontinuities. This provides the strongest possible link between two single-layer materials, necessary for flexible devices. Within the same family of materials it is feasible that researchers could bond other pairs together in the same way.

A high-resolution scanning transmission electron microscopy (STEM) image shows the lattice structure of the heterojunctions in atomic precision. Photo credit: U of Warwick

The researchers created the junctions in a small furnace at the UW. First, they inserted a powder mixture of the two materials into a chamber heated to 900 degrees Celsius (1,652 F). Hydrogen gas was then passed through the chamber and the evaporated atoms from one of the materials were carried toward a cooler region of the tube and deposited as single-layer crystals in the shape of triangles.

After a while, evaporated atoms from the second material then attached to the edges of the triangle to create a seamless semiconducting heterojunction.

“This is a scalable technique,” said Sanfeng Wu, a UW doctoral student in physics and one of the lead authors. “Because the materials have different properties, they evaporate and separate at different times automatically. The second material forms around the first triangle that just previously formed. That’s why these lattices are so beautifully connected.”

With a larger furnace, it would be possible to mass-produce sheets of these semiconductor heterostructures, the researchers said. On a small scale, it takes about five minutes to grow the crystals, with up to two hours of heating and cooling time.

“We are very excited about the new science and engineering opportunities provided by these novel structures,” said senior author David Cobden, a UW professor of physics. “In the future, combinations of two-dimensional materials may be integrated together in this way to form all kinds of interesting electronic structures such as in-plane quantum wells and quantum wires, superlattices, fully functioning transistors, and even complete electronic circuits.”

This photoluminescence intensity map shows a typical piece of the lateral heterostructures. The junction region produces an enhanced light emission, indicating its application potential in optoelectronics. Photo credit: U of Washington

The researchers have already demonstrated that the junction interacts with light much more strongly than the rest of the monolayer, which is encouraging for optoelectronic and photonic applications like solar cells.

Other co-authors are Chunming Huang and Pasqual Rivera of UW physics; Ana Sanchez, Richard Beanland and Jonathan Peters at the University of Warwick; Jason Ross of UW materials science and engineering; and Wang Yao, a theoretical physicist of the University of Hong Kong.

This research was funded by the U.S. Department of Energy, the UW’s Clean Energy Institute, the Research Grant Council of Hong Kong, the University Grants Committee of Hong Kong, the Croucher Foundation, the Science City Research Alliance and the Higher Education Funding Council for England’s Strategic Development Fund.

JEDEC Solid State Technology Association today announced the publication of JESD209-4 Low Power Double Data Rate 4 (LPDDR4). Designed to significantly boost memory speed and efficiency for mobile computing devices such as smartphones, tablets, and ultra-thin notebooks, LPDDR4 will eventually operate at an I/O rate of 4266 MT/s, twice that of LPDDR3. The new interface promises to have an enormous impact on the performance and capabilities of next-generation portable electronics.

“LPDDR4 represents a dramatic performance increase,” said Mian Quddus, Chairman, JEDEC Board of Directors. “It is intended to meet the power, bandwidth, packaging, cost and compatibility requirements of the world’s most advanced mobile systems.”

Developed by JEDEC’s JC-42.6 Subcommittee for Low Power Memories, the JESD209-4 LPDDR4 standard is available for free download from the JEDEC website.

The market for mobile computing continues to grow, and with it the demand for ever faster devices and ever longer operation on a single charge. LPDDR4 launches with an I/O data rate of 3200 MT/s and a target speed of 4266 MT/s, compared to 2133 MT/s for LPDDR3. To achieve this performance, the members of the committee had to completely redesign the architecture, going from a one-channel die with 16 bits per channel to a two-channel die with 16 bits per channel, for a total of 32 bits.
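As a back-of-the-envelope check on these figures, peak interface bandwidth is just the transfer rate times the bus width divided by eight. The sketch below is illustrative arithmetic using the per-die widths quoted above, not figures taken from the standard.

```python
# Peak interface bandwidth: transfers/s * bus width (bits) / 8.
# Theoretical maxima for the rates quoted in the article, not
# sustained-throughput numbers from the JESD209-4 standard.

def peak_bandwidth_gbps(transfer_rate_mts, bus_width_bits):
    """Peak bandwidth in GB/s for a DDR-style interface."""
    return transfer_rate_mts * 1e6 * bus_width_bits / 8 / 1e9

lpddr3 = peak_bandwidth_gbps(2133, 16)         # one 16-bit channel per die
lpddr4_launch = peak_bandwidth_gbps(3200, 32)  # two 16-bit channels
lpddr4_target = peak_bandwidth_gbps(4266, 32)

print(f"LPDDR3 @ 2133 MT/s: {lpddr3:.1f} GB/s")         # 4.3 GB/s
print(f"LPDDR4 @ 3200 MT/s: {lpddr4_launch:.1f} GB/s")  # 12.8 GB/s
print(f"LPDDR4 @ 4266 MT/s: {lpddr4_target:.1f} GB/s")  # 17.1 GB/s
```

Per die, the doubled channel count and doubled target rate together work out to roughly a fourfold increase in peak bandwidth over LPDDR3.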

“LPDDR3 was an evolutionary change from LPDDR2. With LPDDR4, the architecture is completely different,” said Hung Vuong, Chairman of JC-42.6. “We knew the only way to achieve the performance that the industry required was to make a total departure from previous generations.” The two-channel architecture reduces the distance data signals must travel from the memory array to the I/O bond pads. This reduces the power required to transmit the large amount of data the LPDDR4 interface requires. Because most of the area of a memory device is taken up by the memory array, doubling the interface area has a minimal impact on the overall footprint.

The two-channel architecture also allows the clock and address bus to be grouped together with the data bus. This minimizes the skew between the data bus and the clock and address bus, allowing the LPDDR4 device to reach a higher data rate. It also saves power and improves timing margins compared to the LPDDR3 architecture.

A new approach to signaling

Recognizing that extending the LPDDR3 interface to higher frequencies would consume too much power, the JEDEC committee decided to change LPDDR4’s I/O signaling to low-voltage swing-terminated logic (LVSTL). LPDDR4’s LVSTL I/O signaling voltage of 367 or 440 mV is less than 50% of the I/O voltage swing of LPDDR3. This reduces power while enabling high-frequency operation. In addition, by using Vssq termination and data bus inversion (DBI), termination power can be minimized, since any I/O signal driving a “0” consumes no termination power.
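The power argument behind DBI can be sketched in a few lines. With Vssq termination, only lines driving a “1” draw termination current, so the encoder inverts any byte that would drive too many 1s. This toy encoder assumes the common invert-when-more-than-four-ones convention; the exact coding rules are defined in JESD209-4.

```python
def dbi_encode(byte):
    """Data bus inversion (illustrative): if more than 4 of the 8 data
    bits are 1, transmit the inverted byte and assert the DBI flag, so
    at most 4 data lines drive the power-consuming '1' level."""
    ones = bin(byte & 0xFF).count("1")
    if ones > 4:
        return (~byte) & 0xFF, 1   # inverted data, DBI asserted
    return byte & 0xFF, 0          # data unchanged, DBI deasserted

def dbi_decode(byte, dbi):
    """Receiver undoes the inversion when the DBI flag is set."""
    return (~byte) & 0xFF if dbi else byte & 0xFF

# Round trip: decoding always recovers the original byte, and the
# encoded byte never drives more than 4 ones.
for value in range(256):
    data, flag = dbi_encode(value)
    assert dbi_decode(data, flag) == value
    assert bin(data).count("1") <= 4
```

The one extra DBI line per byte is the price paid for capping the number of simultaneously driven “1” levels.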

Several other steps were taken to save power. The operating voltage was reduced from the 1.2V of previous generations to 1.1V. Also, the standard was specifically designed to enable power-efficient operation at a wide range of frequencies. The I/O can operate in un-terminated mode at low frequencies with a reduced voltage swing, and the standard allows rapid switching between operating points so the lower frequency operation can be used whenever possible.

This rapid switching is enabled by the addition of frequency set points (FSPs). LPDDR4 specifies two FSPs, which are copies of all the DRAM registers that store operating parameters which might need to be changed for operation at two different frequencies. Once both operating frequencies are trained and the parameters stored in each of the two corresponding FSPs, switching between the frequencies can be accomplished by a single mode register write. This reduces the latency for frequency changes, and enables the system to operate at the optimal speed for the workload more often.
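The FSP mechanism can be pictured as two banks of registers with a one-bit selector. The sketch below is a simplified model with invented parameter names; the real register map and training procedure are defined in the standard.

```python
# Illustrative model of frequency set points (FSPs): two complete
# copies of the frequency-dependent operating parameters are trained
# once, then a single mode-register write selects the active copy.
# Parameter names here are made up; see JESD209-4 for the real ones.

class LPDDR4Device:
    def __init__(self):
        self.fsp = [{}, {}]  # two register copies
        self.active = 0

    def train(self, fsp_index, params):
        """One-time training stores parameters for one operating point."""
        self.fsp[fsp_index] = dict(params)

    def switch_fsp(self, fsp_index):
        """Frequency change is a single register write, no retraining."""
        self.active = fsp_index

    @property
    def params(self):
        return self.fsp[self.active]

dev = LPDDR4Device()
dev.train(0, {"data_rate_mts": 3200, "latency": 28})  # high-speed point
dev.train(1, {"data_rate_mts": 1600, "latency": 14})  # low-power point

dev.switch_fsp(1)  # drop to the low-power operating point in one write
print(dev.params["data_rate_mts"])  # 1600
```

Because both operating points are trained up front, moving between them costs one register write rather than a full retraining sequence.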

“It supports end-user flexibility,” noted Vuong. “Some designers like to run their devices as fast as they can and then put them to sleep. Others like to run at lower frequencies – and lower power – when possible. A process might take a little longer but that’s a trade-off they’re willing to make. We designed LPDDR4 to be flexible enough to allow the end-user to decide what they want to do.” With that flexibility comes superior performance – an LPDDR4 device, at a similar data rate, will consume less power than an LPDDR3 device.

JEDEC leads in the development of standards for the microelectronics industry.

Nanometer-scale gold particles are intensively investigated for application as catalysts, sensors, drug delivery devices, biological contrast agents and components in photonics and molecular electronics. Gaining knowledge of their atomic-scale structures, fundamental for understanding physical and chemical properties, has been challenging. Now, researchers at Stanford University, USA, have demonstrated that high-resolution electron microscopy can be used to reveal a three-dimensional structure in which all gold atoms are observed. The results are in close agreement with a structure predicted at the University of Jyväskylä, Finland, on the basis of theoretical modelling and infrared spectroscopy (see Figure). The research was published in Science on 22 August 2014.

The revealed gold nanoparticle is 1.1 nm in diameter and contains 68 gold atoms organised in a crystalline fashion at the center of the particle. The result was supported by small-angle X-ray scattering done in Lawrence Berkeley National Laboratory, USA, and by mass spectrometry done at Hokkaido University, Japan.

Electron microscopy is similar in principle to conventional light microscopy, with the exception that the wavelength of the electron beam used for imaging is close to the spacing of atoms in solid matter, about a tenth of a nanometer, in contrast with the wavelength of visible light, which is hundreds of nanometres. A crucial aspect of the new work is the irradiation of the nanoparticle with very few electrons to avoid perturbing the structure of the nanoparticle. The success of this approach opens the way to the determination of many more nanoparticle structures and to both fundamental understanding and practical applications.
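For a sense of scale, the wavelength of the imaging electrons follows from the de Broglie relation with a relativistic correction. The 80 kV accelerating voltage below is an assumed example for illustration, not a parameter reported by the Stanford group.

```python
import math

# de Broglie wavelength of beam electrons, with the relativistic
# correction, to check the claim that it is far below atomic
# spacing (~0.1 nm = 100 pm).

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
Q = 1.602176634e-19     # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_pm(volts):
    """Electron wavelength in picometres at a given accelerating voltage."""
    p = math.sqrt(2 * M_E * Q * volts * (1 + Q * volts / (2 * M_E * C**2)))
    return H / p * 1e12

print(f"{electron_wavelength_pm(80e3):.2f} pm")  # ~4.18 pm at 80 kV
```

Even at a modest 80 kV the wavelength is roughly 4 pm, more than twenty times smaller than typical interatomic spacings, which is what makes atomic-resolution imaging possible.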

The researchers involved in the work are Maia Azubel, Ai Leen Koh, David Bushnell and Roger D. Kornberg from Stanford University, Sami Malola, Jaakko Koivisto, Mika Pettersson and Hannu Häkkinen from the University of Jyväskylä, Greg L. Hura from Lawrence Berkeley National Laboratory, and Tatsuya Tsukuda and Hironori Tsunoyama from Hokkaido University. The work at the University of Jyväskylä was supported by the Academy of Finland. The computational work in Hannu Häkkinen’s group was done at the HLRS-GAUSS centre in Stuttgart as part of the PRACE project “Nano-gold at the bio-interface”.

Researchers from the Institute of General Physics of the Russian Academy of Sciences, the Institute of Bioorganic Chemistry of the Russian Academy of Sciences and MIPT have made an important step towards creating medical nanorobots. They discovered a way of enabling nano- and microparticles to produce logical calculations using a variety of biochemical reactions.

Details of their research project are given in the journal Nature Nanotechnology. It is the first experimental publication by an exclusively Russian team in one of the most-cited scientific journals in many years.

The paper draws on the idea of computing using biomolecules. In electronic circuits, for instance, logical connectives use current or voltage (if there is voltage, the result is 1; if there is none, it’s 0). In biochemical systems, the result can be the presence of a given substance.

For example, modern bioengineering techniques allow for making a cell illuminate with different colors or even programming it to die, linking the initiation of apoptosis to the result of binary operations.

Many scientists believe logical operations inside cells or in artificial biomolecular systems to be a way of controlling biological processes and creating full-fledged micro-and nano-robots, which can, for example, deliver drugs on schedule to those tissues where they are needed.

Calculations using biomolecules inside cells, a.k.a. biocomputing, are a very promising and rapidly developing branch of science, according to the leading author of the study, Maxim Nikitin, a 2010 graduate of MIPT’s Department of Biological and Medical Physics. Biocomputing uses natural cellular mechanisms. It is far more difficult, however, to do calculations outside cells, where there are no natural structures that could help carry out calculations. The new study focuses specifically on extracellular biocomputing.

The study paves the way for a number of biomedical technologies and differs significantly from previous work in biocomputing, which focused on processes inside cells. Scientists from across the globe have been researching binary operations in DNA, RNA and proteins for over a decade now, but Maxim Nikitin and his colleagues were the first to propose and experimentally confirm a method to transform almost any type of nanoparticle or microparticle into autonomous biocomputing structures that are capable of implementing a functionally complete set of Boolean logic gates (YES, NOT, AND and OR) and binding to a target (such as a cell) as the result of a computation. This method allows for selective binding to target cells and also provides a new platform for analyzing blood and other biological materials.

The prefix “nano” in this case is not a fad or a mere formality. A decrease in particle size sometimes leads to drastic changes in the physical and chemical properties of a substance: the smaller the size, the greater the reactivity, and very small semiconductor particles, for example, may produce fluorescent light. The new research project used nanoparticles (particles about 100 nm across) and microparticles (3,000 nm, or 3 micrometers).

Nanoparticles were coated with a special layer, which “disintegrated” in different ways when exposed to different combinations of signals. A signal here is the interaction of nanoparticles with a particular substance. For example, to implement the logical operation “AND,” a spherical nanoparticle was coated with a layer of molecules that held a layer of smaller spheres around it. The molecules holding the outer shell were of two types, each reacting only to a particular signal; only on contact with both substances did the small spheres separate from the surface of the larger nanoparticle. Removing the outer layer exposed the active parts of the inner particle, which could then interact with its target. Thus, the team obtained one signal in response to two signals.
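Abstractly, the shell-removal scheme is a molecular AND gate: the core becomes active only when every linker type holding the outer shell has met its cleaving substance. The sketch below models that logic; the linker names are placeholders, not the actual reagents used in the study.

```python
# Toy model of the AND-gate particle: the outer shell is held by two
# linker types, each cleaved by a different input substance. The
# active core is exposed (output = 1) only when both linkers are
# cleaved. "enzyme_A"/"enzyme_B" are hypothetical names.

def and_particle(inputs, linkers=("enzyme_A", "enzyme_B")):
    """Return True if the core is exposed, i.e. every linker type
    holding the outer shell has met its cleaving substance."""
    return all(linker in inputs for linker in linkers)

assert and_particle({"enzyme_A", "enzyme_B"})  # both signals: core exposed
assert not and_particle({"enzyme_A"})          # one signal: shell stays on
assert not and_particle(set())                 # no signal: shell stays on
```

An OR gate follows from the same picture by holding the shell with linkers that any one input can cleave, and NOT from a coating that a signal adds rather than removes.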

For bonding nanoparticles, the researchers selected antibodies. This also distinguishes their project from a number of previous studies in biocomputing, which used DNA or RNA for logical operations. These natural proteins of the immune system have a small active region, which responds only to certain molecules; the body uses the high selectivity of antibodies to recognize and neutralize bacteria and other pathogens.

Having established that combinations of different types of nanoparticles and antibodies can implement various kinds of logical operations, the researchers showed that cancer cells can be specifically targeted as well. The team obtained not simply nanoparticles that can bind to certain types of cells, but particles that seek out target cells only when two different conditions are met, such as when two particular molecules are present or absent. This additional control may come in handy for more accurate destruction of cancer cells with minimal impact on healthy tissues and organs.

Maxim Nikitin said that although this is just a small step towards creating efficient nanobiorobots, this area of science is very interesting and opens up great vistas for further research, if one draws an analogy between the first work on nanobiocomputers and the creation of the first diodes and transistors, which led to the rapid development of electronic computers.

The new work was published on the website of the journal Nature Nanotechnology, one of the most authoritative scientific publications in the world. It is considered the leading publication by Impact Factor in nanoscience and nanotechnology.

Maxim Nikitin developed the approach and the scheme of experiments and carried them out with Victoria Shipunova (a post-graduate student at the Institute of Bioorganic Chemistry of the Russian Academy of Sciences and a 2013 graduate of MIPT’s Department of Biological and Medical Physics). Sergey Deyev (the Institute of Bioorganic Chemistry and the University of Nizhny Novgorod, a graduate of Moscow State University), Petr Nikitin (Institute of General Physics of the Russian Academy of Sciences, a 1979 graduate of MIPT’s Department of Problems of Physics and Energetics) and Maxim Nikitin processed the results of the experiments and wrote the article.

A team of materials chemists, polymer scientists, device physicists and others at the University of Massachusetts Amherst today report a breakthrough technique for controlling molecular assembly of nanoparticles over multiple length scales that should allow faster, cheaper, more ecologically friendly manufacture of organic photovoltaics and other electronic devices.

Postdoctoral research associate Monojit Bag (left) and graduate student Tim Gehan (right) synthesize polymer nanoparticles for use in organic-based solar cells being made at the University of Massachusetts Amherst-based energy center. Deep purple nanoparticles are forming in the small glass container above Gehan’s left hand. Credit: UMass Amherst

Lead investigator, chemist Dhandapani Venkataraman, points out that the new techniques successfully address two major goals for device manufacture: controlling molecular assembly and avoiding toxic solvents like chlorobenzene. “Now we have a rational way of controlling this assembly in a water-based system,” he says. “It’s a completely new way to look at problems. With this technique we can force it into the exact structure that you want.”

Materials chemist Paul Lahti, co-director with Thomas Russell of UMass Amherst’s Energy Frontiers Research Center (EFRC) supported by the U.S. Department of Energy, says, “One of the big implications of this work is that it goes well beyond organic photovoltaics or solar cells, where this advance is being applied right now. Looking at the bigger picture, this technique offers a very promising, flexible and ecologically friendly new approach to assembling materials to make device structures.”

Lahti likens the UMass Amherst team’s advance in materials science to the kind of benefits the construction industry saw with prefabricated building units. “This strategy is right along that general philosophical line,” he says. “Our group discovered a way to use sphere packing to get all sorts of materials to behave themselves in a water solution before they are sprayed onto surfaces in thin layers and assembled into a module. We are pre-assembling some basic building blocks with a few predictable characteristics, which are then available to build your complex device.”

“Somebody still has to hook it up and fit it out the way they want,” Lahti adds. “It’s not finished, but many parts are pre-assembled. And you can order characteristics that you need, for example, a certain electron flow direction or strength. All the modules can be tuned to have the ability to provide electron availability in a certain way. The availability can be adjusted, and we’ve shown that it works.”

The new method should reduce the time nano manufacturing firms spend in trial-and-error searches for materials to make electronic devices such as solar cells, organic transistors and organic light-emitting diodes. “The old way can take years,” Lahti says.

“Another of our main objectives is to make something that can be scaled up from nano- to mesoscale, and our method does that. It is also much more ecologically friendly because we use water instead of dangerous solvents in the process,” he adds.

For photovoltaics, Venkataraman points out, “The next thing is to make devices with other polymers coming along, to increase power conversion efficiency and to make them on flexible substrates. In this paper we worked on glass, but we want to translate to flexible materials and produce roll-to-roll manufactured materials with water. We expect to actually get much greater efficiency.” He suggests that reaching 5 percent power conversion efficiency would justify the investment for making small, flexible solar panels to power devices such as smart phones.

If the average smart phone uses 5 watts of power and all 307 million United States users switched from batteries to flexible solar, it could save more than 1,500 megawatts. “That’s nearly the output of a nuclear power station,” Venkataraman says, “and it’s more dramatic when you consider that a coal-fired power plant releases 2,250 lbs. of carbon dioxide for each megawatt-hour it generates. So if a fraction of the 6.6 billion mobile phone users globally changed to solar, it would reduce our carbon footprint a lot.”
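The arithmetic behind the 1,500-megawatt figure is easy to reproduce:

```python
# Reproducing the article's estimate: average phone draw times the
# number of U.S. users, expressed in megawatts.
users = 307e6         # U.S. mobile phone users cited above
watts_per_phone = 5   # average smartphone draw cited above

total_mw = users * watts_per_phone / 1e6
print(f"{total_mw:.0f} MW")  # 1535 MW, i.e. "more than 1,500 megawatts"
```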

Doctoral student and first author Tim Gehan says that organic solar cells made in this way can be semi-transparent, as well, “so you could replace tinted windows in a skyscraper and have them all producing electricity during the day when it’s needed. And processing is much cheaper and cleaner with our cells than in traditional methods.”

Venkataraman credits organic materials chemist Gehan, with postdoctoral fellow and device physicist Monojit Bag, with making “crucial observations” and using “persistent detective work” to get past various roadblocks in the experiments. “These two were outstanding in helping this story move ahead,” he notes. For their part, Gehan and Bag say they got critical help from the Amherst Fire Department, which loaned them an infrared camera to pinpoint some problem hot spots on a device.

It was Bag who put similar sized and charged nanoparticles together to form a building block, then used an artist’s airbrush to spray layers of electrical circuits atop each other to create a solar-powered device. He says, “Here we pre-formed structures at nanoscale so they will form a known structure assembled at the meso scale, from which you can make a device. Before, you just hoped your two components in solution would form the right mesostructure, but with this technique we can direct it to that end.”

University of California, Davis researchers sponsored by Semiconductor Research Corporation (SRC), a university-research consortium for semiconductors and related technologies, are exploring new materials and device structures to develop next-generation memory technologies.

The research promises to help data storage companies advance their technologies with predicted benefits including increased speed, lower costs, higher capacity, more reliability and improved energy efficiency compared to today’s magnetic hard disk drive and solid state random access memory (RAM) solutions.

Conducted by UC Davis’ Takamura Research Group that has extensive experience in the growth and characterization of complex oxide thin films, heterostructures and nanostructures, the research involves leveraging complex oxides to manipulate magnetic domain walls within the wires of semiconductor memory devices at nanoscale dimensions. This work utilized sophisticated facilities available through the network of Department of Energy-funded national laboratories at the Center for Nanophase Materials Sciences, Oak Ridge National Laboratory and the Advanced Light Source, Lawrence Berkeley National Laboratory.

“We were inspired by the ‘Race Track Memory’ developed at IBM and believe complex oxides have the potential to provide additional degrees of freedom that may enable more efficient and reliable manipulation of magnetic domain walls,” said Yayoi Takamura, Associate Professor, Department of Chemical Engineering and Materials Science, UC Davis.

Existing magnetic hard disk drive and solid state RAM solutions store data either based on the magnetic or electronic state of the storage medium. Hard disk drives provide a lower cost solution for ultra-dense storage, but are relatively slow and suffer reliability issues due to the movement of mechanical parts. Solid state solutions, such as Flash memory for long-term storage and DRAM for short-term storage, offer higher access speeds, but can store fewer bits per unit area and are significantly more costly per bit of data stored.

An alternative technology that may address both of these shortcomings is based on the manipulation of magnetic domain walls, regions that separate two magnetic regions. This technology, originally proposed by IBM researchers and named ‘Race Track Memory,’ is where the UC Davis work picked up.

Most previous studies have focused on metallic magnetic materials and their alloys, thanks to well-established processing steps and high Curie temperatures. Even so, challenges remain in controlling parameters such as the type of domain walls formed, their position within the nanowires and their movement along the length of the nanowires.

The UC Davis research investigates the use of complex oxides, such as La0.67Sr0.33MnO3 (LSMO), and heterostructures with other complex oxides as candidate materials. Complex oxides are part of an exciting new class of so-called “multifunctional” materials that exhibit multiple properties (e.g., electronic, magnetic) and may thereby enable multiple functions in a single device. LSMO, for example, is a half metal; it exhibits colossal magnetoresistance (CMR), meaning its electrical resistance changes dramatically in the presence of a magnetic field; and it undergoes simultaneous ferromagnetic-to-paramagnetic and metal-to-insulator transitions at its Curie temperature.

In addition, these properties are sensitive to external stimuli, such as applied magnetic/electric fields, light irradiation, pressure and temperature. These attributes may allow researchers to better manipulate the position and movement of the magnetic domain walls along the length of the nanowires.

“While still in the early stages, the innovative research from the UC Davis team is helping the industry gain a better fundamental understanding linking the chemical, structural, magnetic and electronic properties of next-generation memory materials,” said Bob Havemann, Director of Nanomanufacturing Sciences at the SRC.

A “valley of death” is well-known to entrepreneurs–the lull between government funding for research and industry support for prototypes and products. To confront this problem, in 2013 the National Science Foundation (NSF) created a new program called InTrans to extend the life of the most high-impact NSF-funded research and help great ideas transition from lab to practice.

Today, in partnership with Intel Corporation, NSF announced the first InTrans award of $3 million to a team of researchers who are designing customizable, domain-specific computing technologies for use in healthcare.

The work could lead to less exposure to dangerous radiation during x-rays by speeding up the computing side of medicine. It also could result in patient-specific cancer treatments.

Led by the University of California, Los Angeles, the research team includes experts in computer science and engineering, electrical engineering and medicine from Rice University and Oregon Health and Science University. The team comes mainly from the Center for Domain-Specific Computing (CDSC), which was supported by an NSF Expeditions in Computing Award in 2009.

Expeditions, consisting of five-year, $10 million awards, represent some of the largest investments currently made by NSF’s Computer, Information Science and Engineering (CISE) directorate.

Today’s InTrans grant extends research efforts funded by the Expedition program with the aim of bringing the new technology to the point where it can be produced at a microchip fabrication plant (or fab) for a mass market.

“We see the InTrans program as an innovative approach to public-private partnership and a way of enhancing research sustainability,” said Farnam Jahanian, head of NSF’s CISE Directorate. “We’re thrilled that Intel and NSF can partner to continue to support the development of domain-specific hardware and to transition this excellent fundamental research into real applications.”

In the project, the researchers looked beyond parallelization (the process of working on a problem with more than one processor at the same time) and instead focused on domain-specific customization, a disruptive technology with the potential to bring orders-of-magnitude improvements to important applications. Domain-specific computing systems work efficiently on specific problems–in this case, medical imaging and DNA sequencing of tumors–or a set of problems with similar features, reducing the time to solution and bringing down costs.

“We tried to create energy-efficient computers that are more like brains,” explained Jason Cong, the director of CDSC, a Chancellor’s Professor of computer science and electrical engineering at UCLA, and the lead on the project.

“We don’t really have a centralized central processing unit in there. If you look at the brain you have one region responsible for speech, another region for motor control, another region for vision. Those are specialized ‘accelerators.’ We want to develop a system architecture of that kind, where each accelerator can deliver a hundred to a thousand times better efficiency than the standard processors.”

The team plans to identify classes of applications that share similar computation kernels, thereby creating hardware that solves a range of common related problems with high efficiency and flexibility. This differs from specialized circuits that are designed to solve a single problem (such as those used in cell phones) or general-purpose processors designed to solve all problems.
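The payoff from a 100x to 1,000x accelerator is bounded by how much of the workload its kernels cover, which is just Amdahl’s law. A quick sketch:

```python
# Amdahl's law: the overall speedup from accelerating a fraction of
# the workload, however fast the accelerated kernels run.

def overall_speedup(kernel_fraction, kernel_speedup):
    """Whole-program speedup when kernel_fraction of the runtime
    runs kernel_speedup times faster."""
    return 1 / ((1 - kernel_fraction) + kernel_fraction / kernel_speedup)

# Even a 1000x accelerator helps little unless the accelerated
# kernels dominate the runtime:
print(f"{overall_speedup(0.50, 1000):.1f}x")  # ~2x
print(f"{overall_speedup(0.99, 1000):.1f}x")  # ~91x
```

This is why identifying classes of applications that share common computation kernels matters: the larger the share of the workload the accelerators cover, the closer the system gets to their hundredfold-to-thousandfold efficiency gains.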

“The group laid out a different way of presenting the problem of domain-specific computing, which is: How to determine the common features and support them efficiently?” said Sankar Basu, program officer at NSF. “They developed a framework for domain-specific hardware design that they believe can be applied in many other domains as well.”

The group selected medical imaging and patient-specific cancer treatment–two important problems in healthcare–as the test applications for their design, because of healthcare’s significant impact on the national economy and quality of life.

Medical imaging is now used to diagnose a multitude of medical problems. However, diagnostic methods like X-ray CT (computed tomography) scanning expose the body to cumulative radiation, which increases long-term risk to the patient.

Scientists have developed new medical imaging algorithms that lead to less radiation exposure, but these have been constrained due to a lack of computing power.

Using their customizable heterogeneous platform, Cong and his team were able to make one of the leading CT image reconstruction algorithms a hundred times faster, thereby reducing a subject’s exposure to radiation significantly. They presented their results in May 2014 at the IEEE International Symposium on Field-Programmable Custom Computing Machines.

“The low-dose CT scan allows you to get a similar resolution to the standard CT, but the patient can get several times lower radiation,” said Alex Bui, a professor in the UCLA Radiological Sciences department and a co-lead of the project. “Anything we can do to lower that exposure will have a significant health impact.”

In theory, the technology also exists to determine the specific strain of cancer a patient has through DNA sequencing and to use that information to design a patient-specific treatment. However, it currently takes so long to sequence the DNA that once one determines a tumor’s strain, the cancer has already mutated. With domain-specific hardware, Cong believes rapid diagnoses and targeted treatments will be possible.

“Power- and cost-efficient high-performance computation in these domains will have a significant impact on healthcare in terms of preventive medicine, diagnostic procedures and therapeutic procedures,” said Cong.

“Cancer genomics, in particular, has been hobbled by the lack of open, scalable and efficient approaches to rapidly and accurately align and interpret genome sequence data,” said Paul Spellman, a professor at OHSU, who works on personalized cancer treatment and served as another co-lead on the project.

“The ability to use hardware approaches to dramatically improve these speeds will facilitate the rapid turnarounds in enormous datasets that will be necessary to deliver on precision medicine.”

Down the road, the team will work with Spellman and other physicians at OHSU to test the application of the hardware in a real-world environment.

“Intel excels in creating customizable computing platforms optimized for data-intensive computation,” said Michael C. Mayberry, corporate vice president of Intel’s Technology and Manufacturing Group and chair of Corporate Research Council. “These researchers are some of the leading lights in the field of domain-specific computing.

“This new effort enables us to maximize the benefits of Intel architecture. For example, we can ensure that Intel Xeon processor features are optimized, in connection with various accelerators, for a specific application domain and across all architectural layers,” Mayberry said. “Life science and healthcare research will undoubtedly benefit from the performance, flexibility, energy efficiency and affordability of this application.”

The InTrans program not only advances important fundamental research and integrates it into industry, it also benefits society by improving medical imaging technologies and cancer treatments, helping to extend lives.

“Not every research project will get to the stage where they’re ready to make a direct impact on industry and on society, but in our case, we’re quite close,” Cong said. “We’re thankful for NSF’s support and are excited about continuing our research under this unique private-public funding model.”

There’s no shortage of ideas about how to use nanotechnology, but one of the major hurdles is how to manufacture some of the new products on a large scale. With support from the National Science Foundation (NSF), University of Massachusetts (UMass) Amherst chemical engineer Jim Watkins and his team are working to make nanotechnology more practical for industrial-scale manufacturing.

One of the projects they’re working on at the NSF Center for Hierarchical Manufacturing (CHM) is a roll-to-roll process for nanotechnology that is similar to what is used in traditional manufacturing. They’re also designing a process to manufacture printable coatings that improve the way solar panels absorb and direct light. They’re even investigating the use of self-assembling nanoscale products that could have applications for many industries.

“New nanotechnologies can’t impact the U.S. economy until practical methods are available for producing products, using them in high volumes, at low cost. CHM is researching the fundamental scientific and engineering barriers that impede such commercialization, and innovating new technologies to surmount those barriers,” notes Bruce Kramer, senior advisor in the NSF Engineering Directorate’s Division of Civil, Mechanical and Manufacturing Innovation (CMMI), which funded the research.

“The NSF Center for Hierarchical Manufacturing is developing platform technologies for the economical manufacture of next generation devices and systems for applications in computing, electronics, energy conversion, resource conservation and human health,” explains Khershed Cooper, a CMMI program director.

“The center creates fabrication tools that are enabling versatile and high-rate continuous processes for the manufacture of nanostructures that are systematically integrated into higher order structures using bottom-up and top-down techniques,” Cooper says. “For example, CHM is designing and building continuous, roll-to-roll nanofabrication systems that can print, in high-volume, 3-D nanostructures and multi-layer nanodevices at sub-100 nanometer resolution, and in the process, realize hybrid electronic-optical-mechanical nanosystems.”

The research was supported by NSF award #1025020, Nanoscale Science and Engineering Centers (NSEC): Center for Hierarchical Manufacturing.

Over the years, computer chips have gotten smaller thanks to advances in materials science and manufacturing technologies. This march of progress, the doubling of transistors on a microprocessor roughly every two years, is called Moore’s Law. But there’s one component of the chip-making process in need of an overhaul if Moore’s Law is to continue: the chemical mixture called photoresist. Similar to film used in photography, photoresist, also just called resist, is used to lay down the patterns of ever-shrinking lines and features on a chip.
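The doubling rule stated above is easy to put into numbers. A small sketch (the starting count of ~2,300 transistors is the widely cited figure for Intel's 1971 4004 chip, not a number from this article):

```python
# Hedged arithmetic sketch of Moore's Law as stated in the text:
# transistor counts double roughly every two years.

def projected_transistors(initial_count, years, doubling_period=2.0):
    """Projected transistor count after `years`, doubling every
    `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (Intel 4004, 1971), 40 years of
# doubling every two years gives about 2.4 billion -- the right order
# of magnitude for high-end chips four decades later.
print(round(projected_transistors(2300, 40)))  # 2411724800
```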

Paul Ashby and Deirdre Olynick of Berkeley Lab at the Advanced Light Source (ALS) Extreme Ultraviolet 12.0.1 Beamline. Credit: Roy Kaltschmidt, Berkeley Lab

Now, in a bid to continue decreasing transistor size while increasing computation and energy efficiency, chip-maker Intel has partnered with researchers from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) to design an entirely new kind of resist. Importantly, they have done so by characterizing the chemistry of the photoresist, which is crucial to improving its performance systematically. The researchers believe their results could be easily incorporated by companies that make resist, and could find their way into manufacturing lines as early as 2017.

The new resist effectively combines the material properties of two pre-existing kinds of resist, achieving the characteristics needed to make smaller features for microprocessors, which include better light sensitivity and mechanical stability, says Paul Ashby, staff scientist at Berkeley Lab’s Molecular Foundry, a DOE Office of Science user facility. “We discovered that mixing chemical groups, including crosslinkers and a particular type of ester, could improve the resist’s performance.” The work is published this week in the journal Nanotechnology.

Finding a new kind of photoresist is “one of the largest challenges facing the semiconductor industry in the materials space,” says Patrick Naulleau, director of the Center for X-ray Optics (CXRO) at Berkeley Lab. Moreover, there’s been very little understanding of the fundamental science of how resist actually works at the chemical level, says Deirdre Olynick, staff scientist at the Molecular Foundry. “Resist is a very complex mixture of materials and it took so long to develop the technology that making huge leaps away from what’s already known has been seen as too risky,” she says. But now the lack of fundamental understanding could potentially put Moore’s Law in jeopardy, she adds.

To understand why resist is so important, consider a simplified explanation of how a microprocessor is made. A silicon wafer, about a foot in diameter, is cleaned and coated with a layer of photoresist. Next, ultraviolet light is used to project an image of the desired circuit pattern, including components such as wires and transistors, onto the wafer, chemically altering the resist.

Depending on the type of resist, light either makes it more or less soluble, so when the wafer is immersed in a solvent, the exposed or unexposed areas wash away. The resist protects the material that makes up transistors and wires from being etched away and can allow the material to be selectively deposited. This process of exposure, rinse and etch or deposition is repeated many times until all the components of a chip have been created.
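The exposure-and-develop step described above can be captured in a toy model. This is a deliberately simplified illustration of positive versus negative resist behavior, not a description of the actual fab process:

```python
# Hedged toy model of the photolithography develop step on a 1-D strip
# of wafer. mask[i] is True where UV light hits the resist.

def develop(mask, positive=True):
    """Return True where resist REMAINS after development.
    Positive resist: exposed areas become soluble and wash away.
    Negative resist: exposed areas harden; unexposed areas wash away."""
    return [not lit if positive else lit for lit in mask]

uv_mask = [True, True, False, False, True]   # projected circuit pattern
print(develop(uv_mask, positive=True))   # [False, False, True, True, False]
print(develop(uv_mask, positive=False))  # [True, True, False, False, True]
```

The remaining resist is what shields the underlying material during the subsequent etch or deposition step.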

The problem with today’s resist, however, is that it was originally developed for light sources that emit so-called deep ultraviolet light with wavelengths of 248 and 193 nanometers. But to gain finer features on chips, the industry intends to switch to a new light source with a shorter wavelength of just 13.5 nanometers. Called extreme ultraviolet (EUV), this light source has already found its way into manufacturing pilot lines. Unfortunately, today’s photoresist isn’t yet ready for high volume manufacturing.

“The semiconductor industry wants to go to smaller and smaller features,” explains Ashby. While extreme ultraviolet light is a promising technology, he adds, “you also need the resist materials that can pattern to the resolution that extreme ultraviolet can promise.” So teams led by Ashby and Olynick, which include Berkeley Lab postdoctoral researcher Prashant Kulshreshtha, investigated two types of resist. One is called crosslinking, composed of molecules that form bonds when exposed to ultraviolet light. This kind of resist has good mechanical stability and doesn’t distort during development—that is, tall, thin lines made with it don’t collapse. But if this is achieved with excessive crosslinking, it requires long, expensive exposures. The second kind of resist is highly sensitive, yet doesn’t have the mechanical stability.

When a low concentration of crosslinker is added to resist (left), it is able to pattern smaller features and doesn’t require the longer, expensive exposures needed with a high concentration of crosslinker (right). Credit: Prashant Kulshreshtha, Berkeley Lab

When the researchers combined these two types of resist in various concentrations, they found they were able to retain the best properties of both. The materials were tested using the unique EUV patterning capabilities at the CXRO. Analyzing the patterns with the Nanofabrication and the Imaging and Manipulation facilities at the Molecular Foundry, the researchers saw improvements in the smoothness of lines created by the photoresist, even as they shrank the width. Through chemical analysis, they were also able to see how various concentrations of additives affected the crosslinking mechanism and the resulting stability and sensitivity.

The researchers say future work includes further optimizing the resist’s chemical formula for the extremely small components required for tomorrow’s microprocessors. The semiconductor industry is currently locking down its manufacturing processes for chips at the so-called 10-nanometer node. If all goes well, these resist materials could play an important role in the process and help Moore’s Law persist. This research was funded by the Intel Corporation, JSR Micro, and the DOE Office of Science (Basic Energy Sciences).

Among carbon-based nanomaterials, graphene holds great promise for gas-sensing applications. In 2009, the detection of individual NO2 gas molecules adsorbed onto a graphene surface was reported for the first time, and this initial observation has been extensively built upon in the years since. The Nanobioelectronics & Biosensors Group at the Institut Català de Nanociència i Nanotecnologia (ICN2), led by ICREA Research Professor Arben Merkoçi, published in Small a work showing how to use a graphene/silicon heterojunction Schottky diode as a sensitive, selective and simple tool for vapor sensing. The work was developed in collaboration with researchers from the Amirkabir University of Technology (Tehran, Iran).

The graphene/silicon heterojunction Schottky diode is fabricated on a silicon wafer onto which Cr and Au are deposited to form the junction between graphene and silicon (see the attached figure). Adsorbed vapor molecules change the local carrier concentration in the graphene, which in turn changes the diode’s impedance response, and each of the chemical vapors studied alters that response differently. The relative impedance change, plotted as a function of frequency, peaks at a characteristic frequency, making that frequency a distinctive, selective fingerprint of a given vapor.
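The fingerprinting idea can be sketched in a few lines: compare the impedance spectrum with and without vapor, and find where the relative change peaks. All names and numbers below are invented for illustration; they are not data from the paper.

```python
# Hedged illustration of the characteristic-frequency idea described in the
# text. The frequency sweep and impedance values are hypothetical.

def relative_change(z_baseline, z_vapor):
    """Relative impedance change |Z_vapor - Z_baseline| / Z_baseline
    at each frequency point."""
    return [abs(v - b) / b for b, v in zip(z_baseline, z_vapor)]

def characteristic_frequency(freqs, z_baseline, z_vapor):
    """Frequency at which the relative impedance change is largest --
    the distinctive parameter of a given vapor."""
    rc = relative_change(z_baseline, z_vapor)
    return freqs[rc.index(max(rc))]

freqs      = [1e2, 1e3, 1e4, 1e5]          # Hz, hypothetical sweep
z_air      = [900.0, 850.0, 700.0, 500.0]  # ohms, baseline spectrum
z_with_vapor = [905.0, 980.0, 720.0, 505.0]  # ohms, vapor adsorbed
print(characteristic_frequency(freqs, z_air, z_with_vapor))  # 1000.0
```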

The response is reproducible across different concentrations of phenol vapor, as verified with three different devices. This graphene-based device and the detection methodology developed for it could be extended to several other gases and applications, with interest for environmental monitoring as well as other industries.