Category Archives: Applications

Researchers at Linköping University and Shenzhen University have shown how an inorganic perovskite can be made into a cheap and efficient photodetector that transfers both text and music. “It’s a promising material for future rapid optical communication”, says Feng Gao, researcher at Linköping University.

The film in the new perovskite, which contains only inorganic elements (caesium, lead, iodine and bromine), has been tested in a system for optical communication, which confirmed its ability to transfer both text and images, rapidly and reliably. Credit: Thor Balkhed

“Perovskites of inorganic materials have a huge potential to influence the development of optical communication. These materials have rapid response times, are simple to manufacture, and are extremely stable.” So says Feng Gao, senior lecturer at LiU who, together with colleagues who include Chunxiong Bao, postdoc at LiU, and scientists at Shenzhen University, has published the results in the prestigious journal Advanced Materials.

All optical communication requires rapid and reliable photodetectors – materials that capture a light signal and convert it into an electrical signal. Current optical communication systems use photodetectors made from materials such as silicon and indium gallium arsenide. But these are expensive, partly because they are complicated to manufacture. Moreover, these materials cannot be used in some new devices, such as mechanically flexible, lightweight or large-area devices.
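
To make the photodetector’s role in a link concrete, here is a minimal sketch in Python of the simplest possible optical scheme: text is mapped to on-off light pulses, and the detector is reduced to a thresholded, slightly noisy photocurrent. The threshold and noise figures are illustrative assumptions, not values from the study.

```python
# Minimal on-off keying (OOK) link sketch: a transmitter maps text to
# light pulses, a photodetector converts received power to a current,
# and a threshold decision recovers the bits. Values are illustrative.
import random

THRESHOLD = 0.5  # decision level (arbitrary units, assumed)

def text_to_bits(text):
    return [int(b) for ch in text.encode("utf-8") for b in f"{ch:08b}"]

def bits_to_text(bits):
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

def channel(bit):
    """Light on/off plus a little detector noise."""
    return bit + random.gauss(0, 0.05)

def detect(sample):
    """Photodetector plus comparator: photocurrent above threshold -> 1."""
    return 1 if sample > THRESHOLD else 0

sent = "hello"
received = bits_to_text([detect(channel(b)) for b in text_to_bits(sent)])
print(received)  # -> 'hello' (with high probability at this noise level)
```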

Researchers have been seeking cheap replacement, or at least supplementary, materials for many years, and have looked at, for example, organic semiconductors. However, the charge transport of these has proved to be too slow. A photodetector must be rapid.

The new perovskite materials have attracted intense research interest since 2009, but the focus has been on their use in solar cells and efficient light-emitting diodes. Feng Gao, researcher in Biomolecular and Organic Electronics at LiU, was awarded a Starting Grant of EUR 1.5 million from the European Research Council (ERC) in the autumn of 2016, intended for research into using perovskites in light-emitting diodes.

Perovskites form a completely new family of semiconducting materials that are defined by their crystal structures. They can consist of both organic and inorganic substances. They have good light-emitting properties and are easy to manufacture. For applications such as light-emitting diodes and efficient solar cells, most interest has been placed on perovskites that consist of an organic substance (containing carbon and hydrogen), metal, and halogen (fluorine, chlorine, bromine or iodine) ions. However, when this composition was used in photodetectors, it proved to be too unstable.

The results changed, however, when Chunxiong Bao used the right materials, and managed to optimise the manufacturing process and the structure of the film. The film in the new perovskite, which contains only inorganic elements (caesium, lead, iodine and bromine), has been tested in a system for optical communication, which confirmed its ability to transfer both text and images, rapidly and reliably. The quality didn’t deteriorate, even after 2,000 hours at room temperature.

“It’s very gratifying that we have already achieved results that are very close to application,” says Feng Gao, who leads the research, together with Professor Wenjing Zhang at Shenzhen University.

Gartner, Inc. today highlighted the top strategic technology trends that organizations need to explore in 2019. Analysts presented their findings during Gartner Symposium/ITxpo, which is taking place here through Thursday.

Gartner defines a strategic technology trend as one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or a rapidly growing trend with a high degree of volatility that is expected to reach a tipping point within the next five years.

“The Intelligent Digital Mesh has been a consistent theme for the past two years and continues as a major driver through 2019. Trends under each of these three themes are a key ingredient in driving a continuous innovation process as part of a ContinuousNEXT strategy,” said David Cearley, vice president and Gartner Fellow. “For example, artificial intelligence (AI) in the form of automated things and augmented intelligence is being used together with IoT, edge computing and digital twins to deliver highly integrated smart spaces. This combinatorial effect of multiple trends coalescing to produce new opportunities and drive new disruption is a hallmark of the Gartner top 10 strategic technology trends for 2019.”

The top 10 strategic technology trends for 2019 are:

Autonomous Things

Autonomous things, such as robots, drones and autonomous vehicles, use AI to automate functions previously performed by humans. Their automation goes beyond that provided by rigid programming models; they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people.

“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things, with multiple devices working together, either independently of people or with human input,” said Mr. Cearley. “For example, if a drone examined a large field and found that it was ready for harvesting, it could dispatch an ‘autonomous harvester.’ Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package.”

Augmented Analytics

Augmented analytics focuses on a specific area of augmented intelligence, using machine learning (ML) to transform how analytics content is developed, consumed and shared. Augmented analytics capabilities will advance rapidly to mainstream adoption, as a key feature of data preparation, data management, modern analytics, business process management, process mining and data science platforms. Automated insights from augmented analytics will also be embedded in enterprise applications — for example, those of the HR, finance, sales, marketing, customer service, procurement and asset management departments — to optimize the decisions and actions of all employees within their context, not just those of analysts and data scientists. Augmented analytics automates the process of data preparation, insight generation and insight visualization, eliminating the need for professional data scientists in many situations.
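
As a rough illustration of what automated insight generation can look like in code, here is a minimal sketch using scikit-learn: a model is fitted without analyst involvement and its strongest features are turned into plain-language statements. The dataset and the insight phrasing are invented for illustration; commercial augmented-analytics products go far beyond this.

```python
# Toy "automated insight" pipeline: prepare data, fit a model, and turn
# the strongest drivers into plain-language statements. Illustrative only.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Rank features by importance and emit one "insight" per top feature.
ranked = sorted(zip(model.feature_importances_, data.feature_names),
                reverse=True)
for importance, name in ranked[:3]:
    print(f"'{name}' is a strong driver of the outcome "
          f"(importance {importance:.2f})")
```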

“This will lead to citizen data science, an emerging set of capabilities and practices that enables users whose main job is outside the field of statistics and analytics to extract predictive and prescriptive insights from data,” said Mr. Cearley. “Through 2020, the number of citizen data scientists will grow five times faster than the number of expert data scientists. Organizations can use citizen data scientists to fill the data science and machine learning talent gap caused by the shortage and high cost of data scientists.”

AI-Driven Development

The market is rapidly shifting from an approach in which professional data scientists must partner with application developers to create most AI-enhanced solutions to a model in which the professional developer can operate alone using predefined models delivered as a service. This provides the developer with an ecosystem of AI algorithms and models, as well as development tools tailored to integrating AI capabilities and models into a solution. Another level of opportunity for professional application development arises as AI is applied to the development process itself to automate various data science, application development and testing functions. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team.
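
The “predefined models delivered as a service” pattern can be sketched in a few lines. The endpoint, credential and response schema below are hypothetical, invented purely to show the shape of the workflow: one HTTP call stands in for building and training a model in-house.

```python
# Sketch of a developer consuming a predefined model "as a service":
# one HTTP call replaces building and training a model in-house.
# The endpoint, key, and response shape below are HYPOTHETICAL.
import requests

API_URL = "https://api.example.com/v1/sentiment"  # hypothetical endpoint
API_KEY = "..."                                   # hypothetical credential

def sentiment(text: str) -> str:
    resp = requests.post(API_URL,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         json={"text": text},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["label"]  # e.g. "positive" (assumed schema)

print(sentiment("The new release fixed every bug I reported."))
```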

“Ultimately, highly advanced AI-powered development environments automating both functional and nonfunctional aspects of applications will give rise to a new age of the ‘citizen application developer’ where nonprofessionals will be able to use AI-driven tools to automatically generate new solutions. Tools that enable nonprofessionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility,” said Mr. Cearley.

Digital Twins

A digital twin refers to the digital representation of a real-world entity or system. By 2020, Gartner estimates there will be more than 20 billion connected sensors and endpoints and digital twins will exist for potentially billions of things. Organizations will implement digital twins simply at first. They will evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.

“One aspect of the digital twin evolution that moves beyond IoT will be enterprises implementing digital twins of their organizations (DTOs). A DTO is a dynamic software model that relies on operational or other data to understand how an organization operationalizes its business model, connects with its current state, deploys resources and responds to changes to deliver expected customer value,” said Mr. Cearley. “DTOs help drive efficiencies in business processes, as well as create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically.”
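
A digital twin can be pictured as a software object kept in sync with telemetry from its physical counterpart, with rules attached to the mirrored state. The minimal sketch below illustrates that core idea; the device, field names and threshold are invented for illustration.

```python
# Minimal digital-twin sketch: mirror a pump's telemetry in software and
# attach a rule that reacts to the mirrored state. Illustrative names.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    pump_id: str
    state: dict = field(default_factory=dict)

    def ingest(self, reading: dict):
        """Update the twin from a sensor reading (e.g. via an IoT hub)."""
        self.state.update(reading)
        self.evaluate()

    def evaluate(self):
        # Assumed rule: vibration above 7.0 mm/s suggests bearing wear.
        if self.state.get("vibration_mm_s", 0) > 7.0:
            print(f"{self.pump_id}: schedule maintenance "
                  f"(vibration={self.state['vibration_mm_s']} mm/s)")

twin = PumpTwin("pump-17")
twin.ingest({"vibration_mm_s": 3.2, "temp_c": 41})  # nothing to do
twin.ingest({"vibration_mm_s": 8.9})                # triggers the rule
```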

Empowered Edge

The edge refers to endpoint devices used by people or embedded in the world around us. Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to these endpoints. It tries to keep the traffic and processing local, with the goal being to reduce traffic and latency.

In the near term, edge computing is being driven by IoT and the need to keep processing close to the endpoint rather than on a centralized cloud server. However, rather than creating a new architecture, cloud computing and edge computing will evolve as complementary models, with cloud services managed as a centralized service that executes not only on centralized servers but also on distributed servers on-premises and on the edge devices themselves.

Over the next five years, specialized AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices. The extreme heterogeneity of this embedded IoT world and the long life cycles of assets such as industrial systems will create significant management challenges. Longer term, as 5G matures, the expanding edge computing environment will have more robust communication back to centralized services. 5G provides lower latency, higher bandwidth, and (very importantly for edge) a dramatic increase in the number of nodes (edge endpoints) per square km.
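
The traffic-and-latency argument above can be seen in miniature in the sketch below: an edge node processes raw readings locally and ships only a compact summary upstream. The summary format and window size are assumptions for illustration.

```python
# Edge-computing sketch: rather than streaming every raw sample to the
# cloud, the edge node aggregates locally and uploads one small summary.
import statistics

def summarize(window):
    """Reduce a window of raw samples to a compact summary record."""
    return {"n": len(window),
            "mean": round(statistics.fmean(window), 2),
            "min": min(window),
            "max": max(window)}

window = [21.1, 21.3, 25.9, 21.0, 21.2]  # raw sensor readings, kept local
payload = summarize(window)              # only this crosses the network
print(payload)  # {'n': 5, 'mean': 22.1, 'min': 21.0, 'max': 25.9}
```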

Immersive Experience

Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.

“Over time, we will shift from thinking about individual devices and fragmented user interface (UI) technologies to a multichannel and multimodal experience. The multimodal experience will connect people with the digital world across hundreds of edge devices that surround them, including traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances,” said Mr. Cearley. “The multichannel experience will use all human senses as well as advanced computer senses (such as heat, humidity and radar) across these multimodal devices. This multiexperience environment will create an ambient experience in which the spaces that surround us define ‘the computer’ rather than the individual devices. In effect, the environment is the computer.”

Blockchain

Blockchain, a type of distributed ledger, promises to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Today, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities with the “single version of the truth” maintained securely in their databases. This centralized trust model adds delays and friction costs (commissions, fees and the time value of money) to transactions. Blockchain provides an alternative trust model, removing the need for central authorities to arbitrate transactions.
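
The mechanics behind that alternative trust model can be sketched in a few lines: each block commits to the hash of its predecessor, so tampering with any historical record invalidates every block after it. This toy example shows only the bare data structure, with none of the consensus, signatures or networking a real blockchain adds.

```python
# Bare-bones hash chain: the property that makes a distributed ledger
# tamper-evident. Real blockchains add consensus, signatures and P2P.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"from": "A", "to": "B", "amount": 10})
append(chain, {"from": "B", "to": "C", "amount": 4})
print(valid(chain))                # True
chain[0]["data"]["amount"] = 999   # tamper with history...
print(valid(chain))                # False: every later block disagrees
```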

“Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations. This is particularly so with the complex elements that support more sophisticated scenarios,” said Mr. Cearley. “Despite the challenges, the significant potential for disruption means CIOs and IT leaders should begin evaluating blockchain, even if they don’t aggressively adopt the technologies in the next few years.”

Many blockchain initiatives today do not implement all of the attributes of blockchain — for example, a highly distributed database. These blockchain-inspired solutions are positioned as a means to achieve operational efficiency by automating business processes, or by digitizing records. They have the potential to enhance sharing of information among known entities, as well as improving opportunities for tracking and tracing physical and digital assets. However, these approaches miss the value of true blockchain disruption and may increase vendor lock-in. Organizations choosing this option should understand the limitations, be prepared to move to complete blockchain solutions over time, and recognize that the same outcomes may be achieved with more efficient and tuned use of existing nonblockchain technologies.

Smart Spaces

A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience for a target set of people and industry scenarios.

“This trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes and connected factories. We believe the market is entering a period of accelerated delivery of robust smart spaces with technology becoming an integral part of our daily lives, whether as employees, customers, consumers, community members or citizens,” said Mr. Cearley.

Digital Ethics and Privacy

Digital ethics and privacy is a growing concern for individuals, organizations and governments. People are increasingly concerned about how their personal information is being used by organizations in both the public and private sector, and the backlash will only increase for organizations that are not proactively addressing these concerns.

“Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components,” said Mr. Cearley. “Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organization’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing.’”

Quantum Computing

Quantum computing (QC) is a type of nonclassical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The parallel execution and exponential scalability of quantum computers mean they excel at problems too complex for a traditional approach, or where traditional algorithms would take too long to find a solution. Industries such as automotive, financial, insurance, pharmaceuticals, military and research organizations have the most to gain from advances in QC. In the pharmaceutical industry, for example, QC could be used to model molecular interactions at atomic levels to accelerate time to market for new cancer-treating drugs, or to predict the interaction of proteins faster and more accurately, leading to new pharmaceutical methodologies.
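
The “exponential scalability” point can be made concrete: describing an n-qubit register classically takes 2^n complex amplitudes. The toy NumPy sketch below builds a uniform superposition qubit by qubit and shows the classical description doubling each time.

```python
# Why quantum state spaces explode: n qubits -> 2**n complex amplitudes.
# Here we build the uniform superposition H|0>^n classically with NumPy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.array([1.0])                         # empty register
for n in range(1, 11):
    # Append one qubit in |0> and apply H to it; the classical
    # description of the register doubles with every qubit added.
    state = np.kron(state, H @ np.array([1.0, 0.0]))

print(len(state))   # 1024 amplitudes for just 10 qubits
print(state[:4])    # each amplitude equals 1/sqrt(1024)
```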

“CIOs and IT leaders should start planning for QC by increasing their understanding of how it can apply to real-world business problems. Learn while the technology is still in the emerging state. Identify real-world problems where QC has potential and consider the possible impact on security,” said Mr. Cearley. “But don’t believe the hype that it will revolutionize things in the next few years. Most organizations should learn about and monitor QC through 2022 and perhaps exploit it from 2023 or 2025.”

Gartner clients can learn more in the Gartner Special Report “Top 10 Strategic Technology Trends for 2019.” Additional detailed analysis on each tech trend can be found in the Gartner YouTube video “Gartner Top 10 Strategic Technology Trends 2019.”

The Micron Foundation (Nasdaq:MU) announced a $1 million grant for universities and nonprofit organizations to conduct research into how artificial intelligence (AI) can improve lives while ensuring safety, security and privacy. The grant was announced at the inaugural Micron Insight 2018 conference where the technology industry’s top minds gathered in San Francisco to discuss the future of AI, machine learning and data science, and how memory technology is essential in bringing intelligence to life.

“Artificial intelligence is one of the frontiers where science and engineering education can best be applied,” said Micron Foundation Executive Director Dee Mooney. “We want to accelerate advances in AI by investing in education and making sure that pioneers of this technology reflect the diversity and richness of the world we live in, and build a future where AI benefits everyone.”

Micron awarded a total of $500,000 to three initial recipients at Micron Insight 2018.

  • AI4All, a nonprofit organization, works to increase diversity and inclusion in AI education, research, development and policy. AI4All supports the next generation of diverse AI talent through its AI Summer Camp. Open to 9th-11th grade students, the camp gives special consideration to young women, underrepresented groups and families of lower socioeconomic status.
  • Berkeley Artificial Intelligence Research (BAIR) Lab supports researchers and graduate students developing fundamental advances in computer vision, machine learning, natural-language processing, planning and robotics. BAIR is based at UC Berkeley’s College of Engineering.

More details on the related $1 million grant for universities and nonprofit organizations conducting AI research are available at http://bit.ly/MicronFoundation.

The $1 million fund is available to select research universities focused on the future implications of AI in life, healthcare and business, with a portion specifically allocated to support women and underrepresented groups. The Micron Foundation supports researchers tackling some of AI’s greatest challenges – from building highly reliable software and hardware programs to finding solutions that address the business and consumer impacts of AI.

In August 2018, the Micron Foundation announced a $1 million fund for Virginia colleges and universities to advance STEM and STEM-related diversity programs in connection with Micron’s expansion of its memory production facilities in Manassas, Virginia.

The vast majority of computing devices today are made from silicon, the second most abundant element on Earth, after oxygen. Silicon can be found in various forms in rocks, clay, sand, and soil. And while it is not the best semiconducting material that exists on the planet, it is by far the most readily available. As such, silicon is the dominant material used in most electronic devices, including sensors, solar cells, and the integrated circuits within our computers and smartphones.

Now MIT engineers have developed a technique to fabricate ultrathin semiconducting films made from a host of exotic materials other than silicon. To demonstrate their technique, the researchers fabricated flexible films made from gallium arsenide, gallium nitride, and lithium fluoride — materials that exhibit better performance than silicon but until now have been prohibitively expensive to produce in functional devices.

MIT researchers have devised a way to grow single crystal GaN thin film on a GaN substrate through two-dimensional materials. The GaN thin film is then exfoliated by a flexible substrate, showing the rainbow color that comes from thin film interference. This technology will pave the way to flexible electronics and the reuse of the wafers. Credit: Wei Kong and Kuan Qiao

The new technique, researchers say, provides a cost-effective method to fabricate flexible electronics made from any combination of semiconducting elements that could perform better than current silicon-based devices.

“We’ve opened up a way to make flexible electronics with so many different material systems, other than silicon,” says Jeehwan Kim, the Class of 1947 Career Development Associate Professor in the departments of Mechanical Engineering and Materials Science and Engineering. Kim envisions the technique can be used to manufacture low-cost, high-performance devices such as flexible solar cells, and wearable computers and sensors.

Details of the new technique are reported today in Nature Materials. In addition to Kim, the paper’s MIT co-authors include Wei Kong, Huashan Li, Kuan Qiao, Yunjo Kim, Kyusang Lee, Doyoon Lee, Tom Osadchy, Richard Molnar, Yang Yu, Sang-hoon Bae, Yang Shao-Horn, and Jeffrey Grossman, along with researchers from Sun Yat-Sen University, the University of Virginia, the University of Texas at Dallas, the U.S. Naval Research Laboratory, Ohio State University, and Georgia Tech.

Now you see it, now you don’t

In 2017, Kim and his colleagues devised a method to produce “copies” of expensive semiconducting materials using graphene — an atomically thin sheet of carbon atoms arranged in a hexagonal, chicken-wire pattern. They found that when they stacked graphene on top of a pure, expensive wafer of semiconducting material such as gallium arsenide, then flowed gallium and arsenic atoms over the stack, the atoms appeared to interact in some way with the underlying atomic layer, as if the intermediate graphene were invisible or transparent. As a result, the atoms assembled into the precise, single-crystalline pattern of the underlying semiconducting wafer, forming an exact copy that could then easily be peeled away from the graphene layer.

The technique, which they call “remote epitaxy,” provided an affordable way to fabricate multiple films of gallium arsenide, using just one expensive underlying wafer.

Soon after they reported their first results, the team wondered whether their technique could be used to copy other semiconducting materials. They tried applying remote epitaxy to silicon, and also germanium — two inexpensive semiconductors — but found that when they flowed these atoms over graphene they failed to interact with their respective underlying layers. It was as if graphene, previously transparent, became suddenly opaque, preventing atoms of silicon and germanium from “seeing” the atoms on the other side.

As it happens, silicon and germanium are two elements that exist within the same group of the periodic table of elements. Specifically, the two elements belong in group four, a class of materials that are ionically neutral, meaning they have no polarity.

“This gave us a hint,” says Kim.

Perhaps, the team reasoned, atoms can only interact with each other through graphene if they have some ionic charge. For instance, in the case of gallium arsenide, gallium has a negative charge at the interface, compared with arsenic’s positive charge. This charge difference, or polarity, may have helped the atoms to interact through graphene as if it were transparent, and to copy the underlying atomic pattern.

“We found that the interaction through graphene is determined by the polarity of the atoms. For the strongest ionically bonded materials, they interact even through three layers of graphene,” Kim says. “It’s similar to the way two magnets can attract, even through a thin sheet of paper.”

Opposites attract

The researchers tested their hypothesis by using remote epitaxy to copy semiconducting materials with various degrees of polarity, from neutral silicon and germanium, to slightly polarized gallium arsenide, and finally, highly polarized lithium fluoride — a better, more expensive semiconductor than silicon.

They found that the greater the degree of polarity, the stronger the atomic interaction, even, in some cases, through multiple sheets of graphene. Each film they were able to produce was flexible and merely tens to hundreds of nanometers thick.

The material through which the atoms interact also matters, the team found. In addition to graphene, they experimented with an intermediate layer of hexagonal boron nitride (hBN), a material that resembles graphene’s atomic pattern and has a similar Teflon-like quality, enabling overlying materials to easily peel off once they are copied.

However, hBN is made of oppositely charged boron and nitrogen atoms, which generate a polarity within the material itself. In their experiments, the researchers found that any atoms flowing over hBN, even if they were highly polarized themselves, were unable to interact with their underlying wafers completely, suggesting that the polarity of both the atoms of interest and the intermediate material determines whether the atoms will interact and form a copy of the original semiconducting wafer.

“Now we really understand there are rules of atomic interaction through graphene,” Kim says.

With this new understanding, he says, researchers can now simply look at the periodic table and pick two elements of opposite charge. Once they acquire or fabricate a main wafer made from the same elements, they can then apply the team’s remote epitaxy techniques to fabricate multiple, exact copies of the original wafer.

“People have mostly used silicon wafers because they’re cheap,” Kim says. “Now our method opens up a way to use higher-performing, nonsilicon materials. You can just purchase one expensive wafer and copy it over and over again, and keep reusing the wafer. And now the material library for this technique is totally expanded.”

Kim envisions that remote epitaxy can now be used to fabricate ultrathin, flexible films from a wide variety of previously exotic, semiconducting materials — as long as the materials are made from atoms with a degree of polarity. Such ultrathin films could potentially be stacked, one on top of the other, to produce tiny, flexible, multifunctional devices, such as wearable sensors, flexible solar cells, and even, in the distant future, “cellphones that attach to your skin.”

“In smart cities, where we might want to put small computers everywhere, we would need low power, highly sensitive computing and sensing devices, made from better materials,” Kim says. “This [study] unlocks the pathway to those devices.”

Technion, Israel’s technological institute, announced this week that Intel is collaborating with the institute on its new artificial intelligence (AI) research center. The announcement was made at the center’s inauguration attended by Dr. Michael Mayberry, Intel’s chief technology officer, and Dr. Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

“AI is not a one-size-fits-all approach, and Intel has been working closely with a range of industry leaders to deploy AI capabilities and create new experiences. Our collaboration with Technion not only reinforces Intel Israel’s AI operations, but we are also seeing advancements to the field of AI from the joint research that is under way and in the pipeline,” said Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group.

The center features Technion’s computer science, electrical engineering, industrial engineering and management departments, among others, all collaborating to drive a closer relationship between academia and industry in the race to advance AI. Intel, which invested undisclosed funds in the center, will represent the industry in leading AI-dedicated computing research.

Intel is committed to accelerating the promise of AI across many industries and driving the next wave of computing. Research exploring novel architectural and algorithmic approaches is a critical component of Intel’s overall AI program. The company is working with customers across verticals – including healthcare, autonomous driving, sports/entertainment, government, enterprise, retail and more – to implement AI solutions and demonstrate real value. Along with Technion, Intel is also involved in AI research with other universities and organizations worldwide.

Intel and Technion have enjoyed a strong relationship through the years, as generations of Technion graduates have joined Intel’s development center in Haifa, Israel, as engineers. Intel has also previously collaborated with Technion on AI as part of the Intel Collaborative Research Institute for Computational Intelligence program.

“2017 was an excellent year for CIS (CMOS image sensors), with growth observed in all segments except computing,” commented Pierre Cambou, Principal Analyst, Technology & Market, Imaging at Yole Développement (Yole). Driven by new applications, the industry’s future remains on a strong footing.

Yole announces its annual technology & market analysis focused on the CIS industry, from 2017 to 2023, titled: Status of the CMOS Image Sensor Industry. In 2017 the CIS market reached US$13.9 billion. The market research & strategy consulting company forecasts a 9.4% CAGR between 2017 and 2023, driven mainly by smartphones integrating additional cameras to support functionalities like optical zoom, biometry, and 3D interactions.

This year Yole again offers a comprehensive technology & market analysis of the CMOS image sensor industry. In addition to a clear understanding of the CIS ecosystem, the new edition details 2017-2023 forecasts, describes M&A activity, and provides an extensive overview of dual-camera and 3D-camera trends for mobile. Mobile and consumer applications are also covered in depth in this 2018 edition, with a deep added-value section focused on technology evolution. In collaboration with Jean-Luc Jaffard, formerly of STMicroelectronics and now part of Red Belt Conseil, Pierre Cambou pursued this investigation all year long and today reveals the status of the CIS industry.

2017 saw aggregated CIS industry revenue of US$13.9 billion, and Yole expects more than US$23 billion by 2023. Year-over-year growth peaked at 20%, due to an exceptional increase in image sensor value across almost all markets, but primarily in the mobile sector. “CIS keeps its momentum,” confirms Pierre Cambou from Yole.
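
Those two figures are mutually consistent, as a quick compound-growth check shows (Python used here simply as a calculator):

```python
# Sanity-check Yole's numbers: US$13.9B in 2017 growing at a 9.4% CAGR.
base, cagr, years = 13.9, 0.094, 2023 - 2017
print(base * (1 + cagr) ** years)   # ~23.8 (US$B), i.e. "more than $23B"
```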

Revenue is dominated by mobile, consumer, and computing, which represent 85% of total 2017 CIS revenue. Mobile alone represents 69%. Security is the second-largest segment, behind automotive.

The CIS ecosystem is currently dominated by the three Asian heavyweights: Sony, Samsung, and Omnivision. Europe made a noticeable comeback. Meanwhile, the US maintains a presence in the high-end sector.

The market has benefited from the operational recovery of leading CIS player Sony, which captured 42% market share. “…Apple iPhone has had a tremendous effect on the semiconductor industry, and on imaging in particular. It offered an opportunity for its main supplier, Sony, to reach new highs in the CIS process, building on its early advances in high-end digital photography…”, explains Pierre Cambou in his article “Image sensors have hugely benefited from Apple’s avant-garde strategy”, posted on i-micronews.com.

The CIS industry is growing at the speed of the global semiconductor industry, which also had a record year, mainly due to DRAM revenue growth. CIS has become a key segment of the broader semiconductor industry, featuring in the strategy of most key players, particularly the newly-crowned industry leader Samsung. The mobile, security and automotive markets are all in the middle of a booming expansion, mostly benefiting ON Semiconductor and Omnivision.

These markets are boosting most players that are able to keep up with technology and capacity development through capital expenditure. The opportunities are across the board, with new players such as STMicroelectronics and Smartsense able to climb the rankings. Technology advancement and the switch from imaging to sensing are fostering innovation at multiple levels: pixel, chip, wafer, all the way up to the system.

CIS devices are also at the forefront of 3D semiconductor approaches, and they are a main driver in the development of artificial intelligence. Yole’s analysts foresee new techniques and new applications ready to sustain the market’s growth momentum. A detailed description of this report is available on i-micronews.com in the imaging reports section.

A team of semiconductor researchers based in France has used a boron nitride separation layer to grow indium gallium nitride (InGaN) solar cells that were then lifted off their original sapphire substrate and placed onto a glass substrate.

Ph.D. Student Taha Ayari measures the photovoltaic performance of the InGaN solar cells with a solar simulator. (Credit: Ougazzaden laboratory)

By combining the InGaN cells with photovoltaic (PV) cells made from materials such as silicon or gallium arsenide, the new lift-off technique could facilitate fabrication of higher efficiency hybrid PV devices able to capture a broader spectrum of light. Such hybrid structures could theoretically boost solar cell efficiency as high as 30 percent for an InGaN/Si tandem device.

The technique is the third major application for the hexagonal boron nitride lift-off technique, which was developed by a team of researchers from the Georgia Institute of Technology, the French National Center for Scientific Research (CNRS), and Institut Lafayette in Metz, France. Earlier applications targeted sensors and light-emitting diodes (LEDs).

“By putting these structures together with photovoltaic cells made of silicon or a III-V material, we can cover the visible spectrum with the silicon and utilize the blue and UV light with indium gallium nitride to gather light more efficiently,” said Abdallah Ougazzaden, director of Georgia Tech Lorraine in Metz, France and a professor in Georgia Tech’s School of Electrical and Computer Engineering (ECE). “The boron nitride layer doesn’t impact the quality of the indium gallium nitride grown on it, and we were able to lift off the InGaN solar cells without cracking them.”

The research was published August 15 in the journal ACS Photonics. It was supported by the French National Research Agency under the GANEX Laboratory of Excellence project and the French PIA project “Lorraine Université d’Excellence.”

The technique could lead to production of solar cells with improved efficiency and lower cost for a broad range of terrestrial and space applications. “This demonstration of transferred InGaN-based solar cells on foreign substrates while increasing performance represents a major advance toward lightweight, low cost, and high efficiency photovoltaic applications,” the researchers wrote in their paper.

“Using this technique, we can process InGaN solar cells and put a dielectric layer on the bottom that will collect only the short wavelengths,” Ougazzaden explained. “The longer wavelengths can pass through it into the bottom cell. By using this approach we can optimize each surface separately.”

The researchers began the process by growing monolayers of boron nitride on two-inch sapphire wafers using a metalorganic vapor phase epitaxy (MOVPE) process at approximately 1,300 degrees Celsius. The boron nitride surface coating is only a few nanometers thick, and produces crystalline structures that have strong planar surface connections, but weak vertical connections.

The InGaN attaches to the boron nitride with weak van der Waals forces, allowing the solar cells to be grown across the wafer and removed without damage. So far, the cells have been removed from the sapphire manually, but Ougazzaden believes the transfer process could be automated to drive down the cost of the hybrid cells. “We can certainly do this on a large scale,” he said.

The InGaN structures are then placed onto a glass substrate with a backside reflector, which enhances performance. Beyond demonstrating placement atop an existing PV structure, the researchers hope to increase the amount of indium in their lift-off devices to boost light absorption, and to increase the number of quantum wells from five to 40 or 50.

“We have now demonstrated all the building blocks, but now we need to grow a real structure with more quantum wells,” Ougazzaden said. “We are just at the beginning of this new technology application, but it is very exciting.”

In addition to Ougazzaden, the research team includes Georgia Tech Ph.D. students Taha Ayari, Matthew Jordan, Xin Li and Saiful Alam; Chris Bishop and Simon Gautier from Institut Lafayette; Suresh Sundaram, a researcher at Georgia Tech Lorraine; Walid El Huni and Yacine Halfaya from CNRS; Paul Voss, an associate professor in the Georgia Tech School of ECE; and Jean Paul Salvestrini, a professor at Georgia Tech Lorraine and adjunct professor in the Georgia Tech School of ECE.

CITATION: Taha Ayari, et al., “Heterogeneous Integration of Thin-Film InGaN-Based Solar Cells on Foreign Substrates with Enhanced Performance,” (ACS Photonics 2018) https://pubs.acs.org/doi/abs/10.1021/acsphotonics.8b00663

MEMS & Sensors Industry Group (MSIG), a SEMI Strategic Association Partner, today announced four Technology Showcase finalists for the 14th annual MEMS & Sensors Executive Congress (MSEC), October 28-30, 2018, at the Silverado Resort and Spa in Napa, Calif. The MEMS & Sensors Executive Congress is the premier event for industry executives to gain insights on emerging MEMS and sensors opportunities and network with partners, customers and competitors. An early bird registration discount is available until Oct. 8.

The Technology Showcase highlights the latest applications enabled by MEMS and sensors as finalists demonstrate their innovations and vie for attendee votes. The finalists were selected by a committee of industry experts.

Technology Showcase Finalists

  • N5 Sensors’ Micro-Scale Gas Sensors on a Chip enable low-power, high-reliability microscale gas and chemical sensing in small-footprint devices. The chip promises to broaden the implementation of gas and chemical sensing for industrial detection, first response, smart cities, demand-controlled ventilation, wearables and other consumer electronics.
  • NXP Semiconductor’s Asset Tracking Technology uses motion sensors, GPS and edge computing for precision tracking of a package’s journey from origin to delivery point. The technology enables logistics companies to quickly pinpoint and resolve transportation issues.
  • Scorched Ice Inc.’s Smart Skates leverage STMicroelectronics’ inertial measurement unit (IMU) sensors to facilitate real-time diagnostics of a hockey player’s skating technique, condition and performance. The device provides actionable insights to players, coaches, trainers and scouts.
  • SportFitz’s Concussion-Monitoring Device combines real-time measurements of location, position, direction and force of impact as well as big data analytics and embedded protocols to stream data that can help assess potentially concussive brain impacts. The one-inch wearable device is hypoallergenic, waterproof, recyclable, reusable and rechargeable.


The world is edging closer to a reality where smart devices are able to use their owners as an energy resource, say experts from the University of Surrey.

In a study published in the journal Advanced Energy Materials, scientists from Surrey’s Advanced Technology Institute (ATI) detail an innovative solution for powering the next generation of electronic devices using triboelectric nanogenerators (TENGs). Along with human movements, TENGs can capture energy from common sources such as wind, waves, and machine vibration.

A TENG is an energy harvesting device that uses the contact between two or more (hybrid, organic or inorganic) materials to produce an electric current.

Researchers from the ATI have provided a step-by-step guide on how to construct the most efficient energy harvesters. The study introduces a “TENG power transfer equation” and “TENG impedance plots”, tools which can help improve the design for power output of TENGs.
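
The paper’s own equations are not reproduced here, but the purpose of an impedance plot is easy to illustrate with the textbook source-load model: power delivered to a load peaks when the load impedance matches the source impedance. The voltage and impedance values below are placeholders, not TENG measurements.

```python
# Generic impedance-matching illustration (NOT the paper's TENG equation):
# for a source V with internal impedance Rs, power delivered to a load RL
# is P = V**2 * RL / (Rs + RL)**2, which peaks when RL == Rs.
V, Rs = 100.0, 1e6   # placeholder open-circuit voltage and source impedance

def load_power(RL):
    return V**2 * RL / (Rs + RL) ** 2

loads = [10**e for e in range(4, 9)]   # sweep 10 kOhm .. 100 MOhm
for RL in loads:
    print(f"RL = {RL:>11.0f} Ohm  ->  P = {load_power(RL) * 1e3:.3f} mW")

best = max(loads, key=load_power)
print(f"peak near RL = {best:.0f} Ohm (matched to Rs = {Rs:.0f} Ohm)")
```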

Professor Ravi Silva, Director of the ATI, said: “A world where energy is free and renewable is a cause that we are extremely passionate about here at the ATI (and the University of Surrey) – TENGs could play a major role in making this dream a reality. TENGs are ideal for powering wearables, internet of things devices and self-powered electronic applications. This research puts the ATI in a world leading position for designing optimized energy harvesters.”

Ishara Dharmasena, PhD student and lead scientist on the project, said: “I am extremely excited with this new study which redefines the way we understand energy harvesting. The new tools developed here will help researchers all over the world to exploit the true potential of triboelectric nanogenerators, and to design optimised energy harvesting units for custom applications.”

It used to be known as the information superhighway – the fibre-optic infrastructure on which our gigabytes and petabytes of data whizz around the world at (nearly) the speed of light.

And like any highway system, increased traffic has created slowdowns, especially at the junctions where data jumps on or off the system.

Local and access networks especially, such as financial trading systems, city-wide mobile phone networks and cloud computing warehouses, are therefore not as fast as they could be.

This is because increasingly complex digital signal processing and laser-based ‘local oscillator’ systems are needed to unpack the photonic, or optical, information and transfer it into the electronic information that computers can process.

Now, scientists at the University of Sydney have for the first time developed a chip-based information recovery technique that eliminates the need for a separate laser-based local oscillator and complex digital signal processing system.

Dr Amol Choudhary (left) and Professor Ben Eggleton, Director of Sydney Nano, in one of the photonic laboratories at the Sydney Nanoscience Hub. Credit: Louise Cooper/University of Sydney

“Our technique uses the interaction of photons and acoustic waves to enable an increase in signal capacity and therefore speed,” said Dr Elias Giacoumidis, joint lead author of a new study. “This allows for the successful extraction and regeneration of the signal for electronic processing at very-high speed.”

The incoming photonic signal is processed in a filter on a chip made from a glass known as chalcogenide. This material has acoustic properties that allow a photonic pulse to ‘capture’ the incoming information and transport it on the chip to be processed into electronic information.

This removes the need for complicated laser oscillators and complex digital signal processing.

“This will reduce processing time by microseconds, cutting latency or what is referred to as ‘lag’ in the gaming community,” said Dr Amol Choudhary from the University of Sydney Nano Institute and School of Physics. “While this doesn’t sound like a lot, it will make a huge difference in high-speed services, such as the financial sector and emerging e-health applications.”

The photonic-acoustic interaction harnesses what is known as stimulated Brillouin scattering, an effect used by the Sydney team to develop photonic chips for information processing.

“Our demonstration device using stimulated Brillouin scattering has produced a record-breaking narrowband of about 265 megahertz bandwidth for carrier signal extraction and regeneration. This narrow bandwidth increases the overall spectral efficiency and therefore overall capacity of the system,” Dr Choudhary said.
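
Conceptually, narrowband carrier extraction resembles the digital sketch below, where an FFT-domain bandpass in NumPy stands in for the on-chip Brillouin filter; the frequencies and bandwidth are arbitrary, not the values from the Sydney device.

```python
# Conceptual stand-in for on-chip carrier extraction: isolate a narrow
# band around the carrier of a modulated signal with an FFT-domain
# bandpass. Frequencies and bandwidths are arbitrary, not the paper's.
import numpy as np

fs, fc, bw = 10_000.0, 1_000.0, 2.0   # sample rate, carrier, filter BW (Hz)
t = np.arange(0, 1.0, 1 / fs)

message = np.sign(np.sin(2 * np.pi * 5 * t))               # crude data signal
signal = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)  # AM around carrier

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
spectrum[np.abs(freqs - fc) > bw / 2] = 0          # keep only the band at fc
carrier = np.fft.irfft(spectrum, n=len(signal))    # regenerated carrier tone

# The recovered tone is essentially the pure carrier (correlation ~ 1.0).
print(np.corrcoef(carrier, np.cos(2 * np.pi * fc * t))[0, 1])
```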

Group research leader and Director of Sydney Nano, Professor Ben Eggleton, said: “The fact that this system is lower in complexity and includes extraction speedup means it has huge potential benefit in a wide range of local and access systems such as metropolitan 5G networks, financial trading, cloud computing and the Internet-of-Things.”

The study is published today in Optica.

Dr Choudhary said the research team’s next steps will be to construct prototype receiver chips for further testing.

The study was a collaboration with Monash University and the Australian National University.