It’s springtime for creative engineers
04/01/2008
It’s springtime for engineers in electronics, computing, and chip architecture.
That was the message of a stimulating talk by Paolo Gargini, Intel Fellow and guiding light for the International Technology Roadmap for Semiconductors, at the last SEMICON West. He pointed out that since the early 1970s, performance improvement in every new chip generation has been a side benefit of following the scaling rules that IBM's Robert Dennard laid out for IC transistors in 1974. Chip designers got faster circuits for free!
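For readers who want the arithmetic behind that free lunch, the constant-field rules can be summarized as follows (a standard textbook restatement, not figures from Gargini's talk; κ > 1 is the scaling factor):

```latex
% Constant-field (Dennard) scaling by a factor \kappa > 1:
\begin{align*}
  L,\; W,\; t_{ox} &\longrightarrow 1/\kappa && \text{(device dimensions shrink)}\\
  V &\longrightarrow 1/\kappa && \text{(supply voltage drops)}\\
  N_A &\longrightarrow \kappa && \text{(doping rises)}\\
  \tau \sim CV/I &\longrightarrow 1/\kappa && \text{(gate delay falls: faster circuits)}\\
  P \sim VI &\longrightarrow 1/\kappa^{2} && \text{(power per device falls)}\\
  P/\text{area} &\longrightarrow 1 && \text{(power density stays constant)}
\end{align*}
```

Every generation thus delivered smaller, faster, cooler transistors all at once, which is exactly the bargain that has now broken down.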
Sadly, we all eventually learned that there is no such thing as a free lunch. As conductors got narrower, resistance climbed. As they were crammed closer together, capacitance increased. A higher RC time constant slowed circuits. Copper helped for a while, but it has taken longer to figure out how to make low-k dielectrics that lower the capacitance. Meanwhile, the gate dielectric got too thin, raising leakage current, and short-channel effects set in as the gate length shrank. It has been tough to find good high-k dielectric/metal gate combinations that allow a physically thicker dielectric at the same equivalent oxide thickness and eliminate the polysilicon depletion-layer problem.
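The interconnect side of the problem falls out of simple geometry. In a rough parallel-wire model (an idealized sketch, not a process-specific calculation), shrinking the wire width W and the spacing s drives the delay up:

```latex
% Idealized wire of length L, width W, thickness t, spacing s:
\begin{align*}
  R &= \rho\,\frac{L}{W t} && \text{(narrower cross-section} \Rightarrow R \uparrow)\\
  C &\approx \varepsilon\,\frac{L t}{s} && \text{(closer neighbors} \Rightarrow C \uparrow)\\
  \tau &= RC \approx \rho\,\varepsilon\,\frac{L^{2}}{W s}
\end{align*}
```

The two knobs the industry reached for are visible in the product: copper lowers ρ, and low-k dielectrics lower ε.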
While semiconductor processes and materials are being pushed to physical limits, the engineers who have depended so long on the multifold benefits of shrinking devices are now scrambling to make do with less. They are finding ways to boost performance without simply depending on faster circuitry. Multiprocessor chips are now common; multilevel cache memories help speed program execution; and programmers are turning to techniques like multi-threading to get more done with the hardware available. Specialized processors, such as digital signal processors (DSPs) and graphics chips, have proliferated. Now a number of different types of tuned processor architectures are emerging, and soon our systems will have a wide variety of specially designed execution units. At ISS08, for example, Doug Grose, senior VP for AMD, described accelerated processing units (APUs) that will do customized processing for multiple application areas, an approach he said 45nm technology has made feasible. That technology, jointly developed by AMD and IBM using immersion lithography, strained silicon, and ultra-low-k dielectrics, is being ramped this year, and the first APU product will be called Swift, Grose said.
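To make the multi-threading point concrete, here is a minimal POSIX-threads sketch of my own (an illustrative example, not anything AMD or Intel presented): an independent loop is split across four workers so a multiprocessor chip can chew on all of it at once.

```c
/* Minimal multi-threading sketch: split an independent loop across
 * NTHREADS workers so a multicore chip can run the slices in parallel.
 * Illustrative only; compile with: cc sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITEMS   1000000        /* divisible by NTHREADS */

static double data[NITEMS];
static double partial[NTHREADS];

static void *worker(void *arg)
{
    long id = (long)arg;                     /* thread index */
    long lo = id * (NITEMS / NTHREADS);
    long hi = lo + (NITEMS / NTHREADS);
    double sum = 0.0;
    for (long i = lo; i < hi; i++)           /* each thread owns a slice */
        sum += data[i] * data[i];
    partial[id] = sum;                       /* no sharing, no locks */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NITEMS; i++)
        data[i] = (double)i;
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum of squares = %g\n", total);
    return 0;
}
```

Nothing here runs faster on a single core; the payoff appears only when spare execution units exist, which is precisely the hardware trend just described.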
Another approach to combining specialized processors on one chip is to come up with generalized platform chips that are somewhat malleable, so they can be made in quantity and then architected as needed with programmable links and software.
Some applications can benefit from a reconfigurable processor, which can switch architectures between instructions to optimize execution.
A fundamental limitation of all these efforts is the constraint of the von Neumann architecture, the stored-program computer designed for sequential processing. Doing anything in parallel is a beast, especially when there are so many program loops. An alternative architecture using multiple processors operating in synchronized parallelism was devised at Thinking Machines of Cambridge, MA, in the mid-1980s. This new type of computer was designed by Danny Hillis, who had recently received his PhD from MIT. His design was structured around a parallel instruction in LISP, an artificial intelligence language popular at MIT. Wiring up the parallel links between processors was a huge challenge, and Danny credited a former DEC engineering manager with helping him get the job done. The machine was especially good at very rapid context search, even of complex graphic images.
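Why loops are such a beast is easy to show. In this small C sketch (my own example, not from Hillis's work), the first loop parallelizes trivially because its iterations are independent, while the second carries a dependence from each iteration to the next and, as written, must run sequentially:

```c
/* Illustration: parallelizable loop vs. loop-carried dependence. */

/* Independent iterations: any number of processors can each take a
 * slice of i, the pattern a Connection Machine-style design exploits. */
void scale(double *a, const double *b, double k, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = k * b[i];          /* a[i] depends only on b[i] */
}

/* Loop-carried dependence: iteration i needs the result of i-1,
 * so extra execution units sit idle no matter how many there are. */
void prefix_sum(double *a, int n)
{
    for (int i = 1; i < n; i++)
        a[i] += a[i - 1];         /* must wait for the previous step */
}
```

(Parallel scan algorithms can in fact restructure the second loop, but that kind of algorithmic rethinking, rather than a free hardware speedup, is exactly the point.)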
What was different about this machine was that the HARDWARE was parallel. Some years ago computer designers tried something called “data flow” architecture to attack the problem of so many loops in sequential programs. Each instruction carried a tag, and an execution unit wouldn’t fire until it received a full set of tags, indicating that all the calculations in the loop had been completed and the next step could proceed. Bill Joy, a one-time Sun Microsystems guru, commented that anything that had to do that much work to add 2 and 2 would never fly.
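To make the tag mechanism concrete, here is a toy sketch of my own construction (not how any real dataflow machine was built): a node counts its arriving operand tokens and fires only when the full set is present.

```c
/* Toy dataflow node: fires only after all operand tokens arrive,
 * in whatever order they show up. Illustrative sketch only. */
#include <stdio.h>

typedef struct {
    const char *name;
    int needed;          /* tokens required before firing */
    int arrived;         /* tokens received so far */
    double operands[2];
} Node;

/* Deliver one tagged operand; fire when the set is complete. */
static void send_token(Node *n, int slot, double value)
{
    n->operands[slot] = value;
    if (++n->arrived == n->needed) {         /* full set of tags */
        printf("%s fires: %g\n", n->name,
               n->operands[0] + n->operands[1]);
    } else {
        printf("%s waits (%d of %d tokens)\n",
               n->name, n->arrived, n->needed);
    }
}

int main(void)
{
    Node add = { "add", 2, 0, {0.0, 0.0} };
    send_token(&add, 0, 2.0);    /* first token: the node must wait */
    send_token(&add, 1, 2.0);    /* second token: now it can fire   */
    return 0;
}
```

Joy's complaint is visible right in the output: two rounds of bookkeeping just to add 2 and 2.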
As we struggle to boost performance without much faster circuits, some clever architects may rise to the challenge and go beyond the von Neumann model. Hardware might become more parallel, or more adaptable to specialized processing requirements. Our brain readily does many tasks that computers still struggle to accomplish, and we may learn how to improve our systems through deeper study of biological models.
The challenge is there, engineers. The shrink has been your crutch for decades. Now it’s your turn.
Robert Haavind
Editorial Director