Better EDA tool integration needed for growing SoC market
07/01/2001
Walfred Raisanen, ICE/Chipworks, Scottsdale, Arizona
While the concept of system-on-a-chip (SoC) has been around for a decade, it is only now that software tools and semiconductor processing have advanced to the point where SoC is accessible to more than the largest design teams. In the past, a SoC project needed 20-100 or more engineers and programmers, supported by an array of a dozen or so million-dollar electronic design automation (EDA) tools. This level of expense and capital investment demands a continuous series of SoC projects and a substantial investment in code and infrastructure development to stitch the various tools together into an efficient design flow. For the small- to medium-sized firm, these are prohibitive requirements.
EDA vendors have responded with integrated toolsets that have largely eliminated the need for home-brew integration code. Also, EDA tools are becoming available on a pay-per-use basis, providing access to even the smallest of design teams. Learning to use them is still a challenge, however. Intellectual property (IP) vendors have refined their predesigned circuit function offerings to make them easier to integrate into a completed design. Standardized buses like IBM's CoreConnect and ARM's AMBA simplify interoperability between cores. Field programmable gate array (FPGA) providers have raised the complexity of their offerings, so programmable logic products are now available across the whole spectrum, from 10,000 gates at $10 to more than a million gates at $2000.
For those unwilling to make even this level of investment, there are more than 80 design services companies offering engineering expertise at a reasonable price. Many have ongoing partnerships with the major foundries and packaging firms, so end-to-end support is available, from system architecture consulting to delivery of finished, tested products.
At Electronica 2000, the CEOs of STMicro, Philips, and Infineon agreed that SoC will determine the semiconductor market direction for the next five years. They said that two thirds of their R&D money is spent on SoC-related projects.
Demand for SoC devices continues to increase, as end product applications require faster performance, increased functionality, smaller packaging, lower costs, and improved reliability. There is a strong demand for SoC in communication, networking, and consumer electronics. In communication products, wireless applications require faster, smaller, and lower-power chips for handsets and other battery-powered devices. Networking applications need increased complexity and bandwidth as Internet traffic expands. Digital audio/video and digital camera applications lead opportunities for SoC in consumer electronics. Real-time MPEG-4 compression of video streams is a reality, demanding the ultimate in processing speed and bandwidth.
Increasing design cost and time
Current silicon technology is capable of fabricating tens of millions of transistors on a single chip. The SIA roadmap predicts that in 15 years, the IC transistor count will increase to around 100 million. It takes an enormous amount of time and effort to design chips using today's technology, especially if each new design is started from scratch. For example, consider the history of MIPS R2000/4000/10000 microprocessor designs as shown in the table.
There is a growing shortage of hardware and software engineers. Moreover, companies want to reach the market quickly so that design costs are recovered before the next product start. Something will have to be done to speed up the design process.
A solution: design reuse
One solution to this problem is to reuse parts from previous designs or to make use of parts designed by third parties. Such parts are not packaged components, but are sold in the form of design information supplied as a graphical description of the layout or, at a higher level, in a hardware description language. An industry survey conducted by ICE revealed more than 60 firms offering various forms of IP, and predictions indicate that this number will grow to more than 300 by 2002. These virtual components must be connected by the design team, and the software engineers must write the code that makes all the pre-designed elements, along with any circuit innovations needed for the new chip, work together. This is system level integration (SLI), where the design is done at higher levels of abstraction, reusing design components where possible. It is supported by the development of EDA tools that can synthesize hardware and software from very high-level descriptions of the system.
Figure 1. SoC year-to-date dollar and unit shipments with aggregate average selling price (AASP); 2001 compared to 2000.
As the industry progresses toward SLI and SoC, reuse of designs and improved design tools allow the design team to concentrate its efforts more on the system application and its high-level design. The design team consists of software, hardware, and systems engineers working together to get the product designed in as short a time as possible. Members of the team need to know how to make use of reusable virtual components; how to make past designs reusable; and how to manage the division of hardware and software to meet systems constraints such as power consumption, speed, chip size, cost, and so forth.
Figure 2. SoC dollar shipments with AASP, 1996-2005.
Despite general industry acceptance of the inevitable logic of design reuse, there is little evidence of its practice. Designers estimate that designing an IP block for reuse will double the effort required, primarily in documentation. With the current acute shortage of designers, most managers forgo design-reuse methodology in the interest of getting the product to market.
Figure 3. SoC dollar shipments compared to non-SoC standard cell, 1996-2005.
Another deterrent to design reuse is the incompatibility in form, fit, and function between IP blocks sourced from various vendors. Many come with no built-in test methodology, and adding built-in self test (BIST) to an existing IP core is as much work as designing it from scratch.
SoC market bulletproof?
The SoC market is almost bulletproof, bucking the trend this year. Driven by a rising tide of demand from the communications world, SoC shipments this year are 17% ahead of last year at this time. ICE predicts that when 2002 rolls around, 2001 SoC dollar shipments to the Americas and Japan will have grown nearly a third over the previous year (Fig. 1).
Figure 4. Moore's Law in memories and large chips: greater integration.
Despite predictions of reduced demand for all ICs and layoffs in the OEM world, the current slump in IC demand, and the accompanying layoffs, are a temporary problem. System makers believed their own PR and overestimated their markets. Inventory burn-off should be complete in a few months. When the market for communication equipment comes back, SoC makers will have the capacity to meet demand. (They almost did it last year.)
Figure 5. Growth in chip size.
Average selling prices are expected to rise rapidly in the near term, reflecting the rapid increase in SoC chip complexity. As embedded FPGA technology and "platform-based" SoC penetrate the market, ICE believes prices will fall, especially as a glut of capacity forms in 2003 when several 300mm fabs ramp up (Fig. 2).
Figure 6. Shrinking transistor size. Moore's Law: MOSFET scaling.
ICE expects SoC designs to rapidly take over the bulk of the business in the SIA category called standard cell (Fig. 3). Unfortunately, the SIA categories are less and less descriptive of the actual content of modern chip designs. The mixture of embedded cores and the rapid growth in embedded memory content will blur traditional distinctions between chip categories and make forecasting and analysis much more complicated.
Benefits of SoC
SoC integration offers many benefits. Cost savings is the most obvious, assuming unit volumes high enough to recover the enormous engineering expenses involved.
Cost benefits. Compacting the functions of many ICs into one SoC means that the support infrastructure for many chips is eliminated, simplifying assembly, reducing manufacturing cost, and improving system reliability through reduction of the total parts count. It is not uncommon to find that the cost of connectors, cables, decoupling capacitors, sockets, and PCBs equals or exceeds the cost of the chips in a conventional design. These costs can be reduced by a factor of 3-10 if SoC design drops the chip count to one or two. Less obvious is the potential power savings: significant amounts of system power are required to drive signals through the board- and card-level interconnects, all of which are eliminated in SoC.
Performance benefits. Better performance is also a benefit, both in higher speed and lower electrical noise. Keeping all signals on the chip shortens the interconnecting wires by two orders of magnitude and more, so signal flight times are reduced by the same amount. Noise coupling between signals is similarly reduced, as are power supply transients caused by lengthy power runs. MPU designers recognize that the only way to reconcile the speed difference between the arithmetic logic unit (ALU) and memory is to embed more and more memory on the processor chip. Embedding permits memory-to-ALU communication at picosecond speeds, as opposed to nanosecond speeds if the memory is off-chip. Also, because I/O pins are in critically short supply, on-chip memory allows very wide interfaces, multiplying the bandwidth to the memory array.
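As a rough illustration of these two effects, the short sketch below compares a board-level trace with an on-chip wire, and a pin-limited external memory bus with a wide embedded-memory port. The specific lengths, bus widths, and clock rates are assumptions chosen for illustration, not figures from ICE.

# Rough comparison of off-chip vs. on-chip signal paths; all lengths,
# bus widths, and clock rates below are illustrative assumptions.
C = 3.0e8                        # speed of light, m/s
velocity = 0.5 * C               # rough signal propagation velocity on an interconnect

board_trace = 0.20               # 20cm board-level trace, in meters
onchip_wire = 0.002              # 2mm on-chip wire, in meters

print(f"board-level flight time: {board_trace / velocity * 1e12:.0f} ps")   # ~1300 ps
print(f"on-chip flight time:     {onchip_wire / velocity * 1e12:.0f} ps")   # ~13 ps

# Pin-limited off-chip memory bus vs. a wide embedded-memory port
offchip_bw = 64 * 133e6 / 8 / 1e9      # 64-bit bus at 133MHz -> ~1.1 GB/s
onchip_bw = 512 * 400e6 / 8 / 1e9      # 512-bit port at 400MHz -> ~25.6 GB/s
print(f"off-chip bandwidth: {offchip_bw:.1f} GB/s, on-chip: {onchip_bw:.1f} GB/s")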
Most SoC designs make extensive use of FPGA structures and memory-based control architectures, making it easy to change system features in the field. This is especially important for projects with multiyear development times, for not all system requirements are fully known when a project starts. Future system upgrades to meet competitive pressures are also easier to implement with the field upgradability that can be economically provided in SoC designs.
Because of their complexity, SoC designs are inherently difficult to test and debug. This has led to the development of sophisticated techniques and procedures employing BIST and boundary scan chain structures. Many IP firms offer BIST structures and boundary scan IP, as well as automated software tools to assist the designer in using these techniques. Their embedded nature means that the methods and tools are best used in a SoC setting, not in a multichip assembly. Great benefits in overall test coverage, test costs, and test times are claimed for BIST methods.
Commitment to SoC
Any product organization contemplating SoC implementation must face the fact that embarking on this course is a long-term commitment. Reuse of core designs is essential to the economics of SoC. This means that the engineering infrastructure must be configured and managed with a long-term strategic view of the business. Core functions must be developed that will be useful for many years into the future, despite dramatic changes in semiconductor process performance, packaging revolutions, and unforeseeable changes in the market environment.
Process technology
Fig. 4 shows the progression of the number of transistors on Intel's family of microprocessors from the 1971 "4004" to the 2000 "Pentium 4," Motorola's 68000 series, and various DRAM and ASIC chips (upper curve). It corresponds closely to Moore's prediction. Many feel this is a self-fulfilling prophecy, but it is also a testimonial to the capability of Intel's engineers over three decades.
Device dimensions & chip density
The absolute size of an IC has grown over the years at approximately 16%/year.
Currently, most production ICs are seen at about 80-100mm². Memory chips tend to be at the high end of the range, while linear chips are at the low end. Because of the rapidly increasing cost of photolithography equipment for very large die, ICE believes that the growth of chip size will, in fact, slow markedly in the near future (Fig. 5).
At the same time, the minimum transistor dimension has fallen, and will continue to fall. Leading-edge production is at 180nm (0.18µm). Leading-edge R&D parts are at 130nm, with 100nm predicted within the next few years. This progress permits the continued increase in transistor count per IC even as the die size stagnates.
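To see why, note that the number of transistors that fit on a fixed die area grows roughly as the inverse square of the minimum feature size. The short sketch below works through the process nodes mentioned above; the fixed 100mm² die area is an assumed value for illustration only.

# Transistor density grows roughly as the inverse square of the feature size.
# Node values are from the text; the fixed die area is an assumed illustration.
nodes_nm = [180, 130, 100]
die_area_mm2 = 100               # assumed constant die size, mm^2

base = nodes_nm[0]
for node in nodes_nm:
    density_gain = (base / node) ** 2
    print(f"{node}nm: ~{density_gain:.1f}x the transistors of a {base}nm die "
          f"of the same {die_area_mm2}mm^2 area")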
Figure 7. Growth in embedded memory.
Shrinking transistor size also enables dramatic improvements in processing speed and frequency response (Fig. 6). CMOS technology dissipates significant power only when the devices are switching; in the past, standby power, or static power dissipation, was negligibly small. Modern devices have very low threshold voltages, which has allowed power supply voltages to fall as devices shrink. Since dynamic power is proportional to the power supply voltage squared, power dissipation has been kept to moderate levels. Low threshold voltages also raise the leakage current of each idle transistor, however, and when chips contain hundreds of millions of transistors, those leakage currents add up to significant levels; total power dissipation per SoC has risen rapidly in recent years. The industry has reached the practical limit today, with leading-edge chips dissipating 50-100W at full speed. ICE believes that this will be a major limiting factor in the future growth of SoC complexity, and will add to the pressure to make chips smaller, rather than larger.
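A minimal back-of-the-envelope sketch of that argument follows. Dynamic power per node scales as alpha·C·V²·f and is paid only by nodes that are switching, while leakage is paid by every transistor on the die; all of the numeric values below are assumptions for illustration, not measured data.

# Dynamic power is paid only by switching nodes; static leakage is paid by
# every transistor on the die. All numeric values are illustrative assumptions.

def dynamic_power(c_farads, vdd, freq_hz, activity):
    """Average switching power of one node: alpha * C * Vdd^2 * f."""
    return activity * c_farads * vdd ** 2 * freq_hz

vdd = 1.5                                      # assumed supply voltage, V
p_node = dynamic_power(2e-15, vdd, 1e9, 0.1)   # ~0.45 microwatts per node
p_dynamic = p_node * 20e6                      # 20 million switching nodes -> ~9 W

p_leakage = 100e-9 * vdd * 100e6               # 100nA/device * Vdd * 10^8 devices -> ~15 W

print(f"dynamic: {p_dynamic:.0f} W, static leakage: {p_leakage:.0f} W")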
Intel Fellow Shekhar Borkar presented his point of view at the ASIC/SoC conference in September 2000. He showed that the increasing speed of processing elements inexorably increases their power dissipation, and the continuous shrinkage of the transistors used increases the device density, driving the power density up at accelerating rates. The industry is arguably already at the limit for practical power extraction and delivery technology, so something has to change, and fast.
The SRAM content of microprocessors has continued to rise, as Fig. 7 shows. This has a beneficial effect on the total power dissipation, as the average power of SRAM is much lower than for logic. It also helps improve processing speed by reducing access time. The delay associated with off-chip wiring and data alignment is orders of magnitude larger than the same delay for on-chip wiring.
Both analyses suggest that memory will dominate logic in the near future, with the ratio of memory to logic increasing without limit. Since the power dissipation in memory is orders of magnitude lower than in fast logic, and since an increasingly smaller proportion of a larger memory is actually switching at any one time, growing the memory content solves the power density problem. It also permits continued growth in transistor count while avoiding a combinatorial explosion of the design effort.
This organizational concept was supported by the ICCAD 2000 keynote talk given by Bill Dally of Stanford, who argued that SoC chips are now, and will remain, limited in performance mainly by wiring delays, not by gate delays as in the past. This points to the need to change the design flow from the current wires-last method to one where the interconnect wiring plan is integral to the initial floor-planning and architectural constraints. He also demonstrated, with actual chip designs, that the lack of area visibility in present physical design tools leads to a "chop suey" layout in which the communications constraints are lost in the process. Inevitably, this leads to inefficient and poorly performing layouts, and makes the process of fixing timing problems haphazard and slow. In his view, a wires-first design can shrink the space needed for control and processing logic by a factor of 2 to 4, and improve speed by a factor of 1.4 to 2.
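The scaling behind that argument can be sketched in a few lines: the delay of an unbuffered global wire grows with the square of its length (the Elmore delay of a distributed RC line), while gate delay keeps shrinking with each process node. The per-millimeter resistance and capacitance and the representative gate delay below are assumptions for illustration, not data from the keynote.

# Elmore delay of an unbuffered distributed RC wire grows as length squared,
# while gate delay keeps shrinking; r, c, and the gate delay are assumptions.
r_per_mm = 50.0                 # wire resistance, ohms/mm (assumed)
c_per_mm = 0.2e-12              # wire capacitance, F/mm (assumed)
gate_delay = 50e-12             # representative gate delay, ~50 ps (assumed)

def wire_delay(length_mm):
    """Distributed RC (Elmore) delay: 0.5 * r * c * L^2."""
    return 0.5 * r_per_mm * c_per_mm * length_mm ** 2

for length in (1, 5, 10, 20):   # wire length in mm
    d = wire_delay(length)
    print(f"{length:>2}mm wire: {d * 1e12:6.0f} ps  (~{d / gate_delay:.0f} gate delays)")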
Walfred Raisanen is VP of operations at Integrated Circuit Engineering (ICE)/Chipworks, 17350 N. Hartford Dr., Scottsdale, AZ 85255; ph 480/515-9780, fax 480/515-9781, e-mail [email protected], www.ice-corp.com.
The trends in this article are discussed in detail in ICE's publication, ASIC System-on-a-Chip for the 21st Century. Also detailed are trends in the EDA industry, the foundry business, and the global market for SoC. Another ICE guide lists 80 firms offering design services. See ICE's web site for ordering details.