FPGA-to-PCB System Designs

Optimizing the Process

01/01/2006

BY DAVID WIENS

Integration of FPGAs into system designs used to follow the traditional “black box” process for silicon devices: PCB designers ignored the contents of the device and its package, and worried only about how it would fit on a schematic sheet and on the board layout. The increased functionality and density of today’s FPGAs, coupled with escalating system performance requirements, are driving a new design paradigm. FPGA and PCB designers must now collaborate continuously throughout the design process.

FPGA device complexity and package densities have exploded in recent years. In 1998, one company* introduced a device** containing 680 pins - 512 of which were usable - in a 1-mm-pitch BGA. By late 2000, the next generation of the device was announced with 1,108 usable pins, also in a 1-mm-pitch, 1,517-pin BGA. By early 2002, packages had grown to 1,704 pins, and to 1,760 pins by July 2005.

FPGA Design

Given the industry’s system-on-chip trend, with increased functionality driven into fewer, more highly integrated components, FPGA designers influence a growing percentage of the overall system and, as a result, the total number of pins in the design. Unfortunately, that authority doesn’t always come with the associated responsibility to ensure that pin usage has been optimized for the system and not just for the FPGA. Until recently, the FPGA designer didn’t have the proper EDA tools to adequately consider the consequences of the I/O pin assignments on the rest of the system.

To optimize system performance effectively, the FPGA and PCB designers must collaborate, frequently communicating design changes that affect each other. Changes to an FPGA must be communicated to the PCB design and evaluated within the PCB design context, and any necessary changes must be communicated back to the FPGA design process to converge on the product requirements. Because FPGA and PCB teams are often in independent arms of the engineering organization, optimizing this process can be difficult. Design teams are often geographically dispersed, and team members cannot communicate effectively. Adding to the confusion are the different constraints, methodologies, and languages of FPGA and PCB design.

FPGA I/O Assignments and System Timing Optimization

A simple example with a small FPGA and a connector illustrates the impact of I/O pin assignments on the rest of the system. Figure 1a shows a “rat’s nest” view of a PCB, where I/O pin assignments for a 32-bit data bus were chosen by FPGA place-and-route tools. Because there is no view into the PCB, pin locations were chosen to provide the best results for the FPGA. Notice the long connections to the far side of the FPGA, and the intersecting connections. Figure 1b shows the actual routing on the PCB. Notice the severe amount of “tromboning” that the PCB-routing tools inserted into some of the traces to equalize trace lengths, minimizing timing skew for the bus.


Figures 1a & b: Inferior system-level FPGA I/O pin assignment.

In Figure 2, the data-bus pins were moved to improve their proximity to the connector. PCB routing is significantly improved. The longest trace has been reduced from 3.6" to 1.8", and is 320 ps faster. Extrapolating this example to a more complex design shows that optimizing the FPGA’s I/O pin assignments helps reduce PCB-interconnect length and congestion, possibly eliminating PCB layers.


Figures 2a & b: Improved system-level FPGA I/O pin assignment.
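The 320-ps figure is easy to sanity-check: a trace’s delay scales with its routed length and the board material, and FR-4 striplines run on the order of 180 ps per inch. The short Python sketch below reproduces the arithmetic; the 180-ps/inch figure is an assumption, and the real value depends on the stack-up.

# Rough check of the Figure 2 timing improvement using an assumed
# propagation delay of ~180 ps/inch for an FR-4 stripline; the real
# number depends on the dielectric and trace geometry of the stack-up.

PS_PER_INCH = 180.0  # assumed stripline propagation delay

def trace_delay_ps(length_in):
    """Delay of a trace of the given length in inches, in picoseconds."""
    return length_in * PS_PER_INCH

before = trace_delay_ps(3.6)  # longest trace with the original pinout
after = trace_delay_ps(1.8)   # the same net after the pins were moved
print(f"before: {before:.0f} ps  after: {after:.0f} ps  "
      f"saved: {before - after:.0f} ps")
# At the assumed 180 ps/inch, halving the 3.6" trace saves ~324 ps,
# in line with the ~320 ps improvement cited above.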

This improvement in PCB routing does not come without costs to the FPGA; the PCB and FPGA designers need to cooperate so that the pin assignments work for both the FPGA and the PCB. To close the loop on this example, the FPGA designer would have to re-synthesize and re-run place-and-route with the new pin assignments to ensure that the device still meets its timing requirements.
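Feeding the agreed assignments back into the FPGA flow is typically done through location constraints before re-synthesis and place-and-route. The sketch below writes a Xilinx UCF-style constraint file from a simple pin map; the net names and pin sites are invented for illustration, and the exact constraint format varies by vendor and tool version.

# Emit FPGA pin-location constraints from an agreed pin map. The
# "NET ... LOC = ..." form is Xilinx ISE-era UCF syntax; other vendors
# use different formats, and the net and pin names below are invented.

pin_map = {
    "data_bus<0>": "AB12",
    "data_bus<1>": "AB13",
    "data_bus<2>": "AA12",
}

with open("pcb_optimized.ucf", "w") as ucf:
    for net, pin in sorted(pin_map.items()):
        ucf.write(f'NET "{net}" LOC = "{pin}";\n')
# The FPGA designer re-runs synthesis and place-and-route with this
# constraint file to confirm the device still closes timing internally.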

Signal Integrity Considerations

Signal timing isn’t the only system constraint with tight margins. Modern FPGAs have banks of I/Os dedicated to high-speed communications in the multi-gigabit range. Ensuring clean transmission of a gigabit signal from the driver, through the board, and to a receiving device requires tight control of the board’s interconnect (e.g., traces and vias) and driver/receiver operating characteristics.

For signals operating in the multi-gigahertz frequency range, vias can act like miniature antennas, degrading signal quality with every added via. The industry-standard PCI Express bus specification explicitly suggests using fewer than two vias per trace and matching trace lengths to within a 0.025% tolerance.
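Rules like these can be screened automatically long before a full signal-integrity run. The hypothetical checker below tests a matched group of traces against a via-count ceiling and a length-matching tolerance; the limits echo the figures quoted above, and the net data is invented for illustration.

# Screen a matched group of high-speed traces against two simple routing
# rules: a via-count ceiling and a length-matching tolerance. The net
# names and lengths are invented; real limits come from the bus spec.

from dataclasses import dataclass

@dataclass
class Trace:
    net: str
    length_in: float   # routed length in inches
    via_count: int

MAX_VIAS = 1                # "fewer than two vias per trace"
MATCH_TOLERANCE = 0.00025   # 0.025% length-matching tolerance

def check_group(traces):
    """Return human-readable rule violations for one matched group."""
    violations = []
    longest = max(t.length_in for t in traces)
    shortest = min(t.length_in for t in traces)
    if (longest - shortest) > MATCH_TOLERANCE * longest:
        violations.append(
            f'length mismatch {longest - shortest:.4f}" exceeds '
            f'{MATCH_TOLERANCE:.3%} of the longest trace')
    for t in traces:
        if t.via_count > MAX_VIAS:
            violations.append(f"{t.net}: {t.via_count} vias (max {MAX_VIAS})")
    return violations

group = [Trace("lane0_p", 4.000, 1), Trace("lane0_n", 4.003, 3)]
for problem in check_group(group):
    print(problem)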

Drive strength plays a critical role in effective signal switching. In Figure 3a, the designer has a choice of drive strengths from as low as 2 mA all the way up to 24 mA. Figure 3b shows the effects of an 8-mA driver - a delayed transition time and possible false switching edges. Figure 3c shows the results of a 24-mA driver. Since the 24-mA driver pulls more current, a tradeoff between power and signal integrity requirements has to be made (a satisfactory solution may be found in the 12- to 16-mA range).


Figures 3a-c: FPGA driver strength selection.
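A first-order feel for this tradeoff is available from a lumped model well before detailed simulation: treating the driver as a resistive source charging the load capacitance, a stronger (lower-impedance) setting gives a faster edge but draws more switching current. The sketch below uses that crude RC approximation with assumed supply and load values; it is no substitute for the detailed signal-integrity analysis behind Figure 3.

# First-order drive-strength tradeoff using a lumped RC edge-rate model:
# output impedance is approximated as V_supply / I_drive and the 10-90%
# rise time as 2.2 * R * C. Supply and load values are assumptions; real
# analysis uses the device's I/O models and the board's parasitics.

V_SUPPLY = 3.3    # volts, assumed I/O bank supply
C_LOAD = 10e-12   # farads, assumed lumped load (receiver pins plus trace)

def edge_estimate(drive_ma):
    """Return an estimated 10-90% rise time in picoseconds."""
    r_out = V_SUPPLY / (drive_ma * 1e-3)  # crude driver output impedance
    return 2.2 * r_out * C_LOAD * 1e12

for setting in (2, 8, 12, 16, 24):
    print(f"{setting:2d} mA drive: ~{edge_estimate(setting):6.0f} ps edge, "
          f"~{setting} mA peak switching current")
# Weak settings give slow (and, on a real net, possibly non-monotonic)
# edges; the strongest setting burns the most current, which is why a
# mid-range 12- to 16-mA choice is often the compromise noted above.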

One of the major challenges posed by large FPGAs is the conceptually straightforward process of creating symbols, placing them on a schematic, and wiring them to the rest of the design. Creating a 1,200- or 1,500-pin symbol is not only daunting, but infeasible: symbols representing such large devices cannot be placed on a single schematic sheet. The symbols must be fractured, and the schematic design tools must recognize those fractures as parts of a single, larger component. Once the fractures are created and placed, they still must be connected to the rest of the system; making net connections to 1,500 pins is as overwhelming as creating the symbols.
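Schematic tools handle this by fracturing the device into several symbol sections that all map back to one component. The sketch below illustrates the basic idea, grouping a pin list into per-bank fragments that share a single reference designator; the pin and bank names are invented, and real flows generate the fractures from the FPGA vendor’s pin report.

# Fracture a large FPGA pin list into per-bank schematic symbol sections.
# Every section carries the same reference designator so the netlister
# treats the fragments as one physical component. Pin data is invented.

from collections import defaultdict

REFDES = "U1"  # all fractures belong to the same physical part

pins = [  # (package pin, signal name, I/O bank), normally from the vendor pin file
    ("A1", "IO_L1P_0", "bank0"),
    ("A2", "IO_L1N_0", "bank0"),
    ("B3", "IO_L5P_2", "bank2"),
    ("C7", "GND", "power"),
    ("D9", "VCCINT", "power"),
]

fractures = defaultdict(list)
for number, name, bank in pins:
    fractures[bank].append((number, name))

for section, (bank, bank_pins) in enumerate(sorted(fractures.items()), start=1):
    print(f"{REFDES} section {section} ({bank}): {len(bank_pins)} pins")
    for number, name in bank_pins:
        print(f"  pin {number:>3}  {name}")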

Most FPGAs go through a minimum of three to five iterations before converging on a final I/O pin assignment. Within minutes, an FPGA designer can completely reconfigure the FPGA device’s interface to the PCB, and those few minutes may cause catastrophic changes to the PCB physical design, sometimes requiring the PCB designer to start over. Somebody - or something - needs to maintain the symbols and schematics with each design change, and communicate and synchronize those changes between every engineer, tool, and database. Considering that 30% of designs contain two or more FPGAs, it is impractical to attempt to manage all of this manually. Many companies sidestep the issue by locking the pin assignments made by the FPGA designer, but this restricts the PCB designer’s ability to optimize the routing, turning the FPGA into a programmable ASIC.
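Keeping the two sides in step is essentially a synchronization problem: each time the FPGA’s pin report changes, the deltas must be detected and pushed into the schematic symbols and the PCB netlist. A minimal sketch of that comparison is shown below, assuming each side reduces to a signal-to-pin mapping; commercial FPGA/PCB integration tools automate this, including the symbol and netlist updates.

# Detect pin-assignment deltas between the FPGA's latest pin report and
# the pin map the PCB schematic currently uses. Each side is reduced to a
# signal -> package-pin mapping; the example data is invented.

def diff_pinouts(fpga, pcb):
    """Classify the changes the PCB side needs to absorb."""
    return {
        "moved": [(sig, pcb[sig], fpga[sig])
                  for sig in fpga.keys() & pcb.keys() if fpga[sig] != pcb[sig]],
        "added": sorted(fpga.keys() - pcb.keys()),
        "removed": sorted(pcb.keys() - fpga.keys()),
    }

fpga_report = {"data0": "AB12", "data1": "AA13", "clk": "Y14"}
pcb_symbols = {"data0": "AB12", "data1": "AB13", "strobe": "W9"}

for kind, items in diff_pinouts(fpga_report, pcb_symbols).items():
    print(kind, items)
# Every "moved" or "added" entry implies a symbol update and a possible
# reroute on the PCB; "removed" nets must be cleaned out of the netlist.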

To route these dense packages, PCB designers must choose between shrinking the dimensions of their existing laminate stack-up geometries, resulting in lower fabrication yields, or adopting a potentially more expensive fabrication process like high-density interconnect (HDI). The escalating complexity of FPGAs is also driving requirements for additional passive devices (e.g., resistor terminations and decoupling capacitors), and alternatives such as embedded passives are being used on inner layers of the board. Both advanced technologies reduce interconnect dimensions and board area and improve system performance.

Conclusion

After making the decision to integrate FPGAs into a PCB design, designers encounter FPGA-to-PCB integration issues, and these issues only grow as FPGA capacity and performance increase. Design teams can learn to deploy FPGA technology in their product development process, accounting for and minimizing integration costs, resulting in an optimized product design.

* Xilinx
** Virtex

DAVID WIENS, director of business development, System Design Division, may be reached at Mentor Graphics Corp., 1811 Pike Rd., Suite 2F, Longmont, CO 80501; 720/494-1086; E-mail: [email protected].