
Power device characterization and reliability testing require instrumentation capable of sourcing higher voltages and making more sensitive current measurements than ever before.

Silicon carbide (SiC), gallium nitride (GaN), and similar wide bandgap semiconductor materials offer physical properties superior to those of silicon, which allows for power semiconductor devices based on these materials to withstand high voltages and temperatures. These properties also permit higher frequency response, greater current density and faster switching. These emerging power devices have great potential, but the technologies necessary to create and refine them are less mature than silicon technology. For IC fabricators, this presents significant challenges associated with designing and characterizing these devices, as well as process monitoring and reliability issues.

Before wide bandgap devices can gain commercial acceptance, their reliability must be proven, and the demand for higher reliability is growing. The continuous drive for greater power density at the device and package levels has consequences in terms of higher temperatures and steeper temperature gradients across the package. New application areas often mean more severe ambient conditions. For example, in automotive hybrid traction systems, the temperature of the cooling liquid for the combustion engine may reach 120°C; to provide sufficient margin, the maximum junction temperature (TJMAX) must therefore be increased from 150°C to 175°C. In safety-critical applications such as aircraft, the zero-defect concept has been proposed to meet stricter reliability requirements.

HTRB reliability testing
Along with the drain-source voltage (VDS) ramp test, the High Temperature Reverse Bias (HTRB) test is one of the most common reliability tests for power devices. In a VDS ramp test, the drain-source voltage is stepped from a low voltage to a voltage higher than the rated maximum drain-source voltage while specified device parameters are evaluated. This test is useful for tuning the design and process conditions, as well as verifying that devices deliver the performance specified on their data sheets. For example, dynamic RDS(ON), monitored using a VDS ramp test, provides a measurement of how much a device’s ON-resistance increases after being subjected to a drain bias. A VDS ramp test offers a quick form of parametric verification; in contrast, an HTRB test evaluates long-term stability under high drain-source bias. HTRB tests use biased operating conditions at elevated temperature to accelerate thermally activated failure mechanisms. During an HTRB test, the device samples are stressed at or slightly below the maximum rated reverse breakdown voltage (usually 80% or 100% of VRRM) at an ambient temperature close to their maximum rated junction temperature (TJMAX) over an extended period (usually 1,000 hours).
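
As a rough illustration, the HTRB stress conditions described above can be captured in a simple parameter structure. This is only a sketch; the class, field names, and the example values (a hypothetical 1,200 V device) are placeholders, not recommendations for any specific device or standard.

```python
from dataclasses import dataclass

@dataclass
class HTRBConditions:
    """Illustrative HTRB stress settings (placeholder values only)."""
    v_rrm: float            # rated maximum reverse voltage of the DUT, in volts
    stress_fraction: float  # fraction of VRRM applied during stress (typically 0.8 to 1.0)
    ambient_temp_c: float   # chamber temperature, close to the rated TJMAX
    duration_hours: float   # total stress time, commonly 1,000 hours

    @property
    def stress_voltage(self) -> float:
        # Voltage actually forced across drain and source during the stress
        return self.v_rrm * self.stress_fraction

# Example: a hypothetical 1,200 V device stressed at 80% of VRRM at 150 degC for 1,000 hours
conditions = HTRBConditions(v_rrm=1200.0, stress_fraction=0.8,
                            ambient_temp_c=150.0, duration_hours=1000.0)
print(f"Stress voltage: {conditions.stress_voltage:.0f} V")
```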

Because HTRB tests stress the die, they can lead to junction leakage. There can also be parametric changes resulting from the release of ionic impurities onto the die surface, from either the package or the die itself. This test’s high temperature accelerates failure mechanisms according to the Arrhenius equation, which describes the temperature dependence of reaction rates; in effect, the test simulates one conducted over a much longer period at a lower temperature. The leakage current is monitored continuously throughout the HTRB test, and a fairly constant leakage current is generally required to pass it. Because it combines electrical and thermal stress, this test can be used to check the junction integrity, crystal defects and ionic-contamination level, which can reveal weaknesses or degradation effects in the field depletion structures at the device edges and in the passivation.
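
For reference, the thermal acceleration implied by the Arrhenius model is commonly expressed as an acceleration factor relating the time to failure at the use temperature to that at the stress temperature (both in kelvin), where Ea is the activation energy of the failure mechanism and k is Boltzmann’s constant. This is the standard textbook form, shown for orientation rather than taken from the article:

```latex
% Arrhenius acceleration factor: AF > 1 means the stress condition ages the
% device faster than the use condition by that factor.
AF = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\text{use}}} - \frac{1}{T_{\text{stress}}}\right)\right]
```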

Instrument and measurement considerations
Power device characterization and reliability testing require instrumentation capable of sourcing higher voltages and making more sensitive current measurements than ever before. During operation, power semiconductor devices undergo both electrical and thermal stress: in the ON state, they must pass tens or hundreds of amps with minimal loss (low voltage, high current); in the OFF state, they must block thousands of volts with minimal leakage current (high voltage, low current). In addition, during the switching transient, they are briefly subjected to both high voltage and high current. The high current in the ON state generates a large amount of heat, which may degrade device reliability if it is not dissipated efficiently.

Reliability tests typically involve high voltages, long test times, and often multiple devices under test (wafer-level testing). As a result, properly designed test systems and measurement plans are essential to avoid breaking devices, damaging equipment, and losing test data. Consider the following factors when configuring test systems and plans for executing VDS ramp and HTRB reliability tests: device connections, current limit control, stress control, proper test abort design, and data management.

Device connections: Depending on the number of instruments and devices or the probe card type, various connection schemes can be used to achieve the desired stress configurations. When testing a single device, a user can apply voltage at the drain only for VDS stress and measure, which requires just one source measure unit (SMU) instrument per device. Alternatively, a user can connect the gate and source to SMU instruments as well, which provides more control: current can be measured at all terminals, the range of the VDS stress can be extended, and a voltage can be set on the gate to simulate a practical circuit condition. For example, to evaluate the device in the OFF state (including the HTRB test), the gate-source voltage (VGS) might be set to VGS ≥ 0 for a P-channel device, or VGS = 0 for an enhancement-mode device. Careful consideration of device connections is essential for multi-device testing. In a vertical device structure, the drain is common, so it is not used to source the stress; otherwise, the breakdown of a single device would terminate the stress for all devices. Instead, the source and gate are used to control the stress.
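
To make these connection schemes concrete, the sketch below models the SMU-to-terminal assignments for the single-device case and for a multi-device, common-drain configuration. The SMU names and dictionary layout are hypothetical placeholders, not an instrument driver or any particular test-software format.

```python
# Hypothetical terminal-to-SMU mapping for the two stress configurations
# discussed above. "SMU1", "SMU2", ... are placeholder instrument names.

# Single device: the drain sources the VDS stress; gate and source SMUs add
# per-terminal current measurement and let VGS be set (e.g., VGS = 0 to hold
# an enhancement-mode device OFF).
single_device = {
    "drain":  {"smu": "SMU1", "role": "VDS stress + measure"},
    "gate":   {"smu": "SMU2", "role": "VGS bias + leakage measure"},
    "source": {"smu": "SMU3", "role": "negative bias to extend VDS range"},
}

# Multiple vertical devices: the drain is common, so stress is applied from
# each device's source and gate instead; a breakdown on one device then does
# not terminate the stress on the others.
multi_device_common_drain = {
    "common_drain": {"smu": "SMU1", "role": "common node, not used for stress"},
    "device_1": {"gate": "SMU2", "source": "SMU3"},
    "device_2": {"gate": "SMU4", "source": "SMU5"},
}
```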

Current limit control: The current limit should be adjustable at breakdown to avoid damage to the probe card and device. The current limit is usually set by estimating the maximum current during the entire stress process, for example, the current at the beginning of the stress. However, when a device breakdown occurs, the current limit should be lowered accordingly; otherwise, the current clamps at the original, higher limit, and the sustained high current can melt the probe card tips and damage the devices. Some modern solutions offer dynamic limit change capabilities, which allow setting a varying current limit for the system’s SMU instruments while the voltage is applied. When this function is enabled, the output current is clamped to the limit (compliance value) to prevent damage to the device under test (DUT).
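
A minimal sketch of the dynamic-limit idea, assuming a hypothetical SMU driver object with set_voltage(), set_current_limit() and measure_current() methods (placeholders, not a real instrument API): the compliance value is lowered as soon as the measured current indicates breakdown.

```python
import time

def stress_with_dynamic_limit(smu, stress_voltage, normal_limit_a, breakdown_limit_a,
                              breakdown_current_a, duration_s, sample_period_s=1.0):
    """Apply a constant voltage stress, lowering the current limit on breakdown.

    `smu` is a hypothetical source-measure-unit object; its methods are
    placeholders for whatever driver the real system uses.
    """
    smu.set_current_limit(normal_limit_a)   # sized for the expected start-of-stress current
    smu.set_voltage(stress_voltage)
    t_end = time.time() + duration_s
    while time.time() < t_end:
        i = smu.measure_current()
        if abs(i) >= breakdown_current_a:
            # Breakdown detected: clamp to a much lower limit so a sustained
            # compliance current cannot melt probe tips or damage the DUT.
            smu.set_current_limit(breakdown_limit_a)
        time.sleep(sample_period_s)
```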

Stress control: The high voltage stress must be well controlled to avoid overstressing the device, which can lead to unexpected device breakdown. Newer systems may offer a “soft bias” function that allows the forced voltage or current to reach the desired value by ramping gradually at the start or the end of the stress, or when aborting the test, instead of changing suddenly. This helps to prevent in-rush currents and unexpected device breakdowns. In addition, it serves as a timing control over the process of applying stress.
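
The “soft bias” behavior amounts to stepping the forced value gradually toward its target instead of jumping to it; a sketch using the same hypothetical SMU interface as above, with illustrative step counts and delays:

```python
import time

def soft_bias(smu, target_voltage, start_voltage=0.0, steps=20, step_delay_s=0.1):
    """Ramp the forced voltage gradually to `target_voltage` to avoid in-rush
    current and sudden overstress (hypothetical SMU interface)."""
    delta = (target_voltage - start_voltage) / steps
    for n in range(1, steps + 1):
        smu.set_voltage(start_voltage + n * delta)
        time.sleep(step_delay_s)

# Ramping down at the end of the stress (or on abort) is the same loop run
# from the stress voltage back toward zero.
```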

Proper test abort design: The test program must be designed in a way that allows the user to abort the test (that is, terminate it early) without losing the data already acquired. Test configurations with a “soft abort” function ensure that test data is not lost when the test program is terminated, which is especially useful when a user decides not to continue a test as planned. For instance, imagine that 20 devices are being evaluated over the course of 10 hours in a breakdown test and one of them exhibits abnormal behavior (such as substantial leakage current). Typically, the user will want to stop the test and redesign the test plan without losing the data already acquired.
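
The essential property of a “soft abort” is that the data logged so far survives an early termination and the bias is removed in a controlled way. One way to sketch this in Python, again with a hypothetical SMU object and an interrupt standing in for the abort request:

```python
import csv
import time

def run_stress_with_soft_abort(smu, stress_voltage, duration_s, log_path,
                               sample_period_s=1.0):
    """Log leakage current until the test ends or the user aborts (Ctrl+C);
    data already acquired is flushed to disk either way (hypothetical SMU API)."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "current_a"])
        smu.set_voltage(stress_voltage)            # in practice, ramped up softly
        t0 = time.time()
        try:
            while time.time() - t0 < duration_s:
                writer.writerow([time.time() - t0, smu.measure_current()])
                f.flush()                          # keep acquired data safe if aborted
                time.sleep(sample_period_s)
        except KeyboardInterrupt:
            pass                                   # user aborted early: keep the data
        finally:
            smu.set_voltage(0.0)                   # in practice, ramped down softly
```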

Data management: Reliability tests can run over many hours, days, or weeks, and can amass enormous datasets, especially when testing multiple sites. Rather than collecting all the data produced, systems with data compression functions allow logging only the data important to the work at hand. The user can choose when to start data compression and how the data will be recorded. For example, data points can be logged only when the current shifts by more than a specified percentage relative to the previously logged value and is higher than a specified noise level.
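
The logging rule just described (record a point only when the current has shifted by more than a chosen percentage of the last logged value and sits above the noise floor) can be sketched as a simple filter. The threshold values here are illustrative, not recommendations:

```python
def should_log(current_a, last_logged_a, shift_threshold=0.05, noise_floor_a=1e-9):
    """Return True when this sample is worth recording: the current is above
    the noise floor and has shifted by more than `shift_threshold` (e.g., 5%)
    relative to the previously logged value."""
    if abs(current_a) <= noise_floor_a:
        return False
    if last_logged_a is None or last_logged_a == 0:
        return True                      # always keep the first usable point
    shift = abs(current_a - last_logged_a) / abs(last_logged_a)
    return shift > shift_threshold

# Example: compress a raw measurement stream to the points that matter.
raw = [1.0e-8, 1.02e-8, 1.03e-8, 2.5e-8, 2.52e-8, 9.0e-8]
logged, last = [], None
for i in raw:
    if should_log(i, last):
        logged.append(i)
        last = i
print(logged)   # [1e-08, 2.5e-08, 9e-08]
```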

A comprehensive hardware and software solution is essential to address these test considerations effectively, ideally one that supports high power semiconductor characterization at the device, wafer and cassette levels. The measurement considerations described above, although very important, are too often left unaddressed in commercial software implementations. The software should also offer sufficient flexibility to allow users to switch easily between manual operation for lab use and fully automated operation for production settings, using the same test plan. It should also be compatible with a variety of sourcing and measurement hardware, typically various models of SMU instruments equipped with sufficient dynamic range to address the application’s high power testing levels.

With the right programming environment, system designers can readily configure test systems with anything from a few instruments on a benchtop to an integrated, fully automated rack of instruments on a production floor, complete with standard automatic probers. For example, Keithley’s Automated Characterization Suite (ACS) integrated test plan and wafer description function allow setting up single or multiple test plans on one wafer and selectively executing them later, either manually or automatically. This test environment is compatible with many advanced SMU instruments, including low current SMU instruments capable of sourcing up to 200V and measuring with 0.1fA resolution and high power SMU instruments capable of sourcing up to 3kV and measuring with 1fA resolution.

FIGURE 1. Example of a stress vs. time diagram for the Vds_Vramp test for a single device and the associated device connection. The drain, gate and source are each connected to an SMU instrument. The drain is used for VDS stress and measure; the VDS range is extended by a positive bias on the drain and a negative bias on the source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.

The test development environment includes a VDS breakdown test module that’s designed to apply two different stress tests across the drain and source of the MOSFET structure (or across the collector and emitter of an IGBT) for VDS ramp and HTRB reliability assessment.

Vds_Vramp – This test sequence is useful for evaluating the effect of a drain-source bias on the device’s parameters and offers a quick method of parametric verification (FIGURE 1). It has three stages: an optional pre-test, the main stress-measure stage, and an optional post-test. During the pre-test, a constant voltage is applied to verify the initial integrity of the MOSFET’s body diode; if the body diode is determined to be good, the test proceeds to the main stress-measure stage. Starting at a lower level, the drain-source voltage stress is applied to the device and ramped linearly to a point higher than the rated maximum voltage or until the user-specified breakdown criterion is met. If the device has not broken down during the main stress stage, the test proceeds to the post-test, in which a constant voltage is applied to evaluate the state of the device, similar to the pre-test. Measurements throughout the test sequence are made at both the source and gate for multi-device testing (or the drain for the single-device case), and the breakdown criterion is based on the current measured at the source (or drain for a single device).
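
At a high level, this three-stage flow could be sketched as follows. The SMU interface, thresholds and pass/fail criteria are hypothetical placeholders, not the ACS implementation of the Vds_Vramp module.

```python
import time

def vds_vramp(drain_smu, v_start, v_stop, v_step, breakdown_current_a,
              pretest_v=None, posttest_v=None, pass_current_a=1e-6, step_delay_s=0.5):
    """Sketch of a three-stage VDS ramp: optional pre-test, linear ramp with
    breakdown check, optional post-test (hypothetical SMU interface)."""
    # Optional pre-test: constant voltage to verify body-diode integrity.
    if pretest_v is not None:
        drain_smu.set_voltage(pretest_v)
        if abs(drain_smu.measure_current()) > pass_current_a:
            return {"stage": "pre-test", "result": "fail"}

    # Main stress-measure stage: ramp VDS linearly until v_stop or breakdown.
    v = v_start
    ramp_data = []
    while v <= v_stop:
        drain_smu.set_voltage(v)
        time.sleep(step_delay_s)
        i = drain_smu.measure_current()
        ramp_data.append((v, i))
        if abs(i) >= breakdown_current_a:        # user-specified breakdown criterion
            return {"stage": "ramp", "result": "breakdown", "data": ramp_data}
        v += v_step

    # Optional post-test: constant voltage to re-check the device state.
    if posttest_v is not None:
        drain_smu.set_voltage(posttest_v)
        if abs(drain_smu.measure_current()) > pass_current_a:
            return {"stage": "post-test", "result": "fail", "data": ramp_data}
    return {"stage": "complete", "result": "pass", "data": ramp_data}
```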

Vds_Constant – This test sequence can be set up for reliability testing over an extended period and at elevated temperature, such as an HTRB test (FIGURE 2). The Vds_Constant test sequence has a structure similar to that of the Vds_Vramp sequence, but with a constant voltage stress applied to the device during the stress stage and different breakdown settings. The stability of the leakage current (IDSS) is monitored throughout the test.
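
A corresponding sketch for the constant-stress case, monitoring IDSS against an illustrative stability criterion (again with a hypothetical SMU interface and placeholder thresholds):

```python
import time

def vds_constant(drain_smu, stress_voltage, duration_s, idss_shift_limit=0.2,
                 sample_period_s=10.0):
    """Hold a constant VDS stress and monitor IDSS stability; flag the device
    if the leakage drifts more than `idss_shift_limit` (e.g., 20%) from its
    initial value (hypothetical SMU interface, illustrative threshold)."""
    drain_smu.set_voltage(stress_voltage)        # in practice, ramped up softly
    time.sleep(sample_period_s)
    idss_initial = drain_smu.measure_current()
    samples = [(0.0, idss_initial)]
    t0 = time.time()
    while time.time() - t0 < duration_s:
        time.sleep(sample_period_s)
        idss = drain_smu.measure_current()
        samples.append((time.time() - t0, idss))
        if idss_initial and abs(idss - idss_initial) / abs(idss_initial) > idss_shift_limit:
            return {"result": "unstable leakage", "data": samples}
    return {"result": "pass", "data": samples}
```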

FIGURE 2. Example of a stress vs. time diagram for the Vds_Constant test sequence in the vertical-structure, multi-device case and the associated device connection. The common drain and the gate and source of each device are connected to SMU instruments. The source is used for VDS stress and measure; the VDS range is extended by a positive bias on the drain and a negative bias on the source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.

Conclusion

HTRB testing offers wide bandgap device developers invaluable insights into the long-term reliability and performance of their designs. •


LISHAN WENG is an applications engineer at Keithley Instruments, Inc. in Cleveland, Ohio, which is part of the Tektronix test and measurement portfolio. [email protected].