Extracting signals from noise has challenged RF and microwave engineers since the earliest days of electronic communications. Engineers working on cutting-edge radar and communication systems often apply extensive signal processing to extract maximum information from faint or impaired signals. One of the major obstacles they must overcome is phase noise.

Phase noise can severely limit the performance of a receiving system. For example, it can degrade the ability of a pulse-based radar system to process Doppler information, and it can impair the error-vector-magnitude (EVM) performance of a digitally modulated communications system. Measuring phase noise is consequently vital to improving the performance of these and other RF or microwave electronic systems used in military and commercial applications.

Phase-noise measurements may seem like a puzzle to some, with many oddly shaped pieces that are difficult to connect. Even with today's advanced hardware and improved techniques, making the measurements and interpreting the results can still involve a certain amount of mystery. To help clear things up, it is useful to first review some fundamentals of phase noise. We will then detail the three most common phase-noise measurement techniques and the applications to which each is best suited.

Phase noise is essentially a measure of an electronic signal's frequency stability. The long-term frequency stability of an oscillator, for example, can be characterized in terms of hours, days, months, or even years. Short-term stability refers to frequency changes that occur over a period of a few seconds or less. These short-term variations can have deleterious effects on electronic systems that rely on extensive processing to extract information from a signal, so this discussion will focus on short-term stability.

Short-term stability can be described in many ways, but the most common is single-sideband (SSB) phase noise. The United States National Institute of Standards and Technology (NIST; www.nist.gov) defines SSB phase noise as the ratio of two power quantities: the power density at a specific frequency offset from the carrier and the total power of the carrier signal. The noise power is most commonly measured in a 1-Hz bandwidth at some frequency "f" away from the carrier, so the units of measure are decibels relative to the carrier per hertz (dBc/Hz).
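
Written as an equation, this definition is commonly expressed as L(f) = 10log10[PSSB(f, 1 Hz) / Pcarrier] dBc/Hz, where PSSB(f, 1 Hz) is the noise power measured in a 1-Hz bandwidth at offset f from the carrier and Pcarrier is the total carrier power.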

The level of phase noise is deterministically related to the carrier frequency: ideal frequency multiplication by a factor N raises phase noise by 20log10(N) dB, or about 6 dB for every doubling in frequency. As a result, when characterizing the performance of components integrated into advanced radar and communication systems, measurements of phase noise for a 1-GHz carrier signal may extend from roughly −40 dBc/Hz at offsets "close to the carrier" (such as 1 kHz or less) down to −150 dBc/Hz or lower at offsets "far from the carrier" (such as 10 MHz or more). At such low levels, the measurement noise floor is set by two fundamental electronic effects: thermal noise from passive devices, which is broadband and flat (white noise), and flicker noise from active devices, which has a 1/f spectral shape (pink noise) and emerges from the thermal noise at lower offsets. Both of these contributors are unavoidable because they are present all along the signal chain: in the measuring instrument, in the device that produces the signal under test (SUT), and even in the cables used to connect the measuring instrument to the device under test (DUT).
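
To make the shape of this floor concrete, the following minimal Python sketch models a composite floor as a flat thermal term plus a 1/f flicker term that emerges below a corner frequency. The floor level and corner frequency are illustrative assumptions only, not data for any particular instrument:

```python
import math

# Toy model of a measurement noise floor: a flat thermal (white) term
# plus a 1/f flicker term that dominates below a corner frequency.
b0 = 10 ** (-170 / 10)    # flat thermal floor, assumed at -170 dBc/Hz
fc = 10e3                 # assumed flicker corner frequency, 10 kHz

for f in (10, 100, 1e3, 10e3, 100e3, 1e6):
    l_f = 10 * math.log10(b0 * (1 + fc / f))  # flicker emerges below fc
    print(f"{f:>9.0f} Hz offset: {l_f:7.1f} dBc/Hz")
```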

Any amplifier in the test signal chain will also serve as a source of noise. While the main purpose of the amplifier is to increase the power level of a weak carrier signal, it also adds its own noise to the signal and boosts any input noise. The net result is that amplifier noise, thermal noise, and flicker noise together give any phase-noise plot its characteristic shape and, more significantly, set the theoretical lower limit of sensitivity for any phase-noise measurement (Fig. 1). These effects appear in the phase-noise characteristics of any high-performance signal generator.
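
As a rough illustration, the widely used far-from-carrier approximation L(f) ≈ kTF/(2Pin) can be evaluated in a few lines of Python. The noise figure and carrier power below are assumed example values, not data for any particular amplifier:

```python
def amplifier_phase_noise_floor(noise_figure_db, input_power_dbm):
    """Approximate far-from-carrier SSB phase-noise floor in dBc/Hz."""
    # Thermal noise density (-174 dBm/Hz at ~290 K) plus noise figure,
    # referenced to carrier power; the -3 dB reflects that only half of
    # the added noise appears as phase (the rest is amplitude noise).
    return -174.0 + noise_figure_db - input_power_dbm - 3.0

# Example: a 0-dBm carrier through an amplifier with a 6-dB noise figure
print(amplifier_phase_noise_floor(6.0, 0.0))   # -> -171.0 dBc/Hz
```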

The underlying sources of noise can be traced back to the major sections of the block diagram for such an instrument (Fig. 2). For offsets below 1 kHz, the noise is dominated by the performance of the reference oscillator, which is multiplied up to the carrier frequency. In this particular signal-generator design, the other major contributors are the synthesizer circuitry at offsets from 1 kHz to roughly 100 kHz, the yttrium-iron-garnet (YIG) oscillator at offsets from 100 kHz to 2 MHz, and the output amplifier at offsets above 2 MHz. When these effects on phase noise are well understood, they can be minimized within a system design to ensure maximum performance.

Phase-noise measurement techniques have evolved along with advances in analyzer technology. The three most common methods, in order of increasing complexity, are direct spectrum measurements, phase-detector-based measurements, and two-channel cross-correlation techniques. The direct-spectrum approach is the oldest and perhaps simplest way to measure phase noise. In this approach, the SUT is simply connected to the input port of a spectrum or signal analyzer, which is then tuned to the carrier frequency of interest. Two measurements are made: First, the power level of the carrier is measured. Next, the power spectral density (PSD) of the signal-source noise at a specified offset frequency is measured and referenced to the measured carrier power.

As is often the case with a simple measurement approach, a variety of corrections must be made to ensure an accurate result. For example, it may be necessary to correct for the noise bandwidth of the signal or spectrum analyzer's resolution-bandwidth (RBW) filters. It may also be necessary to correct for the behavior of the analyzer's peak detector, which can under-report the actual noise power. It was once necessary to perform these corrections manually (Agilent Technologies' Application Note No. 150 provides guidance on this process), but these extra steps are no longer needed when using a signal analyzer equipped with either an interval-band/interval-density marker function (for the PSD measurement) or a built-in phase-noise measurement application.
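
For readers implementing the corrections by hand, a minimal Python sketch of the arithmetic might look like the following. The 2.5-dB term is the classic correction for log-scale envelope averaging of Gaussian noise, and the noise-bandwidth factor of 1.2 × RBW is a representative value only; both depend on the analyzer's detector and filter implementation and should be taken from its documentation:

```python
import math

def ssb_phase_noise(marker_dbm, carrier_dbm, rbw_hz,
                    nbw_factor=1.2, detector_corr_db=2.5):
    """Return L(f) in dBc/Hz from raw spectrum-analyzer readings."""
    nbw_db = 10.0 * math.log10(nbw_factor * rbw_hz)  # normalize to 1 Hz
    return (marker_dbm - carrier_dbm) + detector_corr_db - nbw_db

# Example: a -70-dBm noise reading in a 1-kHz RBW against a 0-dBm carrier
print(ssb_phase_noise(-70.0, 0.0, 1e3))   # roughly -98.3 dBc/Hz
```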

Time and experience have revealed the limitations of the direct-spectrum phase-noise measurement method. Most of these limitations are related to shortcomings in the quality or performance of the signal or spectrum analyzer: the residual frequency modulation (FM) of the instrument’s local oscillator (LO), the noise sidebands or phase noise of the analyzer’s LO, and the analyzer’s noise floor can all impact the phase-noise measurement results. In addition, most spectrum analyzers measure only the scalar magnitude of the SUT noise sidebands. As a result, the analyzer cannot differentiate between amplitude noise and phase noise. Finally, the process is complicated by the need to make a noise measurement at every frequency offset of interest—potentially, a very time-consuming task.

In the phase-detector measurement approach, a phase detector is used to separate the phase noise from the amplitude noise. Figure 3 shows how the phase detector converts the phase difference between two input signals into a voltage at its output. When the two signals are in quadrature (90 deg. apart), that voltage is zero. Any phase fluctuation from quadrature produces a corresponding nonzero voltage fluctuation at the output. The phase-detector approach is the basis for several commonly used phase-noise measurement techniques, including the reference-source/phase-locked-loop (PLL) method, the frequency-discriminator method, and the heterodyne digital discriminator method.
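
In idealized terms, a phase detector driven at quadrature produces v(t) = Kd·sin[Δφ(t)] ≈ Kd·Δφ(t) for small phase deviations, where Kd is the phase-detector constant in volts per radian. (Kd here is a generic symbol for illustration, not a value tied to any particular instrument.)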

The reference-source/PLL method uses a double-balanced mixer as the detector, with the reference frequency source and the SUT serving as the inputs to the mixer (Fig. 4). The reference source is controlled such that it follows the SUT at the same carrier frequency, but with a 90-deg. phase offset. To ensure accurate measurements of the SUT, the phase noise of the reference source should be as low as possible, with behavior that is well-characterized. The sum frequency from the mixer is removed by a lowpass filter, while the difference frequency is 0 Hz with an average output voltage of 0 VDC. Any AC voltage fluctuations ride on top of this DC level and are proportional to the combined noise contributions of the two input signals. In this phase-noise measurement approach, the baseband signal from the mixer is often boosted by a low-noise amplifier (LNA) before being connected to the input port of a baseband analyzer.
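
Converting the measured baseband voltage-noise density into phase noise is straightforward once the detector constant is known. Here is a minimal sketch, assuming the small-angle condition holds and that Kd (in volts per radian) has been calibrated separately, for example from the slope of the beat note at its zero crossing:

```python
import math

def l_f_from_voltage_noise(vn_v_per_rthz, kd_v_per_rad):
    """Return L(f) in dBc/Hz from baseband voltage-noise density."""
    s_phi = (vn_v_per_rthz / kd_v_per_rad) ** 2  # S_phi(f) in rad^2/Hz
    return 10.0 * math.log10(s_phi / 2.0)        # SSB: half of S_phi(f)

# Example: 10 nV/sqrt(Hz) measured with an assumed Kd of 0.5 V/rad
print(l_f_from_voltage_noise(10e-9, 0.5))   # about -157 dBc/Hz
```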

The reference-source/PLL phase-noise measurement method yields the best overall sensitivity and the widest measurement coverage, with a frequency-offset range spanning 0.1 Hz to 100 MHz. It is also insensitive to AM noise and is capable of tracking drifting sources. However, it requires a reference source with low phase noise that can be tuned electronically. In addition, if the SUT has a high frequency drift rate, the reference source must be tunable over a very wide frequency range.

The frequency-discriminator phase-noise measurement method simplifies the equipment configuration and measurement process by substituting an analog delay line for the reference oscillator. In this approach (Fig. 5), the SUT is split into two channels. One path is delayed relative to the other, and the delay line converts frequency variations into phase fluctuations. Adjusting the delay time places the two inputs to the mixer in phase quadrature. The phase detector converts the phase fluctuations into voltage variations, which the analyzer measures as frequency noise. The frequency noise is then converted into a phase-noise reading for the SUT.
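
The delay line's conversion of frequency fluctuations into measurable phase fluctuations has a well-known offset dependence, which the short sketch below evaluates for an assumed (illustrative) 100-ns delay:

```python
import numpy as np

# Delay-line discriminator response: the detected phase-noise PSD is the
# SUT phase-noise PSD scaled by |H(f)|^2 = 4*sin^2(pi*f*tau). At small
# offsets this factor shrinks as (2*pi*f*tau)^2, which is why close-in
# sensitivity suffers; nulls at f = n/tau cap the usable maximum offset.
tau = 100e-9                       # assumed 100-ns delay line
offsets = np.logspace(2, 6, 5)     # 100 Hz to 1 MHz
h2_db = 10 * np.log10(4 * np.sin(np.pi * offsets * tau) ** 2)
for f, h in zip(offsets, h2_db):
    print(f"{f:>10.0f} Hz offset: sensitivity factor {h:7.1f} dB")
```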

Unfortunately, this method sacrifices some measurement sensitivity, especially at offsets close to the carrier. Longer delay lines can improve close-in sensitivity, but they also reduce the signal-to-noise ratio (SNR) of the measurement setup and limit the maximum measurable offset frequency. The insertion loss of the delay line can also make it difficult to maintain measurable test-signal levels when analyzing low-level signals. As a result, this method works best with free-running sources such as inductor-capacitor (LC) oscillators and cavity oscillators. These tend to produce noisy signals with high-level, low-rate phase noise or high close-in spurious sidebands that can limit the performance of the PLL technique.

In the heterodyne digital discriminator method, a heterodyne digital discriminator replaces the analog delay line of the frequency-discriminator phase-noise measurement method. In this approach (Fig. 6), the SUT is downconverted to an intermediate frequency (IF) by means of a mixer and a frequency-locked LO. The IF signal is amplified and digitized and then split and delayed using digital-signal-processing (DSP) techniques. As in the frequency-discriminator method, the delayed version of the signal is compared to the non-delayed version using a digital mixer and the delay is adjusted to achieve quadrature. The mixer output is filtered to remove the sum component, leaving a baseband component that is processed to produce a phase-noise value.
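
A conceptual sketch of the digital core of this approach follows, with assumed (illustrative) sample rate, IF, and delay values rather than any instrument's actual parameters; the lowpassed product is proportional to the phase change across the delay and averages to zero at quadrature:

```python
import numpy as np

# Digitize the IF, delay one copy by D samples, multiply (digital mixer),
# and lowpass to keep the baseband difference term.
fs, f_if, n = 100e6, 12.5e6, 2**16        # assumed sample rate and IF
rng = np.random.default_rng(0)
t = np.arange(n) / fs
phi = np.cumsum(rng.standard_normal(n)) * 1e-5  # toy random-walk phase
x = np.cos(2 * np.pi * f_if * t + phi)          # digitized IF signal

# A delay of D samples puts the two copies in quadrature:
# 2*pi*f_if*D/fs = 2.5*pi, i.e., 90 deg modulo 180 deg.
D = 10
mixed = x[D:] * x[:-D]                          # digital mixer
baseband = np.convolve(mixed, np.ones(64) / 64, mode="valid")  # lowpass
print(abs(baseband.mean()))                     # near zero at quadrature
```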

This method is suited to measuring the high levels of phase noise typically present in unstable signal sources, such as some high-frequency voltage-controlled oscillators (VCOs). It provides a wider measurement range than the reference-source/PLL method and eliminates the need to reconnect the analog delay lines used in the frequency-discriminator method. By setting the delay time to zero, the heterodyne digital discriminator method also enables easy and accurate measurements of AM noise with the same setup and RF connections. On the downside, the total dynamic range of this method is limited by the performance of the LNA and the analog-to-digital converters (ADCs).

The two-channel cross-correlation approach provides improved dynamic range compared to the heterodyne digital discriminator method. It employs two duplicate reference-source/PLL channels within the measuring instrument and calculates the cross-correlation between the two resulting outputs (Fig. 7). Because any SUT noise present in both channels is coherent, it is unaffected by the cross-correlation computation. In contrast, any internal noise generated by either channel is noncoherent and is diminished in the cross-correlation operation by the square root of the number of correlations. For this method, the number of correlation operations, typically a user-selected value, is a key factor in determining total measurement time: increasing the number of correlations reduces the noise contribution from both channels (see table) but extends the time required to complete the measurement.
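
The resulting noise reduction can be estimated directly from the correlation count, as in this short sketch of the commonly cited 5log10(N) relationship:

```python
import math

# Uncorrelated noise from the two internal channels drops by roughly
# 5*log10(N) dB after N correlation operations (a factor of sqrt(N) in
# noise amplitude), mirroring the kind of values shown in the table.
for n in (1, 10, 100, 1_000, 10_000):
    print(f"{n:>6} correlations: ~{5 * math.log10(n):4.1f} dB reduction")
```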

Because it reduces measurement noise, the two-channel cross-correlation technique can achieve excellent measurement sensitivity. Since it relies on DSP rather than analog signal processing, this sensitivity is achieved without requiring exceptional performance from the measurement hardware. The technique also provides greater dynamic range than is possible with the digital discriminator method. It is a good choice for measuring the phase noise of free-running oscillators, although it can also be used effectively on many types of synthesized and stabilized high-frequency sources and oscillators.

Comparing Approaches

Phase-noise measurements rely on a variety of instrumentation choices, including general-purpose spectrum analyzers, specialized instruments, and personal-computer (PC)-based modular systems. The key differences among these hardware options center on capabilities, flexibility, and performance, which together determine what a given test setup can achieve in terms of frequency range, dynamic range, and minimum and maximum offset frequencies. For example, the direct-spectrum approach can be implemented with a general-purpose spectrum or signal analyzer equipped with an optional phase-noise measurement application or personality. In most cases, the measurement application automatically performs the required carrier and noise measurements and then applies the necessary correction factors. The results may be presented as both a logarithmic phase-noise plot (dBc/Hz versus logarithmic offset frequency) and a table of phase-noise values at specific offsets from the desired carrier frequency. This solution typically works well at offsets from as close as 10 Hz or 100 Hz to as far as 10 MHz from the carrier, a range commonly used throughout the RF/microwave industry to characterize the phase noise of high-frequency signal sources.

A benefit of performing phase-noise measurements with a software application or test personality on a general-purpose instrument such as a spectrum analyzer is that the analyzer remains available for test requirements other than phase-noise measurements. In addition, the cost of the measurement setup is essentially spread across the many other test applications possible with the spectrum or signal analyzer.

For the more complex phase-detector or cross-correlation methods, a dedicated standalone or modular test solution is typically needed and such a test system may not be suitable for the variety of measurements possible with a spectrum analyzer. For example, an instrument known as a signal-source analyzer (SSA) has been developed as a standalone solution for measurements of phase noise and other signal source characteristics. At least one commercially available SSA includes low-noise reference sources, an extremely low noise floor, and the DSP capabilities necessary to implement the heterodyne digital discriminator method and the two-channel cross-correlation technique. Such instruments are well-suited to measurement offsets as close as 1 Hz and as distant as 1 GHz. The dedicated functionality of an instrument like an SSA often means easy operation, as well as simplified setup and calibration.

Some PC-based modular solutions can implement phase-detector techniques such as the reference-source/PLL method or, when configured with an analog delay line, the frequency-discriminator method. In the reference-source/PLL configuration, this type of solution often has the performance and capabilities needed to measure very low phase noise at offsets as close as 0.01 Hz when used with a high-performance LO. In frequency-discriminator mode, a system configured with an analog delay line can measure very low phase-noise levels at offsets far from the carrier. The downside of this versatility is that setup and calibration are more complicated than with the SSA- or signal-analyzer-based solutions.


Validating a Test Setup

Once a measurement approach has been selected and a test solution assembled, how is it possible to know whether the results provided by a particular test system are accurate? The answer: by using a calibrated, precisely characterized phase-noise signal. For example, a known-good reference is valuable when developing a direct-spectrum solution that includes self-written software to apply the necessary corrections. Such a reference source can be created by applying uniform (spectrally flat) noise to the FM input of a signal generator. The resulting noise sidebands fall at a constant slope of −20 dB/decade, and the desired sideband level can be set by varying the FM deviation. Figure 8 shows a phase-noise-measurement calibration example produced with this FM-driven signal-generator approach, using a uniform noise signal and a 500-Hz FM deviation. This calibration signal yielded a measured phase-noise value of −100 dBc/Hz at a 10-kHz offset from the carrier.
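
Under the white-FM-noise assumption above, the deviation needed to reach a target sideband level follows directly, as this minimal sketch shows (it assumes a flat frequency-deviation noise density S_df in Hz²/Hz, which gives the −20 dB/decade slope noted above):

```python
import math

# With spectrally flat noise at the FM input, L(f) = 10*log10(S_df/(2*f^2)).
def required_deviation_density(target_dbc_hz, offset_hz):
    """rms frequency-deviation density (Hz/sqrt(Hz)) for a target L(f)."""
    s_df = 2.0 * offset_hz ** 2 * 10.0 ** (target_dbc_hz / 10.0)
    return math.sqrt(s_df)

# Reproduce the article's example: -100 dBc/Hz at a 10-kHz offset
print(required_deviation_density(-100.0, 10e3))   # ~0.14 Hz/sqrt(Hz)
```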

Phase noise is one of the most important figures of merit for an RF/microwave signal source, and its level can affect the effectiveness of a wide range of electronic systems, from commercial communications to military radar. The choice of measurement method should be guided by the nature of the signal source, such as whether it is relatively stable or free-running with a frequency that drifts over time. The measurement methods and solutions presented here are among the easiest and most cost-effective to implement. While they can provide excellent results for engineers who are not phase-noise specialists, an expert can help interpret results that may sometimes be quite puzzling.