The Analyzer100 is a vector network analyzer designed ultimately to operate from 1 kHz to 100 MHz.  It is the further evolution of what was originally the VectorAnalyzer60.  It takes an input from an external signal generator and applies it to a device under test (“DUT”), and measures either the return loss (magnitude and angle) or transmission gain/loss (magnitude and angle) of the DUT.  Once a microprocessor has been added to control the display, the return loss can be displayed as impedance.  Among other things, the Analyzer100 can be used to determine the return loss of filters, amplifiers or antennas, and the transmission gain or loss of filters, amplifiers and coax cable.  Frequently, the purpose of such measurements is either to verify that the input or output impedance is near 50 ohms, or to determine what compensation is necessary to bring it near 50 ohms.


Figures 1-3 show the Analyzer100 and its innards.




Figure 1—Analyzer100 front and inside views.

The meters on the front show magnitude or phase, depending on the position of the nearer toggle switch on top.  The signal from the signal generator enters on the left and exits from the front connector on the right for transmission tests, or the rear connector for reflection.  A DUT (resistor soldered to SMA connector) is shown attached to the reflection port.



Figure 2—Inside Bottom.

This contains all the RF circuitry.  The board covered with copper mesh is the reflection bridge and attaches to the reflection port SMA connector.  The board to its right, and soldered to its right edge, is the log amp board, which amplifies and measures the dBm level of the received signal.  It normally has a tin shield, which was removed for this photo.  Soldered to the near edge of the log amp board is the digital phase detector board, which measures the phase of the received signal.  The small board at the far right is an active splitter, which takes the external signal, buffers it, and distributes it to the bridge board, the transmission port and the phase detector.  The red switch chooses reflection or transmission mode; the only difference is that in transmission mode, no signal is sent to the reflection bridge, which then serves as the input for the signal arriving through the DUT.




Figure 3—Inside Top.

This contains the power supply, the metering circuitry and the meters.  The PCB at left is mounted directly to the magnitude/phase selection switch and the meter zeroing pot.  It provides +/-5V supplies for the device, and also provides for adjustment of the DC phase and magnitude voltages received from the RF circuitry to get proper scaling and centering.  This board can be eliminated if the display is controlled by a microprocessor.


The Procedure

The process of using the Analyzer100 for reflection measurements is as follows:  With the signal generator tuned to the frequencies of interest, magnitude and phase readings are taken with no DUT attached.  This is referred to as an “open” DUT, and it establishes the reference levels for all further measurements.  All measurements are recorded by hand, as the device is not attached to a computer.  The measurement process is then repeated with the DUT attached.  The final step is to subtract the “open” measurements from the DUT measurements to get the true values.
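For readers who want to automate the bookkeeping, the subtraction step can be sketched in a few lines of Python.  The function and variable names here are illustrative, not part of any Analyzer100 software:

```python
# Sketch of the "open" normalization described above: subtract the
# reference ("open") reading from the DUT reading, and fold the phase
# back into the meter's -180..+180 degree display range.

def normalize(dut_db, dut_deg, open_db, open_deg):
    """Return (magnitude, phase) of the DUT relative to the 'open' reference."""
    mag = dut_db - open_db               # relative magnitude in dB
    phase = (dut_deg - open_deg) % 360.0
    if phase > 180.0:                    # fold into -180..+180
        phase -= 360.0
    return mag, phase

# Example: open read -10.0 dBm at 12 deg; DUT read -30.5 dBm at -83 deg
print(normalize(-30.5, -83.0, -10.0, 12.0))   # → (-20.5, -95.0)
```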


A similar process is followed for transmission measurements, with the DUT input connected to the transmission port and the DUT output connected to the reflection port.  The reference level for transmission comes from measurements with a coax cable connecting the transmission and reflection ports; this is referred to as a “through” measurement, and it serves the same purpose as the “open” measurement does for reflection.



The current prototype does not contain a microprocessor, so the data is entered into a spreadsheet to do the necessary math.  The next round will contain a processor, which will also have the advantage of allowing a character-matrix display with two-decimal resolution.


The processor will also facilitate using “OSL” (Open-Short-Load) calibration to enhance accuracy.  Even without OSL calibration, the Analyzer100 is remarkably accurate to 60 MHz and respectable to 100 MHz.  It will function above 150 MHz, but at reduced accuracy.  The current plan is to extend the low-frequency end to somewhere in the 1 kHz-5 kHz range.  One side effect of that may be to limit the overall instrument to a 60 MHz maximum.


The Two Toughest Tests

The two most difficult tests for a VNA are (1) achieving a high measured return loss for near-50-ohm DUTs, and (2) achieving a “short” measurement which is equal to the negative of the “open” measurement.  These are in fact the measurements used in OSL calibration, and the purpose of OSL calibration is to adjust subsequent measurements to account for discrepancies in these two areas.
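For the curious, the arithmetic behind one-port OSL correction can be sketched as follows.  This assumes the textbook three-term error model with ideal standards (open = +1, short = -1, load = 0); it is an illustration of the principle, not the routine the Analyzer100 will actually use:

```python
# One-port OSL correction sketch.  m_open, m_short, m_load are the
# measured complex reflection coefficients of the three standards.

def osl_terms(m_open, m_short, m_load):
    e00 = m_load                        # directivity error
    a, b = m_open - e00, m_short - e00
    e11 = (a + b) / (a - b)             # source match error
    k = a * (1 - e11)                   # reflection tracking (e01*e10)
    return e00, e11, k

def osl_correct(m, e00, e11, k):
    """Map a raw measured reflection m to the corrected value."""
    return (m - e00) / (k + e11 * (m - e00))

# With perfect hardware (no directivity or match error) the correction
# is a no-op:
e00, e11, k = osl_terms(1+0j, -1+0j, 0j)
print(osl_correct(0.5+0j, e00, e11, k))   # → (0.5+0j)
```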


Figure 4 shows the measured magnitude and phase of the reflection from a 50-ohm DUT.  In theory, there should be zero reflection, the return loss should be infinite, and the phase is meaningless.  The actual measured reflection is called “directivity”.  The graph shows directivity above 45 dB all the way to 60 MHz, which is excellent.  Even the directivity of 40 dB at 100 MHz is very good.  These figures are good enough to make typical antenna measurements without OSL calibration.



Figure 4—Reflection of 50-ohm load.

Also known as “directivity”, this is the measured reflection when there shouldn’t be any.  This is a source of error that can be reduced by OSL calibration.  These numbers are actually very good.  For perspective, with the amount of error represented by this graph, a true 50-ohm load would be measured at 60 MHz as 49.75 ohms in parallel with 0.7 pF, if calibration is not used.
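The conversion from a small residual reflection to an impedance reading uses the standard relation Z = Z0(1 + Γ)/(1 - Γ).  A hedged illustration (this is textbook math, not Analyzer100 code):

```python
# How a small residual reflection maps to an impedance reading.
# rl_db is the return loss in dB; phase_deg is the reflection phase.
import cmath

def gamma_to_z(rl_db, phase_deg, z0=50.0):
    g = 10 ** (-rl_db / 20) * cmath.exp(1j * cmath.pi * phase_deg / 180)
    return z0 * (1 + g) / (1 - g)     # Z = Z0 * (1 + G) / (1 - G)

# A -45 dB residual reflection (|G| ~ 0.006) barely moves the reading
# off 50 ohms:
print(gamma_to_z(45.0, 180.0))
```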


Figure 5 shows the measured return loss with the DUT port shorted.  Ideally, it should be 0 dB at 180 degrees.  It is extremely close to those values out to beyond 60 MHz, so there is not much error there for OSL calibration to correct.  Above 70 MHz the magnitude deviates more and more from 0 dB, which is an indication that the output impedance of the DUT port is deviating from 50 ohms, probably due to parasitic capacitance.


Figure 5—Reflection Measurement of a Direct Short.

Ideally, the magnitude should be zero and the phase should be 180 degrees.  Since the meter has a resolution of 0.1, no conclusions can be drawn from the variations of 0.1 dB below 70 MHz.  Above that, the magnitude deviation is an indication that the DUT port output impedance is not a perfect 50 ohms.  This is one of the error sources that can be reduced with OSL calibration.


Measurements of Coax Cable

Coaxial cable makes a good DUT, because it causes significant phase variation in a predictable way.  In theory, the reflection from a perfect 50-ohm coax cable would have 0 dB magnitude and a phase which decreases linearly with frequency.  (Actually, the velocity of propagation will eventually change at higher frequencies, which will change the slope of the line at some point.)
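The expected slope is easy to compute: the wave travels down the cable and back, so the round-trip phase is -2 × 360 × f × L / (vf × c).  A quick sketch (the velocity factor of 0.66 is an assumption for typical solid-dielectric coax, not a measured property of the cable used here):

```python
# Ideal reflection phase of an open-ended coax of length L (meters).
C = 299_792_458.0            # speed of light, m/s

def open_coax_phase_deg(freq_hz, length_m, vf=0.66):
    # Round trip = 2 * length; one wavelength of travel = 360 degrees.
    return -2 * 360.0 * freq_hz * length_m / (vf * C)

# 9.5 inches of coax at 60 MHz comes out near -53 degrees:
print(open_coax_phase_deg(60e6, 9.5 * 0.0254))
```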


Figure 6 shows the reflection phase of a 9.5” unterminated coax, and the amount of “error”—meaning deviation from a straight line.  Some of the high frequency “error” may be due to changes in propagation velocity.  The graph shows that the measurement is nearly perfect until 60 MHz, where the error is about 1 degree.  I believe the deviations result largely from the “short” measurements not being 0 db, as described above, which means they would be corrected by OSL calibration.

(The magnitude was also measured, and was 0 dB up to 100 MHz, which makes a boring graph.)


Figure 6—Reflection Measurement of Unterminated 9.5” Coax

The error is minimal up to 90 MHz, where the slope of the phase line suddenly becomes slightly more shallow.  This corresponds roughly with the point where the “short” magnitude deviates from zero, and may be related to the slight change in output impedance of the DUT port.  If so, it will be corrected by OSL calibration.


Figure 7 shows reflection and “error” for a much longer length (and different type) of coax.  The maximum deviation from a straight line up to 60 MHz is only 3.5 degrees, and seems to correspond to the half-wavelength point.



Figure 7—Reflection Phase of Unterminated 88” Coax

These small phase errors may be related to the same output impedance considerations described above, or to losses in the coax, or to the impedance of the coax not being 50 ohms.  The peaks near 40 and 80 MHz are near the full- and half-wavelength points, which likely aggravates any irregularity.



Figure 8 shows the magnitude of the reflection of the same 88” coax, with some deviation from 0 dB at higher frequencies.  This may be due in part to some deviation of the DUT reflection port from a perfect 50 ohms, or to the fact that the coax itself does not have a perfect 50-ohm characteristic impedance.  The longer the coax, the more significant the effects of these impedance mismatches.



Figure 8—Reflection Magnitude of Unterminated 88” Coax



A Note on the Digital Phase Detector

This section is intended for the true “techie” who wants to know something about the innards of the Analyzer100.  Once a processor is installed in the device, the issues in this section will be handled automatically.


The phase detector used by the Analyzer100 is a digital “set-reset” type, which essentially acts as a stopwatch, measuring the amount of time by which a transition in the DUT reflection leads the corresponding transition in a reference signal.  An earlier round of the detector is described here.  This approach can run into difficulty when the two transitions occur at nearly the same time; from the phase detector’s point of view, this creates some error when the phase is near zero or 360 degrees.  The metering circuit of the Analyzer100 displays a -180 to +180 degree range, with the meter’s “zero” point equal to the 180-degree point of the phase detector.  Hence, the inaccuracies in the Analyzer100 occur near displayed readings of +/-180 degrees.
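The stopwatch principle is simple enough to model in one line: the lead time, expressed as a fraction of the signal period, is the phase.  (This is a toy model of the principle, not the actual circuit.)

```python
# Set-reset phase detector as a stopwatch: the output is proportional
# to the time by which the DUT edge leads the reference edge.

def sr_phase_deg(lead_time_s, freq_hz):
    # lead_time * freq = fraction of one period; scale to degrees.
    return 360.0 * lead_time_s * freq_hz

# A 25 ns lead at 10 MHz is a quarter of the 100 ns period, i.e. 90 degrees:
print(sr_phase_deg(25e-9, 10e6))
```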


Figure 9 shows the area in which the Analyzer100 has its best accuracy.  As will be described below, it is possible to shift the reference signal so that the readings always occur in the accurate zone.


Figure 9—The High Accuracy Zone

The zone of highest accuracy of the digital phase detector is approximately the yellow area, which covers most of the range at low frequency, and narrows to a range of about +/-120 degrees at 120 MHz.  Up to 60 MHz, the accuracy within that +/-120 degree zone is better than 0.2 degrees.  The shift mechanism is used to assure that all measurements are taken within the accurate zone.


Using the range from -120 degrees to +120 degrees as a target of high accuracy, let’s see how we can shift the reference to stay within that range. If a reading is outside the accurate zone, we know that by shifting the reference signal +180 degrees, the displayed output will shift -180 degrees.  For any phase outside the -120 to +120 degree range, such a 180 degree shift will result in a new reading within that range.  In fact, any shift within 60 degrees of 180 degrees will accomplish that goal.  So we make the shift, take a reading in the accurate zone, and then add 180 degrees to the reading to remove the effect of the shift.  That’s all there is to it.
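The shift-and-unshift bookkeeping can be sketched as below.  The shift amount is a parameter, since (as explained next) the real shift is not exactly 180 degrees; the names are illustrative:

```python
# Reference-shift bookkeeping for readings outside the accurate zone.

def wrap(deg):
    """Fold an angle into the -180..+180 degree display range."""
    return (deg + 180.0) % 360.0 - 180.0

def read_phase(true_phase_deg, shift_deg=180.0, accurate_limit=120.0):
    """Simulate taking a reading, shifting the reference if needed."""
    if abs(true_phase_deg) <= accurate_limit:
        return true_phase_deg                     # already in the zone
    shifted = wrap(true_phase_deg - shift_deg)    # meter reading after shift
    return wrap(shifted + shift_deg)              # add the shift back in

print(read_phase(150.0))   # → 150.0 (via a shifted reading of -30)
```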


Nothing is ever quite what it seems, and the 180 degree shift will turn out not to be quite 180 degrees.  The shift is actually made by inverting the squared-off reference signal, so the result of the shift depends on the duty cycle of the squared-off signal.  Figure 10 shows that the actual shift in the prototype is more in the 170-171 degree range.



Figure 10—Actual Amount of the Reference Shift

The shift is nominally 180 degrees, but in fact is mostly in the 170-171 degree range.  In the initial adjustment of the instrument, the actual amount of the shift can be calibrated and “memorized” by the processor.


As long as we know the amount of the shift, and it is not extremely far from 180 degrees, we can follow the same procedure described above, substituting the true value of the shift for “180” degrees.  We can determine the amount of the shift in the initial adjustment of the instrument; it is probably not something that needs to be recalibrated often.


For the prototype, which has no processor, this shifting was done manually.  That is a bit awkward if a lot of shifting has to be done.  But especially in the below-30 MHz range, where the accuracy zone is very wide, it is not necessary to use the shift all that often.