Synthetic Instruments: Concepts and Applications

by C.T. Nadovich

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO

SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Newnes is an imprint of Elsevier


Newnes is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2005, Elsevier Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting “Customer Support” and then “Obtaining Permissions.”

Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data

(Application submitted.)

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: 0-7506-7783-X

For information on all Newnes publications visit our website at www.newnespress.com

04 05 06 07 08 09 10 9 8 7 6 5 4 3 2 1

Printed in the United States of America.

Contents

Foreword
    Plan of this Book
    Chapter Outline

Preface

Acknowledgments

What’s on the CD-ROM?

Chapter 1: What is a Synthetic Instrument?
    History of Automated Measurement
    Genesis
    Modular Instruments
    Synthetic Instruments Defined
    Synthesis and Analysis
    Generic Hardware
    Advantages of Synthetic Instruments
    Eliminating Redundancy
    Measurement Integration
    Measurement Speed
    Longer Service Life
    Synthetic Instrument Misconceptions
    Why not Just Measure Volts with a Voltmeter?
    Virtual Instruments
    Analog Instruments

Chapter 2: Synthetic Measurement System Hardware Architectures
    System Concept—The CCC Architecture
    Signal Flow
    The Synthetic Measurement System
    Chinese Restaurant Menu (CRM) Architecture
    Parameterization of CCC Assets
    Architectural Variations
    Compound Stimulus
    Simultaneous Channels and Multiplexing
    Hardware Requirements Traceability

Chapter 3: Stimulus
    Stimulus Digital Signal Processing
    Waveform Playback
    Direct Digital Synthesis
    Algorithmic Sequencing
    Synthesis Controller Considerations
    Stimulus Triggering
    Stimulus Trigger Interpolation
    The Stimulus D/A
    Interpolation and Digital Up-Converters in the Codec
    Stimulus Conditioning
    Stimulus Conditioner Linearity
    Gain Control
    Adaptive Fidelity Improvement
    Reconstruction Filtering
    Stimulus Cascade—Real-World Example

Chapter 4: Response
    Response Signal Conditioning
    Input Protection
    Response Linearity and Gain Control
    Adaptive Techniques
    The Response Codec
    Fidelity and Measurement Accuracy
    Ideal Quantization
    Codec Headroom
    Headroom Trade-off and System Fidelity
    Response Digital Signal Processing
    Waveform Recorder and DSP
    Matched Filter Demodulator
    Response Trigger Time Interpolator
    Response Cascade—Real-World Example

Chapter 5: Real-World Design: A Synthetic Measurement System
    Universal High-Speed RF Microwave Test System
    Background
    Logistical Goals
    Technical Goals
    RF Capabilities
    System Architecture
    Microwave Synthetic Instrument (TRM1000C)
    Supplemental Resources
    DUT Interface
    Product Test Adapter Solutions
    Calibration
    Primary Calibration
    Operational Calibration
    Software Solutions
    Test Program Set Developer Interface
    TRM1000C Software
    Conclusions

Chapter 6: Measurement Maps
    Measurement Abstraction
    General Measurements
    Abscissas and Ordinates
    The Measurement Function
    Canonical Ordinate Algorithms
    Multidimensional Measurements
    Domains
    Measurement Maps
    Ports and Modes
    DUT Modes as Abscissas
    Ports as Abscissas
    Map Manipulations
    Problems with Hysteresis
    Stimulus and Response
    Inverse Maps
    Accuracy Advantages of Inverse Maps
    Problems with Inverse Maps
    Calibration Strategy and Map Manipulations
    Canonical Maps
    Sufficiency of the Stimulus Response Measurement Map Stance
    Processing a Measurement
    The Basic Algorithm

Chapter 7: Signals
    Kinds of Signals
    Coding, Decoding, and Measuring the Signal Hierarchy
    Decoding Method Abscissas
    Direct Real Analog Baseband Signals
    Digital Coded Baseband
    Analog Coded Baseband
    Bandwidth
    Bandpass Signals
    Bandpass Sampling
    Image Rejection
    Interference and Images
    I/Q Sampling
    Broadband Periodic Signals

Chapter 8: Calibration and Accuracy
    Metrology for Marketers and Managers
    Measurand
    Accuracy and Precision
    Test versus Measurement
    Introduction to Calibration
    Reference Standards
    Uncertainty Analysis
    Stimulus Calibration
    Overall Strategy for Stimulus Calibration
    Using Interpolation to Invert a Map
    Interpolation Example
    Sampling Interval versus Resolution Confusion
    Ordinate Quantization and Precision
    De-Embedding Calibration Objects
    De-Embedding Dimensionality and Interpolation
    Abscissa De-Embedding

Chapter 9: Specifying Synthetic Instruments
    Synthetic Instrument Definition and XML
    Why XML?
    ATML
    Why Not SCPI, ATLAS,…?
    Introduction to XML
    Automatic Descriptions
    Not a Script
    XML Basics
    Synthetic Measurement Systems and XML
    Describing the Measurement with XML
    Defining an Instrument
    Calibration Strategy Example
    Functional Decomposition and Scope
    Measurement Parameters—A Hazard
    Describing the Measurement System with XML
    Describing Measurement Results with XML
    Column and Array Data
    Self-Documenting Features
    Arrays as Elements
    SQL Database Concepts and Data Objects
    HDF

Chapter 10: Synthetic Instrument Markup Language: SIML
    A DTD for Measurement Description
    More SIML Details
    Locked Abscissas
    Banded Abscissas
    Constraints
    Modulation
    Ordinate Modifiers: Averaging and Statistical Manipulations

Chapter 11: Ten Mistakes in Synthetic Measurement System Design
    Fixing Performance or Functionality Shortfalls Exclusively by Adding Hardware
    Fixing Hardware Mistakes with Software
    Adding Modes or Features Dedicated to Specific Measurements
    Designing Synthetic Instruments Procedurally
    Meeting Legacy Instrument Specifications
    Developing Stimulus Separate from Response
    Not Combining Measurements
    Hardware Modularity as a Distraction
    Bad Lab Procedure
    Fear of Change

Acronym Glossary

Basic SIML DTD

Bibliography
    Books
    Periodicals
    Conference Papers

About the Author

Index

List of Figures
    Figure 1-1. Manual measurements
    Figure 1-2. Digital hardwired logic versus CPU
    Figure 1-3. Signal-generator leveling loop
    Figure 1-4. Crowded front panel on Tektronix Spectrum analyzer
    Figure 2-1. Basic CCC cascade
    Figure 2-2. Synthetic architecture cascade—flow alternatives
    Figure 2-3. Synthetic measurement system
    Figure 2-4. CRM architecture
    Figure 2-5. Parameterized architecture
    Figure 2-6. Compound stimulus
    Figure 2-7. Space division multiplexing
    Figure 2-8. Time division multiplexing
    Figure 2-9. Time division multiplexing with virtual commutator
    Figure 2-10. Frequency division multiplexing
    Figure 2-11. Code division multiplexing
    Figure 3-1. The stimulus cascade
    Figure 3-2. Basic waveform playback controller
    Figure 3-3. Direct digital synthesizer
    Figure 3-4. Fractional samples
    Figure 3-5. Fine trigger delay control
    Figure 3-6. Effect of gain control placement on SNR with varying gain
    Figure 3-7. Adaptive nulling
    Figure 3-8. Aeroflex CS25000
    Figure 4-1. The response cascade
    Figure 4-2. Low-gain versus high-gain
    Figure 4-3. Adaptive nulling to improve response measurement
    Figure 4-4. Sources of noise and distortion in synthetic systems
    Figure 4-5. VU meter
    Figure 4-6. Waveform recording controller
    Figure 4-7. Matched filter
    Figure 4-8. AP240 reconfigurable PCI signal analyzer platform
    Figure 5-1. TRM1000C functional diagram
    Figure 5-2. Test adapter interface
    Figure 6-1. Joint manifold
    Figure 6-2. A measurement map
    Figure 6-3. A color image as a measurement map
    Figure 6-4. Is the gain switch setting a port or mode?
    Figure 6-5. Inverting a map
    Figure 6-6. Square rooter
    Figure 6-7. Inverse map with multiple branches
    Figure 6-8. Calibration strategy trees
    Figure 6-9. Basic SRMM measurement algorithm
    Figure 7-1. Fuzzy hierarchy of “stances”
    Figure 7-2. Analog and digital codings
    Figure 7-3. One scan line of NTSC analog video
    Figure 7-4. The Sampling theorem
    Figure 7-5. Amplitude and frequency modulation
    Figure 7-6. Mixing
    Figure 7-7. Bandpass sampling example
    Figure 7-8. IF at 1/4 the sampling rate
    Figure 7-9. Preselector signal conditioner
    Figure 7-10. Time equivalent sampling spectra
    Figure 8-1. Precision versus accuracy
    Figure 8-2. Test versus measurement
    Figure 8-3. Clipped cosine
    Figure 8-4. Fundamental power transfer
    Figure 8-5. Interpolation error (estimate – ideal)
    Figure 8-6. “Spiked” FFT
    Figure 8-7. Interpolated FFT (same data)
    Figure 8-8. De-embedding applied to temperature measurements
    Figure 9-1. Bookshelf modular
    Figure 9-2. Example of SCPI code
    Figure 9-3. Tree structure of XML code example
    Figure 9-4. Detailed example tree structure
    Figure 9-5. Measurement system, switch matrix, and DUT
    Figure 9-6. Self-documenting SRMM object

List of Tables
    Table 2-1. CRM architecture
    Table 3-1. D/A converter trade-off range
    Table 3-2. BSG performance range
    Table 3-3. BSG options
    Table 5-1. TRM1000C measurement suit
    Table 7-1. Sampling techniques

List of Examples
    Example 9-1. Simple XML document
    Example 9-2. Simple oscilloscope
    Example 9-3. Alternative XML structure
    Example 9-4. Flatbed scanner
    Example 9-5. Network analyzer
    Example 9-6. Compound ordinate
    Example 9-7. Parameter list
    Example 9-8. Defining ports
    Example 10-1. Complete XML document
    Example 10-2. Simple DTD
    Example 10-3. More sophisticated DTD
    Example 10-4. Distortion analyzer
    Example 10-5. Enhanced distortion analyzer
    Example 10-6. Constraints
    Example 10-7. Signal encoding
    Example 10-8. Averaging
    Example B-1. Complete SIML DTD


Foreword

The way electronic measurement instruments are built is making an evolutionary leap to a new method of design called synthetic instruments. This promises to be the most significant advance in electronic test and instrumentation since the introduction of automated test equipment (ATE). The switch to synthetic instruments is beginning now, and it will profoundly affect all test and measurement equipment that will be developed in the future.

Synthetic instruments are like ordinary instruments, in that they are specific to a particular measurement or test. They might be a voltmeter that measures voltage, or a spectrum analyzer that measures spectra. The difference is that synthetic instruments are implemented purely in software that runs on general-purpose, nonspecific measurement hardware with a high-speed A/D or D/A converter at its core. In a synthetic instrument, the software is specific; the hardware is generic. Therefore, the personality of a synthetic instrument can be changed in an instant. A voltmeter may be a spectrum analyzer a few seconds later, and then become a power meter, or network analyzer, or oscilloscope. Totally different instruments are realized on the same hardware, switching back and forth in the blink of an eye, or even existing simultaneously.

The union of the hardware and software that implement a set of synthetic instruments is called a synthetic measurement system (SMS). This book studies both synthetic instruments and the systems from which they may best be created.

Powerful customer demands in the private and public sectors are driving this change to synthetic instruments. There are many bottom-line advantages in making one generic, economical SMS hardware design do the work of an expensive rack of different, measurement-specific instruments. ATE customers all want to reap the savings this promises. ATE vendors like Teradyne, as well as conventional instrumentation vendors like Agilent and Aeroflex, have announced or currently produce synthetic instruments. The U.S. Military, one of the largest ATE customers in the world, wants new ATE systems to be implemented with synthetic instruments. Commercial electronics manufacturers such as Lucent, Boeing, and Loral are using synthetic instruments now in their factories.

Despite the fact that this change to synthetic instrumentation is inevitable and widely acknowledged throughout the ATE and T&M industries, there is a paucity of information available on the topic. A good deal of confusion exists about basic concepts, goals, and trade-offs related to synthetic instrumentation. Given that billions of dollars in product sales hang in the balance, it is important that clear, accurate information be readily available.

Plan of this Book

The basic goal of this book is to explain synthetic instrumentation at a high level, focusing on specific details when necessary to illustrate a crucial point. The first half of the book is generally of a hardware flavor, and the second half is generally oriented toward theory and software, but unifying themes of synthetic measurement system high-level design are presented throughout.

The foremost unifying concept in the book, tying together hardware, system theory, and software in a tidy package, is the measurement map. This unique and powerful concept serves as a bridge between the synthetic cascade hardware architecture, the abstract idea of a measurement, and the expression of a synthetic instrument as an XML schema for automated software implementation and processing.

The power of the XML schema based on the measurement map is that concise, structured descriptions of measurements can be given directly by the test engineering user (possibly with the help of a GUI tool). These descriptions of a measurement can be automatically processed, optimized, and performed. Most importantly, the test engineer can specify exactly the measurement wanted without writing procedural test scripts or doing any other programming.
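
To give a flavor of what such a structured description might look like, here is a small, purely illustrative sketch. The element names (measurement, abscissa, stimulus, ordinate) are hypothetical stand-ins borrowing the book’s vocabulary, not the actual SIML schema developed in Chapters 9 and 10; the snippet is parsed with Python’s standard xml.etree.ElementTree only to show that such a description is machine-processable.

```python
# Illustrative only: a hypothetical, declarative measurement description in the
# spirit described above. The element and attribute names are invented for this
# sketch; the book's actual SIML schema is developed in Chapters 9 and 10.
import xml.etree.ElementTree as ET

MEASUREMENT_XML = """
<measurement name="gain_vs_frequency">
  <abscissa name="frequency" unit="Hz" start="1e6" stop="1e9" points="101"/>
  <stimulus port="input" kind="sine" level_dbm="-10"/>
  <ordinate name="gain" unit="dB" detector="rms"/>
</measurement>
"""

def describe(xml_text: str) -> None:
    """Walk the description and print what a synthetic system could automate."""
    root = ET.fromstring(xml_text)
    print(f"Measurement: {root.get('name')}")
    for child in root:
        attrs = ", ".join(f"{k}={v}" for k, v in child.attrib.items())
        print(f"  {child.tag}: {attrs}")

if __name__ == "__main__":
    describe(MEASUREMENT_XML)
```

Nothing procedural appears in the description itself; it states what is to be measured, and it is the job of the synthetic measurement system to decide how.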

Thus, the book is a collage of hardware, software, and system concepts all aimed at the single goal of explaining and extending this new approach to measurement system design.


Chapter Outline

The book begins by answering the question: “What is a synthetic instrument?” First, we review the history of measurement, automated and otherwise. The advantages of the synthetic approach are enumerated and discussed, along with motivation for the fundamental idea of using generic hardware to perform specific measurements. Also presented are the necessary distinctions between synthesis and analysis, and between test and measurement.

Once this basic concept of a synthetic instrument is explained, the book moves on to a detailed discussion of hardware architecture. This discussion begins with the introduction of the control, codec, conditioning (CCC) cascade architecture, as either a stimulus or response asset. Architecture variations are described, including the basic Chinese restaurant menu (CRM) variation, compound stimulus, and multiplexing options.

The hardware discussion then moves sequentially through the stimulus and response system cascades, touching on critical issues and challenges these systems present. Among the issues considered for the stimulus side are direct digital synthesis (DDS) and controller characteristics, triggering and digital up-conversion, linearity, gain control, and adaptive fidelity improvement. In the response cascade, input conditioning, quantization, system fidelity trade-offs, adaptive interference cancellation, and matched filters are discussed.

Examples of commercial, state-of-the-art subsystems are presented for stimulus and response. The book also devotes a complete chapter to a detailed description of a real-world, production-oriented synthetic measurement system.

The real-world system example is roughly the midpoint of the book. At this point the focus shifts back to the theoretical as the central concept of a measurement map is introduced. The discussion links the measurement map to the test engineer’s desired test, as well as to the hardware and calibration. This lays the groundwork for subsequent theory and software architecture discussions.

After the measurement map, there is a detailed discussion of signals and signal processing issues often encountered in synthetic measurement systems. These include discussions of making measurements at different levels of the signal coding hierarchy, encoding and decoding strategies, bandwidth, and sampling strategies. Some practical issues with up- and down-converters are explored.

A chapter on calibration and accuracy attempts to clarify the way to think about measurement topics. There is also a general discussion of reference standards and uncertainty analysis. Along with general calibration topics, there is a major section on the topic of stimulus calibration that includes a detailed analysis of certain interpolation issues. The concept of de-embedding is also presented.

The next two chapters are an introduction to the XML method for encapsulating measurement descriptions, and an annotated example of a measurement description expressed in XML.

The book concludes with a listing of The Ten Mistakes in Synthetic Measurement System Design. This chapter draws on many of the concepts introduced throughout the previous material and reaches conclusions that apply to real-world applications.


Preface

My discussions ten years ago with Chris Nadovich about modular, software-based test instruments were born out of the same frustrations that are now driving the synthetic instrument movement: we were involved in integrating standard instruments into special applications for which they were not designed. As we struggled to program around unfortunate “features” and patched together solutions for new, unique capabilities, we wondered if there wasn’t a better way.

Ten years later, our ideas for this new class of instrumentation are starting to take hold in industry. When we began, we didn’t really know what to call this idea. As we have progressed from project to project, synthetic instrument has become the widely accepted term for this new type of test equipment.

The synthetic instrument concept is as revolutionary as the forward pass was in American football. Just as the idea of actually letting go of the football and throwing it forward where anyone could catch it was a challenge to football’s status quo, synthetic instruments require a paradigm shift from instrument manufacturers. They empower the user and integrator to mold the instrument to their specific needs. Just as the early football establishment puzzled over how to run an offense that let go of the football, today’s instrument vendors puzzle over how to deal with the freedom that synthetic instruments bestow upon their customers. As hard as it would be today to imagine football without the forward pass, in the future it will be just as hard to imagine the world of test instrumentation without synthetic instruments.

Widespread adoption of synthetic instrumentation concepts has been hampered by a lack of a common vocabulary. With the revolutionary and somewhat nebulous nature of synthetic instruments, they do not fit comfortably into the language of traditional instrumentation. Since the synthetic instrument can easily be molded through software, using traditional language to describe or specify this new class of instruments tends to mold it into a reflection of the old traditional instruments. Having a terminology of its own will allow the synthetic instrument to take advantage of the full power and flexibility of the platform. This book helps create and define the lexicon for synthetic instruments.

The lack of measurement science behind synthetic instrument concepts also hinders their acceptance. Because the concepts of synthetic instruments are so new, there is a concern about how well instruments based on them will perform. For synthetic instruments to become fully accepted, the basic measurement science behind them will need to be studied and documented. This activity will take time and commitment on the part of company research organizations and academia. Until these organizations publish the necessary science and metrology to support synthetic instruments, there will continue to be reluctance to adopt them.

Over the past ten years, Chris and I have had success implementing synthetic instruments for a variety of test applications, allowing us to turn the concepts we had into reality. But as in the early stages of most revolutions, there is still considerable work ahead. Until synthetic measurement systems are mass-produced, their cost makes them uncompetitive for all but the most complex and demanding applications. Until their common lexicon is established, defining their requirements is still troublesome. The lack of documented measurement science continues to make it difficult for developers to undertake a synthetic instrument project. This book, with the work and thought that Chris has put into it, is a major step in overcoming some of the limitations we have encountered, and takes us further toward having synthetic instruments fulfill their revolutionary destiny in the test and measurement industry.

—Jack Berlekamp


Acknowledgments

Although they are clearly the main focus, this book is not just about the synthetic instruments themselves. It also includes an ample helping of what I hope is my wisdom (others may call it my “jaded opinion”) based on my experience regarding how synthetic instruments and synthetic measurement systems should be built and used. I openly admit that my experience is colored by the particular companies I’ve worked for and the products I’ve worked on over the years. I have fought many a technical battle to get things done in what I perceived was the right way. I’ve won some, lost some, and in some cases I changed my perception as I was convinced I was wrong.

Regardless of my personal opinion relative to the actual practices in real-world systems, it should not be construed that any particular failure or mistake I may analyze is associated with any specific real-world system or product just because of where I may have worked. Without significant exception, all the valuable opinions I express in this book (and whatever pearls of wisdom they may contain) are derived from the successes I have been involved with, one way or the other.

My interest in synthetic instruments would never have begun if it weren’t for the influence of Jack Berlekamp, who led me into this topic back in the 1990s. Jack was a prominent champion of the idea, and I watched him struggle again and again to teach others the techniques and associated benefits of synthetic instrumentation. His crusade convinced me of the importance of setting down, in book form, what synthetic instruments were all about.

Bill Birurakis, who, to my knowledge, invented the term synthetic instrument, was another of my teachers. Although I interacted with Bill far less than with Jack, Bill’s influence on my vision of synthetic instrumentation was substantial. Bill’s crystal clear vision of what synthetic instruments are (and are not) eventually penetrated my dull brain. I don’t always agree with Bill on some of the gory details, or Jack for that matter, but much of my own point of view on the topic is a “synthesis” of these two individuals, with “parameterization” and “canonicalization” of my own, for bad or for good.

Any good writing found in the book is a result of Chris Lett’s patient, yet ruthless proofreading suggestions. Chris has proofread every major bit of writing I’ve ever done. What little writing skill I may seem to have, I owe entirely to his literary influence over the years. Dan Frey and Jeff Bronfeld also contributed valuable suggestions to the book as they suffered through proofreading early drafts.

There are many other people, too numerous to list, with whom I’ve worked at Aeroflex, Flam & Russell, Checkpoint Systems, and other companies, who have had a significant influence on my view of automated test. Their impact on this book is considerable.


What’s on the CD-ROM?

Included on the accompanying CD-ROM:

- A fully searchable eBook version of the text in Adobe PDF format.
- XML source code examples.
- Real-world synthetic instrument data sheets, application notes, and white papers from commercial instrument vendors.


Chapter 1: What is a Synthetic Instrument?

Engineers often confuse synthetic measurement systems with other sorts of systems. This confusion isn’t because synthetic instrumentation is an inherently complex concept, or because it’s vaguely defined, but rather because there are lots of companies trying to sell their old nonsynthetic instruments with a synthetic spin.

If all you have to sell are pigs, and people want chickens, gluing feathers on your pigs and taking them to market might seem to be an attractive option to some people. If enough people do this, and feathered pigs, goats, cows, as well as turkeys and pigeons are flooding the market, being sold as if they were chickens, real confusion can arise among city folk regarding what a chicken might actually be.

One of the main purposes of this book is to set the record straight. When you are finished reading it, you should be able to tell a synthetic instrument from a traditional instrument. You will then be an educated consumer. If someone offers you a feathered pig in place of a chicken, you will be able to tell that you are being duped.

History of Automated Measurement

Purveyors of synthetic instrumentation often talk disparagingly about traditional instrumentation. But what exactly are they talking about? Often you will hear a system criticized as “traditional rack-em-stack-em.” What does that mean?

In order to understand what’s being held up for scorn, you need to understand a little about the history of measurement systems.


Figure 1-1. Manual measurements

Genesis

In the beginning, when people wanted to measure things, they grabbed a specific measurement device that was expressly designed for the particular measurement they wanted to make. For example, if they wanted to measure a length, they grabbed a scale, or a tape measure, or a laser range finder and carried it over to where they wanted to measure a length. They used that specific device to make their specific length measurement. Then they walked back, put the device away in its carrying case or other storage, and returned it to the shelf where they originally found it (assuming they were tidy).

If you had a set of measurements to make, you needed a set of matching instruments. Occasionally, instruments did double duty (a chronometer with built-in compass), but fundamentally there was a one-to-one correspondence between the instruments and the measurements made.

That sort of arrangement works fine when you have only a few measurements to make, and you aren’t in a hurry. Under those circumstances, you don’t mind taking the time to learn how to use each sort of specific instrument, and you have ample time to do everything manually: finding, deploying, using, and stowing the instrument.


Things went along like this for many centuries. But then in the 20th century, the pace picked up a lot. The minicomputer was invented, and people started using these inexpensive computers to control measurement devices. Using a computer to make measurements allows measurements to be made faster, and it allows them to be made by someone who might not know too much about how to operate the instruments. The knowledge for operating the instruments is encapsulated in software that anybody can run.

With computer-controlled measurement devices, you still needed a separate measurement device for each separate measurement. Fortunately, you didn’t necessarily need a different computer for each measurement. Common instrument interface buses, like the IEEE-488 bus, allowed multiple devices to be controlled by a single computer. In those days, computers were still expensive, so it helped matters to economize on the number of computers.
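
For readers who have only ever seen this from the modern end, the following minimal sketch shows the same idea in today’s terms: one controlling computer, one interface bus, and instruments addressed as resources on that bus. It assumes the Python PyVISA library and a hypothetical GPIB-attached voltmeter at address 22; the exact resource address and SCPI command depend on the instrument actually connected.

```python
# A modern illustration of bus-controlled instruments: one computer, one
# interface bus, many instruments. Assumes PyVISA and a GPIB-attached voltmeter
# at the hypothetical address below.
import pyvisa

rm = pyvisa.ResourceManager()
print(rm.list_resources())                    # enumerate everything on the bus

dmm = rm.open_resource("GPIB0::22::INSTR")    # hypothetical bus address
print(dmm.query("*IDN?"))                     # IEEE-488.2 identification query
volts = float(dmm.query("MEAS:VOLT:DC?"))     # common SCPI DC-volts measurement
print(f"Measured {volts:.6f} V")
dmm.close()
```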

And, obviously, using a computer to control measurement devices through a common bus requires measurement devices that can be controlled by a computer in this manner. An ordinary schoolchild’s ruler cannot be easily controlled by a computer to measure a length. You needed a digitizing caliper or some other sort of length measurement device with a computer interface.

Things went along like this for a few years, but folks quickly got tired of taking all those instruments off the shelf, hooking them up to a computer, running their measurements, and then putting everything away. Sloppy, lazy folks that didn’t put their measurement instruments away tripped over the interconnecting wires. Eventually, somebody came up with the idea of putting all these computer-controlled instruments into one big enclosure, making a measurement system that comprised a set of instruments and a controlling computer mounted in a convenient package. Typically, EIA standard 19” racks were used, and the resulting sorts of systems have been described as “rack-em-stack-em” measurement systems. Smaller systems were also developed with instruments plugged into a common frame using a common computer interface bus, but the concept is identical.


At this point, the people that made measurements were quite happy with the situation. They could have a whole slew of measurements made with the touch of a button. The computer would run all the instruments and record the results. There was little to deploy or stow. In fact, since so many instruments were crammed into these rack-em-stack-em measurement systems, some systems got so big that you needed to carry whatever you were measuring to the measurement system, rather than the other way around. But that suited measurement makers just fine.

On the other hand, the people that paid for these measurement systems (seldom the same people as those using them) were somewhat upset. They didn’t like how much money these systems were costing, how much room they took up, how much power they used, and how much heat they threw off. Racking up every conceivable measurement instrument into a huge, integrated system cost a mint, and it was obvious to everyone that there were a lot of duplicated parts in these big racks of instruments.

Modular Instruments

As I mentioned above, there was an alternative kind of measurement system where measurement instruments were put into smaller, plug-in packages that connected to a common bus. This sort of approach is called modular instrumentation. Since this is essentially a packaging concept rather than any sort of architecture paradigm, modular instruments are not necessarily synthetic instrumentation at all. In fact, they usually aren’t, but since some of the advantages of modular packaging correspond to advantages of synthetic system design, the two are often confused.

Modular packaging can eliminate redundancy in a way that seems the same as how synthetic instruments eliminate redundancy. Modular instruments are boiled down to their essential measurement-specific components, with nonessential things like front panels, power supplies, and cooling systems shared among several modules.

Modular design saves money in theory. In practice, however, cost savings are often not realized with modular packaging. Anyone attempting to specify a measurement or test system in modular VXI packaging knows that the same instrument in VXI often costs more than an equivalent standalone instrument. This seems absurd given the fact that the modular version has no power supply, no front panel, and no processor. Why this economic absurdity occurs is more of a marketing question than a design optimization paradox, but the fact remains that modular approaches, although the darling of engineers, don’t save as much in the real world as you would expect.

One might be tempted to point at the failure of modular approaches to yield true cost savings and predict the same sort of cost savings failure for synthetic instrumentation. The situation is quite different, however. The modular approach to eliminating redundancy and reducing cost does not go nearly as far as the synthetic instrument approach does. A synthetic instrument design will attempt to eliminate redundancy by providing a common instrument synthesis platform that can synthesize any number of instruments with little or no additional hardware. With a modular design, when you want to add another instrument, you add another measurement-specific hardware module. With a synthetic instrument, ideally you add nothing but software to add another instrument.
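
The contrast can be sketched in a few lines of code. In the following illustrative Python fragment (all names invented for the example), the generic hardware is represented by a single acquisition routine, each instrument is nothing more than software applied to the same samples, and adding another instrument means adding another entry to a table rather than another hardware module.

```python
# A minimal sketch of the claim above, with invented names: the "hardware" is
# one generic digitizer, and each instrument is nothing but software layered on
# it. Adding a new instrument adds an entry to the registry -- no new hardware.
from typing import Callable, Dict, List
import math

def acquire(n: int = 1024) -> List[float]:
    """Stand-in for the generic digitizer: return n samples of a test signal."""
    return [math.sin(2 * math.pi * 50 * k / n) for k in range(n)]

INSTRUMENTS: Dict[str, Callable[[List[float]], float]] = {
    "dc_voltmeter": lambda s: sum(s) / len(s),                   # mean value
    "rms_voltmeter": lambda s: math.sqrt(sum(x * x for x in s) / len(s)),
}

# "Adding an instrument" is adding software only: one more entry, same hardware.
INSTRUMENTS["peak_detector"] = lambda s: max(abs(x) for x in s)

samples = acquire()
for name, measure in INSTRUMENTS.items():
    print(f"{name}: {measure(samples):.4f}")
```

A real synthetic measurement system is of course far more involved, but the division of labor is the same: the hardware stays fixed and generic, while the measurement-specific behavior lives entirely in software.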

Synthetic Instruments Defined

Synergy means behavior of whole systems unpredicted by the behavior of their parts taken separately.

—R. Buckminster Fuller[B4]

Fundamental Definitions

Synthetic Measurement System

A synthetic measurement system (SMS) is a system that uses synthetic instruments implemented on a common, general-purpose, physical hardware platform to perform a set of specific measurements.

Synthetic Instrument

A synthetic instrument (SI) is a functional mode or personality component of a synthetic measurement system that performs a specific synthesis or analysis function using specific software running on generic, nonspecific physical hardware.


There are several key words in these definitions that need to be emphasized and further amplified.

Synthesis and Analysis

The word “synthetic” in the phrase synthetic instrument might seem to indicate that synthetic instruments are synthesizers—that they do synthesis. This is a mistake. When I say synthetic instrument, I mean that the instrument is being synthesized. I am not implying anything about what the instrument itself does.

A synthetic instrument might indeed be a synthesizer, but it could just as easily be an analyzer, or some hybrid of the two.

I’ve heard people suggest the term “analytic instruments” rather than synthetic instruments in the context of some analysis instrument built with a synthetic architecture, and this isn’t really correct either. Remember, you are synthesizing an instrument; the instrument itself may synthesize something, but that’s another matter.

Generic Hardware

Synthetic instruments are implemented on generic hardware. This is probably the most salient characteristic of a synthetic instrument. It’s also one of the bitterest pills to swallow when adopting an SI approach to measurements. Generic means that the underlying hardware is not explicitly designed to do the particular measurement. Rather, the underlying hardware is explicitly designed to be general purpose. Measurement specificity is encapsulated in software.

An exact analogy to this is the relationship between specific digital circuits and a general-purpose CPU. A specific digital circuit can be designed and hardwired with digital logic parts to perform a specific calculation. Alternatively, a microprocessor (or, better yet, a gate array) could be used to perform the same calculation using appropriate software. One case is specific, the other generic, with the specificity encapsulated in software.

Figure 1-2. Digital hardwired logic versus CPU

The reason this is such a bitter pill is that it moves many instrument designers out of their hardware comfort zone. The orthodox design approach for instrumentation is to design and optimize the hardware so as to meet all measurement requirements. Specifications for measurement systems reflect this optimized-hardware orientation. Software is relegated to a subordinate role of collecting and managing measurement results, but no fundamental measurement requirements are the responsibility of any software.

With a synthetic instrumentation approach, the responsibility for meeting fundamental measurement requirements is distributed between hardware and software. In truth, the measurement requirements are now primarily a system-level requirement, with those high-level requirements driving lower-level requirements. If anything, the result is that more responsibility is given to software to meet detailed measurement requirements. After all, the hardware is generic. As such, although there will be some broad-brush optimization applied to the hardware to make it adequate for the required instrumentation tasks, the ultimate responsibility for implementing detailed instrumentation requirements belongs to software.


Once system planners and designers understand the above point, they have a way out of a classic dilemma of test system design. I have seen many first attempts at synthetic instrumentation where this was not understood.

In these misguided efforts, the hardware designers continued to bear most or all of the responsibility for meeting system-level measurement performance requirements. Crucial performance aspects that were best or only achievable in system software were assigned to hardware solutions, with hardware engineers struggling against their own system design and the laws of physics to make something of the impossible hand they had been dealt. Software engineers habitually ignored key measurement performance issues under the invalid assumption that “the hardware does the measurement.” They focused instead on well-known TPS issues (configuration management, test executive, database, presentation, user interface (UI), and so forth) that are valid concerns, but which should not be their only concerns.

One of the goals of this book is to raise awareness of this fact among people contemplating the development of synthetic instrumentation: a synthetic instrument is a system-level concept. As such, it needs a balanced system-level development effort to have any chance of being successful. Don’t fall into the trap of turning to hardware as the solution for every measurement problem. Instead, synthesize the solution to the measurement problem using software and hardware together.

Organizations that develop synthetic instruments should make sure that the proper emphasis is placed on software. System-level goals for synthetic instruments are achieved by software. Therefore, the system designer should have a software skill-set and be intimately involved in the software development. When challenges are encountered during design or development, software solutions should be sought vigorously, with hardware solutions strongly discouraged. If every performance specification shortfall is fixed by a hardware change, you know you have things backward.

Dilemma—All Instruments Created Equal

When the Founding Fathers of the United States wrote into the Declaration of Independence the phrase “all men are created equal,” it was clear to everyone then, and still should be clear to everyone now, that this statement is not literally true. Obviously, there are some tall men and some short men; men differ in all sorts of qualities. Half the citizens to whom that phrase refers are actually women.

What the Founding Fathers were doing was to establish a government that would treat all of its citizens as if they were equal. They were perfectly aware of the inequalities between people, but they felt that the government should be designed as if citizens were all equivalent and had equivalent rights. The government should be blind to the inherent and inevitable differences between citizens.

Doubtless, the resources of government are always limited. Some citizens who are extremely unequal to others may find that their rights are altered from the norm. For example, an 8-foot tall man might find some difficulty navigating most buildings, but the government would find it difficult to mandate that doorways all be taller than 8 feet.

Thus, a consequence of the “created equal” mandate is that the needs of extreme minorities are neglected. This is a dilemma. Either one finds that extraordinary amounts of resources are devoted to satisfying these minority needs, which is unfair to the majority, or the needs of the minority are sacrificed to the tyranny of the majority. The endless controversies that result are well known in U.S. history.

You may be wondering where I’m going with this digression on U.S. political thought and why it has any place in a book about synthetic instrumentation. Well, the same sort of political philosophy characterizes the design of synthetic instrumentation and synthetic measurement systems. All instruments are created equal by the fiat of the synthetic instrument design paradigm. That means, from the perspective of the system designer, that the hardware design does not focus on and optimize the specific details of the specific instruments to be implemented. Rather, it considers the big picture and attempts to guarantee that all conceivable instruments can flourish with equal “happiness.”

But we all know that instruments aren’t created equal. As with government, there are inevitable trade-offs in trying to provide a level playing field for all possible instruments. Some types of instruments and measurements require far different resources than others. Attempting to provide for these oddball measurements will skew the generic hardware in such a way that it does a bad job supporting more common measurements.


Here’s an example. Suppose there is a need for a general-purpose test and measurement system that would be able to test any of a large number of different items of some general class, and determine if they work. An example of this would be something like a battery tester. You plug your questionable battery into the tester, push a button, and a green light illuminates (or meter deflects to the green zone) if the battery is good, or red if bad.

But suppose that it was necessary to test specialized batteries, like car batteries, or high power computer UPS batteries, or tiny hearing aid batteries. Nothing in a typical consumer battery tester does a good job of this. To legitimately test big batteries you would want to have a high-power load, cables thick enough to handle the current, and so on. Small batteries need tiny connectors and sockets that fit their various shapes. Adding the necessary parts to make these tests would drive up the cost, size, and other aspects of the tester.

Thus, there seems to be an inherent compromise in the design of a generic test instrument. The dilemma is either to accept inflated costs to provide a foundation for rarely needed, oddball tests, or to drop the support for those tests, sacrificing the ability to address all test needs.

Fortunately, synthetic instrumentation provides a way to break out of this dilemma to some degree—a far better way than traditional instrumentation provided. In a synthetic instrumentation system, there is always the potential to satisfy a specific, oddball measurement need with software. Although software always has costs (both nonrecurring and recurring), it is most often the case that handling a minority need with software is easier to achieve than it is with hardware.

A good, general example of this is how digital signal processing (DSP) can be applied in post-processing to synthesize a measurement that would normally be done in real time with hardware. A specific case would be demodulating some form of encoding. Rather than include a hardware demodulator in order to perform some measurement on the encoded data, DSP can be applied to the raw data to demodulate in post-processing. In this way, a minority need is addressed without adding specialized hardware.
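For instance, here is a minimal Python sketch (the function and parameter names are hypothetical) that demodulates an AM signal from raw digitized samples entirely in post-processing, with no hardware demodulator anywhere:

import numpy as np

def am_demodulate(raw_samples, sample_rate_hz, carrier_hz, envelope_bw_hz=5e3):
    # Recover the AM envelope from raw digitized samples, all in software.
    n = len(raw_samples)
    t = np.arange(n) / sample_rate_hz
    # Mix down to baseband with a software local oscillator.
    baseband = raw_samples * np.exp(-2j * np.pi * carrier_hz * t)
    # Crude low-pass filter: zero FFT bins beyond the envelope bandwidth.
    spectrum = np.fft.fft(baseband)
    freqs = np.fft.fftfreq(n, d=1.0 / sample_rate_hz)
    spectrum[np.abs(freqs) > envelope_bw_hz] = 0.0
    # Magnitude of the filtered baseband is proportional to the envelope.
    return np.abs(np.fft.ifft(spectrum))

# Example: a 100 kHz carrier, 60% modulated by a 1 kHz tone, sampled at 1 MS/s.
fs, fc = 1e6, 100e3
t = np.arange(10000) / fs
raw = (1 + 0.6 * np.sin(2 * np.pi * 1e3 * t)) * np.cos(2 * np.pi * fc * t)
envelope = am_demodulate(raw, fs, fc)

The same raw capture could be demodulated differently later if the measurement changes, which is the point: the minority need is met in software.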


Continuing with this example, if it turns out that DSP post-processing does not have sufficient performance to achieve the goal of the measurement, one option is to upgrade the controller portion of the control, codec, conditioning (CCC) instrument. Maybe then the DSP will run adequately. Yes, the hardware is now altered for the benefit of a single test, but not by adding hardware specific to that test. This is one of my central points. As I will discuss in detail later on, I believe it is a mistake to add hardware specific to a particular test.

Advantages of Synthetic Instruments

No one would design synthetic instruments unless there was an advantage: above all, a cost advantage. In fact, there are several advantages that allow synthetic instruments to be more cost effective than their nonsynthetic competitors.

Eliminating Redundancy

Ordinary rack-em-stack-em instrumentation contains repeated components. Every measurement box contains a slew of parts that also appear in every other measurement box. Typical repeated parts include:

Power Supply

Front Panel Controls

Computer Interfaces

Computer Controllers

Calibration Standards

Mechanical Enclosures

Interfaces

Signal Processing

A fundamental advantage of a synthetic approach to measurement system design is that adding a new measurement does not imply that you need to add another measurement box. If you do add hardware, the hardware comes from a menu of generic modules. Any specificity tends to be restricted to the signal conditioning needed by the sensor or effector being used.


Stimulus Response Closure: The Calibration Problem

Many of the redundancies eliminated by synthetic instrumentation are the same as redundancies eliminated by modular instrument approaches. However, one significant redundancy that synthetic instruments have the unique ability to eliminate is the response components that are responsible for stimulus, and the stimulus components that support response. I call this efficiency closure. I will show, however, that this sort of redundancy elimination, while facilitated by synthetic approaches, has more to do with using a system-level optimization rather than an instrument-level optimization.

A signal generator (a box that generates an AC sine wave at some frequency and amplitude) is a typical stimulus instrument that you may encounter in a test system. When a signal generator creates the stimulus signal, it must do so at a known, calibrated signal level. Most signal generators achieve this by a process called internal leveling. The way internal leveling is implemented is to build a little response measurement system inside the signal generator. The level of the generator is then adjusted in a feedback loop so as to set the level to a known, calibrated point.

As you can see in Figure 1-3, this stimulus instrument comprises not only stimulus components, but also response measurement components. It may be the case that elsewhere in the overall system, those response components needed internally in the signal generator are duplicated in some other instruments. Those components may even be the primary function of what might be considered a “true” response instrument. If so, the response function in the signal generator is redundant.

Figure 1-3. Signal-generator leveling loop


Naturally, this sort of redundancy is a true waste only in an integrated measurement system with the freedom to use available functions in whatever manner desired, combining as needed. A signal generator has to work standalone, and so must carry a redundant response system within itself. Even a synthetic signal generator designed for standalone modular VXI or PXI use must have this response measurement redundancy within.

Therefore, it would certainly be possible to look at a system comprising a set of nonsynthetic instruments and to optimize away stimulus response redundancy. That would be possible, but it’s difficult to do in practice. The reason it is difficult is that nonsynthetic instruments tend to be specific in their stimulus and response functions. It’s difficult to match up functions and factor them out.

In contrast, when one looks at a system designed with synthetic stimulus and response components, the chance of finding duplicate functions is much higher. If synthetic functions are all designed with the same signal conditioner, converter, DSP subsystem cascade, then a response system provided in a stimulus instrument will have the same exact architecture as one provided in a response instrument. The duplications factor out directly.

Measurement Integration

One of the most powerful concepts associated with synthetic instrumentation is the concept of measurement integration.

Fundamental Definitions

Measurement Integration

Combining disparate measurements into a single measurement map.

From my discussion of a measurement map in the section titled “Abscissas and Ordinates,” you will learn how to describe a measurement in such a way that encourages measurement integration. When you specify a list of ordinates and abscissas, and state how the abscissas are sequenced, you have effectively packaged a bunch of measurements into a tidy bundle. This is measurement integration in its purest sense.


Measurement integration is important because it allows you to get the most out of the data you take. The data set is seen as an integrated whole that is analyzed, categorized, and visualized in whatever way makes the most sense for the given test. This is in contrast with the more prevalent way of approaching test where a separate measurement is done in a sequential process with each test. There is no intertest communication (beyond basic prerequisites and an occasional parameter). The result of this redundancy is slow testing and ambiguity in the results.

Measurement Speed

Synthetic instruments are unquestionably faster than ordinary instruments. There are many reasons for this fact, but the principal reason is that a synthetic instrument does a measurement that is exactly tuned to the needs of the test being performed. Nothing more, nothing less. It does exactly the measurement that the test engineer wants.

In contrast, ordinary instruments are designed to do a certain kind of measurement, but the way they do it may not be optimized for the task at hand. The test engineer is stuck with what the ordinary instrument has in its bag of tricks.

For example, there is a speed-accuracy trade-off on most measurements. If an instrument doesn’t know exactly how much accuracy you need for a given test, it needs to take whatever time it takes for the maximum accuracy you might ever want. Consequently, you get the slowest measurement. It is true that many conventional instruments that make measurements with a severe speed-accuracy trade-off often have provision to specify a preference (e.g., a frequency counter that allows you to specify the count time and/or digits of precision), but the test engineer is always locked into the menu of compromises that the instrument maker anticipated.

Another big reason why synthetic instrumentation makes faster measurements is that the most efficient measurement techniques and algorithms can be used. Consider, for example, a common swept filter spectrum analyzer. This is a slow instrument when fine frequency resolution is required simultaneously with a wide span. In contrast, a synthetic spectrum analyzer based on fast Fourier transform (FFT) processing will not suffer a slowdown in this situation.
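To illustrate the point, here is a minimal Python sketch of an FFT-based spectrum estimate (the names and numbers are illustrative, not drawn from any particular instrument). The frequency resolution comes from the capture length, so fine resolution over a wide span costs one capture rather than a long sweep:

import numpy as np

def fft_spectrum(samples, sample_rate_hz):
    # Power spectrum of one captured block; resolution bandwidth ~ sample_rate/N.
    windowed = samples * np.hanning(len(samples))        # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    return freqs, power_db

# One million samples at 100 MS/s resolve roughly 100 Hz bins across the whole
# 50 MHz span in a single 10 ms capture, with no sweeping.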


Decreased time to switch between measurements is another noteworthy speed advantage of synthetic instrumentation. This ability goes hand-in-hand with measurement integration. When you can combine several different measurements into one, eliminating both the intermeasurement setup times and the redundancies between sets of overlapping measurements, you can see surprising speed increases.

Longer Service Life

Synthetic measurement systems don’t become obsolete as quickly as less flexible dedicated measurement hardware systems. The reason for this fact is quite evident: synthetic measurement systems can be reprogrammed to perform future measurements, not yet imagined, at the same time as they can perform legacy measurements done with older systems. Synthetic measurement systems give you your cake and allow you to eat it too, at least in terms of nourishing a wide variety of past, present, and future measurements.

In fact, one of the biggest reasons the U.S. military is so interested in synthetic measurement systems is this unique ability to support the old and the new with one unchanging system. Millions of man-hours are invested in legacy test programs that expect to run on specific hardware. That specific hardware becomes obsolete. Now what do we do?

Rather than dumping everything and starting over, a better idea is to develop a synthetic measurement system that can implement synthetic instruments that do the same measurements as hardware instruments from the old system while, at the same time, being able to do new measurements. Best of all, the new SMS hardware is generic to the measurements. That means it can go obsolete piecemeal or in great chunks, and the resulting hardware changes (if done right) don’t affect the measurements significantly.

Synthetic Instrument Misconceptions

Now that you understand what a synthetic instrument is, let’s tackle some of the common misconceptions surrounding this technology.


Why not Just Measure Volts with a Voltmeter?

The main goal of synthetic instrumentation is to achieve instrument integration through the use of multipurpose stimulus/response synthesis/analysis hardware. Although there may be nonsynthetic, commercial off-the-shelf (COTS) solutions to various requirements, we intentionally eschew them in favor of doing everything with a synthetic CCC approach.

It should be obvious that a COTS, measurement-specific instrument that has been in production many years, and has gone through myriad optimizations, reviews, updates, and improvements, will probably do a better job of measuring the thing it was designed to measure than a first-revision synthetic instrument.

However, as the synthetic instrument is refined, there comes a day when the performance of the synthetic instrument rivals or even surpasses the performance of the legacy, single-measurement instrument. The reason this is possible is that the synthetic instrument can be continuously improved, with better and better measurement techniques incorporated, even completely new approaches. The traditional instrument can’t do that.

Synthetic Musical Instruments

This book is about synthetic measurement instruments, but the concept is not far from that of a synthetic musical instrument. Musical instrument synthesizers generate sound-alike versions of many classic instruments by means of generic synthesis hardware. In fact, the quality of the synthesis in synthetic musical instruments now rivals, and in some cases surpasses, the musical-aesthetic quality of the best classic mechanical musical instruments. Modern musical synthesis systems also can accurately imitate the flaws and imperfections in traditional instruments. This situation is exactly analogous to the eventual goal of synthetic instruments—that they will rival and surpass, as well as imitate, classic dedicated hardware instruments.

Virtual Instruments

In the section titled “History of Automated Measurement,” I described automated, rack-em-stack-em systems. People liked these systems, but they were too big and pricey. As a consequence, modular approaches were developed that reduced size and presumably cost by eliminating redundancy in the design. These modular packaging approaches had an undesirable side effect: they made the instrument front panels tiny and crowded. Anybody who used modular plug-in instruments in the 1970s and 1980s knows how crowded some of these modular instrument front panels got.

Figure 1-4. Crowded front panel on Tektronix spectrum analyzer

It occurred to designers that if the instrument could be fully controlled by computer, there might be no need for a crowded front panel. Instead, a soft front panel could be provided on a nearby PC that would serve as a way for a human to interact with the instrument. Thus, the concept of a virtual instrument appeared. Virtual instruments were actually conventional instruments implemented with a pure, computer-based user interface.

Certain software technologies, like National Instruments’ LabVIEW product, facilitated the development of virtual instruments of this sort. The very name “virtual instrument” is deeply entwined with LabVIEW. In a sense, LabVIEW defines what a virtual instrument was, and is.


Synthetic instruments running on generic hardware differ radically from ordinary instrumentation, where the hardware is specific to the measurement. Therefore, synthetic instruments also differ fundamentally from virtual instruments where, again, the hardware is specific to a measurement. In this latter case, however, the difference is more disguised since a virtual instrument block diagram might look similar to a synthetic instrument block diagram. Some might call this a purely semantic distinction, but in fact, the two are quite different.

Virtual instruments are a different beast than synthetic instruments because virtual instrument software mirrors and augments the hardware, providing a soft front panel, or managing the data flow to and from a conventional instrument in a rack, but does not start by creating or synthesizing something new from generic hardware.

This is the essential point: synthetic instruments are synthesized. The whole is greater than the sum of the parts. To use Buckminster Fuller’s word, synthetic instruments are synergistic instruments[B4]. Just as a triangle is more than three lines, synthetic instruments are more than the triangle of hardware (control, codec, conditioning) they are implemented on.

Therefore, one way to tell if you have a true synthetic instrument is to examine the hardware design alone and to try to figure out what sort of instrument it might be. If all you can determine are basic facts, like the fact that it’s a stimulus or response instrument, or like the fact that it might do something with signals on optical fiber, but not anything about what it’s particularly designed to create or measure—if the measurement specificity is all hidden in software—then you likely have a synthetic instrument.

I mentioned National Instruments’ LabVIEW product earlier in the context of virtual instruments. The capabilities of LabVIEW are tuned more toward an instrument stance rather than a measurement stance (at least at the time of this writing), and therefore do not currently lend themselves as effectively to the types of abstractions necessary to make flexible synthetic instrumentation as do other software tools. In addition, LabVIEW’s nonobject-oriented approach to programming prevents the application of powerful object-oriented (OO) benefits like efficient software reuse. Since OO techniques work well with synthetic instrumentation, LabVIEW’s shortcoming in this regard represents a significant limitation.


That said, there’s no reason that LabVIEW can’t be used as a tool for creating and manipulating synthetic instruments, at some level. Just because LabVIEW is currently tuned to be a non-OO virtual instrument tool doesn’t mean that it can’t be an SI tool to some extent. Also, it should be noted that the C++-based LabWindows environment doesn’t share as many limitations as the non-OO LabVIEW tools.

Analog Instruments

One common misconception about synthetic instruments is that they can be only analog measuring instruments. That is to say, they are not appropriate for digital measurements. Because of the digitizer, processing has moved from the digital world to the analog world, and what results is only useful for analog measurements.

Nothing could be further from the truth. All good digital hardware engineers know that digital circuitry is no less “analog” than analog circuits. Digital signaling waveforms, ideally thought of as fixed 1 and 0 logic levels, are anything but fixed: they vary, they ring, they droop, they are obscured by glitches, spurs, hum, noise, and other garbage.

Performing measurements on digital systems is a fundamentally analog endeavor. As such, synthetic instrumentation implemented with a CCC hardware architecture is equally appropriate for digital test and analog test.

There is, without doubt, a major difference between the sorts of instruments that are synthesized to address digital versus analog measurement needs. Digital systems often require many more simultaneous channels of stimulus and response measurement than do analog systems. But bandwidths, voltage ranges, and even interfacing requirements are similar enough in enough cases to make the unified synthetic approach useful for testing both kinds of systems with the same hardware asset.

Another difference between analog- and digital-oriented synthetic measurement systems is the signal conditioning used. In situations where only the data is of interest, rather than the voltage waveform itself, the best choice of signal conditioner may be nonlinear. Choose nonlinear digital-style line drivers and receivers in the conditioner. Digital drivers will give us better digital waveforms, per dollar, than linear drivers. Similarly, when implementing many channels of response measurement, a digital receiver will be far less expensive than a linear response asset.


Chapter 2: Synthetic Measurement System Hardware Architectures

The heart of the hardware system concept for synthetic instrumentation is a cascade of three subsystems: digital control and timing, analog-digital conversion (codec), and analog signal conditioning. The underlying assumption in the synthetic instrument concept is that this architecture is a good choice for next-generation instrumentation. In this chapter, I will explore the practical and theoretical implications of this concept. Other architectural options and concepts that relate to the fundamental concept will also be considered.

System Concept—The CCC Architecture

The cascade of three subsystems, control, codec, and conditioning, is shown in Figure 2-1.


I will call this architecture the three C’s or The CCC Architecture: Control, Codec, and Conditioning. In a stimulus asset, the controller generates digital signal data that is converted to analog form by the codec, which is then adjusted in voltage, current, bandwidth, impedance, coupling, or has any of a myriad of possible interface transformations performed by the conditioner.

Figure 2-1. Basic CCC cascade
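A toy software model of this stimulus cascade, with purely illustrative class names, might look like the following; it is only a sketch of the data flow, not a real instrument implementation:

import numpy as np

class Controller:
    def generate(self, n_samples, freq, sample_rate):
        t = np.arange(n_samples) / sample_rate
        return np.sin(2 * np.pi * freq * t)              # digital signal data

class Codec:
    def __init__(self, bits=12, full_scale=1.0):
        self.levels, self.full_scale = 2 ** bits, full_scale
    def to_analog(self, samples):
        # Quantize to the converter's resolution (models D/A granularity).
        steps = self.levels / 2 - 1
        return np.round(samples / self.full_scale * steps) * self.full_scale / steps

class Conditioner:
    def __init__(self, gain=2.0, offset=0.0):
        self.gain, self.offset = gain, offset
    def condition(self, analog):
        return self.gain * analog + self.offset          # interface to the DUT

# Stimulus flow: controller -> codec -> conditioner, as in Figure 2-1.
stimulus = Conditioner().condition(Codec().to_analog(
    Controller().generate(1000, freq=1e3, sample_rate=1e6)))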


Signal Flow

Thus, the “generic” hardware used as a platform for synthetic instrumentation comprises a cascade of three functional blocks. This cascade might flow either way, depending on the mode of operation. A sensor might provide a signal input for signal conditioning, analog-to-digital conversion (A/D), and processing, or, alternatively, processing might drive digital-to-analog conversion (D/A), which drives signal conditioning to an effector.

In my discussions of synthetic instrumentation architecture, I will often treat digital-to-analog conversion as an equivalent to analog-to-digital conversion. The sense of the equivalence is that these two operations represent a coding conversion between the analog and digital portions of the system. The only difference is the direction of signal flow through them. This is exactly the same as how I will refer to signal conditioners as a generic class that comprises both stimulus (output) conditioners, and response (input) conditioners.


Thus, in this book, when I refer to either of the two sorts of converters as an equivalent element in this sense, I will often call it a codec1 or converter, rather than be more restrictive and call it an A/D or a D/A. This will allow us to discuss certain concepts that apply to both stimulus and response instruments equally. Similarly, I will refer to signal conditioners and digital processors (controllers) generically as well as in a specific stimulus or response context.

Figure 2-2. Synthetic architecture cascade—flow alternatives

1 Although the word “CoDec” implies both a Coder and a Decoder, I will also use this word to refer to either individually or collectively.


The Synthetic Measurement System

When you put a stimulus and response asset together, with a device under test (DUT)2 in the middle, you now have a full-blown synthetic measurement system (SMS).


Chinese Restaurant Menu (CRM) Architecture

When one tries to apply the CCC hardware architecture to a wide variety of measurement problems, it often becomes evident that practical limitations arise in the implementation of a particular subsystem with respect to certain measurements. For example, voltage ranges might stress the signal conditioner, or bandwidth might stress the codec, or data rates might stress the controller, and so on. Given this problem, the designer is often inclined to start substituting sections of hardware for different applications. With this approach taken, the overall system begins to comprise several CCC cascades, with portions connected as needed to generate a particular stimulus. Together they form a sort of “Chinese Restaurant Menu” of possibilities—CRM Architecture for short.

Figure 2-3. Synthetic measurement system

2 In the automated test community, engineers often use the jargon acronym DUT to refer to the device under test. Some engineers prefer unit under test (UUT). Whatever you call it, DUT or UUT, it represents the “thing” you are making measurements about. Most often this is a physical thing, a system possibly. Other times it may be more abstract, a communications channel, for instance. In all cases, it’s something separate from the measurement instrument.


To create an instrument hardware platform, select one item from column A (a control and timing asset), one item from column B (a codec asset), and one item from column C (a signal conditioner) to form a single CCC cascade from which to synthesize your instrument.

For example, consider the requirements for a signal generator versus a pulse generator. A pulse generator, unless it needs rise time control, can get away with a “1-bit” D/A. Even with rise time control, only a few bits are really needed if a selection of reconstruction filters is available. On the other hand, a high-speed pulse timing controller is needed, possibly with some analog, fine-delay control, and the signal conditioning would be best done with a nonlinear pulse buffer with offset capability and specialized filtering for rise/fall-time control.

In contrast, the signal generator fidelity requirements lead us to a finely quantized D/A, with at least 12 bits. A direct digital synthesis (DDS)-oriented controller is needed to generate periodic waveforms, and a linear, low-distortion, analog buffer amplifier is mandatory for signal conditioning.

Therefore, when faced with requirements for a comprehensive suite of tests, one way to handle the diversity of requirements is to provide multiple choices for each of the “three C’s” in the CCC architecture.


Figure 2-4. CRM architecture


Table 2-1 is one possible menu. Column A is the control and timing circuitry, column B is the D/A codec, and column C is the signal conditioning. To construct a particular stimulus, you need to select appropriate functions from columns A, B, and C that all work together for your application.

Table 2-1. CRM architecture

Control & Timing                    Codec Conversion                           Signal Conditioning
DSP or µP                           1-bit (on/off) Voltage Source (100 GHz)    Nonlinear Digital Driver
High Speed State Machine            18-bit, 100 kHz D/A                        Wideband Linear Amplifier
Med Speed State Machine with RAM    12-bit, 100 MHz D/A                        High Voltage Amplifier
Parallel I/O Board driven by TP     8-bit, 2.4 GHz D/A                         Up-converter

The items from the CRM architecture can be selected on the spot by means of switches, or, alternatively, stimulus modules can be constructed with hardwired selections from the menu. The choice of module can be specified in the measurement strategy, or it can be computed using some heuristic.
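One possible heuristic is sketched below in Python; the menu entries follow Table 2-1, but the data structure and the “slowest codec that still meets the requirement” rule are illustrative assumptions:

CODEC_MENU = [
    {"name": "1-bit (on/off) voltage source", "bits": 1,  "rate_hz": 100e9},
    {"name": "18-bit, 100 kHz D/A",           "bits": 18, "rate_hz": 100e3},
    {"name": "12-bit, 100 MHz D/A",           "bits": 12, "rate_hz": 100e6},
    {"name": "8-bit, 2.4 GHz D/A",            "bits": 8,  "rate_hz": 2.4e9},
]

def pick_codec(required_bits, required_rate_hz):
    # Keep only codecs that meet both requirements, then take the slowest one.
    candidates = [c for c in CODEC_MENU
                  if c["bits"] >= required_bits and c["rate_hz"] >= required_rate_hz]
    if not candidates:
        raise ValueError("no codec on the menu meets these requirements")
    return min(candidates, key=lambda c: c["rate_hz"])

# A fine-fidelity signal generator and a fast pulse generator land on
# different menu items drawn from the same generic pool.
print(pick_codec(required_bits=12, required_rate_hz=50e6)["name"])   # 12-bit, 100 MHz D/A
print(pick_codec(required_bits=1,  required_rate_hz=1e9)["name"])    # 8-bit, 2.4 GHz D/A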

But the CRM design is somewhat of a failure; it compromises the goal of synthetic design. The goal of synthetic instrumentation design is to use a single hardware asset to synthesize any and all instruments. When you are allowed to pick and choose, even from a CRM of CCC assets, you have taken your first step down the road to hell—the road to rack-n-stack modular instrumentation. In the limit, you are back to measurement-specific hardware again, with all the redundancy put back in. Rats! And you were doing so well!

Let’s take another look at the signal generator and pulse generator from the “pure” synthetic instrumentation perspective. Maybe you can save yourself.

Parameterization of CCC Assets

One way to fight the tendency to design with a CRM architecture is to use asset parameterization. Instead of swapping out a CCC asset completely, design the asset to have multiple modes or personalities so that it can meet multiple requirements without being totally replaced.



This sort of approach is not unlike the nonpolymorphic functional factorizations in procedural software, when type parameters are given to a function. Rather than making complete new copies of an asset with certain aspects of its behavior altered to fit each different application, a single asset is used with those changing performance aspects programmable based on its current type.

For example, rather than making several different signal conditioners with different bandwidths, make a single signal conditioner that has selectable bandwidth. To the extent that bandwidth can be parameterized in a way that does not require the whole asset to be replaced (equivalent to the CRM design), the design is now more efficient through its use of parameterization.
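A minimal sketch of this idea (the list of selectable bandwidths is an assumption) is a single conditioner object whose bandwidth is a programmable parameter rather than a reason to swap hardware:

class ParameterizedConditioner:
    SELECTABLE_BANDWIDTHS_HZ = (1e5, 1e6, 1e7, 1e8)

    def __init__(self):
        self.bandwidth_hz = self.SELECTABLE_BANDWIDTHS_HZ[0]

    def set_bandwidth(self, required_bw_hz):
        # Program the smallest selectable bandwidth that covers the need.
        for bw in self.SELECTABLE_BANDWIDTHS_HZ:
            if bw >= required_bw_hz:
                self.bandwidth_hz = bw
                return bw
        raise ValueError("requirement exceeds this asset's parameter range")

# The same physical asset serves a narrowband audio test and a wideband test
# simply by being reprogrammed, instead of being swapped out.
conditioner = ParameterizedConditioner()
conditioner.set_bandwidth(20e3)    # audio-bandwidth measurement
conditioner.set_bandwidth(50e6)    # wideband measurement, same hardware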

Figure 2-5. Parameterized architecture

The choice between multiple modules with different capabilities and a single module with multiple capabilities is a design decision that must be made by the synthetic measurement system developers based on a complete view of the requirements. There is no way to say a priori what the right way is to factor hardware. The decision must be made in the context of the design requirements. However, it is important to be aware of the tendency to modularize and solve specific measurement issues by racking up more specific hardware, which leads away from the synthetic instrument approach.

Architectural Variations

Although in many circumstances there is good potential for the basic synthetic instrument architecture, one can anticipate that some architectural variations and options need to be considered. Sadly, many of these variations backslide to habitual, sanctioned approaches. There’s nothing to be done about this; the realities of commercial development must be acknowledged.

The reasons synthetic instrument designers consider deviations from the pure “three C’s” tend to be matters of realizability, cost, risk, or marketing. Limits of the state of the art may dictate that a more conservative design approach is taken. And, for whatever reason, customers may simply want a different approach.

In this book, I won’t bother to consider architecture variations that are expedient for reasons of risk or marketing, but I will mention a few variations that are wins with respect to cost and realizability. These tend to be SMS architectural enhancements rather than mere expedient hacks to get something built. Enhancements like compound stimulus and multiplexing are two of the most prominent.

Compound Stimulus

One architecture enhancement of CCC that is often used is called compound stimulus, where one stimulus is used in the generation of another stimulus. The situation to which this enhancement lends itself is whenever state-of-the-art D/A technology is inadequate, expensive, or risky to use to generate an encoded stimulus signal directly with a single D/A.

The classic example of compound stimulus is the use of an up-converter to generate a modulated bandpass signal waveform. This is accomplished by a combination of subsystems as shown in Figure 2-6.


Figure 2-6. Compound stimulus


Note that the upper “synthesizer” block shows an internal structure that parallels the desired CCC structure of a synthetic instrument. The output of this synthesizer is fed down to the signal conditioning circuit of the lower stimulus system. The lower system generates the modulation that is up-converted to form the compound stimulus.

Whenever a signal encoding operation is performed, there will be an open input for either the encoded signal, or the signal on which it is encoded. This is an opportunity for compound stimulus. Signal encoding is inherently a stimulus compounding operation.

In recognition of this situation, CCC assets can be deployed with a switch matrix that allows their outputs to be directed not only at DUT inputs, but alternatively to coding inputs on other CCC stimulus assets. This results in a CCC compound matrix architecture that allows us to deploy all assets for the generation of compound-encoded waveforms.
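Numerically, the compounding operation is simply the encoding of one synthesized signal onto another. The following simplified sketch (in a real system the final mixing would normally occur in analog hardware after the D/A) shows a baseband modulation from one cascade up-converted by an LO supplied by another:

import numpy as np

fs = 1e9                       # sample rate of the simulation (hypothetical)
t = np.arange(100000) / fs

# Lower cascade: baseband modulation (a 1 MHz tone here).
modulation = np.sin(2 * np.pi * 1e6 * t)

# Upper cascade ("synthesizer"): the local oscillator at 70 MHz.
lo = np.cos(2 * np.pi * 70e6 * t)

# Compound stimulus: the modulation encoded onto the carrier by the up-converter.
compound_stimulus = modulation * lo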

Simultaneous Channels and Multiplexing

An issue that is often ignored in the development of synthetic measurement systems is the need for multiple, simultaneous stimuli and multiple, simultaneous measurements. With many tests, a single stimulus is not enough. And although multiple responses can most often be measured sequentially, there are some cases where simultaneous measurement of response is paramount to the goal of the test.

Given this need, what do we do? The most obvious solution is to simply build more CCC channels. Duplicate the CCC cascade for stimulus as many times as needed so as to provide the required stimuli. Similarly, duplicate the response cascade to be able to measure as many responses as needed.

This obvious solution certainly can be made to work in many applications, but what may not be obvious is that there are other alternatives. There are other ways to make multiple stimuli, and make multiple response measurements, without using completely duplicated channels. Moreover, it may be the case that duplicated channels cost more and have inferior performance as compared to the alternatives.

What are these cheaper, better alternatives? They all fall into a class of techniques called multiplexing. When you multiplex, you make one channel do the work of several. The way this is accomplished is by taking advantage of orthogonal modes in physical media. Any time uncoupled modes occur, you have an opportunity to multiplex.

There are various forms of multiplexing, each based on a set of modes used to divide the channels. The most common forms used in practice are the following:

Space Division Multiplexing

Time Division Multiplexing

Frequency Division Multiplexing

Code Division Multiplexing

These multiplexing techniques can be implemented in hardware, of course, but they can also be synthetically generated. After all, if you have the ability to generate any of a menu of synthetic instruments on generic hardware, why can’t you synthesize multiple instruments simultaneously?

Space Division Multiplexing

Space division multiplexing (SDM) is a fancy name for multiple channels. Separate channels with physical separation in space is the orthogonal mode set. SDM has the unique advantage of being obvious and simple. If you want two stimuli, build two stimulus cascade systems. If you want two responses, build two response cascades. A more subtle advantage, but again unique to SDM and very important, is the fact that multiple SDM channels each have exactly the same bandwidth performance as a single channel. This may seem trivial, but it is not strictly true with other techniques. Another advantage of space multiplexing is that it tends to achieve good orthogonality. That is to say, there is little crosstalk between channels. What is generated/measured on one channel does not influence another.

But there are many significant disadvantages to space multiplexing. Foremost among these is the fact that N channels implemented with space multiplexing cost at least N times more than a single channel, sometimes more. Another prominent disadvantage of space multiplexing is a consequence of the good orthogonality: because the channels are completely independent, they can drift independently. Thus, gain drift, offsets, and problems like phase and delay skew, among others, will all be worse in a space-multiplexed system.


Time Division Multiplexing

The most common alternative to space multiplexing is time division multiplexing (TDM). The TDM technique is based on the idea of using a multiway switch, often called a commutator, to divide a single channel into multiple channels in a time-shared manner.


Figure 2-7. Space division multiplexing


Figure 2-8. Time division multiplexing

Time multiplexing schemes are easy to implement, and to the extent that the commutator is inexpensive compared to a channel, a TDM approach can be far less expensive than most any other multiplexing approach. Another advantage is that, in using the same physical channel for each measurement or stimulus, there is less concern about interchannel drift or skew.


On the downside, a new concern becomes important: the speed of the commutator in relation to the bandwidth of the signals. Unlike SDM, where it’s obvious that the multiple channels each work as well, bandwidth-wise, as an individual channel, with TDM there may be a problem. The available single-channel bandwidth is shared among the multiplexed channels.

It can be shown[B1], however, that if the commutator visits each channel at least at the Nyquist rate (twice the bandwidth of the channel), all information from each channel is preserved. This is a point that recurs in other multiplexing techniques. It turns out that for N channels TDM multiplexed to have the same bandwidth performance as a single channel with bandwidth B, they need to be multiplexed onto a single channel with bandwidth N times B at a minimum. Thus, TDM (and all other multiplexing techniques) is seen as a space-bandwidth trade-off.

TDM is relatively easy to implement synthetically. Commutation or decommutation are straightforward digital techniques that can be used exclusively in the synthetic realm, or paired with specific commutating hardware. One possible architecture variation that uses a virtual commutator implemented synthetically is shown in Figure 2-9.


Figure 2-9. Time division multiplexing with virtual commutator

The application of TDM shown in Figure 2-9 assumes a multi-input, single-output DUT. Multiplexing is used in a way that allows measurements of all the inputs to be made with a single stimulus channel. The corresponding responses are easily sliced apart in the controller.
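In software, such a virtual commutator amounts to interleaving and de-interleaving blocks of samples. A minimal sketch, with an assumed frame layout and illustrative names:

import numpy as np

def commutate(channel_waveforms, samples_per_slot):
    # Interleave N logical channels onto one physical stream, slot by slot.
    slots = []
    for frame_start in range(0, len(channel_waveforms[0]), samples_per_slot):
        for wf in channel_waveforms:
            slots.append(wf[frame_start:frame_start + samples_per_slot])
    return np.concatenate(slots)

def decommutate(stream, n_channels, samples_per_slot):
    # Undo the interleaving: recover each logical channel's samples.
    slots = stream.reshape(-1, samples_per_slot)
    return [np.concatenate(slots[i::n_channels]) for i in range(n_channels)]

# Three logical stimulus channels time-shared over one physical channel.
chans = [np.full(1000, v) for v in (1.0, 2.0, 3.0)]
stream = commutate(chans, samples_per_slot=100)
recovered = decommutate(stream, n_channels=3, samples_per_slot=100)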


Frequency Division Multiplexing

Another common alternative to space multiplexing is frequency division multiplexing (FDM). The FDM technique is based on the idea of using different frequencies, or subcarriers, to divide a single channel into multiple channels in a frequency-allocated manner.

FDM schemes are somewhat harder to implement than TDM. Some sort of mixer is needed to shift the channels to their respective subcarriers. Later they must be filtered apart and unmixed. Modern DSP simplifies a lot of the issues that would otherwise make this technique more costly. Synthetic FDM is straightforward to achieve.

In Figure 2-10, three frequencies, F1, F2, and F3, are used with mixers to allow a single response channel to make simultaneous measurements of a DUT with three outputs. Within the controller, a filter bank implemented as an FFT separates the individual response signals.


Figure 2-10. Frequency division multiplexing
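A minimal sketch of the synthetic end of such an FDM scheme (the subcarrier frequencies and response levels are illustrative): three responses ride on subcarriers through one physical channel and are pulled apart again by an FFT used as a filter bank in the controller:

import numpy as np

fs, n = 1e6, 4000                      # 250 Hz bin spacing, so the subcarriers
t = np.arange(n) / fs                  # below land exactly on FFT bins
subcarriers = [50e3, 150e3, 250e3]     # F1, F2, F3
dut_outputs = [0.5, 1.0, 1.5]          # three slowly varying DUT responses

# Each response is mixed onto its own subcarrier; all share one physical channel.
shared_channel = sum(level * np.cos(2 * np.pi * f * t)
                     for level, f in zip(dut_outputs, subcarriers))

# Controller: an FFT filter bank separates the responses again.
spectrum = np.abs(np.fft.rfft(shared_channel)) / (n / 2)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
recovered = [spectrum[np.argmin(np.abs(freqs - f))] for f in subcarriers]
# recovered is approximately [0.5, 1.0, 1.5]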

Unfortunately, unlike TDM, even though the same physical channel is used for each measurement or stimulus, those signals occupy different frequency bands in the single physical channel. There is no guarantee that intermux-channel drift or skew will not occur. On the other hand, the portions of FDM implemented synthetically will not have this problem—or can correct for this problem in the analog portions.

Again it is true that for N channels, FDM multiplexed, to have the same bandwidth performance as a single channel with bandwidth B, they must be multiplexed on a single channel with a bandwidth N times B, at a minimum.


Code Division Multiplexing

A more esoteric technique for multiplexing (although much more well known these days because of the rise of cellular phones that use CDMA) is code division multiplexing (CDM). The CDM technique is based on the idea of using different orthogonal codes, or basis functions, to divide a single channel into multiple channels along orthogonal axes in code space.

To first order, CDM schemes are of the same complexity as FDM techniques, so the two methods are often compared in terms of cost and performance. The frequency mixer used in FDM is used the same way in CDM, this time with different codes rather than frequencies. Again, fancy DSP can be used to implement synthetic CDM. Separate code-multiplexed channels can be synthesized and demultiplexed with DSP techniques. Figure 2-11 shows a CDM system that achieves the same multiplexing as the FDM system in Figure 2-10.


As with TDM, because the same physical channel is used for each measurement or stimulus, there is less concern about interchannel drift or skew. Unlike FDM, codes will spread each channel across the same frequencies. In fact, CDM has the unique ability to ameliorate some frequency-related distortions that plague all other techniques, including space multiplexing.

Again it is true that for N channels, CDM multiplexed, to have the same bandwidth performance as a single channel with bandwidth B, a single channel with bandwidth at least N times B is needed.

Figure 2-11. Code division multiplexing
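In software, code multiplexing and demultiplexing reduce to spreading each channel by an orthogonal code and correlating to recover it, as in this minimal sketch (the 4-chip Walsh-style codes and channel values are illustrative assumptions):

import numpy as np

codes = np.array([            # mutually orthogonal spreading codes
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
])

channel_values = np.array([0.5, 1.0, 1.5])   # one sample per logical channel

# Spread each value by its code and sum: one composite chip stream.
composite = (channel_values[:, None] * codes).sum(axis=0)

# Demultiplex: correlate the composite stream against each code.
recovered = composite @ codes.T / codes.shape[1]
# recovered equals [0.5, 1.0, 1.5] because the codes are orthogonal.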


Choosing the Right Multiplexing Technique

The right choice of multiplexing technique for a particular SMS depends on the requirements of that SMS application. You should always try to remember that this choice exists. Don’t fall into the rut of using one particular multiplexing technique without some consideration of all the others for each application. And don’t forget that multiple simultaneous copies of the same instrument can be synthesized.

Hardware Requirements Traceability

In this chapter, I have discussed hardware architectures for synthetic measurement systems. In that discussion, it should have become clear that the way to structure SMS hardware is not guided solely, or even primarily, by measurement considerations.

In a synthetic measurement system, the functionality associated with particular measurements is not confined to a specific subsystem. This is a stumbling point especially when somebody tries to use the so-called waterfall methodology that is so popular. Doctrine has us begin with system requirements, then divine a subsystem factoring into some set of boxes, then apportion out subsystem requirements into those boxes.

But synthetic measurement system hardware does not readily factor in ways that correspond to system-level functional divisions. In fact, the whole point of the method was to avoid designating specific hardware to specific measurements. What you end up with in a well designed SMS is like a hologram, with each individual physical part of the hologram being influenced by everything in the overall image, and with each individual part of the image stored throughout every part of the hologram.

The holistic nature of the relationship between synthetic measurement system hardware components and the component measurements performed is one of the most significant concepts I am attempting to communicate in this book.


Chapter 3: Stimulus

In some sense, the beginning of a synthetic instrumentation system is the stimulus generation, and the beginning of the stimulus generation is the digital control, or DSP, driving the stimulus side. This is where the stimulus for the DUT comes from. It’s the prime mover or first cause in a stimulus-response measurement, and it’s the source of calibration for response-only measurements. The basic CCC architecture for stimulus comprises a three-block cascade: DSP control, followed by the stimulus codec (a D/A in this case), and finally the signal conditioning that interfaces to the DUT.


Stimulus Digital Signal Processing

The digital processor section can perform various sorts of functions, ranging from waveform synthesis to pulse generation. Depending on the exact requirements of each of these functions, the hardware implementation of an “optimum” digital processor section can vary in many different, and seemingly incompatible, directions.

Ironically, one “general-purpose” digital controller (in the sense of a general-purpose microprocessor) may not be generally useful. When deciding the synthesis controller capabilities for a CCC synthetic measurement system, it inevitably becomes a choice from among several distinct controller options.

Figure 3-1. The stimulus cascade


Although there are numerous alternatives for a stimulus controller, these various possible digital processor assets fall into broad categories. Listed in order of complexity, they are:

Waveform Playback (ARB)

Direct Digital Synthesis (DDS)

Algorithmic Sequencing (CPU)

In the following sections, I will discuss these categories in turn.

Waveform Playback

The first and simplest of these categories, waveform playback, represents the class of controllers one finds in a typical arbitrary waveform generator (ARB). These controllers are also akin to the controller in a common CD player: a “dumb” digital playback device. The basic controller consists of a large block of waveform memory, and a simple state machine, perhaps just an address counter, for sequencing through that memory. A counter is a register to which you repeatedly add +1 as shown in Figure 3-2. When the register reaches some predetermined terminal count, it’s reset back to its start count.


The large block of memory contains digitized samples of waveform data. The data may be stored as one continuous data set, or as several independent tracks of data. An interface controls the counter, and another gets waveform data into the RAM.

Figure 3-2. Basic waveform playback controller

In basic operation, the waveform playback controller sequences through the data points in the tracks and feeds the waveform data to the codec for conversion to an analog voltage that is then conditioned and used to stimulate the DUT. Customarily, the controller has features where it can either play back a selected track repeatedly in a loop or perform a one-time, single-shot playback. There may also be features to access different tracks and play them back in various sequential orders or randomly. The ability to address and play multiple tracks is handy for synthesizing a communications waveform that has a signaling alphabet.

The waveform playback controller has a fundamental limitation when generating periodic waveforms (like a sine wave). It can only generate waveforms that have a period that is some integer multiple of the basic clock period. For example, a playback controller that runs at 100 MHz can only generate waveforms with periods that are multiples of 10 ns. That is to say, it can only generate 100 MHz, 50 MHz, 33.333 MHz, and so on, for every integer division of 100 MHz. A related limitation is the inability of a playback controller to shift the phase of the waveform being played.
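For instance, the closest such a controller can come to a requested frequency is the nearest integer division of its clock, as this small sketch illustrates (the function name is arbitrary):

def nearest_playback_frequency(clock_hz, requested_hz):
    # Only clock_hz / k (k = 1, 2, 3, ...) is reachable; try the two divisors
    # that bracket the request and keep whichever lands closer.
    lo_div = max(1, int(clock_hz // requested_hz))
    candidates = (clock_hz / lo_div, clock_hz / (lo_div + 1))
    return min(candidates, key=lambda f: abs(f - requested_hz))

# With a 100 MHz clock, a requested 42 MHz becomes 50 MHz (100/2); the next
# choice down is 33.333 MHz (100/3).
print(nearest_playback_frequency(100e6, 42e6))    # 50000000.0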

You will see below that direct digital synthesis (DDS) controllers do not have these limitations.

Another fundamental limitation that the waveform playback controller has is that it cannot generate a calculated waveform. For example, in order to produce a sine wave, it needs a sine table. It cannot implement even a simple digital oscillator. This is a distinct limitation from the inability to generate waveforms of arbitrary period, but not unrelated. Both have to do with the ability to perform algorithms, albeit with different degrees of generality.

Direct Digital Synthesis

Direct digital synthesis (DDS) is an enhancement of the basic waveform playback architecture that allows the frequency of periodic waveforms to be tuned with arbitrarily fine steps that are not necessarily submultiples of the clock frequency. Moreover, DDS controllers can provide hooks into the waveform generation process that allow direct parameterization of the waveform for the purposes of modulation. With a DDS architecture, it is dramatically easier to amplitude or phase modulate a waveform than it is with an ordinary waveform playback system.

A block diagram of a DDS controller is shown in Figure 3-3.



Figure 3-3. Direct digital synthesizer

The heart of a DDS controller is the phase accumulator. This is a register recursively looped to itself through an adder. One addend is the contents of the accumulator, the other addend is the phase increment. After each clock, the sum in the phase accumulator is increased by the amount of the phase increment.

How is the phase accumulator different from the address counter in a waveform playback controller? The deciding difference is that a phase accumulator has many more bits than needed to address waveform memory. For example, waveform memory may have only 4096 samples. A 12-bit address is sufficient to index this table. But the phase accumulator may have 32 bits. These extra bits represent fractional phase. In the case of a 12-bit waveform address and 32-bit accumulator, a phase increment of 2^20 would index through one sample per clock. Any phase increment less than 2^20 causes indexing as some fraction of a sample per clock.

You may recall that a waveform playback controller could never generate a period that wasn’t a multiple of the clock period. In a DDS controller, the addition of fractional phase bits allows the period to vary in infinitesimal fractions of a clock cycle. In fact, a DDS controller can be tuned in uniform frequency steps equal to the clock frequency divided by 2^N, where N is the number of bits in the phase accumulator. With a 32-bit accumulator, for example, and a 1-GHz clock, frequencies can be tuned in roughly 1/4 Hz steps.

Since the phase accumulator represents the phase of the periodic waveform being synthesized, simply loading the phase accumulator with a specified phase number causes the synthesized phase to jump to a new state. This is handy for phase modulation. Similarly, the phase increment can be varied, causing real-time frequency modulation.
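To make the phase accumulator arithmetic concrete, here is a minimal sketch in Python. It is illustrative only: the 32-bit accumulator width, the 4096-entry sine table, and the function and parameter names are assumptions chosen to match the example above, not a description of any particular product.

    import numpy as np

    ACC_BITS = 32                        # phase accumulator width
    TABLE_BITS = 12                      # 4096-entry waveform table
    table = np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS)

    def dds(phase_increment, n_samples, phase_offset=0):
        """Generate n_samples from the table; f_out = phase_increment * f_clk / 2**ACC_BITS."""
        acc = phase_offset & (2**ACC_BITS - 1)      # loading a phase directly gives phase modulation
        out = np.empty(n_samples)
        for i in range(n_samples):
            out[i] = table[acc >> (ACC_BITS - TABLE_BITS)]        # MSBs index the table
            acc = (acc + phase_increment) & (2**ACC_BITS - 1)     # LSBs carry fractional phase
        return out

    # A phase increment of 2**20 steps exactly one table sample per clock; any other
    # increment steps a fractional number of samples, and with a 1-GHz clock the
    # frequency resolution is 1e9 / 2**32, about 1/4 Hz.
    samples = dds(phase_increment=2**20 + 3, n_samples=4096)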


It’s remarkable that few ARB controllers include this phase accumulator feature. Given that such a simple extension to the address counter has such a large advantage, it’s puzzling why one seldom sees it. In fact, I would say that there really is no fundamental difference between a DDS waveform controller and a waveform playback controller. They are distinguished entirely by the extra fractional bits in the address counter, and the ability to program a phase increment with fractional bits.

Digital Up-Converter

A close relative of the DDS controller is the digital up-converter. In a sense, it is a combination of a straight playback controller and a DDS in a compound stimulus arrangement as discussed in the section titled "Compound Stimulus." Baseband data (possibly I/Q) in blocks or "tracks" are played back and modulated on a carrier frequency provided by the DDS.
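A minimal sketch of the digital up-conversion operation itself (in Python, with illustrative signal and parameter names, and with the carrier taken directly from cosine/sine functions rather than a full DDS) might look like this:

    import numpy as np

    def digital_upconvert(i, q, f_carrier, fs):
        """Modulate baseband I/Q samples onto a digital carrier at f_carrier."""
        n = np.arange(len(i))
        phase = 2 * np.pi * f_carrier * n / fs
        return i * np.cos(phase) - q * np.sin(phase)   # standard quadrature modulator form

    # A 1 MHz complex baseband tone placed on a 20 MHz digital IF at 100 MS/s.
    n = np.arange(1024)
    i = np.cos(2 * np.pi * 1e6 * n / 100e6)
    q = np.sin(2 * np.pi * 1e6 * n / 100e6)
    if_signal = digital_upconvert(i, q, f_carrier=20e6, fs=100e6)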

Algorithmic Sequencing

A basic waveform playback controller has sequencing capabilities when it can play tracks in a programmed order, or in loops with repeat counts. The DDS controller adds the ability to perform fractional increments through the waveform, but otherwise has no additional programmability or algorithmic support.

Figure 3-4. Fractional samples


Both the basic waveform playback controller and the DDS controller have only rudimentary algorithmic features and functions, but it’s easy to see how more algorithmic features would be useful.

For example, it would be handy to be able to loop through tracks with repeat counts, or to construct subroutines that comprised certain collections of track sequences (playlists in the verbiage of CD players). Not that we want to turn the instrument into an MP3 player, but playlist capability also can assemble message waveforms on the fly based on a signal alphabet. A stimulus with modulated digital data that is provided live, in real time, can be assembled from a "playlist."
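As an illustrative sketch (in Python, with a made-up alphabet and hypothetical track names), a playlist is just a sequence of (track, repeat count) entries that is expanded into one continuous stimulus record:

    import numpy as np

    # Hypothetical signal alphabet: each "track" is a precomputed block of samples.
    alphabet = {
        "idle": np.zeros(64),
        "mark": np.sin(2 * np.pi * 4 * np.arange(64) / 64),
        "space": np.sin(2 * np.pi * 8 * np.arange(64) / 64),
    }

    def play(playlist):
        """Assemble a stimulus waveform from (track_name, repeat_count) entries."""
        return np.concatenate([np.tile(alphabet[name], count) for name, count in playlist])

    # A message waveform built on the fly: idle filler, then the symbols mark, space, mark.
    waveform = play([("idle", 10), ("mark", 1), ("space", 1), ("mark", 1)])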

It would also be useful to be able to parameterize playlists so that their contents could be varied based on certain conditions. This leads to the requirement for conditional branching, either on external trigger or gate conditions, or on internal conditions—conditions based on the data or on the sequence itself.

As more features are added in this direction, a critical threshold is reached. Instruction memory appears, and along with it, a way for data and program to intermix. The watershed is a conditional branch that can choose one of two sequences based on some location in memory, combined with the ability to write memory. At this point, the controller is a true Turing machine—a real computer. It can now make calculations. Those calculations can either be about data (for example: delays, loops, patterns, alphabets) or they can be generating the data itself (for example: oscillators, filters, pulses, codes).

There is a vast collection of possibilities here. Moving from simple state machines, adding more algorithmic features, adding an arithmetic logic unit (ALU) along with a state sequencer capable of conditional branches and recursive subroutines, the controller becomes a dual-memory Harvard architecture DSP-style processor. Or it may move in a slightly different direction to the general-purpose single-memory Von Neumann processors. Or, perhaps, the controller might incorporate symmetric multiprocessing or systolic arrays. Or something beyond even that.

Obviously, it’s also beyond the scope of this book to discuss all the possibilities encompassed in the field of computer architecture. There are many fine books[B7] that give a comprehensive treatment of this large topic. I will, however, make a few comments that are particularly relevant to the synthetic instrumentation application.

Synthesis Controller Considerations

At first glance, a designer thinking about what controller to use for a synthetic instrument application might see the controller architecture choice as a speed-complexity trade-off. On one hand, they can use a complex general-purpose processor with moderate speed, and on the other hand, they can use a lean-and-mean state sequencer to get maximum speed. Which to pick?

Fortunately, advances in programmable logic are softening this dilemma. As of this writing, gate arrays can implement signal processing with nearly the general computational horsepower of the best DSP microprocessors, without giving up the task-specific horsepower that can be achieved with a lean-and-mean state sequencer. I would expect this gap to narrow into insignificance over the next few years.

“Microprocessors are in everything from personal computers to washing machines, from digital cameras to toasters. But it is this very ubiquity that has made us forget that microprocessors, no matter how powerful, are inefficient compared with chips designed to do a specific thing.”

—Tredennick and Shimamoto[P1]

Given this trend, perhaps the true dilemma is not in hardware at all. Rather, it might be a question of software architecture and operating system. Does the designer choose a standard microprocessor (or DSP processor) architecture to reap the benefits of a mainstream operating system like VxWorks, Linux, pSOS, BSD, or Windows? Or does the designer roll their own hardware architecture specially optimized for the synthetic instrument application, at the cost of also needing to roll their own software architecture, at least to some extent?

Again, this gap is narrowing, so the dilemma may not be an issue. Standard processor instruction sets can be implemented in gate arrays, allowing them to run mainstream operating systems, and there is growing support for customized ASIC/PLD-based real-time processing in modern operating systems.


In fact, because of advances in gate array and operating system technology, the world may soon see true, general-purpose digital systems that do not compromise speed for complexity, and do not compromise software support for hardware customization. This trend bodes well for synthetic instrumentation, which shines the brightest when it can be implemented on a single, generalized, CCC cascade.

Extensive computational requirements lead to a general-purpose DSP- or microprocessor-based approach. In contrast, complex periodic waveforms may be best controlled with a high-speed state sequencer implementing a DDS phase accumulator indexing a waveform buffer, while fine-resolution delayed pulse requirements are often best met with hybrid analog/digital pulse generator circuits. These categories intertwine when considering implementation.

Stimulus Triggering

I have only scratched the surface of the vast body of issues associated with stimulus DSP. One could write a whole book on nothing but stimulus signal synthesis. But my admittedly abbreviated treatment of the topic would be embarrassingly lacking without at least some comment on the issue of triggering.

Probably the biggest topic not yet discussed is the issue of triggering. Triggering is required by many kinds of instruments. How do we synchronize the stimulus, and subsequent measurement, with external events?

Triggering ties together stimulus and response, much in the same way as calibration does. A stimulus that is triggered requires a response measurement capability in order to measure the trigger signal input. Therefore, an SMS with complete stimulus response closure will provide a mechanism for response (or ordinate) conditions to initiate stimulus events.

Desirable triggering conditions can be as diverse as ingenuity allows. They don’t have to be limited to a signal threshold. Rather, trigger conditions can span the gamut from the rudimentary single-shot and free-run conditions, to complex trigger programs that require several events to transpire in a particular pattern before the ultimate trigger event is initiated.


Stimulus Trigger Interpolation

Generality in triggering requires a programmable state machine controller of some kind. Furthermore, it is often desirable to implement finely quantized (near continuously adjustable) delays after triggering, which seems to lead us to a hybrid of digital and analog delay generation in the controller.

While programmable analog delays can be made to work and meet requirements for fine trigger delay control, it’s a mistake to jump to this hardware-oriented solution. It’s a mistake, in general, to consider only hardware as a solution for requirements in synthetic measurement systems. Introducing analog delays into the stimulus controller for the purpose of allowing finely controlled trigger delay is just one way to meet the requirement. There are other approaches.

For example, as shown in Figure 3-5, based on foreknowledge of the reconstruction filtering in the signal conditioner, it is possible to alter the samples being sent to the D/A in such a way that the phase of the synthesized waveform is controlled with fine precision—finer than the sample interval.

The dual of this stimulus trigger interpolation and re-sampling technique will reappear on the response side in the concept of a trigger time interpolator used to re-sample the response waveform based on the precisely known time of the trigger.
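A minimal sketch of the idea, assuming the reconstruction filter behaves approximately as an ideal low-pass: precompute new D/A samples with a fractional-delay (windowed-sinc) filter, which shifts the reconstructed waveform by a fraction of a sample period. The function and parameter names below are illustrative only.

    import numpy as np

    def fractional_delay(samples, delay, taps=31):
        """Shift a band-limited waveform by 'delay' sample periods (may be fractional)."""
        n = np.arange(taps) - (taps - 1) / 2
        h = np.sinc(n - delay) * np.hamming(taps)   # shifted, windowed sinc
        h /= h.sum()                                # unity gain at DC
        return np.convolve(samples, h, mode="same")

    # Shift a pulse by 0.1 of a sample interval, e.g. 0.1 ns at a 1 GS/s D/A rate.
    t = np.arange(256)
    pulse = np.exp(-((t - 128) / 10.0) ** 2)
    delayed = fractional_delay(pulse, delay=0.1)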

Figure 3-5. Fine trigger delay control


The Stimulus D/A

The D/A conversion section, or stimulus side codec (or really just “Co”), creates an analog waveform based on the output of the digital control section. As I will discuss below, the place where the D/A begins and the controller ends may be fuzzy.

D/A converters tend to be constrained by a speed versus accuracy trade-off much in the same way that controllers trade speed for complexity. This speed-accuracy trade-off reflects the practical reality that fast D/A systems tend to have worse amplitude accuracy than slow D/A systems. I’m using the word "accuracy" in the qualitative sense of finer amplitude resolution, measured in bits, but this may also be measured in SINAD or IMD performance. Speed can be viewed as sampling rate or, equivalently, time resolution. It’s most practical to use low-speed D/A subsystems where amplitude accuracy is the primary requirement, and to use moderate amplitude accuracy systems where speed is paramount.

Table 3-1 illustrates typical extremes of this spectrum, showing the bandwidth versus accuracy trade-off that one encounters. This does not reflect an exhaustive survey of available D/A technology, nor could any static table on a printed page keep up with the fast pace of change.

Table 3-1. D/A converter trade-off range

Requirement          ENOB      Speed
Pulse Generation     1-bit     100 GHz
Analog Waveforms     12-bit    100 MHz
AC/DC Reference      18-bit    100 kHz

Note that ENOB refers to the effective number of bits provided by the amplitude accuracy of a D/A in the given category.

I have more to say about codec accuracy, ENOB, and other related topics when I discuss the response codec in the section titled "The Response Codec," as these issues affect both stimulus and response in analogous ways.


Interpolation and Digital Up-Converters in the Codec

One of the signal coding operations an SMS can be asked to perform is modulation on a carrier, resulting in a so-called bandpass signal. On the stimulus side of a synthetic instrument, there are two fundamental ways to generate a bandpass signal:

Up-convert digitally before the D/A

Up-convert with analog circuits after the D/A

It’s also possible to do both of these, up-converting to a fixed digital IF before the D/A, and then using analog up-conversion after the D/A to finish the job.

The idea of interpolation in the context of digital signal processing is different than what I mean by interpolation in the context of measurement maps. In DSP terms, interpolation is a process of increasing the sampling rate of a digital signal. It is accomplished by means of an interpolating filter that reconstructs the missing samples with predicted data based on the assumption of a limited signal bandwidth. A good reference on the interpolation process is[B11]. Interpolation goes hand-in-hand with up-conversion since a higher frequency up-converted result needs more samples to represent it without aliasing.
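As a rough sketch of the DSP meaning of interpolation (in Python, using only NumPy; the tap count and window choice are arbitrary illustrative assumptions): zero-stuff the signal up to the higher rate, then low-pass filter to remove the spectral images, which is what "reconstructs" the missing samples.

    import numpy as np

    def interpolate(x, factor, taps=101):
        """Raise the sampling rate of x by an integer factor."""
        up = np.zeros(len(x) * factor)
        up[::factor] = x * factor              # zero-stuff; scale to keep passband amplitude
        n = np.arange(taps) - (taps - 1) / 2
        lpf = np.sinc(n / factor) / factor * np.hamming(taps)   # low-pass at the original Nyquist
        return np.convolve(up, lpf, mode="same")

    # A baseband tone interpolated 4x before being handed to a digital up-converter.
    x = np.sin(2 * np.pi * np.arange(256) / 16)
    y = interpolate(x, factor=4)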

You may be puzzled why up-conversion and interpolation is a topic for the stimulus codec section of this book. Isn’t digital up-conversion accomplished by the stimulus controller? Isn’t analog up-conversion a signal conditioning function? Yes, I agree that logically the up-conversion function is not part of the codec, but the fact is that many new D/A subsystems are being built with digital up-conversion and interpolation on board.

Actually, it’s quite beneficial for interpolation and digital up-conversion to be accomplished within the stimulus codec. This is good because it lowers the data rate required out of the stimulus controller. The stimulus controller can concern itself only with the meaningful portion of the stimulus. The resulting baseband or low-IF signal can then be translated in a mechanical process to some high frequency without burdening the controller. In a sense, the codec is merely coding the information-bearing portion of the signal, both as an analog voltage, and as a modulation.


The idea of accomplishing signal encoding in the D/A subsystem extends beyond just up-conversion and analog coding. Other forms of encoding can be accomplished most efficiently here. These possibilities include: pulse modulation, AM/FM modulation, television signals such as NTSC/PAL, frequency hopping, and direct sequence spreading.

In general, never assume that all DSP tasks are performed in the stimulus controller; never assume all analog conditioning tasks are performed in the stimulus conditioner. All the hardware is available to be used for anything.

Stimulus Conditioning

In a stimulus system, after the analog signal is synthesized by the D/A, some amount of signal conditioning may need to be applied. This conditioning can include a wide variety of signal processing and DUT-specific considerations including, possibly, one or more of the following:

Amplification: linear, digital, pulse, RF, high voltage or current

Filtering: fixed, tunable, tracking, adaptive

Impedance: matched source, programmable mismatch, constant current/voltage

DUT Interface: probes, connectors, transducers, antennas

DUT interfacing is the normal role for signal conditioning; however, as I just explained with regard to the D/A, there’s no reason that signal encoding or modulation tasks can’t be performed here as well, especially up-conversion and modulation. An analog RF up-converter is a common signal conditioner component.

Signal-conditioning requirements are obviously dependent on the needs of the DUT, but they are also dependent on the performance of the D/A subsystem. If, for example, the D/A is fast enough to generate all frequencies of interest, there is no need for up-conversion; if the D/A can produce the required power, current, or voltage, there is no need for amplification.

The need to interact with a diverse selection of DUTs tends to drive the design of stimulus signal conditioning in the direction of either a parameterized asset, as I discussed in the section titled "Parameterization of CCC Assets," or to multiple assets in the CRM architecture sense. Often parameterization makes the most sense as it’s more efficient to contain the switching between different conditioner circuit options within some overall conditioner subsystem than it is to force the system-level switching to handle this. Only when faced with a unique and narrow range of conditioning needs specific to a certain class of DUTs does it make sense to create a unique conditioner asset to address that requirement.

An illustration of this principle would be a variable gain amplifier or selectable filter in the signal conditioner. It would make no sense to build separate conditioners just to change a gain or a filter. Similarly, DC offsets, impedances, and other easily parameterized qualities are best implemented that way.

The opposite case would be a situation where the signal conditioner could produce a signal that would be damaging to some class of assets, or where some class of DUTs could do damage to the conditioner, for instance a high voltage stimulus.

Stimulus Conditioner Linearity

Stimulus conditioner circuitry does not have to be linear in all cases. It depends on the requirements of the test. Some applications, pulsed digital test, for instance, might be best served with digital line drivers as the signal conditioner amplifiers. Such drivers are not linear devices.

In other applications, linearity after the D/A is paramount. It’s also generally beneficial to minimize the noise and spurious signals added after the D/A.

Problems with stimulus conditioner linearity are exacerbated when the stimulus conditioner is an analog up-converter. It is very difficult to preserve wide dynamic range, limit the injection of noise, and prevent spurious products from appearing at the stimulus output.

Because linear conditioner design can be challenging and expensive, it’s important to keep an open mind about solutions to these difficulties.

Gain Control

Although it is definitely most convenient to adjust the level (amplitude) of the stimulus by simply adjusting the amplitude of the digital signal entering the D/A, sometimes this is not a good idea. If the signal conditioner has an up-converter, or other gain or spurious-producing stages, the junk injected by this conditioner circuitry remains at a constant level as the signal out of the D/A drops. The D/A itself may also inject some unwanted signals at a fixed level. Consequently, the signal-to-noise ratio (SNR) of the stimulus will fall as the stimulus level is decreased relative to the fixed noise. In fact, the fixed noise level may limit the minimum stimulus signal that is discernible, as eventually noise will swamp the signal.

The way around this problem is to adjust signal levels in the stimulus conditioner after most of the junk has been added to the signal. That way, when adjusting the signal level, the SNR stays roughly the same. The signal level can be lowered without fear that it will be swamped by the noise.
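A quick numeric sketch of the effect (in Python, with made-up noise and level numbers) compares the two gain control placements: attenuating the digital signal before the conditioner lets the fixed conditioner noise dominate, while attenuating after the junk has been added keeps the SNR constant.

    import numpy as np

    def snr_db(signal_rms, noise_rms):
        return 20 * np.log10(signal_rms / noise_rms)

    cond_noise = 1e-3       # fixed junk added by the conditioner (arbitrary units)
    full_scale = 1.0        # D/A running at its optimum level

    for atten_db in (0, 20, 40):
        g = 10 ** (-atten_db / 20)
        before = snr_db(full_scale * g, cond_noise)      # level adjusted before the junk is added
        after = snr_db(full_scale * g, cond_noise * g)   # level adjusted after the junk is added
        print(atten_db, round(before, 1), round(after, 1))   # 60/60, 40/60, 20/60 dB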

Figure 3-6. Effect of gain control placement on SNR with varying gain

As a result of this idea, stimulus signal conditioners are often designed with variable gain amplifiers or adjustable output attenuation. This allows us to run the D/A at an optimum signal level relative to headroom and quantization noise as discussed in the section titled “Codec Headroom.”

Adjusting gain in the signal conditioner is not without its disadvantages. Foremost of these is that the variable gain must be implemented so as to work and maintain calibration across the range of frequencies and signals that the conditioner might handle. This can be a challenge in broadband designs. Sometimes a compromise approach is used, with only coarse gain steps (perhaps 10 dB) implemented in the conditioner, with fine steps implemented in the codec or DSP controller.

Adaptive Fidelity Improvement

Often, the designers of synthetic measurement systems struggle to achieve impeccable fidelity in the stimulus signal conditioning so as to preserve all the precision in the stimulus they have generated with the finely quantized D/A. Building a signal conditioner "clean" enough to match the D/A is often a daunting challenge. It’s particularly difficult to meet fidelity specifications in generic hardware when they derive from the performance of a signal-specific instrument. I have seen stimulus system designers struggle with this again and again. Designing up-converters with high fidelity and low spurs is additionally difficult.

Granted, it’s harder to make a clean sine wave with a D/A and broadband analog processing than it is with a narrow-band filtered crystal oscillator, but this may be an unnecessary enterprise. As I will say repeatedly in this book, proper synthetic instrument design focuses on the measurement, not on the specifications of some legacy instrument being replaced. Turn to the measurement to see what fidelity is needed. You may find that much less is needed by the measurement than what a blanket fidelity specification would require.

Once we have focused on the measurement and looked at the fidelity performance of reasonable stimulus signal conditioning, if we see that we still don’t meet the requirements for a good measurement, what do we do then? Clearly, the solution to that underspecified problem depends on the details of the situation. In some cases parameterized filtering would help; other times higher power amplification can improve linearity. These are all well-known techniques. But there is one technique I want to mention here because it is often overlooked—a technique that has wide applicability to these situations: adaptive processing.

Remember, there is a response system at our disposal, and a fully programmable, DSP-driven stimulus system to boot. This combination lends itself nicely to closed-loop adaptive techniques.


If you can measure and you can control, then you can adapt to achieve a measured goal.

Specifically, it is possible to adapt the digital data driving the D/A so as to reduce or eliminate artifacts, spurs, and other fidelity issues introduced by the signal conditioner.

A simple example of this technique would be the elimination of a spurious tone that appears in the stimulus output and is harmful to a measurement. Synthesize a second tone of the same frequency as the spur. Figure 3-7 shows a system for adaptively adjusting the amplitude and phase of the second tone to null the spur, eliminating it from the stimulus and thus making the measurement possible.

Figure 3-7. Adaptive nulling

Adaptive nulling, linearization, or calibration is a nontrivial enterprise to be sure, but it has the unique property in this context of being something that can be implemented purely in DSP software. That doesn’t mean it’s necessarily easier or better than a hardware solution to a fidelity issue. My point, however, is that such techniques should always be considered when the hardware has fidelity issues. Adaptive DSP techniques will often have a significantly lower cost in production than any hardware solution.
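A toy sketch of the closed-loop idea (in Python; the measurement callback, step size, and convergence behavior are all illustrative assumptions, not a recipe for a production nulling loop): the response side reports the complex amplitude of the residual spur, and a steepest-descent update adjusts the amplitude and phase of the synthesized cancellation tone until the residual goes to zero.

    import numpy as np

    def null_spur(measure_residual, n_iter=200, mu=0.1):
        """Adapt the complex weight of a cancellation tone to null a spur.

        measure_residual(w) stands in for a response-side measurement: the
        complex amplitude of the spur left at the stimulus output when the
        cancellation tone has complex amplitude w.
        """
        w = 0.0 + 0.0j
        for _ in range(n_iter):
            w -= mu * measure_residual(w)   # step the weight toward a null
        return w

    # Simulate the loop: the conditioner injects a fixed spur of complex amplitude d.
    d = 0.3 * np.exp(1j * 1.2)
    w = null_spur(lambda w: d + w)
    # w converges to -d, so the synthesized tone cancels the injected distortion.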

Reconstruction Filtering

Depending on the needs of the test, it may be possible to directly use the quantized voltage from the output of the stimulus codec. For example, if the stimulus is a digital logic level, it may be used directly. However, when synthesizing a smooth analog stimulus waveform, it’s often better to use a reconstruction or interpolation filter. This filtering at the output of the D/A reconstructs the analog waveform from the "stair-step" approximation. Spectrally, this filter attenuates high-frequency aliases while it can also correct for the sin(x)/x roll-off effect created by holding the samples through each "tread" in the staircase.

If I know the dynamics of the reconstruction filtering when I am calculating the samples I will send into the stimulus codec, it is possible for me to choose these samples to custom-tailor and shape the signal conditioner output waveform based on that knowledge.

For example, fine control of rise time and delay is possible using this technique—ten times finer than the sample interval, even. This is a critical fact to keep in mind when designing the codec. It’s easy to see a specification that says "rise time programmable in 1 ns steps" and erroneously conclude that the D/A must run at 1 GHz.

The characteristics of the reconstruction filtering, and other fixed or parameterized filters in the signal conditioner, can also be used to facilitate adaptive techniques, as described in the section titled "Adaptive Fidelity Improvement."

Stimulus Cascade—Real-World Example

Figure 3-8 shows an example of a real-world stimulus subsystem, the Celerity Series CS25000 Broadband Signal and Environment Generator, from Aeroflex. This is only one of the many different stimulus products made by Aeroflex. I selected this one in particular because it includes both high-fidelity signal conditioning and stimulus processing that comprises both a waveform playback controller and a general-purpose CPU with a range of options. Thus, the Aeroflex Broadband Signal and Environment Generator platform represents a complete synthetic measurement system stimulus cascade.

The BSG combines a very deep memory, very high-speed arbitrary waveform generator and a broadband RF up-converter with powerful signal generation software. The BSGs have bandwidths of up to 500 MHz, and full bandwidth signal memory of up to 10 seconds. The bandwidth, memory depth and dynamic range make the BSG a powerful tool for broadband satellite communications, frequency agile radio communications, broadband wireless network communications, and radar test. An open, software-defined instrument architecture allows easy import of user-created waveforms. Vector signal simulator (VSS) software creates signal files for commercial wireless standards as well as generic nPSK, nQAM, nFSK, MSK, CW, tone combs, and notched noise signals.

Any of these generic signal types can be gated or bursted in time, as well as hopped in frequency. Real signals, including recorded signals from Aeroflex’s broadband signal analyzers or other recorder sources, can be imported and combined with digitally generated signals, and then played back on the BSG. Impairments can be added to the signals including thermal noise, phase noise, and passband amplitude and phase distortion. VSS provides the unique ability to mix any combination of signals and impairments to generate complex signal environments.

Aeroflex’s Vector signal player (VSP) software provides simple controls for signal file selection, output frequency control and output power control. Aeroflex’s up-converters use real (non-I/Q) conversion architectures, generating high dynamic range waveforms without the carrier leakage and signal image problems associated with I/Q modulators found in other signal sources.

Figure 3-8. Aeroflex CS25000


The high-speed stimulus controller in the BSG is designed with an enhanced version of the waveform playback architecture I discussed in the section titled "Waveform Playback." This controller allows the BSG to play extremely complicated waveform data files through the use of programmed sequencing. This allows for the predetermined, scenario-based playback of different sections in memory. During playback, the instrument can move from one section of memory to another, on a clock cycle. The control of this is analogous to a typical MIDI tone generator driven by a sequencer (a high-tech player piano). The BSG waveform address counter can move in programmed fashion to different sections of memory, building a complete stimulus output without having to put the complete wave train into memory.

This sequencing capability is particularly useful when synthesizing digital modulation waveforms. It makes efficient use of memory while allowing many possible waveforms to be generated. A simple example would be a pulse that is only active for a small amount of time. The pulse can be put into a small block of memory, and another small block of memory can hold a piece of interpulse signal (often just zeros). A scenario file programs the system to play the interpulse buffer for a certain number of cycles, then to play the pulse file once, then to start over. There can be multiple pulse profiles and interpulse buffer profiles that can be played to produce extremely complex output stimulus from a small amount of memory.

Table 3-2. BSG performance range

Model Number   Bandwidth   Sample Rate   Sample Size   Dynamic Range   Max Memory
CS25020        75 MHz      200 MS/s      14 bits       70 dB           2048 MS
CS25025        200 MHz     250 MS/s      12 bits       60 dB           2048 MS
CS25040        160 MHz     400 MS/s      8 bits        45 dB           4096 MS
CS25080        280 MHz     700 MS/s      8 bits        45 dB           8192 MS
CS25082        280 MHz     700 MS/s      12 bits       55 dB           8192 MS
CS25130        500 MHz     1300 MS/s     8 bits        45 dB           16384 MS
CS25132        500 MHz     1300 MS/s     12 bits       55 dB           16384 MS


Table 3-3. BSG options

Frequency Down-converter Option
  • Tunable or fixed up to 40 GHz in bands

Memory Sequencing Option
  • High speed address sequencing

Output Options
  • Precision attenuators
  • High speed attenuators
  • Reconstruction filters

Sample Clock Option
  • Low Phase Noise

Disk Storage Options
  • Fixed and Removable Drives
  • 73 GB to 146 GB
  • CD-RW, CD-ROM, DVD

Multiple Signal Options
  • IF/RF
  • Baseband
  • Digital

Controller Options
  • UltraSPARC/Solaris
  • Pentium/Linux

Remote Control Options
  • 10/100Base-T Ethernet
  • GPIB

Peripheral Options
  • Keyboard and mouse
  • Flat panel and CRT Monitors

Playback / Output Options
  • Wide-band Analog
  • High Speed Digital (LVDS, DECL, PECL, TTL)

Chapter 4: Response

This chapter describes concepts and design issues related to the response side of a synthetic measurement system. The main goal of the response subsystem is to measure some aspect of the DUT in a measurement context. A secondary goal is to measure the output of the stimulus system for calibration purposes.

Some of the concepts discussed relating to stimulus also apply to response, and vice versa. Some concepts, however, are unique to response. The response CCC cascade comprises the interface to the DUT and associated signal conditioning, A/D conversion (response codec), and finally a DSP controller. The ordering of this cascade is the opposite of the stimulus cascade, but the functions are completely analogous.

Figure 4-1. The response cascade

Response Signal Conditioning

The response signal conditioner is the signal processing interface between the DUT and the response codec. It may be as simple as an amplifier with anti-alias filtering, or it may be as complex as a down-converter.

Input Protection

General-purpose test equipment must be able to withstand typical mistakes made by test engineers (yes, test engineers do sometimes make mistakes). Some of these may expose the response system to excess signal levels. The response signal conditioner should be designed such that reasonable overloads do not damage the subsequent processing.

Response Linearity and Gain Control

As in the stimulus signal conditioner, linearity in the response conditioner is a common requirement that leads to challenging hardware design. Digital-oriented test scenarios may not require full linearity, but most other applications do. Linearity is just as difficult to achieve on the response side as it is on the stimulus side, although on the response side there are some additional options that can ease the problems somewhat.

In a response conditioner, there are two fundamentally different approaches to implementing a linear system. I call these the high-gain and low-gain response processing strategies.

Essentially, the difference between these approaches is in where to place the noise floor relative to the A/D quantization noise. Low-gain strategies will place the signal conditioner noise near the quantization noise floor; high-gain strategies will have enough gain to amplify this noise all the way up to the nominal operating point of the A/D.

Figure 4-2. Low-gain versus high-gain

Most measurement systems are low-gain. Communication systems are often high-gain. High-gain systems will always need some form of gain control because the noise is already at the nominal loading point of the A/D. As a signal is introduced and its level increases, a high-gain system will need to back down its gain immediately to prevent overload. In a low-gain system, variable attenuation is optional and is used only with very large signals that threaten to overload the A/D. Many low-gain systems have no pre-A/D gain control at all.

It’s somewhat ironic that measurement systems tend to be low-gain designs with limited gain control. The irony in this stems from the fact that most response signal conditioners have a signal-level sweet spot that maximizes dynamic range. You normally want to keep the measured signal pinned exactly on this sweet spot in order to achieve the best fidelity and consequently the best measurement accuracy. Unfortunately, other considerations in designs for measurement systems prevent this degree of gain optimization from being achieved. In contrast, communications systems often have automatic gain control (AGC) that keeps the signal level pinned precisely at the optimum level for the detector, even though preservation of dynamic range and accuracy is not the reason this is done.

It’s also interesting to note by way of analogy with stimulus that stimulus conditioners tend to always be low-gain in the sense that they try to minimize the quantization noise from the D/A reflected at the output while running with high signal at D/A nominal. As such, stimulus conditioning tends to be more constrained, although it does tend to run at its "sweet spot" more consistently. Response conditioners can get away with taking a low-gain or high-gain approach and moving the signal level around, depending on the situation.

Adaptive Techniques

As with the stimulus conditioner, it’s possible to implement system-level adaptive techniques to fix linearity, crosstalk, or spurious signal issues that plague the response conditioner. For example, consider the measurement of signal harmonics in the presence of a powerful fundamental. If the fundamental is powerful enough, and the level of the DUT harmonics to be measured is low enough, then the measurement system harmonics (specifically those of the response conditioner) will swamp those of the DUT. The measurement will become impossible.

But if adaptive nulling techniques are used to attenuate the fundamental without affecting the harmonics, the measurement again becomes possible. This nulling happens in the response signal conditioner, up front, as soon as possible, using the undistorted fundamental from the input of the DUT as the nulling reference.

Figure 4-3. Adaptive nulling to improve response measurement

Similar methods can be applied for the measurement of intermodulation and spurs. This is also a good technique for measuring one channel alongside several others in a multiplexed arrangement. Without adaptive nulling, the adjacent channels may make accurate measurement of the desired channel impossible.

The Response Codec

In this section I will discuss some issues with regard to response digitization. Some of the discussion will apply to the stimulus codec as well. Also included in this section is a description of a state-of-the-art commercial digitizer subsystem.

Fidelity and Measurement Accuracy

An issue always on people’s minds when they begin to contemplate a synthetic solution to a measurement problem is the number of bits in the A/D or D/A (what I refer to collectively as the codec). When comparing two systems, if one has 12 bits and the other has 14 bits, it’s tempting to conclude that the 14-bit system is somehow "better" than the 12-bit system.

But the number of bits in the codec is a rather superficial and misleading metric for specifying the fidelity of a synthetic instrument. Even the supposedly more honest and encompassing effective number of bits (ENOB) parameter can be completely misleading. Here’s why.

There are a plethora of sources of error that plague a typical measurement system. The signal conditioner, like any analog system, can have offset, drift, noise, distortion, or spurious signals that corrupt the desired signal. The codec itself, with one foot firmly in the analog world, can also be plagued by these analog troubles. Less acknowledged, but certainly possible, the measurement on the digital side can be additionally corrupted by noise, distortion, and spurious signals.

That last statement probably needs some justification if you are under the impression that digital processing is ideal and works just like it says in Oppenheim and Schafer[B10]. Unfortunately, reality does not quite live up to this expectation.

Digital filters, for example, can oscillate and generate spurious signals through limit cycles and other nonlinear behavior. They can also generate noise through coefficient round-off error and can introduce distortion through various finite word-size effects. This noise and distortion can be significantly larger than one might expect given the number of bits in the signal processing path.


Therefore, focusing only on the bits in the codec distracts attention from the performance of the whole system. It is necessary to analyze signal flow, noise, and distortion through the whole system in order to draw any conclusions about accuracy and fidelity.

Figure 4-4. Sources of noise and distortion in synthetic systems


Ideal Quantization

Assuming the codec is an ideal quantizer (say, an ideal A/D converter) with N bits, and a quantization step size of ∆, and, additionally, assuming that the input voltage has stationary and uniformly distributed statistics over the analog range quantized by the N bits, a common textbook exercise shows that the mean square error introduced by the quantization process is ∆²/12 (an RMS error of ∆/√12) relative to the ideal signal. In terms of dB, with the additional assumption of a bipolar signal (where one bit is used up as the sign bit), the RMS quantization noise is 6N + 6 dB below the ideal signal. Giving up another 6 dB for headroom, the digitization process results in RMS noise that is roughly 6 dB below the desired signal level for every bit in the codec.
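A quick numerical sanity check of the roughly 6 dB-per-bit behavior, in Python (the uniform input statistics and ideal quantizer are exactly the assumptions listed below; the sample count and function name are arbitrary):

    import numpy as np

    def quantization_snr_db(n_bits, n_samples=1_000_000):
        """SNR of an ideal n_bits quantizer driven by a uniform bipolar signal."""
        full_scale = 1.0
        step = 2 * full_scale / 2**n_bits                # quantization step size, delta
        x = np.random.uniform(-full_scale, full_scale, n_samples)
        xq = np.round(x / step) * step                   # ideal quantizer
        noise = xq - x
        return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))

    print(quantization_snr_db(12))   # ~72 dB, i.e. about 6 dB per bit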

This 6 dB per bit quantization noise is a well-known rule of thumb, but it’s important to always remember the assumptions. They are:

1. 6 dB headroom

2. Input signal statistics stationary and uniformly distributed

3. Ideal quantization

Reality may invalidate one or more of these assumptions. Let’s discuss them each in turn.

Codec Headroom

The need for headroom in a codec derives from the fact that real signals (measurements) have a peak-to-average ratio that is greater than one. Even theoretically, the common assumption that a measurement is Gaussian distributed about some mean implies a peak-to-average ratio that is infinite!

The problem with a high peak/average ratio is that the average will determine the overall performance of the codec in terms of quantization noise, but the codec will catastrophically distort the signal if the signal peaks overload the maximum range of the codec. Therefore, it’s best to arrange things so the average is as high as possible with overload never (or rarely) occurring on peaks.

Although real signals aren’t so benign as to have unity peak/average ratios, they are not so malicious as to have infinite peaks. Practice falls somewhere in between. It’s usually possible to come up with a good compromise.

As an example of how to arrive at a headroom compromise, think about your living room stereo set. If your stereo is of any decent quality, it will have a VU meter that displays the signal level. The VU meter gives an excellent graphical depiction of headroom.

The region in red above the 0 dB mark is the headroom. Most audio equipment works “best” when the signal peaks seem to bounce up to the 0 dB mark, with rare excursions higher into the red. This is exactly the same sort of consideration that guides the design of any codec system.

You set your average below the maximum overload level with the headroom approximately the same as the anticipated peak/average ratio. This is called optimal loading of the codec. A consequence of this practice is that the signal-to-quantization noise ratio in an optimally loaded codec will decrease dB-for-dB with any increase in peak/average ratio. The two parameters represent a counterbalancing trade-off. The dilemma therefore is a choice between avoiding overload, and minimizing quantization noise.
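A back-of-the-envelope sketch of optimal loading, assuming the common textbook full-scale figure of about 6.02N + 1.76 dB for an ideal codec (slightly different assumptions than the uniform-signal derivation above) and simply backing the average level off by the anticipated peak/average ratio; the numbers are illustrative, not a specification:

    def loaded_snr_db(n_bits, par_db):
        """Approximate SNR of an optimally loaded ideal codec.

        Start from the full-scale signal-to-quantization-noise figure, then
        give up headroom equal to the peak/average ratio so peaks just avoid overload.
        """
        return 6.02 * n_bits + 1.76 - par_db

    # A 14-bit codec carrying a waveform with a 12 dB peak/average ratio
    # leaves roughly 74 dB of signal-to-quantization-noise ratio.
    print(loaded_snr_db(14, par_db=12.0))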

Headroom Trade-off and System Fidelity

The codec itself is always in the context of the signal conditioner and the controller. Headroom that optimizes the codec performance may not make sense for the signal conditioner. As I have discussed, the conditioner will have a "sweet spot" that optimizes performance.

In addition, it can’t be assumed that, just because there was an optimal balance between headroom and noise at the codec and conditioner, this balance remains optimal throughout digital processing.

Figure 4-5. VU meter


Things affect dynamic range and fidelity in digital signal processing just like they do in analog processing. Although modern DSP tools make it virtually trivial to slap together processing steps, as much care needs to be exerted in designing each step of DSP as is put into designing each step of analog processing if you are to achieve optimum performance.

Response Digital Signal Processing

Just like the stimulus DSP, the response digital processor section can perform various sorts of functions, ranging from simple measurement and analysis tasks, to full-blown digital demodulation. Therefore, once again, depending on the exact requirements of each of these functions, the hardware implementation of an optimum response digital processor section can vary widely. One "general-purpose" DSP controller may not be what’s needed, in general.

Following are two broad categories that divide the capabilities of possible response digital processor assets:

Waveform Recorder and DSP

Matched Filter (Demodulator)

In the following sections, I will discuss these categories in turn.

Waveform Recorder and DSP

The first and simplest of these categories, waveform recorder and DSP, is representative of the response controller in most synthetic measurement systems around today. It consists of a block digitizer comprising an A/D and RAM, combined with high-speed DSP immediately after the A/D, and a general-purpose DSP-oriented CPU that works with the RAM-buffered data.

Figure 4-6. Waveform recording controller


The high-speed DSP (HSDSP) is normally used to reduce the quantity of data stored, by one of several techniques. These data-rate reduction schemes might include decimating, digital down-converting, averaging, demodulating, decoding, despreading, or computing statistical summaries. Some A/D parts build in high-speed DSP. You can live without HSDSP, but as data rates climb, this function becomes essential. Quite often it is implemented with a gate array.

The memory controller manages the large block of waveform memory, which contains digitized samples of waveform data. Perhaps the data is in one continuous data set, or several independently acquired tracks or blocks of data. The memory controller is a state machine that allows sequencing through that memory, reading or writing. The better systems allow you to read and write at the same time. Although you certainly can get away without simultaneous read and write, when I buy a digitizer system, I look for this feature first. It greatly enhances the capabilities of the system and is most often worth the money. Dual-port access may be implemented with a FIFO, or "ping-pong" buffers, or it may be a true two-port memory design with separate read and write address decoding logic.

Low-speed DSP (LSDSP) represents a microprocessor dedicated to DSP tasks. This may either be within the response controller subsystem, or it may be implemented as part of the host. The purpose of LSDSP is to further reduce the data rate, possibly by computing final ordinates.

Even with memory controllers that allow continuous acquisition capability, typical waveform recording controllers are block-oriented. What they do is "take a block of data" and analyze it. Even the more advanced units with decimators and down-converters will boil down to this limited functionality. I say "limited," notwithstanding the fact that, given a fast enough CPU and a big enough block, any sort of processing could be implemented this way. The reason I say "limited" is because other approaches can be orders of magnitude more resource efficient for certain essential tasks. Thus, the limitation of the simple block digitize and DSP response controller arises from the limits of DSP processing resources.

Controllers can do many more things beyond just "taking a block of data" and running some DSP algorithm on it. They certainly must do more to handle high-speed, real-time interactive testing, particularly with digital modulations from cell phones and military communications equipment.

Matched Filter Demodulator

A matched filter is something I can prove mathematically to be the best way to detect the information modulated on a signal. It represents the gold standard of measurement devices. Matched filtering is described in any good communications theory book[B12], and is an essential vector signal analyzer (VSA) operation.

The simple block diagram of a matched filter is shown in Figure 4-7. A template of the expected signal waveform is correlated with the input signal. The correlation is integrated over the signal duration, resulting in a metric that indicates how close the input signal is to the template.

Figure 4-7. Matched filter

The signal template, h(t), in the matched filter is an ideal, undistorted copy of the thing the system is trying to detect and measure. This fact implies that a matched filter actually has the ability to store and generate waveform data, at least for internal use. If this sounds to you suspiciously like the stimulus controller, your suspicion isn’t misplaced. It may be surprising that a response detector contains stimulus generation, yet this is just another example of stimulus-response closure as I discussed in the section titled “Stimulus Response Closure: The Calibration Problem.” The response system cannot be separated from the stimulus system. In this case, an ideal response detector must have exact knowledge of the stimulus it is trying to detect.


The matched filter is worth considering given what this achieves: linear detection of a signal with minimum mean square error. There is no better detector in a least squares sense. Given how great a matched filter is, it makes a lot of sense to have one or more of these in the response system. You definitely want one for each possible letter in the signal alphabet you are trying to detect.

No doubt a matched filter can be implemented in DSP software, thus one might argue that a matched filter can be implemented with the block digitizer and DSP processor. It’s certainly cheapest to do it this way, and just as certainly a software matched filter would be cheaper than dedicated matched filter hardware. Two questions thereby arise: Is there any real need for dedicated matched filtering in the response system hardware? Isn’t this exactly the kind of specificity we should avoid in a synthetic measurement system?

First question: Yes. Matched filtering as a dedicated hardware structure is often needed because it can’t be realistically implemented in DSP software for many real-world scenarios, particularly real-time scenarios. A true matched filter requires a convolution between the prototype impulse response and the response signal. This is computationally intensive. Even in cases where FFT techniques can be used to speed the processing, matched filters take a while to calculate. Then, multiply the already lengthy time for one matched filter by the number of templates in the alphabet, and you see that the computational burden becomes onerous quickly. Certainly, if you want matched filtering for a nontrivial alphabet, you need to dedicate hardware to the task.

Second question: No. Matched filtering isn’t specific. Quite the contrary, matched filtering is the most general form of linear detection around. This is because a matched filter is a general process with all its specificity encapsulated in the signal template it seeks to detect. The template is a parameter; actually, the template is more correctly seen as an abscissa. Any finite signal template can be the basis for a matched filter. The template waveform represents a basis function for the ordinate being measured. The matched filter detection process is an inner product (dot product) operation that determines how much of the signal vector projects onto a particular abscissa represented by the basis. What better way to measure an ordinate!
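In discrete time, that inner-product view is easy to sketch (Python; the two-tone alphabet and noise level here are made up purely for illustration): each template produces one ordinate, and the largest ordinate identifies the detected letter.

    import numpy as np

    def matched_filter_detect(received, templates):
        """Correlate a received block against each template and pick the best match."""
        ordinates = {name: float(np.dot(received, h)) for name, h in templates.items()}
        best = max(ordinates, key=ordinates.get)
        return best, ordinates

    # Hypothetical two-letter alphabet (orthogonal tones) detected in noise.
    t = np.arange(128)
    templates = {"mark": np.sin(2 * np.pi * 8 * t / 128),
                 "space": np.sin(2 * np.pi * 12 * t / 128)}
    rx = templates["mark"] + 0.5 * np.random.randn(128)
    letter, scores = matched_filter_detect(rx, templates)   # letter is "mark" with high probability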


Response Trigger Time Interpolator

I have already discussed triggering in the context of the stimulus system in the section titled "Stimulus Triggering." Response triggering is somewhat of a different, perhaps simpler problem. When responding to a signal, triggering from it, you have the option to use post processing to fix up the acquired data based on the trigger. That’s easier to do than with stimulus where, lacking the ability to move back in time and change history, you are stuck with the data previously emitted.

Response controllers or digitizers often have what’s called a trigger time interpolator that tells us precisely when the trigger happened relative to the digitizer clock. Straightforward digital processing techniques can be used to resample the data, transforming it into data synchronous with the trigger event.

Response Cascade—Real-World Example

Figure 4-8 shows an example of a real-world digitizer subsystem, the Acqiris AP240. This is only one of the many different response digitizers made by Acqiris. I selected this one in particular because it includes both signal conditioning and response processing. Thus, Acqiris’ reconfigurable analyzer platform is more than just a digitizer. The full front-end signal conditioning has up to 1-GHz bandwidth, and an onboard FPGA digital processing unit (DPU) allows digitized signals to be processed and analyzed in real time. In fact, a system such as this, along with a host computer for moderate speed processing and control, represents a complete synthetic measurement system response cascade.

With SSR firmware options, the DPU can be programmed to perform processing algorithms at the card’s maximum sampling rate, easing the requirements on the remainder of the response DSP subsystem.

Onboard reconfigurable data processing unit (DPU) for real-time operations.

Front-panel digital I/O connectors for real-time data processing control (DPU Ctrl).

Synchronous dual-channel mode with independent gain and offset on each channel.


Interleaved single-channel mode on either input, software selectable.

1-GHz analog bandwidth in all FS ranges.

Up to 2 GS/s sampling rate in single-channel mode.

Fully-featured 50 Ω mezzanine front-end design with internal calibration and input protection.

Short (1 Mpoints/ch typical) or optional long processing memory (4 Mpoints/ch typical).

Multipurpose I/O connectors for trigger, clock, reference and status control signals.

Continuous and start/stop external clock modes.

High-speed PCI bus transfers data to host PC at sustained rates up to 100 MB/s.

Device drivers for Windows 95/98/NT4.0/2000/XP, VxWorks and Linux.

Auto-install software with application code examples for C/C++, Visual Basic, National Instruments LabVIEW and LabWindows/CVI.

Figure 4-8. AP240 reconfigurable PCI signal analyzer platform


The sustained sequential recording (SSR) firmware for the AP240 analyzer platform uses a dual-bank memory system with onboard automatic switching, allowing sustained sequential recording to the host PC in sequence mode at sustained trigger and data rates ten times faster than normal. The SSR firmware also allows 1-GHz bandwidth with synchronous start-on-trigger dual-channel sampling at rates up to 1 GS/s (2 GS/s in single-channel mode). When triggering, there is minimum dead time between successive acquisitions, allowing recording in sequence mode with a sustained trigger rate of up to 100 kHz.

Chapter 5: Real-World Design: A Synthetic Measurement System

So far, I’ve talked rather philosophically about synthetic measurement system design issues. The examples I’ve given, and detailed techniques I’ve discussed, have all been abstract, not referring to any particular measurement system or instrument implementation. This chapter is different. Here, I present a real-world synthetic measurement system that is in operation today (2004).1

Universal High-Speed RF Microwave Test System

The real-world system I will discuss in this chapter was developed by Raytheon. The RF multifunction test system (RFMTS) was developed to meet a wide variety of RF test demands and was targeted at radically reducing test times. It is a versatile test system integrating state-of-the-art capabilities in high-speed RF testing, microwave synthetic instrument measurement techniques, product interfacing, and calibration.

Background

Trends in military product design have been toward modular, solid-state RF microwave architectures, taking advantage of major improvements in solid-state RF component design. For example, radar architectures are now based on using thousands of solid-state modules packaged in manageable assemblies of up to 30 modules each. This shift in design architecture has precipitated demand for a flexible high-speed, high-quality RF microwave test system.

1 This chapter is courtesy of Raytheon and Aeroflex Companies and is based on an AutoTestCon Paper[C1] that describes a high-speed, high-performance RF test system targeted for a moderate to high quantity manufacturing environment.

In 1999, Raytheon embarked on a venture of developing such a system. Besides high throughput and versatility, the goals of the project included logistic goals that would address lowering life cycle costs, technical goals that would achieve high performance, and an architecture that would enable it to maintain technical excellence.

Logistical Goals

The main focus was to develop a system that would permit lowering life cycle costs. The goal translated into a common platform that could be used in a broad spectrum of applications. If achieved, this would enable:

• Training for one system versus many for operators, maintainers, and TPS developers.
• Reduced calibration equipment and procedures.
• System self-test to permit increased availability.
• A modular architecture to facilitate maintenance, require fewer spares, and deal with obsolescence.
• A spares and maintenance program for one system.
• A common resource that could be shared across many programs.
• An open architecture (hardware and software) that would promote system longevity and permit upgrading.

Technical Goals

Architecture was a major consideration at the core of the technical goals. The objective was to have a modular system based on industry standards from both a hardware and software viewpoint and to minimize dependence upon proprietary designs.

RF Capabilities

The first criterion was measurement speed. Previous "high-speed" test systems achieved test times of approximately a half-hour for solid-state assemblies, while classical rack-and-stack RF test systems needed hours to test complex receiver-exciter assemblies. The goals were to reduce these times by factors of 3 to 10.

Nearly as important as speed was RF measurement performance. A fast system with marginal performance would not have a wide range of applications. This new system had to have measurement performance capability similar to that of typical commercial test instrumentation. For it to be able to do the job, the system needed to have an extensive measurement suite. In the RF world, where cable losses and other transmission line issues can be serious problems, an instrument with good measurement capability at its front panel is only half the solution. Being able to easily extend the measurement all the way to the DUT was a prime consideration. Hence, having flexible calibration options was high on the list of priorities.

System Architecture

The blend of a synthetic measurement system (the Aeroflex TRM1000C), and a Raytheon custom-designed 3rd bay (RF switch matrix, DUT interface assembly, and auxiliary COTS equipment) provided a solution that met the desired hardware goals. The software goals were achieved by taking advantage of the TRM1000C industry standard LabWindows/CVI, VXI plug and play type drivers, and LabWindows/CVI-compatible GPIB instrumentation in the Raytheon-designed 3rd bay.

Microwave Synthetic Instrument (TRM1000C)

The Aeroflex TRM1000C is designed to provide reconfigurable, high-speed production test equipment for evaluating a variety of different microwave devices such as amplifiers, transmit and receive (T/R) modules, frequency translation devices, receivers, local oscillators, and phase shifters. It can also perform tests on integrated subassemblies of RF components, as well as on full-up systems filled with any combination of active RF, multiport devices.

The basic architecture of the TRM1000C is consistent with the basic architecture described in this book, a CCC cascade, enhanced with compound signal conditioners. The compound conditioners consist of a stimulus up-converter and a response down-converter. Time multiplexing is used to expand the system to multiple inputs and outputs. A calibration and verification system allows for loopback ordinates and application of metrology standards. Figure 5-1 outlines this high-level functionality.

[Figure 5-1 block diagram: a stimulus cascade (synthesis/AWG, up-conversion, signal conditioning, calibration and verification, DUT interface) drives the DUT, while a response cascade (DUT interface, calibration and verification, signal conditioning, down-conversion, digitization and DSP) measures it, all coordinated by system/DUT control and synchronization against a common reference plane.]

Figure 5-1. TRM1000C functional diagram

The TRM1000C is designed to dramatically improve module test times and reduce measurement errors introduced by the operator, test hardware, and DUT interface. It is ideally suited for production test applications where throughput and flexibility are paramount. Through its synthetic design, the TRM1000C's nonspecific RF hardware can be software configured to run specific RF and microwave production tests. The TRM1000C hardware architecture is based on advanced synthetic instrument concepts; the TRM1000C does the same measurements as several distinct microwave test instruments including a pulsed power meter, a frequency counter, multiple sources, a spectrum analyzer, a vector network analyzer, a noise figure meter, and a pattern generator. The full measurement suite is as follows in Table 5-1:

A standard TRM1000C includes complex stimulus generation including pulsed modulation, AM, FM, phase modulation (PM), and a fast response measurement channel. Since the system is synthetic in design and thereby easily reconfigurable, the system can be reused for different programs and applications, therefore maximizing return on investment (ROI).

If you look carefully at the block diagram in Figure 5-1, you can see that the internal design of the TRM1000C follows the standard synthetic measurement system CCC architecture principles for both stimulus and response. A compound up-converter on the stimulus side and a compound down-converter on the response side orient the TRM1000C architecture toward the generation and analysis of bandpass signals, as is appropriate to its RF measurement mission. A calibration matrix interconnects stimulus and response with the DUT, providing stimulus-response closure. This eliminates many redundancies (for example: duplicated channels, stimulus-side detectors, response-side sources) that would otherwise be necessary in order to maintain calibration.

Table 5-1. TRM1000C measurement suite

• Power
• Tone Power
• Pulse Power (RMS or Peak)
• Total Power
• Spectral Power Density
• Noise Power
• RF Signal Source
• Complex Volts
• Pulse Profile
• Rise time
• Fall time
• Droop
• Pout & Pin at "N" dB Compression
• AM/PM
• Multiport S-Parameters
• 12-Term Error Correction
• Gain
• Input/Output Return Loss
• Isolation
• Conversion Gain
• Group Delay
• Noise Figure
• Phase Noise
• Envelope Delay
• Frequency
• Spurii
• Harmonic
• Nth Order Intercept
• Modulation Index
• Raw Read
• Complex FFT Data Block
• Digital Data Verification
• Analog DMM
• Scope Measurements

Supplemental Resources

Practical design realities limited the scope of what could be achieved with a purely synthetic measurement system. For testing the more complex RF products (receiver-exciter elements, frequency translation devices, and so forth), additional nonsynthetic resources were required. These instruments do not need to be inside the high-speed loops of the synthetic instrument and therefore do not interfere with measurement speeds. The supplemental test instrument resources include three RF sources, an oscilloscope, digital multimeter (DMM), and power meter (for troubleshooting purposes). These instruments, the RF switch matrix, and DUT interface were incorporated into the Raytheon-designed 3rd bay to complement the TRM1000C.

DUT Interface

For the RFMTS to easily work with a variety of RF products, some way to interface its test resources to these products had to be developed. The interface needed to be rugged, versatile, high performance, and easy to use. In addition, a high-performance RF switch matrix was included as part of this assembly in order to permit simple, low-cost interface adapters. The chosen solution integrates a high-performance RF switch matrix and a Virginia Panel interface assembly as shown in Figure 5-2.

Figure 5-2. Test adapter interface

Product Test Adapter Solutions

Some of the products to be tested on the RFMTS were known to be large assemblies. For this reason, time was spent on developing calibration schemes that could be extended out several levels and on providing a rugged, high-performance interface. This approach has permitted using the system on virtually any size RF product. It also accommodates other relatively unique items that support the test, such as pneumatics, hydraulics, liquid cooling, and so on.

Calibration

The RFMTS is designed to collect measurement data for a variety of different DUTs. By the nature of the measurement, raw data collected by the instrument contains characteristics of both the DUT and the test system hardware (instrument, switch interface, DUT adapter, and so forth). To extract only the characteristics of the DUT, the system must compensate for its own contribution to the measurement data. The process for characterizing the system's contribution is generically termed calibration.
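
As a deliberately oversimplified scalar illustration of that idea (real RF calibration uses vector error models such as the 12-term correction listed in Table 5-1), the system's own contribution can be characterized once against a known standard and then removed from every raw reading; all names and numbers here are hypothetical.

    def path_contribution_db(raw_thru_reading_dbm, known_stimulus_dbm):
        # Characterize the test system by measuring a known reference (a thru):
        # whatever deviation we see is the system's own contribution.
        return raw_thru_reading_dbm - known_stimulus_dbm

    def dut_gain_db(raw_dut_reading_dbm, stimulus_dbm, contribution_db):
        # Remove the stored system contribution from a raw DUT measurement.
        return (raw_dut_reading_dbm - stimulus_dbm) - contribution_db

    loss = path_contribution_db(-1.7, 0.0)    # cabling and switching cost 1.7 dB
    print(dut_gain_db(10.3, 0.0, loss))       # corrected DUT gain: 12.0 dB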

Calibration is an integral part of performing measurements. Calibration procedures are dependent on the application, measurement, and measurement method; therefore, the flexible TRM1000C calibration design allows for different application needs. The measurements must be NIST traceable; therefore, NIST traceable transfer standards are needed to calibrate the system.

TRM1000C calibration is divided into two categories: primary calibration and operational calibration.

Primary Calibration

The TRM1000C utilizes the modular, line replaceable unit (LRU) methodology as part of the system. These LRUs are calibrated at a standard metrology calibration lab and become an integral part of the system. The majority of the LRUs are commercially available NIST traceable standards. The production floor can have spares available to remove and replace, minimizing system downtime when performing periodic maintenance. The LRUs also eliminate the need for external, on-site support equipment; therefore, the user does not need to bring external equipment up to the system.

A list of primary calibrated LRUs is as follows:

• Power Meter (50-MHz Source)
• Power Sensor Calibration Factor
• Noise Source Excess Noise Ratio (ENR)
• 10-MHz Rubidium Standard
• 3.5mm Calibration Kit (S-Parameters)

Operational Calibration

This is an application-specific procedure that transfers the measurement standards from the system's calibration LRUs to the system itself. This calibration can handle a number of different multiport devices and any number of DUT interfaces. The DUT interfaces can be as simple as NIST traceable coaxial connectors (3.5mm), or the interface can be more complex: non-NIST traceable coaxial connectors (for example, GPPO), standard and nonstandard waveguide, or even direct wafer probes. A multitier calibration technique is used to extend the calibration reference plane out to the DUT (de-embed).

Software Solutions

With a synthetic instrument approach, the scope and potential capabilities of the software are almost boundless. The entire system is LabWindows/CVI-based. A software architecture has been used to address a variety of goals and desired capabilities:

1. Ease of use by TPS developers and test system maintainers.

2. Sufficient depth and flexibility to accommodate dealing with very complex, digitally-controlled RF products.

3. The different levels of software employed by the TRM1000C synthetic instrument.

Test Program Set Developer Interface

The objective here was to provide a simple means of implementing moderately complex tests, without demanding that all test program set (TPS) developers and maintainers be experts in C programming. The solution was to develop C-based test procedures that would be graphical in nature, and then utilize a test executive that would readily permit stringing these test procedures together. This approach requires a few C experts, but permits test engineers to be productive with minimal software-specific training.

The test procedure concept simply translates typical RF measurement scenarios into a graphical user interface screen or "panel" where the required parameters can be entered. The TPS designer enters the associated parameters via the panel for each test the designer develops. Test procedures have been developed for all of the measurement types.

A primary objective of the test procedure approach is to maximize reuse and provide cost-effective TPSs in a timely manner. The approach has proved to be very effective. Test engineers are able to concentrate on the technical aspects of the DUT and the test scenarios. The pure software designers are doing what they do best—developing the C-based procedures in support of the test designers. For complex tests, LabWindows/CVI provides a very flexible environment for developing custom procedures, whether for unique requirements or for further speed enhancement if required.

TRM1000C Software

The TRM1000C currently uses a scripting language called JavaScript (ECMAScript). Scripts are used to define the logic, control, processing and storage of the measurement and resultant data. Any number of scripts may be loaded at any time and sequenced via a test executive or incorporated as part of a test procedure and then called by an executive. Through commanding of the scripts, the TRM1000C may change from one measurement personality (i.e., vector network analysis) to another (noise figure).

There are two categories of scripts that can be designed: low level and high level. Low-level scripts are interpreted and run one step at a time. A low-level script is not optimized for speed, but is best for situations where the number of firmware states is unknown. Such a situation may arise when conditional branching is utilized. An example of this is when a measurement result is used to decide whether another measurement needs to be made.

High-level scripts are unrolled and loaded into a state table within the processor. The processor can then execute the state table without any interaction with the script. This method is optimized for speed, but requires a known number of states.
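
The following sketch is only a conceptual analogue of that distinction, not the TRM1000C's actual script or firmware format; the state fields are hypothetical. A high-level style unrolls a sweep into an explicit state table ahead of time, while a low-level style decides each step at run time so it can branch on results.

    from itertools import product

    def unroll_to_state_table(frequencies_hz, powers_dbm):
        # "High-level" style: every hardware state is precomputed, so the
        # processor can step through the table with no script interaction.
        return [{"freq_hz": f, "power_dbm": p, "measure": "tone_power"}
                for f, p in product(frequencies_hz, powers_dbm)]

    def run_step_by_step(measure, frequencies_hz, power_dbm, threshold_dbm):
        # "Low-level" style: interpreted one step at a time, which permits
        # branching on a measured value at the cost of speed.
        results = []
        for f in frequencies_hz:
            y = measure(f, power_dbm)
            results.append(y)
            if y < threshold_dbm:
                break   # a result decided that further measurement is pointless
        return results

    print(len(unroll_to_state_table([1e9, 2e9, 3e9], [-10.0, 0.0])))   # 6 states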

After the measurement is performed, the requested data is stored in a local data file (slot zero controller). Since the scripting capability allows for complex, multidimensional data sets to be collected and the speed of the system allows for considerable data to be collected quickly, these data files can be somewhat large. To minimize the data file size and to allow for easy access, the TRM1000C currently utilizes an open file format called hierarchical data format (HDF). Measurement results can then be either routed back through the host driver link as part of the script or the HDF file can be retrieved remotely by the host. Any number of different test executives can be used with the TRM1000C. Test executive software running on the host PC performs all data presentation and report generation activities.
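
The actual TRM1000C file layout isn't documented here, but the flavor of a hierarchical measurement data file can be sketched with the HDF5 format and the h5py binding; the group and dataset names are hypothetical.

    import numpy as np
    import h5py

    freq_hz = np.linspace(1e9, 2e9, 11)            # abscissa scale
    power_in_dbm = np.array([-10.0, 0.0, 10.0])    # abscissa scale
    gain_db = np.full((11, 3), 12.0)               # ordinate data (placeholder values)

    with h5py.File("measurement_map.h5", "w") as f:
        m = f.create_group("gain_map")
        m.create_dataset("abscissa/freq_hz", data=freq_hz)
        m.create_dataset("abscissa/power_in_dbm", data=power_in_dbm)
        m.create_dataset("ordinate/gain_db", data=gain_db)
        m.attrs["domain"] = "separable (outer product of the abscissa scales)"
        # A calculated child map stored under its parent, preserving the link
        # between derived data and the data it was computed from.
        m.create_dataset("children/mean_gain_vs_freq/gain_db", data=gain_db.mean(axis=1))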

Conclusions

The system performance is comparable to that of standalone instrumentation and in some cases better. From a practical application viewpoint, the RFMTS performance has met all required DUT test requirements, many of which are very stringent.

Relative to use of the term "typical," it is very important to note that the test designer has a great deal of leeway relative to optimizing measurement and speed performance. The test designer has control over the IF bandwidth, DSP block size (number of samples), block count (averages), receiver gain, and so forth, therefore allowing for optimizing performance for a required measurement. From practical experience, there are DUTs tested on the system that are optimized for speed and others that are optimized for measurement performance. In typical applications, the high-volume devices have measurement requirements that permit optimizing for speed. The DUTs that require the greatest accuracy are much fewer in number on a per system basis, and therefore, minor sacrifices of speed are easily accepted.

As a last note relative to performance, the TRM1000C synthetic design and system architecture readily permit performance improvements via both hardware and software, and in fact these improvements are continually being made.

From both a logistical and technical goal viewpoint, the system has been a major success. Multiple systems are on the floor, and the envisioned goals are now being realized. The open architecture, the speed, and the technical performance have made the RFMTS a standard core test system. The architecture permits continual growth from both hardware and software viewpoints.

Hardware growth will focus on continued performance enhancements. With the modular approach, this can be, and is being, done in an incremental fashion. Even speed, one of the system's main virtues, continues to improve as computer speeds and the other building blocks in the system improve. But software will be the most exciting area where continual growth is envisioned. The graphical test procedures will continually be refined. They will be made more user friendly, increased in depth, and more specific elements will be added. As these grow and develop, the cost and schedule for developing test programs will likewise be reduced. Because of the nature of the system, software can and will enhance every aspect of the system: performance, utilization, calibration, and maintenance.

Chapter 6
Measurement Maps

Large, complex ATE systems are often run by large, complex software systems. The complexity of the software is inevitably a result of the complexity of the system, which is a direct result of the problem solved by the system.

Fortunately, test engineers are really smart people. As such, they have no problem dealing with the complexity of the measurement problem. In fact, the test engineer sees more complexity in the measurement problem than does someone unfamiliar with the details. Because test engineers understand the problem better than anyone else, they are the best people to figure out exactly what test to run, and exactly how to run it.

Unfortunately, the large, complex software systems that run large, complex ATE systems are rarely designed to invite the test engineer to dive in and play. A lot of system-specific software knowledge is required to become productive and avoid breaking things. This specific knowledge has nothing to do with measurements, but rather is related to software architecture and software methodology issues. These issues may be vital from a software perspective, but they really have no direct value to the test engineer. From the test engineer's perspective, they are pure overhead.

Don’t get me wrong. I’m not saying that the typical software found in ATE systems is badly designed. Instead, I’m saying it may have had other priorities—that it is of the wrong design to empower end user participation on a daily basis. The all-too-common problem of measurement-irrelevant software complexity renders big ATE systems inaccessible to the test engineers that need to use them to solve their everyday problems.

This problem is not news to ATE software designers. In fact, ATE software technology has taken step after step in the direction of providing accessible programming interfaces to test engineers. These developments have had varying success in establishing a clear division between the machinery of the software innards, and the machinery of the measurement. Much progress has been made, but none have eliminated the problem completely.

The holy grail of ATE software is a system such that the full complexity of measurement can be expressed by a test engineer with no knowledge of the software innards. The test engineer should be able to design a measurement, from scratch, without being forced to worry about software artifacts, like calling conventions, parameter lists, communications interfaces, and, most of all, the programming quirks of the individual instruments.

Much of the arcana that creeps into programming measurement systems is caused by those pesky quirks of specific measurement hardware, and their related configuration issues. Designing a measurement becomes a process of orchestrating a collection of instruments to do what you want them to do. This is an inherently complex task because each instrument has its own set of capabilities, expressed in nonuniform ways. Admittedly, things like SCPI and VXI plug and play go a long way toward unifying the look of a collection of unique instruments, but you still have a collection of unique instruments, albeit “smoothed over.”

This is why synthetic instrumentation is such a breakthrough for software accessibility. For the first time, designers can define a measurement application programming interface (API) that is just about measurements and only about measurements. The irrelevant machinery is hidden. The test engineer sees a totally measurement-oriented interface through which they can express whatever it is about the measurement that needs to be expressed.

In this book, I introduce the stimulus response measurement map (SRMM) model of measurements and XML-based SRMM measurement definitions as one way to define measurements in a synthetic measurement system that stays focused 100% on the measurements. I wouldn’t claim that my approach is the only possible approach, but I submit that it provides a proper foundation for ATE application software that is fully and exclusively based on the measurement, and thereby facilitates the construction of user interfaces that spare the test engineer from irrelevant considerations outside the measurement.

Measurement Abstraction

It is a sad irony that focusing on the measurement for the purpose of making life simple for the test engineer leads to abstraction, and a new abstraction is never simple for anyone to accept at first. I would go so far as to say that the most formidable human problem facing the introduction of synthetic instrument design concepts in real-world applications is the fact that synthetic instrumentation embodies an abstract model of measurements. When people are accustomed to dealing with concrete things (for example, a specific set of instruments that they use to make a specific measurement), it becomes very difficult for them to let go of this conception of things and try to imagine any other way to accomplish the measurement.

This fact about human nature is why virtual instruments are so popular. With virtual instruments, you can imagine that your favorite old instruments are still being used to make your measurement. You can have comforting little virtual knobs on cute virtual front panels constituting your virtual rack of virtual instruments. This collection of instruments is then programmed in a quite familiar manner that mimics the way a corresponding set of physical instruments would be applied to do the measurement.

But actually, if you want to measure something, Yogi Berra might say that all you really need is a thing that measures what you want to measure. If you want to measure A and B vs. C and D, you need an (A,B) = f(C,D) measuring instrument, an AB vs. CD meter, so to speak. Nothing else is really the “right” thing. Yes, maybe you have always measured A and B separately as functions of C and D and stitched it together, but fundamentally what you really want to do (if you don’t mind me putting words in your mouth) is measure the AB vector over the CD manifold.1

1 Fear not. Scary mathematical jargon will be explained.

This idea becomes crucial when C and D are not separable. A function of several variables:

f(x0, x1, x2, …)

is separable if it can be expressed as a product of functions of the individual variables.

g0(x0) · g1(x1) · g2(x2) · …

Here’s an example: Suppose you have an ultrasonic transducer. You would like to measure its response versus frequency and its response versus signal input level. These two measurements may not be separable. You could vary input level and plot output at some fixed frequency. Then you could vary frequency and plot output at some fixed input level. If the response was separable, you could then multiply those two functions and have the whole thing.

But it may be the case that the shape of the frequency response curve changes at different power levels. It may also be the case that the power transfer curve is shaped differently at different frequencies. You really need to measure the response over the joint domain manifold of frequency response and power response to fully characterize the sensor.
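
A minimal numerical sketch of that point, with made-up transducer data: if the device were separable, the full two-dimensional response would simply be the outer product of the two one-dimensional sweeps, and comparing the measured joint response against that product exposes the nonseparability.

    import numpy as np

    freq_shape = np.array([1.0, 0.9, 0.7, 0.4])    # response vs. frequency at one level
    level_shape = np.array([0.2, 0.5, 1.0])        # response vs. level at one frequency

    separable_guess = np.outer(freq_shape, level_shape)

    measured = np.array([[0.20, 0.50, 1.00],       # hypothetical joint measurement in
                         [0.18, 0.46, 0.85],       # which the frequency-response shape
                         [0.14, 0.37, 0.60],       # changes with drive level
                         [0.08, 0.22, 0.30]])

    print(np.allclose(measured, separable_guess))  # False: the sensor is not separable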

Figure 6-1 is an example of a manifold f(x,y) that isn’t separable.

Figure 6-1. Joint manifold

Synthetic instrumentation approaches don’t yield their biggest payoff unless you are willing to think about measurements themselves, as pure measurements, especially multidimensional measurements. You need to divorce your thought from particular instrumentation used to make the measurements and think only about what you want to measure, in and of itself. Failing to do this will inevitably shift the focus from the measurement to the instrument being used and result in the introduction of myriad irrelevant and extraneous considerations that would not otherwise appear.

To get everyone thinking about measurements abstractly, the following section introduces some vocabulary that does not shackle us to instrument-specific ideas. The vocabulary is based on mathematical concepts that, for the most part, are studied by teenagers in high school.

General Measurements

In a synthetic instrument, measurements are performed by software running on generic hardware. Ideally, this software is completely flexible. Any sort of measurement is possible to define, so long as it falls within the capabilities of the hardware. Unfortunately, if measurements are flexible without limit, one is wandering without guideposts in this total freedom. The system can do anything, so there’s no structure to provide a handle on what can be accomplished.

For example, there is no finite set of distinct measurement parameters that fully describe the required inputs to all these possible measurements. Moreover, there is no possible universal parameter format, type, or structure that can cover all cases. Similarly, in general there is no standard data type to cover all possible measurement results. Nor is there a finite and standard calibration set that can be applied to relate these arbitrary measurements to physical units.

It should be clear, therefore, that some sort of structure must be imposed on this vast abstract possibility in order for our finite human resources to be applied. One type of structure is the virtual instrument structure. This introduces the accustomed structure of everyday instrumentation in order to make our options reasonably finite.

But virtual instruments are not unlike the way stops on a church organ mimic classic musical instruments. The organ can synthesize various classic instruments, but if you wanted something else not in the set of organ stops—something new—you couldn’t have it. This isn’t because of any inherent limitation in the organ, but rather because of a limitation in the model used to parameterize and limit what the church organ can be asked to synthesize.

What is needed is a structure on possible measurements that is limiting enough to result in a tractable system design, but generic enough to allow the full range of possibilities. Returning to the organ metaphor, introduce the idea of a Fourier series, and allow organ stops to be specified as Fourier coefficients. Now the goal is reached. The Fourier series provides a handy and compact structure without introducing practical limitations. The full freedom of synthesizing any past instrument lives along with the possibility of synthesizing future instruments.

Abscissas and Ordinates

I will now propose a system for describing measurements that is free of specific instrumentation focus, and thereby does not require reference to virtual instruments, or any other legacy crutch. It is compactly and usefully structured, but the full freedom of synthesizing any past instrument lives along with the possibility of synthesizing future instruments.

In this system, I consider only the subset of all possible measurements that I call stimulus response measurement map (SRMM) measurements. I will show that with this conception, a uniform format for parameters, data, and calibration is possible. This has far-reaching significance because the broad class of SRMM measurements comprises all the typical measurements made with conventional instrumentation, as well as much of what is possible to do with any instrumentation.

The Measurement Function

SRMM measurements are based on the concept of an abscissa and an ordinate. You may remember these words from high school. If you remember what they mean, you are ahead of the game because I will not alter their meaning in any fundamental way. All I will do is to observe the fact that these concepts represent a measurement in a generalized manner first elucidated by Isaac Newton.

Consider the equation:

y = f(x)

The variable x represents the abscissa and y represents the ordinate. The function f relates the two. If you don’t like the words abscissa and ordinate, perhaps you might prefer the alternative: independent variable (abscissa), and dependent variable (ordinate). Or maybe you just like x and y. Some purists think that abscissa and ordinate should be reserved strictly for the case of a two-dimensional plot. Fearless of the wrath of the math gods, I will use abscissa and ordinate terms to refer to independent and dependent variables regardless of the dimensionality of each.

In a sense, the function f represents the measurement process, with the abscissa representing the state of things, or possibly some imposed state (stimulus) and the ordinate representing the measured or observed response. Mathematically, a function is defined by specifying its ordinate for every possible abscissa over some domain. In the case of a discretely sampled domain, a function can be defined by simply enumerating the ordinates in a table. For example:

x    y
1    10.0
2    10.41
3    10.73
4    20.0

defines a function over the domain [1,2,3,4]. This process of defining a function by a table is exactly analogous to the process of measuring the value of an ordinate for a uniformly sampled abscissa domain. It’s simply a matter of labeling the abscissa and ordinate. For example, instead of x and y, I could write frequency and power, or time and temperature, as in:

Time (hr)    Temp (°C)
1            10.0
2            10.41
3            10.73
4            20.0

Not all possible tables define valid measurement functions. The definition of a function requires that there be one and only one value of y for any value of x. Therefore, a table that listed x = 2 several times with different values of y would not define a function. Analogously, in the context of a measurement, this requirement translates into demanding that the system produce one and only one measurement for each value of abscissa. Normally, such a requirement isn’t a problem. In the case of inverse maps, however, it may become a sticking point.
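
A tiny sketch of that requirement, with hypothetical names: a table of (abscissa, ordinate) pairs defines a measurement function only if no abscissa value appears twice with different ordinates.

    def defines_a_function(rows):
        seen = {}
        for x, y in rows:
            if x in seen and seen[x] != y:
                return False    # same abscissa, two different ordinates
            seen[x] = y
        return True

    print(defines_a_function([(1, 10.0), (2, 10.41), (3, 10.73), (4, 20.0)]))  # True
    print(defines_a_function([(1, 10.0), (2, 10.41), (2, 11.0)]))              # False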

Canonical Ordinate Algorithms

Much of the technique of measurement in a synthetic instrument is encapsulated in so-called canonical ordinate algorithms. These algorithms generally represent the crux of the measurement issues—the down and dirty business of finally getting a number.

Canonical ordinate algorithms, as such, are not concerned with the context of the measurement, abscissa rastering, or data structures. These other issues have been abstracted away and are handled by other algorithms and data structures in the stimulus response measurement map model.

Multidimensional Measurements

Functions can have more than one abscissa. In such a case, the domain is a multidimensional manifold or, more loosely, a surface. If, for example, there are two abscissas, u and v, then the function of two variables:

y = f(u,v)

can be defined over a discretely sampled two-dimensional (u,v) domain with a table, thus:

u    v    y
1    4    5
1    5    6
2    4    6
2    5    7

Once again, for this table to define a function, there must be one and only one value of y for each unique (u,v) pair.

Domains

Note the interesting pattern that the (u,v) abscissas make. It should be clear that this pattern is formed with an outer, or Cartesian, product of two uniformly sampled abscissas, [1,2] and [4,5]. This produces every possible combination of the individual abscissa values. An outer product can also be visualized as a table. Here is the outer product table that produced the above abscissa pairs:

u\v    4        5
1      (1,4)    (1,5)
2      (2,4)    (2,5)
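
A separable domain never has to be stored point by point; it can be regenerated on demand from the individual abscissa scales. A minimal sketch, reproducing the table above:

    from itertools import product

    u_scale = [1, 2]
    v_scale = [4, 5]
    domain = list(product(u_scale, v_scale))
    print(domain)   # [(1, 4), (1, 5), (2, 4), (2, 5)]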

Not all discretely sampled multidimensional domains can be represented as outer products; however, those domains that can be represented with an outer product, I call separable. The advantage of the separable property of a domain is that it can be represented very compactly by the sampling grids of the individual abscissas. The abscissa pairs in the outer product do not need to be stored explicitly after the measurement. Fortunately, separable domains constitute, in large measure, the kinds of measurement domains people like to use. There are, however, some restricted special cases of nonseparable domains that are of interest. For example, consider the measurement table:

u    v    y
1    2    3
2    3    5
3    4    7
4    5    9

I call this kind of domain locked. There is a fixed difference between the u and v values. In many measurements, abscissas are locked in ways analogous to this: a received frequency might be locked to a transmit frequency, or a response port may be specified relative to a stimulus port. There are many examples of abscissa locking.

The final kind of nonseparable domain that is commonly used for measurements is called banded. In such a domain, one abscissa varies independently, and the other varies in a restricted range around the first. A banded domain can be represented as a diagonal subset of the outer product of a separable domain. For example, the domain table:

u\v    1        2        3        4
1      (1,1)    (1,2)    -        -
2      (2,1)    (2,2)    (2,3)    -
3      -        (3,2)    (3,3)    (3,4)
4      -        -        (4,3)    (4,4)

might result in a measurement table like this:

u    v    y
1    1    4
1    2    9
2    1    9
2    2    16
2    3    25
3    2    25
3    3    36
3    4    49
4    3    49
4    4    64

Banded domains are useful for measurements that explore a region in a two-dimensional domain of some ordinate where stimuli or abscissas vary in a coordinated, but not strictly locked, manner.
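
A minimal sketch of generating these two nonseparable cases from a single abscissa scale, matching the locked and banded tables above:

    from itertools import product

    u_scale = [1, 2, 3, 4]

    # Locked: v tracks u with a fixed offset, so only one scale is swept.
    locked = [(u, u + 1) for u in u_scale]

    # Banded: v varies only within a band around u, a diagonal subset of the
    # full outer product.
    banded = [(u, v) for u, v in product(u_scale, u_scale) if abs(u - v) <= 1]

    print(locked)   # [(1, 2), (2, 3), (3, 4), (4, 5)]
    print(banded)   # [(1, 1), (1, 2), (2, 1), ..., (4, 3), (4, 4)]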

Measurement Maps

In a measurement process, it is often convenient to make several different ordinate measurements at a given abscissa point. Moreover, the ordinates may range over a multidimensional domain manifold with several independent abscissas. Therefore, to represent measurements, I need to generalize one step further from a scalar function of several variables to a vector-valued function of several variables, a so-called measurement map.

[Figure: a measurement map relates a set of abscissas to a set of ordinates.]

Figure 6-2. A measurement map

Although the concept of a vector field or multidimensional mapping comes from mathematics and therefore evokes latent math anxiety in people, I’m not really saying anything deeply difficult here. Oddly, if I talk separately about measurement data as multidimensional, or the multidimensional domain manifold over which the data is taken, people seem to understand that just fine. It’s when I put them together into a vector-valued function of a domain manifold, in a simple word, a map, that the eyes glaze and the knuckles whiten.

But consider, for example, an X-Y positioner moving an image sensor around. A common flatbed scanner like you have on your PC is an example. Clearly the position of the sensor is a two-dimensional thing. And if I talked about an X-Y-Z positioner, possibly with some theta angular rotation of the sensor, the resulting four-dimensional set of variables isn’t very hard to take. Any number of independent variables are easy to understand.

When the scanner acquires color image data in red, green, and blue color dimensions, it establishes a relationship between the image intensity vector in RGB space, and the X-Y spatial location. This relationship is what I call a measurement map, and the data itself is measurement map data. (See Figure 6-3.)

A measurement map is the relationship between a set of independent variables, and a set of dependent variables. It’s how the elements of one table or spreadsheet relate to those of another table or spreadsheet. That’s it. Nothing more tricky than that.
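
One possible in-memory sketch of such a map, using the flatbed scanner example; the structure and field names are hypothetical, not a prescribed format.

    import numpy as np

    scanner_map = {
        "abscissas": {                      # the independent variables (the domain)
            "x_mm": np.arange(0, 10),
            "y_mm": np.arange(0, 5),
        },
        "ordinates": {                      # the dependent variables (the data)
            "rgb": np.zeros((10, 5, 3)),    # (R, G, B) at every (x, y) point
        },
    }

    # Reading the map at one domain point yields the full ordinate vector there.
    print(scanner_map["ordinates"]["rgb"][3, 2])   # [0. 0. 0.]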

Ports and Modes

I’ve talked endlessly about measurements. How do these relate to a DUT? What is the link between the two? A bridging concept that I use to relate a DUT to its measurement is the concept of a port. A port is a plane of interaction between the measurement system and the DUT—a precisely specified data interface between the signal conditioner and the DUT. Often it is a physical cable interface to a sensor or a connector on the DUT, but it might be an abstract plane in space. It depends on what you are measuring and how.

Many schemes have been developed for managing ports and linking logical ports to the real physical wiring interfaces, registers, or commands they represent. These schemes tend to be proprietary and associated with particular measurement systems. Others are more generic. I will present a sketch of a minimal scheme in the section titled “Describing the Measurement System with XML” that uses XML syntax.

Ports can have many attributes, but foremost is whether the port is an input to the DUT, or an output from the DUT. This will determine whether the measurement system applies a stimulus to the DUT at a given port, or measures a response.

[Figure 6-3 illustration: a flatbed scanner's X-Y position maps to the pixel color of the image, (R,G,B) = f(x,y).]

Figure 6-3. A color image as a measurement map

In my model of a synthetic measurement system, ports must be associated with physical points or planes, and must be categorized as either stimulus or response. They can’t be both, or neither, although it is certainly possible to define two ports, one stimulus and one response, that both connect to the same place, or no place.

Ports are a logical concept, but ultimately they connect to the real world. Stimulus ports are, ultimately, controlled by a register that is written or a command that is sent; response ports are likewise wired to a register that is read or a command that is received.

The port is not the only bridging concept that links from the measurement to the hardware. A mode denotes states of the system itself independent of its physical interfaces.

The distinction between ports and modes is a fuzzy one. Perhaps the signal conditioner has two separate physical interfaces: One through an amplifier and one direct, with a switch internal to the conditioner selecting between them. I might call that switch a mode switch. On the other hand, suppose these two conditioner ports are connected to different DUT ports. Maybe then the switch is really a port switch. But what if the switch matrix outside the conditioner is able to route either of these signal-conditioning interfaces to the same or different interfaces on the DUT. Now it may be unclear if the gain selection switch should be considered a port or a mode.

[Figure 6-4 illustration: a signal conditioner, fed by a controller and codec, with two physical interfaces, one through an amplifier and one direct.]

Figure 6-4. Is the gain switch setting a port or mode?

It might be argued that implementing a mode by means of a physical interface, mixing the two concepts of mode and port, might be considered a hardware design mistake in the same way as a GOTO statement is considered, by some, to be a mistake in software design. This would be true if there were no other considerations, but the reason hardware is designed a certain way (or software, for that matter) is frequently based on the optimization of certain aspects of performance, along with considerations of safety, cost, reliability, and so forth. There may be good reasons to use separate physical interfaces for different modes in a signal conditioner that override any paradigm purity considerations.

A common voltmeter is a good example, where the high-voltage measurement input is often a different interface than the normal, low-voltage input. This is done in order to reduce the chance of damaging the low-voltage circuitry with an accidental high-voltage input, as well as eliminating the need for an expensive high-voltage switch in the meter.

In general, it is wise to distinguish and separate modes from ports as much as possible. In a well-designed SMS, there will be a site configuration document and associated software layer that can disentangle many of these overlaps between ports and modes.

DUT Modes as Abscissas

Quite often, the measurement system is called upon to control the DUT. For example, suppose the DUT is a radio receiver. A reasonable measurement of interest might be the sensitivity of that radio receiver. But the sensitivity of a radio depends on its settings: where it is tuned in the band, if it’s set to AM or FM, and so on. If the radio can be controlled by a measurement system, we can ask that system to measure the radio’s sensitivity as a function of tuning and other mode settings on the radio.

When a DUT is controlled by the measurement system during testing, the DUT modes become abscissas. Even though they are not part of the measurement system, per se, they represent an independent variable just as much as any other port, mode, or abscissa within the control of the measurement system.

Ports as Abscissas

Ports are related to the measurement map in an essential way. Each abscissa and ordinate in a stimulus response measurement map must be associated with a port. Also, one or more ports may be defined as port abscissas. It’s best to think of ports as a special kind of abscissa (a child class) that behaves for the most part exactly like an abscissa, but has the additional ability to bind other abscissas and ordinates to a physical measurement port.

In this way, all the machinery developed to sample abscissa domains becomes available to select the port used for each measurement. This approach makes a lot of sense because abscissas define the independent variables in the measurement; they define what is controlled and specified; they establish the domain over which the ordinate is measured. Similarly, modes are also sensible to make into abscissas as they, too, represent the independent context of an ordinate.

If you make port and mode selection through abscissas, it’s clear that these port or mode abscissas may apply either to stimulus, response, or any combination thereof. This point matters most when specifying a calibration strategy that may introduce new calibration abscissas, or otherwise transform the map from what the user specified to what the machine can do.

Map Manipulations

If you have followed me so far, you should see that the description of a measurement and the results of measurements are mappings. Using the stimulus response measurement map model of measurements allows us to see exactly what aspects of each particular measurement are unique to that measurement, and what parts are generic aspects. Looking at the map as a whole allows us to see beyond the particular list of abscissas and ordinates associated with a test and focus on the measurement itself. This measurement focus leads inevitably to a compact implementation as a synthetic instrument.

In regard to abscissas, in the common case of a separable domain, the process of applying stimuli to a device can be reduced to defining the individual abscissa scales. With the addition of banded and locked domains, all the usual domain cases are covered by a small set of compact descriptions.

There is a distinction between a map description and the map data itself. The map description is present before the measurement is made. The map data is acquired after the measurement is made. Together they represent a fully documented measurement map. Often I will discuss the two collectively using just the one word map. In cases where the distinction is relevant, I will explicitly specify map description or map data.

Maps are more than just a way to define measurements and record the results of measurements. Maps can change. They can be processed and manipulated both before the measurement and after the measurement in ways that are isometrological—manipulations that do not affect what is ultimately measured. In fact, they must be processed and manipulated isometrologically in order to apply to many common measurements, particularly relative measurements, or measurements that include a calibration ordinate. This isometrologic manipulation process is called canonicalization. It is described in detail in the section titled “Canonical Maps” and it is the paramount benefit of the stimulus response measurement map viewpoint.

Maps can be interpreted (with some restrictions) as multidimensional entities, as manifolds themselves. As such, all the concepts associated with manipulating objects in space work as tools for manipulating maps. For example, you might take a slice through a particular plane. A slicing action represents holding the value of some variable constant, either an abscissa or ordinate. Imagine a measurement of an amplifier gain and power supply drain as a function of input power and input frequency. Maybe you might like the gain versus frequency at constant power supply current. That would be a slicing operation. Slicing operations usually require interpolation because the plane that slices through the data may fall between measured or controlled points.
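
A minimal sketch of the simpler slicing case, holding an abscissa constant at a value that falls between measured grid points (slicing at a constant ordinate, as in the constant-supply-current example, would additionally require inverting along that ordinate first); the data here is made up.

    import numpy as np

    freq_hz = np.array([1e9, 2e9, 3e9])            # abscissa scale
    power_in_dbm = np.array([-10.0, 0.0, 10.0])    # abscissa scale
    gain_db = np.array([[20.0, 19.5, 17.0],        # ordinate, indexed [freq, power]
                        [19.0, 18.5, 16.0],
                        [18.0, 17.5, 15.0]])

    def slice_at_power(target_dbm):
        # Interpolate along the power axis at each frequency point.
        return np.array([np.interp(target_dbm, power_in_dbm, gain_db[i, :])
                         for i in range(len(freq_hz))])

    print(slice_at_power(5.0))   # gain vs. frequency at +5 dBm input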

Processing may also rotate the map. This may be done to reorder the axes, or perhaps to remove dependencies between axes—to orthogonalize axes. An example of orthogonalizing would be a DUT that has two inputs and two outputs. Both inputs affect both outputs, but the inputs affect the outputs in different ways. Imagine, for example, the hot and cold knobs on your sink. They control the temperature and flow rate of the water out of the spigot. Two abscissas, two ordinates. There is an interaction between the two. Turning one knob changes both the flow and the temperature. Maybe you would like to know how to control just the temperature at a constant flow, or the flow at constant temperature. That is the process of orthogonalization applied to measurements. In this case, it involved finding a rotation transformation to apply that orthogonalizes the abscissas.

When rotating more than just the abscissas, the rotation can be interpreted as an inversion of the map. For example, if you have a map y = f(x) and you find a function g that exchanges y for x, allowing you to get x = g(y), then you have rotated and flipped the x-y plane. That is to say, you have rotated abscissas and ordinates together, as one unit. Inversions, like slicing, require interpolation in order to make the new abscissa fall on nice, uniformly gridded points.

[Figure 6-5 illustration: map inversion flips y = f(x) into x = g(y).]

Figure 6-5. Inverting a map

Sometimes it’s necessary to flatten a map. This process removes an abscissa or ordinate. Calibration is often accompanied by flattening. For example, suppose you are measuring the gain of an amplifier. You may do that by measuring its input power, then its output power, then dividing the two, yielding gain. After the division, if you no longer want input and output power, you can flatten the map data manifold by combining two of its dimensions with a calculation. In that case, I call gain a calculated ordinate as compared to the directly measured ordinates of input and output power.

Maybe you want to expand (or maybe better would be thicken) the map data with the gain calculation result, keeping input and output power in place. This is fine too. Expanding is the dual of flattening. It’s common to add new dimensions to map data during post-processing with additional calculated results; it’s also common to add them in preprocessing to include calibration ordinates or abscissas. Sometimes these expansions are paired with their dual, a flattening, on the opposite side of the data acquisition.

An alternative to expanding and flattening is to make a child map. A child map is generated from one or more parent maps by a calculation. It’s a good idea to maintain a link between child and parent so that later you can figure out what the calculated data was based on. This is often done in a nested or hierarchical tree-like manner that can easily be expressed in XML or HDF.

Other dual-pair manipulations that are useful are rastering and raveling. Both of these ideas are implicit in the way domain manifolds are sampled. Normally, where there is more than one abscissa, the abscissas are sampled in raster order. That is to say, one of the axes is sampled so as to vary fastest in an innermost loop with the other axes held constant. Then, once the innermost abscissa is completely sampled across its domain, the next innermost abscissa is incremented to its next domain sample point, and then the innermost repeats its scan. Thus, all the axes are sampled through their ranges in this way.

An alternative to rastering is to ravel the abscissa points in some other order. All points are still sampled, but the samples are ordered differently. Normally, the order in which the abscissa set is sampled does not affect the measurement, but when hysteresis is present, or when certain axes are slow and others are fast to measure, the ravel or raster order can be essential to the success of the measurement.
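
A small sketch of the difference, with hypothetical abscissas: both orderings visit exactly the same domain points, but the ravel below sweeps frequency up at the first power and back down at the second, which can matter when one abscissa is slow to change or when hysteresis is present.

    from itertools import product

    freqs = [1, 2, 3]
    powers = [10, 20]

    raster = list(product(freqs, powers))           # innermost abscissa varies fastest
    ravel = [(f, powers[0]) for f in freqs] + \
            [(f, powers[1]) for f in reversed(freqs)]

    assert sorted(raster) == sorted(ravel)          # same points, different order
    print(raster)   # [(1, 10), (1, 20), (2, 10), (2, 20), (3, 10), (3, 20)]
    print(ravel)    # [(1, 10), (2, 10), (3, 10), (3, 20), (2, 20), (1, 20)]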

All these manipulation techniques, and more, can be used on measurement maps. They can be used individually, or in combination. What is less obvious, and should be emphasized, is that they can also be used either on map data, or on map descriptions. This may be surprising to you if you had been thinking of all these cutting, flattening, and interpolation operations as after-the-fact post-processing of map data.

Operations on map descriptions are performed differently than operations on map data, that is true. But the same set of techniques are generally available. One can expand the map description to include extra abscissas or ordinates before the measurement; similarly, one can interpolate, invert, or otherwise reshape.

Consider the gain measurement example again, beginning with a map description that gives gain as an ordinate. One of the pre-processing steps might be to expand the map into canonical form with an atomic input power and output power ordinate.2 This would be done by canonicalization so that during post processing the operation could be reversed, yielding gain.

In fact, it is a general rule that calibration strategy considerations will often require a transform of the map description prior to the measurement. Any time you make a relative measurement, for example, the relative ordinate you want to measure must be split into multiple measurements, which are divided or subtracted or otherwise combined through some calculation to compute the desired computed ordinate.
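
A minimal sketch of that pre/post pairing for the gain example; the dictionary layout and names are hypothetical. Before the measurement, the relative ordinate is expanded into the atomic ordinates the hardware can actually measure; after acquisition, the expansion is reversed to yield the ordinate the test engineer asked for.

    def canonicalize(description):
        # Pre-measurement: replace the relative 'gain_db' ordinate with two
        # atomic, directly measurable ordinates.
        ords = [o for o in description["ordinates"] if o != "gain_db"]
        if "gain_db" in description["ordinates"]:
            ords += ["input_power_dbm", "output_power_dbm"]
        return {**description, "ordinates": ords}

    def uncanonicalize(map_data):
        # Post-measurement: flatten the atomic ordinates back into gain.
        p_in = map_data.pop("input_power_dbm")
        p_out = map_data.pop("output_power_dbm")
        map_data["gain_db"] = [o - i for i, o in zip(p_in, p_out)]
        return map_data

    desc = {"abscissas": ["freq_hz"], "ordinates": ["gain_db"]}
    print(canonicalize(desc)["ordinates"])                       # atomic form
    print(uncanonicalize({"input_power_dbm": [0.0, 0.0],
                          "output_power_dbm": [12.1, 11.8]}))    # {'gain_db': [12.1, 11.8]}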

Problems with Hysteresis

There is great advantage to thinking about maps as entities that can be manipulated by standard operations, as if the maps were made of modeling clay or tinker toys. In many cases, clever manipulation yields great efficiency in measurement activities, giving us faster, better, and cheaper measurements.

Unfortunately, there are metrologic issues regarding accuracy that must be addressed. One of these issues is interpolation, relied on by many map manipulation techniques. Issues with interpolation are discussed elsewhere in this book. In this section, I will discuss another key issue that hurts the ability to manipulate maps. That issue is hysteresis.

It is a fact that many DUTs have memory. That means that the results of one measurement on the DUT depend on what measurements have been made previously. For example, temperature is an issue for many devices, and making measurements can alter the temperature of the device, thereby changing measurement results. Many other examples exist of this phenomenon.

As a consequence of hysteresis, the order in which measurements are taken can be crucial. In situations where hysteresis is an issue, the cutting, slicing, and rotating manipulations that are performed on some maps may not be performed arbitrarily without risking significant loss of accuracy.

2 In some situations, response port selection may be viewed as an abscissa, with power being the ordinate. Thus, an appropriate map manipulation can flip the port selection abscissa over and give us input and output power ordinates.


There is a distinction between memoryless devices and devices that can exhibit hysteresis. When faced with a DUT that has memory, there is no alternative but to evaluate what measurements are being taken, at what speed, over what duration, and what effect on the DUT will be retained and affect future measurements. Only once these considerations have been evaluated can appropriate constraints be applied to map manipulation so as to avoid any loss of accuracy.

Stimulus and Response

Stimulus and response are interactions with a device under test (DUT). Don’t make the mistake of thinking an abscissa is a stimulus, or a response is an ordinate. An abscissa is an independent variable that you set. It establishes a context for a measurement. The ordinate is that measurement. Yes, an abscissa is ordinarily related to setting a DUT stimulus because the stimulus plays a major role in setting context. Similarly, a DUT response is ordinarily related to an ordinate, because the fundamental idea of test and measurement is to measure the DUT response to some stimulus. These relationships are typically true, but not always.

In fact, it’s quite common that an ordinate is associated with measuring the applied stimulus so as to verify that the correct stimulus was applied during the measurement. Later, in post processing, the stimulus ordinate may be isometrologically transformed into an abscissa, but it starts life very much as an ordinate.

Another example would be in a system that analyzes modulated or coded responses from a DUT. An abscissa in such a system might be receiver frequency or subcarrier number or any of a number of possible response attributes that independently set the context for the dependent ordinate.

For this reason, there must be a mechanism to explicitly associate the axes in a map with a stimulus or response.

Inverse Maps

A surprisingly helpful concept is the idea of an inverse map. Consider a two-port device under test, like an amplifier. The measurement system provides an input stimulus, and measures the output response. With this sort of setup, you can measure a map like gain versus input power and frequency.


Now, suppose somebody wanted to know what input power was required to hold the output power of the device constant at some fixed level versus frequency. In essence, they want to measure a stimulus or cause (input power, frequency) that results in a certain response or effect (output power).

This sort of case is another example that demonstrates not all ordinate measurements are of responses. Sometimes we try to find out what stimulus causes a certain specified effect in the response.

Assuming we are in a causal thermodynamic universe where effects follow causes in time, it won’t work to choose the effect first, and then see what cause “happens.” All you can do is to try some causes and record their effects. Afterward, on paper, you can invert cause and effect to see what causes would be needed given certain desired effects.

This reversal of cause and effect is an inverse map. When the test engineer specifies a map that has reversed cause and effect, the calibration strategy must be to invert the map so that it can actually be measured forward in time.

One way to achieve map inversion is to do a measurement to acquire a causal, noninverted or natural map, and then in post processing invert the map mathematically, resampling and interpolating as needed.
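To make the idea concrete, here is a minimal sketch, assuming NumPy and an invented, monotonic gain-compression curve, of inverting a natural map after the fact by swapping the roles of abscissa and ordinate and resampling by interpolation. None of the numbers or names come from the book's software; they are purely illustrative.

```python
import numpy as np

# Natural (forward) map: output power measured versus swept input power (dBm).
p_in = np.linspace(-30.0, 0.0, 31)                 # abscissa actually swept
p_out = 20.0 + p_in - 0.01 * (p_in + 30.0) ** 2    # hypothetical measured ordinate

# Inversion in post processing: treat output power as the new abscissa and
# interpolate to find the input power that produces each requested output level.
order = np.argsort(p_out)                          # np.interp needs ascending abscissas
p_out_grid = np.linspace(p_out.min(), p_out.max(), 21)
p_in_required = np.interp(p_out_grid, p_out[order], p_in[order])
```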

But is there any way to reverse cause and effect without inverting a map after the fact? Is there some way to apply an effect, and measure a cause?

Surprisingly, in some cases, the answer is yes!

Let’s return to the case of the amplifier. I want to know what input power, versus frequency, will keep the output of the amplifier at some constant output power. Suppose I constructed a feedback loop in my measurement algorithm. This loop would implement a goal-seeking algorithm that would adjust the input power so as to keep the output constant. With this loop in place, I could vary the frequency and measure the input cause that produces the constant, specified effect. Consider, as another example, the square root circuit in Figure 6-6 that works using this same principle of inversion performed by a feedback loop.

“Cheater!” you shout. I didn’t reverse the flow of time. Within the feedback loop, minuscule accidental errors in stimulus cause deviations in the output that result in corrections of the stimulus. So the map really still does get inverted, in a sense. Well, maybe you’re right. But it is certainly true that feedback loops and adaptive systems seem to invert cause and effect, and they thereby represent a powerful tool for inverting measurement maps coincident with the moment of measurement itself.

Accuracy Advantages of Inverse Maps

As a general rule, a measurement designer should always consider the possibility of inverting cause and effect in her measurement, measuring something backward, and afterward inverting the map. This possibility should be compared with doing the measurement the forward way. In many cases, the inverted method leads to more accuracy, simpler hardware, or both.

A classic example of the advantage of inversion is when calibrating a variable attenuator. A variable attenuator is a device that takes an input signal and reduces its amplitude by some selectable amount. These attenuators are calibrated by setting them to all their states and measuring the reduced output level relative to the input. The forward way to accomplish this measurement would be to stimulate the DUT with a fixed input, vary the setting of the DUT as an abscissa, and acquire the output level as an ordinate.

Unfortunately, with attenuators that work over a wide range, this approach may be quite inaccurate and slow, especially when the attenuation setting is high, resulting in a very small response. When he realizes that his response system sensitivity is inadequate to the task, a test designer who hasn’t read this book might decide that he needs more sensitive hardware in the response system.

The right thing to do is to consider an inverse map. Fix the response out of the DUT and allow the stimulus to be the ordinate. In doing that, the response system can work at a level that is comfortable and accurate, and the requirement for wide dynamic range is shifted to the stimulus system, which may already have that capability since it is far easier to generate a wide range of levels accurately than it is to measure them.

Figure 6-6. Square rooter (a squarer in the feedback path around a summing junction and amplifier)

Sadly, if there is no software support in the synthetic measurement system for inverse maps, the test designer may see the hardware solution as easier to implement. This illustrates a common mistake in the development of synthetic instrumentation that I discuss elsewhere in the book: exclusively using hardware to fix problems.

Problems with Inverse Maps

Inverse maps are not without drawbacks. Two that I will mention here are the problem with inverse function branches, and the problem of adaptive stability.

After inversion, a map may not still be a map. That is to say, the result may not be a function. There may be more than one value possible for an ordinate at a given abscissa point. The alternative values are called branches of the inverse map.

Figure 6-7. Inverse map with multiple branches: the forward map y = f(x) inverts to x = g(y), which has two branches, A and B

The right branch to pick for the inverse may not be immediately clear. One option is to split the map and carry each branch forward separately in a collection of maps. In other cases, constraints and defaults provide a way to pick the right branch.

Another difficulty sometimes faced by inverse maps is the problem of stability. When map inversion is performed by real-time adaptive algorithms, the feedback loop within that algorithm may oscillate. If the system relies on such an adaptive loop to perform map inversion, instability in that loop would be disastrous to the accuracy of the data. Fortunately, there is a large body of theory regarding the stability of feedback loops, and there is much practical advice about how to go about fixing unstable loops.

Calibration Strategy and Map Manipulations

Why did I bother to create the stimulus response measurement map model of measurements? What good is it, really? Maybe it seems like an interesting way to describe measurements, but is there any big payoff? These are good questions. It may not be obvious why any of the math stuff is worth the trouble. Abscissas are just fancy loops. Ordinates are measurement subroutines. Yes, I see how this ties data together with the measurement in a neat package. That’s nice. But is there a bigger payoff?

While I have already pointed out the many small payoffs that derive from the use of the SRMM approach in general, and the benefits of XML schema for describing measurements in particular, the jackpot payoff is with the concept of calibration strategy. Without the concept of calibration strategy, and related concepts like compound and atomic ordinates and abscissas, the formalizing of measurement descriptions under the SRMM stance has no more benefit than other, more generic object-oriented approaches; approaches that, although they have nothing to do with test and measurement, may be more familiar to software engineers.

Calibration strategy is a method for specifying how maps should be isometrologically transformed both before and after physical interactions with the DUT. Calibration strategy will rewrite the measurements into a new form, changing them from what the user originally specified for the test into what the synthetic measurement system actually can do. After the raw measurement is made, calibration strategy guides the post processing manipulations that occur, transforming the map back to what the user wanted in the first place.

I have already discussed map manipulations in some detail in the section titled “Map Manipulations.” I gave an example of a map manipulation that would be applied in the case of a gain measurement. The gain ordinate, versus some abscissa, is rewritten into the power-in and power-out ordinates versus the same abscissa. That map data is acquired. Then the result is transformed back, calculating the ratio to collapse the power-in and power-out axes, yielding the desired gain map.
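A small sketch of this round trip, with invented helper names standing in for the system's atomic ordinates, might look like the following: the compound gain ordinate is expanded into power-in and power-out ordinates over the same abscissa, and the ratio (a subtraction, since the data are in dB) is computed afterward in post processing.

```python
import numpy as np

freqs = np.array([1.0e9, 1.5e9, 2.0e9])        # shared frequency abscissa (Hz)

def measure_power_in(freq):                     # hypothetical atomic ordinate (dBm)
    return -10.0

def measure_power_out(freq):                    # hypothetical atomic ordinate (dBm)
    return 5.0 - 2.0e-9 * freq

# Canonical map: only atomic ordinates are actually acquired.
p_in = np.array([measure_power_in(f) for f in freqs])
p_out = np.array([measure_power_out(f) for f in freqs])

# Post processing: collapse the two power ordinates back into the gain ordinate.
gain_db = p_out - p_in
```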

Surely, the same thing could be accomplished by simply writing a new ordinate called gain. Such an ordinate would operate the measurement hardware explicitly so as to make power-in and power-out measurements, it would divide the results, and it would return gain. What’s wrong with that?

Nothing is really wrong, in the sense that this could certainly be made to work. In fact, I’ve seen many systems where test engineers do exactly this: write a new test script every time they want to measure something new.

Fine, but there are a bunch of methodological problems here. Now you have a growing list of idiosyncratic ordinates to maintain. Improvements made to power-in and power-out may or may not be reflected in an improved gain ordinate. Worse yet, it’s not exactly clear what depends on what. Maybe the gain ordinate is written first and later on power-in and power-out versions are abstracted. This creates a labyrinth of dependencies. Axes, unit conversions, calibration, calculation, and post processing are all embedded in “ordinates” which are no longer worthy of the name. Ultimately, a collection of hand coded, ad hoc measurement scripts accumulates that doesn’t form any sort of coherent, reusable system.

Don’t get me wrong. I’m as much in favor of hand coding, ad hoc, hacked up, extreme programming as the next guy, but I don’t want to create a big system entirely in this haphazard way unless I am planning to quit just after CDR. The concept of calibration strategy that I have been describing in this book leads to a better place where I can keep (and maybe enjoy) my job.

Canonical Maps

Instead of “just coding a new ordinate,” there is a fundamentally better way to deal with what I will call a compound ordinate like gain.

Fundamental Definitions

Atomic Ordinate

An ordinate based on a fundamental response measurement made by the system at a single abscissa point.


Atomic Abscissa

An abscissa based on a fundamental stimulus or mode setting of the system, independent of other modes.

Compound Ordinate

An ordinate that is computed from data acquired with one or more measurements made using other compound or atomic ordinates.

Compound Abscissa

An abscissa that implies the domain of one or more other compound or atomic abscissas.

Stimulus Ordinate

An ordinate obtained by map inversion or adaptive processing that determines the value of a stimulus as a dependent variable. Typically this is not atomic unless the hardware has special adaptive properties, or can travel backward in time.

Loopback Ordinate

An ordinate that is a direct measurement of a stimulus. Typically atomic. Do not confuse this with a stimulus ordinate. Besides the fact that it doesn’t involve the DUT, a loopback ordinate is really no different from any other direct measurement by the system.

Map Canonical Form

A map that has been isometrologically transformed so as to contain nothing but atomic ordinates and abscissas. Maps in canonical form can be directly measured by the system.

The main goal of calibration strategy is to take a map specified by the user and to transform it into canonical form so that it may be measured by the system. The resulting map data is post-processed based on the map manipulations required for the canonicalization, resulting in data that is reported to the user. Thus, the user sees a system that measures what she asked it to measure, although internally the measurements were remapped to what the machine could actually do.


You might note that compounds may be composed of “one or more” atomics. Why would anybody ever want just one atomic in the compound? The answer to that is the case of unit manipulations. Often a user will specify a measurement to be made in a certain unit: feet, volts, dBm, and so forth. The system itself measures in only a limited set of units. A simple map transformation in the calibration strategy can take a map specified in dBm and turn it into one specified in volts.
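For instance, a minimal sketch of such a single-atomic unit transformation, assuming a 50-ohm reference impedance (an assumption made here for illustration, not something the text fixes), converts a map ordinate specified in dBm into RMS volts:

```python
import numpy as np

def dbm_to_vrms(p_dbm, r_ohms=50.0):
    """Convert power in dBm to RMS volts across the given reference impedance."""
    p_watts = 1e-3 * 10.0 ** (np.asarray(p_dbm) / 10.0)
    return np.sqrt(p_watts * r_ohms)

dbm_to_vrms(0.0)    # 0 dBm into 50 ohms is about 0.224 V rms
```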

A calibration strategy schema describes, constrains, and guides how maps are manipulated to get from specified measurements, to machine level measurements, and back to specified results. With relationships and associated transformations encapsulated into the calibration strategy, trusted ordinates and abscissas are relied upon to do the work. Once calibration strategy reaches the canonical map, the system can optimize that map based on user constraints so as to make the fastest, most accurate measurement possible.

In using the word “schema,” I suggest that calibration strategy can be expressed as an XML schema. Indeed this is the case. The tree-structured nature of XML naturally serves to describe a tree of interrelationships leading from a user specified map to a decomposition into a canonical map, which guides the physical measurements that acquire sets of raw data. The schema then guides the processing, combining the raw data sets through map manipulations that lead back to the user required data.
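To illustrate the shape of such a description, here is a hedged sketch of the strategy tree of Figure 6-8 written as nested data (shown as a Python structure rather than the XML schema the text refers to; the element names are invented for illustration).

```python
# Each node names an ordinate, how its children are recombined in post
# processing, and any per-child corrections to apply (cal data, block averaging).
gain_strategy = {
    "ordinate": "gain",
    "combine": "ratio",           # collapse power-out / power-in back into gain
    "children": [
        {"ordinate": "power_in",  "apply": ["cal_data", "block_average"]},
        {"ordinate": "power_out", "apply": ["cal_data", "block_average"]},
    ],
}
```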

Figure 6-8. Calibration strategy trees: the gain ordinate decomposes into power-in and power-out ordinates, each combined with cal data and block averaging

Sufficiency of the Stimulus Response Measurement Map Stance

But is a tree structure expressive enough for calibration strategy? Test engineers are clever and want to be able to combine elementary measurement processes with complete and unrestricted freedom.


Any algebraic expression can be expressed as a tree (for example: adding, subtracting, multiplying, dividing maps, or combinations thereof). So I think I’m OK with any compound ordinate or abscissa that is related to atomics through an algebraic calculation. This covers relative measurements, differentials, and unit conversions. It covers even complex calibrations like S-parameter 12-term corrections that are needed for RF network analyzer synthetic instruments.

Inversion would violate the tree structure, but I have made a special case of inversion. With machinery to compute inverse maps, I am free to specify inverse maps with no problem. Inversions for the purpose of measuring stimulus ordinates are the most common reason for nonalgebraic map transformations.

What’s left? Orthogonalization? I can treat this like inversion. I can ask the system for orthogonal abscissas even though the system must necessarily first measure them as coupled, then transform.

Anything else? It is certainly the case that there are other calibration processes that are iterative or fundamentally procedural and require actual Turing-strength code to be written (for example, sorting and searching might be one class of calibration processes that don’t fit easily). These may be cumbersome or near impossible to cast as tree structures. Not that it can’t be done, but it wouldn’t be pretty. On the other hand, if calibration strategy handles algebraics, inversion, orthogonalization, and possibly recursion as standard manipulations, with some other contextual Turing-machine-strength semantics introduced in a limited manner, it should be enough to provide 99.999% of the expressiveness needed to reach any theoretically possible measurement algorithm without excessively gumming up the syntax of the description for everyday, real-world things.

Processing a Measurement

How does the SRMM description of a measurement get translated into an actual measurement in a synthetic instrument? This question is a variation on the oft-heard refrain: “But how do you really do a measurement?” What is the measurement algorithm?

Given the map model of measurements, all possible measurement algorithms can be described generically by one specific, unchanging, high-level algorithm. In my effort to object-orient everything, I recast the measurement algorithm, something that might be seen as inherently procedural, as a collection of algorithm components that can be dealt with independently, and reused.

Reuse is urgent because nothing is more expensive to produce, per pound, than software. Anything ensuring a software job is done just once is immensely valuable. What I am saying, therefore, is that if you want to do a stimulus response measurement map measurement (or you can recast your time-honored measurement in an SRMM framework), I can give you a way to automatically generate software to do that measurement using a generic template.

Establishing an algorithm template therefore leads us toward object-oriented (OO) techniques. You no longer have to specify the parts of the custom measurement problem that are the same as a standard measurement; all you need to specify are the differences. In the parlance of OO design, establish a base or parent class that describes what all measurement algorithms have basically in common. From that base class create child classes for specific variations. Thus, the variations are created without risk of losing the tested, reliable functionality of the parent.

The Basic Algorithm

Acquiring stimulus response measurement map measurements is, fundamentally, a process of rastering through a set of abscissas, controlling hardware, acquiring data, and filling in ordinates, forming a data map. This leads directly to an obvious algorithm to accomplish this task. This basic algorithm template for any particular SRMM measurement is as shown in Figure 6-9.

This algorithm can be coded in various ways in a measurement system. Some portions of the algorithm may be executed directly in hardware (for example, a state machine or table driving measurement hardware through the abscissas) and other portions can optionally be performed outside the system (for example, post processing in a remote host computer).
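As a rough sketch only, the overall flow of Figure 6-9 could be condensed into something like the following, where every helper name (validate, canonicalize, raster, and so on) stands in for machinery described in the sections that follow rather than any real API.

```python
def run_srmm_measurement(user_map):
    validate(user_map)                            # map validation against the schema
    canon, undo = canonicalize(user_map)          # calibration strategy
    canon = optimize(canon)                       # reorder abscissas for speed/accuracy
    hw = allocate_hardware(canon)                 # abscissa setup
    data = empty_data_structure(canon)
    try:
        for point in raster(canon.abscissas):     # abscissa sequencing
            hw.set_abscissas(point)
            for ordinate in canon.ordinates:      # ordinate measurement
                data[point][ordinate] = hw.measure(ordinate)
    finally:
        hw.release()                              # return the system to a safe state
    return post_process(data, undo)               # flatten, invert, convert units
```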

The following sections trace through each of the major steps in this generic algorithm.


Figure 6-9. Basic SRMM measurement algorithm: under a supervisor and exception handler, initialization (map validation, cal strategy, canonicalization, map optimize) is followed by abscissa setup (hardware allocation, sequence definition, table initialization), abscissa sequencing, ordinate measurement with data structure accumulation, and post processing (hardware release, map rotations, map inversions, axis flattening, unit conversions)


Initialization

In the initial phase of the measurement algorithm, the measurement system prepares for programming the hardware to do the measurement. These preparatory steps include, primarily, the following:

Map Validation

Calibration Strategy

Canonicalization

Map Optimization

The process starts with a map provided by the user. First, the system checks to be sure it has been given a valid map. To determine this, the map is validated against the DTD or schema. If the map proves to be valid, the system moves on to calibration strategy; otherwise an exception is thrown and the algorithm exits.

Calibration strategy examines the map (now known to be valid) and figures out how it can be transformed into a canonical map. The way to do this may or may not be unique. It also may be impossible. Thus, various exceptions are possible at this stage, but if the map is canonicalizable, and the system can figure out the best way to accomplish the canonicalization, the next step is to perform those map transformations. Also, the system must remember what transformations were applied, as these must be undone during post-processing.

Canonicalization will depend not only on the actual hardware available, but on soft constraints on ports and modes. Port selection involves specifying which DUT interface is active and which stimulus and response ports on that interface are to be used. The port designation is a parameter to the overall test specification provided by the TPS and must be made part of the map to permit complete canonicalization.

Constraints are specifications, also provided by the user or controlling software, that set bounds on the states the measurement may explore. This constrains the stimuli applied to the DUT or designates acceptable responses from the DUT. When these bounds are crossed, exceptions are thrown from within the algorithm. Constraint limits may be of soft or hard severity, with the severity attribute possibly changing in different parts of the algorithm. Soft limits will generate exceptions that will be caught and handled, with the overall algorithm continuing after appropriate action is taken. During canonicalization, a possible response to a soft limit might be to choose a different strategy. Hard limits throw an exception that is caught by the algorithm supervisor, causing the algorithm to terminate entirely. Limits are often soft during strategy, but hard during execution.

With the measurement map reduced to canonical form, the algorithm can now optimize it prior to loading it in hardware. Optimization can be based on different definitions of “best.” When instructed to seek the best speed, optimization is a process of sequence reordering, placing faster ordinates and abscissas into the innermost rastering loops. Certain abscissas may have large overhead times associated with switching (for example, mechanical 20 ms switches as compared to solid-state 20 ns switches). The abscissas should be reordered to put the slow abscissas on the outermost loops, with the fast abscissas within, other user-specified constraints notwithstanding.
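A tiny sketch of this speed-seeking reordering might look like the following; the settle times are invented, and the only point is that the slowest-settling abscissa ends up on the outermost raster loop.

```python
abscissas = [
    {"name": "mechanical_switch",  "settle_s": 20e-3},
    {"name": "frequency",          "settle_s": 100e-6},
    {"name": "solid_state_switch", "settle_s": 20e-9},
]

# Outermost loop first: the abscissa with the largest switching overhead
# should vary least often.
raster_order = sorted(abscissas, key=lambda a: a["settle_s"], reverse=True)
```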

When seeking the best accuracy, the system may order measurements so as to minimize errors caused by hysteresis, repeatability, and drift. Certain ordinates may be incompatible in the sense that measuring them both for a certain abscissa point may be less accurate than if they are measured independently over the domain. Thus, in the name of accuracy (but sacrificing speed), the system may actually run through the same abscissa range twice.

Abscissa Setup

The main functions of abscissa setup are:

Hardware Allocation

Sequence Definition

Table Initialization

So far, the measurement algorithm has been working with abstract measurement maps, but now the rubber meets the road, so to speak. It needs to get the map executed on hardware. The first step for making this happen is to allocate the necessary hardware. Presumably, the map has only atomic ports, modes, abscissas, and ordinates. That means the algorithm should be able to find hardware that can handle those atomics. In the case of a single tasking, one-measurement-at-a-time (OMAAT) system, this should never be a problem assuming the calibration strategy algorithm is correct. However, in a multitasking system, there may be other measurements in the scheduler that have prior dibs on some of the hardware. Thus, the hardware allocation algorithm may be simple in the OMAAT case, but in a multitasking case it may need to arbitrate contention for resources between simultaneous measurements.

The list of abscissas, ordinates, ports, and modes required in a given measurement is specified in the map. This list is ordered and optimized to specify the slowest varying to quickest varying as they are sequenced in raster order.

Table initialization is the process by which the now canonical abscissa sequence tables are calculated from (start, increment, number) specifications or are loaded from explicit lists. In the case of hardware state sequencers, table initialization also includes loading the hardware table appropriately. Table initialization also involves creating and preparing the empty data structures for storage of the ordinate measurements.
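A minimal sketch of this step, assuming NumPy, builds an abscissa table from a (start, increment, number) specification and prepares an empty ordinate array to be filled in raster order; the particular abscissas are invented.

```python
import numpy as np

def abscissa_table(start, increment, number):
    return start + increment * np.arange(number)

freq_table = abscissa_table(1.0e9, 10.0e6, 101)        # 1 GHz to 2 GHz in 10 MHz steps
power_table = np.array([-30.0, -20.0, -10.0, 0.0])      # or loaded from an explicit list

# Empty data structure for the ordinate measurements, indexed by abscissa position.
ordinate_data = np.full((len(freq_table), len(power_table)), np.nan)
```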

Abscissa Sequencing

Abscissa sequencing occurs around the process of ordinate measurement. The sequencing occurs in raster order as defined in the optimization that occurs after calibration strategy gives us a full list of atomic abscissas. Appropriate data structure indexes are maintained for the purpose of saving data in the proper spot in an array or table.

Abscissa sequencing can be implemented as a raveled list of states in a big state table, or it can be calculated on the fly by an algorithm. This is a basic implementation trade-off. Hybrid approaches can be designed; for example, the state table can have rudimentary branching or conditional execution. Asynchronous exceptions can occur that might throw us out of the sequence. When these are soft exceptions, a mechanism must be provided for interrupting the sequence temporarily and returning to the same spot to continue.


Ordinate Measurement

Ordinate measurement occurs within the process of abscissa sequencing. At this stage, all ordinates are atomic, so the assumption is that the system measures them directly. Measurements are performed for each ordinate specified in the measurement map. Ordering is also as specified in the map, as per the user’s constraints and optimization by the system. Data structures are accumulated with each ordinate measurement.

Post Processing

The first post processing task is to release allocated hardware. This should return the system to a safe and sane state, removing all stimuli from the DUT and securing the response system. After this, post processing can perform various map transform and axis flattening functions, reversing the map canonicalization transformations (rotations or inversions). In general, any axis added is now flattened. If additional calibration data structure has been provided, post processing will also combine this with measured data. Units are converted to the required final units.

Chapter 7: Signals

The design of synthetic instrumentation is a signal processing game. Mostly it is digital signal processing (DSP), but analog signal processing (ASP) is also intimately involved. Therefore, it should be no surprise that at some point I need to talk about signals: signals being synthesized and signals being analyzed by the synthetic instrument.

But before I begin to talk specifically about signals, I need to warn you that signals are only one viewpoint or stance that I can use to describe the workings of a synthetic instrument.1

When you create the design for a synthetic instrument, you have the hardware as a generic context and are given a test to accomplish. You then define the map as a fundamental step in doing that test. Yet even more fundamental than the map are the signals produced and measured by the map. This is what I call the signal stance description of a measurement. The map stance sits above the signal stance viewpoint. The hardware stance is below the signals, and a test stance is above the map. I do speak of “above” and “below” in a hierarchy, but the advantage of the word stance as compared to “level” is that it is less evocative of a hierarchy. Stances don’t cleanly sort into levels. The signal stance isn’t really “lower” than the map stance. It’s just different. (See Figure 7-1.)

The signal stance viewpoint is better known than the map stance, and some people may even think that signals are the only way to look at things between the test and the hardware. I disagree. I think this area needs to be split into maps and signals.

1 For a more complete discussion of conceptual stances for understanding the world, you might turn to Dennett.[B3]


In a sense, signals represent the machine/assembly code of measurement, and maps represent an applications generator (third or fourth generation or 4GL) language. Signals are one viewpoint that you definitely need to worry about, but quite often, you can profit by thinking with other stances.

Continuing the computer language metaphor, some programmers may feel that every program should be written in assembler, but most folks these days know that higher-level languages increase productivity immensely.

Conversely, there is a danger in straying too high in the conceptual hierarchy, focusing exclusively on the test, disregarding the details of the measurement. Possibly, this is where the greatest danger lies. Thinking exclusively about the test without breaking it down into measurements leads back to the old way to develop ATE.

With that warning in mind, let’s talk about signals in their proper context relative to synthetic instrumentation.

Kinds of Signals

Signals are electrical voltages that either may be voltages directly of interest, or may be analog voltages that represent something else. For example, the voltage across a battery is directly of interest as a voltage, whereas, in contrast, the voltage from a thermocouple is an analog of temperature: it’s a voltage that represents (is analogous to) something else.

Figure 7-1. Fuzzy hierarchy of “stances”: test, measurement, map, signal, driver/hardware


I will restrict the discussion to voltage. Any physical parameter can be used as a signal analog. Thus, it is certainly possible to talk about current or other kinds of units as the fundamental parameter of interest, but in most systems voltage is what people use. I’ll stick with that.

Coding, Decoding, and Measuring the Signal Hierarchy

No differently from anything else that mankind manipulates, electrical signals can carry meaning in complex and multihierarchical manners. Very sophisticated advanced mappings, modulations, and layered codings are possible. What starts as a simple analog signal—a voltage on a wire—can be manipulated and structured so as to represent intricate information.

A simple example of coding is the digital bit. In some systems, anything higher than 2.5 volts represents the binary digit “1”; anything less than that represents a “0” bit. Thus, a signal may be seen as just a voltage or as a coded bit, but there is no reason to stop there. Bits can be grouped into bytes, words, or other data structures. There really is a hierarchy of possible views of a signal. Moreover, there is more than one hierarchy. For example, analog video is “just” a voltage on a wire, but it also represents layers of meaning: lines, fields, frames, luminance, and chrominance coded in a sophisticated analog format.

I can go on and on with examples: cell phone signals, GPS waveforms, JTIDS, 802.11b, Bluetooth. The list is long and diverse.

All these diverse hierarchies are rooted in the basic idea of the voltage on a wire. As such, any and all of these possible sophisticated signals are amenable to analysis, synthesis, and thereby measurement by a synthetic instrument using the CCC architecture that generically digitizes that voltage.

As a consequence of the typically hierarchical nature of signals, when someone talks about measuring a signal you should consider what relevant aspect or aspects of signal meaning, modulation, or coding fit in the context of the measurement.

A simple example of this consideration is a bit error rate (BER) measurement. In a BER test, the stimulus is a voltage on a wire, but that same stimulus may also be viewed as a signal bearing coded digital information. This stimulus is passed through a DUT (possibly a modem, possibly a communications channel, or possibly a combination of the two). The response is a voltage, but may also be seen as a signal bearing coded digital information. The desired ordinate in a BER test is bit error rate, defined as the ratio of incorrect information bits in the response divided by total bits. Different abscissas are used, but the most typical is stimulus power expressed as Eb/N0.
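A minimal sketch of the BER ordinate itself, once the response has been decoded to bits, is just a comparison of the known stimulus bits with the received bits; the bit patterns below are invented.

```python
def bit_error_rate(tx_bits, rx_bits):
    """Ratio of incorrect information bits in the response to total bits sent."""
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

bit_error_rate([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])    # 2 errors in 6 bits, about 0.33
```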

Clearly a BER measurement depends on the coded meaning of the signal, as the map is expressed in terms of bit error ratios and energy per bit that only make sense to talk about when viewing the signal as something representing specific coded digital information. The distortions and random noise that cause the measurement errors in the coded information may or may not themselves be visible to the measurement system. In any event, characterizing these low-level signal adversities is not the objective of the measurement; the objective is measuring errors in the coded information. Yes, it is true that distortions and random noise will disrupt the voltage, and in turn will disrupt the information bearing capacity of the signal and cause decoding errors. So, in a sense, BER measures something about the voltage on a wire, but, more correctly, at a higher level, it measures something about how accurately information is transmitted through the DUT, independent of the underlying code. In fact, the attributes of the underlying coding are abscissas, and the measurement of BER is an ordinate, orthogonal to the abscissas.

Therefore, when decoding the signal before measuring some high level aspect, the measurement is not affected so long as the decoding is ideal (or if not ideal, the actuality of the decoding is parameterized, possibly as one of the abscissas). In fact, one must decode the signal in order to make measurements of the higher-level data since the information sought only exists at the higher level. Similarly, on the stimulus side, levels of meaning are encoded into a sophisticated synthesized hierarchical signal.

Decoding Method Abscissas

When layers are stripped away from a signal in order to measure something about a high level aspect of its coding, you always want to do that ideally. That is to say, each decoding must be perfect in the sense that it doesn’t affect the measurement you want to make at the level you are interested in.


For example, if the DUT is a device that produces a digital data stream, serially coded with RS-232 on a wire, the object of the measurement is to get that data. It doesn’t matter how much noise there is on the RS-232 signal, or what the exact voltage levels are, so long as those disturbances cause no errors in the data. For instance, a digital thermometer measuring temperature may return the temperature coded with RS-232. The ordinate is the temperature data. Any decoding of the signal is merely a means of getting to the ordinate.

In contrast, consider a BER test on a communications channel with no demodulator or decoder as part of the DUT. What the measurement system receives as a response is a coded voltage waveform along with junk, such as distortions and random noise that disrupt the voltage. The junk will disrupt the information bearing capacity of the signal to the extent that it would cause decoding errors. The object is to measure those errors quantitatively with a BER ordinate, but the signal isn’t decoded yet.

In situations like these, the measurement system needs to decode the response in order to make the measurement that results in an ordinate. The way it chooses to decode the response is itself an abscissa.

Direct Real Analog Baseband Signals

As I have explained, ordinates are often concerned with measuring some quantitative aspect of a signal at some level of description. Let’s start with the simplest of these: the sampled value of the signal voltage itself. Say, for example, there is a steady power supply voltage to be measured within the DUT. The signal in this case is a sample of the voltage digitized as a real number with units of volts.

Signals that fit well with this sort of direct voltage digitizing scheme are often called baseband signals, which implies meaningful DC content. I don’t think it’s correct to call them “DC” signals since they can, in fact, vary, although DC-coupled signals would make sense to someone who knew how to run an oscilloscope.

But more significant than the DC content aspect of the baseband signals is the fact that they are a direct, linear analog of a real number that they represent, and as such a synthetic instrument may directly digitize them into a quantized real number that captures their meaning (to some quantization precision) and represents them with no additional transformations, decoding, or processing.

Figure 7-2. Analog and digital codings: a type R thermocouple (5 µV/°C) encodes 20 °C as 100 µV; after a gain-of-1000 amplifier, the analog code of 20 °C is 100 mV; an 8-bit A/D converter with 256 mV full scale then represents 20 °C as the digital code 0x64

This may seem an obvious point, but when I talk about other sorts of hierarchical signals, things may not seem so clear. Therefore, I will be super specific and call these signals direct real analog baseband signals. This name gives us the idea that this is a direct, linear analog representation of a real number by an analog voltage with meaningful DC content. These signals are the bottom of the signal coding hierarchy.

Despite the unsophisticated direct analog coding, direct real analog baseband signals are routinely used to represent all sorts of things. They may be sensor outputs, like temperature or pressure, or they may be simplified or linear mapped aspects of other sorts of signals, like a voltage that represents power or current. They also might be a coded signal, intentionally treated as simple analog for measurement purposes.



Digital Coded Baseband

Digital coded baseband signals are a subset of analog signals that use ranges of voltages to represent discrete numbers. Routinely, a binary digit 1 or 0 is represented by two voltage ranges. In rare cases, more than two ranges are used, resulting in a wider range of integers that can be represented. This is similar to the way that more general analog signals use voltage in a continuous mapping to represent a precise real number. Digital signals, in contrast, only attempt to represent a coarse integer, the coarsest and most common case being just a zero or a one.
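As a trivial sketch of this kind of coding, a stream of sampled voltages collapses to the coarse integers it represents with nothing more than a threshold test (using the 2.5-volt convention mentioned earlier; the sample values are invented):

```python
samples = [0.1, 3.3, 2.9, 0.4, 5.0]              # hypothetical sampled voltages
bits = [1 if v > 2.5 else 0 for v in samples]    # -> [0, 1, 1, 0, 1]
```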

The contrast affects the way the signals are approached with synthetic instrumentation. With analog signals it’s normally important to measure them to the best accuracy possible so as to get the most accurate and complete knowledge of the precise real number they represent. With digital signals, however, only the coarse integer they represent carries meaning. This limits the accuracy needed in the measurement.

Necessarily, every digital coded signal is also an analog signal, or may be viewed as an analog signal by someone interested in making measurements. Shifting the point of view back to the larger, parent class of analog signals, a system might precisely measure the voltage of a digital coded signal to an accuracy beyond what it needs to determine the coded integer. The ability to shift the viewpoint back through parent classes of signals represents a general principle of synthetic instrumentation.

Analog Coded Baseband

Direct real analog baseband signals, as I have explained, represent a real number ordinate. But there is no limit to the creativity and cleverness of man and therefore, analog baseband signals have been used in myriad ways to represent much more sophisticated information through the use of coding.

A good example of a sophisticated analog coding is NTSC analog encoded video. NTSC video can be thought of as baseband analog, but it doesn’t represent a single real number, but rather, it represents a two-dimensional image composed of three-dimensional color space pixels. This representation evolves continuously through time, but represents discrete samples of the image.


Figure 7-3. One scan line of NTSC analog video: amplitude versus time (63.5 µs per line), showing sync, front porch, color burst, back porch, and the image scan line between black and white levels

Signals of this sort present a unique challenge as well as a valuable opportunity to synthetic instruments. The challenge is to use generic hardware (for example, a simple CCC architecture) and appropriate software to capture and characterize a very sophisticated and specific analog coded signal. The opportunity is to demonstrate the cost benefit of the synthetic approach by eliminating the need for a special-purpose instrument to measure the specific coding.

Bandwidth

I have discussed several different types of signals, all rooted on analog voltages. Can any signal be analyzed or synthesized with a CCC architecture synthetic instrument?

The answer to this question comes down to the idea of bandwidth and information capacity. Nyquist, Shannon, and others have established theories that show how electrical signals have limited information bearing content. As a result of this limit, it is always possible to get as close as you please to extracting every last drop of information from a signal. Digital signal processing[B10] and communications theory texts[B1] go into these ideas in detail so I don’t need to belabor them here. The central point is this: these theories state that, given sufficient bandwidth and precision in a codec, it’s always possible to synthesize or analyze any given voltage signal, extracting all the information, regardless of how sophisticated the coding may be.


The sampling theorem from communications theory states that it is always possible to completely recreate a continuous analog waveform from exact, discrete-time samples of that waveform, so long as the frequency of those time samples is at least twice as high as the highest frequency in the waveform.

Figure 7-4. The sampling theorem: the signal spectrum (bandwidth B), the sampler spectrum, the spectrum after sampling, the ideal filter passband, and the reconstructed waveform spectrum


Thus, if some signal is band-limited and has no energy above some frequency called its bandwidth (represented with a capital B), then time samples of that signal taken at a frequency of twice B fully characterize the signal. Another way to say this is that if the codec in a synthetic instrument can digitize voltages fast enough, it can analyze or synthesize anything. Waveforms with frequency content up to B must be sampled at least at a 2B rate. The 2B sampling rate is often called the Nyquist rate.

It should be noted that all practical signals have a finite bandwidth. You can always find some maximum frequency, B, above which the signal has negligible power with respect to any fixed fidelity criterion. Therefore, in a sense, the sampling theorem is a “proof” of the validity of synthetic instrumentation. It states a clear and easy to apply criterion for deciding what sort of CCC system you need to completely handle any given real-world voltage signal. This criterion works particularly well with direct baseband and analog coded signals, but it applies to all signals in the hierarchy of possible coding, since all signals are, fundamentally, voltages on a wire with some nominal limit on their bandwidth.

But the greatest strength of CCC synthetic instruments that are based on the sampling theorem—their broad applicability to all voltage signals—is at the same time their greatest weakness.

Why? Because in treating all signals as voltages to be digitized, a measurement system is forced to process far more information than really is of interest in the signal.

For example, with digital coded signals, many sorts of measurements may not be interested in anything more than the low rate stream of 1’s and 0’s that are represented by the voltage. The output of a modem may be a TTL digital voltage. This signal may have a bandwidth in excess of 100 MHz when viewed as a signal to be Nyquist rate digitized in its fullest representation, but may only produce data in short packets at a low average rate in the kbit/s range. The detailed waveform out of the modem may not be of interest, but the data is. Still, in order to get this data using a Nyquist rate CCC synthetic instrument, the data needs to be distilled from thousands of uninteresting samples of the overall waveform.

In general, whenever measurements need to be made of higher, more elaborate levels in the signal coding hierarchy, processing the lower levels represents overhead. When a synthetic instrument is designed to work at the lowest level, it becomes less efficient the more elaborate the coding is. If the instrument could somehow get at the higher level coding directly, it would be more efficient.

An obvious example of improving efficiency by going directly to the higher level of coding is easy to see in digital cases. Omitting or bypassing the codec of a CCC instrument, and routing the digital data through conditioning and then directly into the controller, avoids the need to digitize the detailed waveform. The result is the bits themselves. When the bits are the object of the measurement, this is far more efficient.

In summary, Nyquist rate sampling is a powerful idea. It works particularly well with direct analog and analog coded signals, as I have discussed. But, as I have just explained, there are cases where it misses the forest for the trees, as it digitizes every “tree” when all you want is the general shape of the forest.

Another large class of signals that are not amenable to direct Nyquist digitization is bandpass signals.

Bandpass Signals

The idea of a bandpass signal is an intellectual child of the invention of radio. In radio, information initially represented as a direct analog voltage signal with some low bandwidth is translated or modulated onto a carrier wave at some much higher frequency. This higher frequency carries the analog information much in the same way DC carries direct analog information.

This simple idea of modulation intertwines with the frequency domain in signal analysis, which is, in turn, based on the theory of the Fourier series and Fourier transform. Frequency domain analysis is probably one of the most powerful signal processing ideas ever invented. This viewpoint allows us to understand bandpass signals in a rigorous mathematical way.

The scope of this book doesn’t allow me to fully describe all the aspects of bandpass signals or frequency domain analysis. The reader may refer to [B1] for a more complete exposition; however, I need to review some basics that are particularly relevant to synthetic instrumentation.


Of specific relevance is the idea of modulation. In the same sort of way that the voltage on a wire can represent or analog some physical parameter, aspects of a carrier wave can be modulated to represent, in a similar analog sense, some physical parameter. There is a notable difference, however. With a carrier wave, it is possible to modulate two separate aspects of the carrier simultaneously and independently: amplitude and phase (or frequency). But other than that difference, the principle is identical.


Modulated signals are often called bandpass signals because they occupy only a narrow band of frequency spectrum and may be passed by a resonant circuit or filter. The carrier, as I have already discussed, is the high frequency wave that is modulated. The modulation on that wave is called the envelope.

Modulation is a basic analog signal coding technique. It is the first step up the hierarchical ladder for many sophisticated signals. As such, synthetic instruments are often asked to do measurements of the modulation, and have no particular interest in the carrier wave. When measuring cellular phone signals, radar signals, GPS signals, or most any kind of RF signal, the carrier is not something that generally matters more than as a means for accessing the envelope.

Figure 7-5. Amplitude and frequency modulation


Therefore, the basic CCC architecture for a synthetic instrument may be seen as inefficient. It is normally the case that the bandwidth of the modulation is less than 10% of the carrier frequency. Quite often, this ratio is even smaller. Modulation bandwidths of 0.1% or smaller (relative to the carrier) aren’t uncommon.

If you try to use direct Nyquist digitization to acquire a bandpass signal (or synthesize one), you end up using a sampling rate that is more than twice the carrier frequency, even though the carrier is not typically of interest. This is a waste. It would be better if bandpass signals could be acquired more efficiently. For example, it would be better if they could be digitized at some rate proportional to the modulation bandwidth rather than the carrier frequency. Is such a technique possible?

Indeed, it is possible. In fact, there are several techniques for doing exactly this. In a sense, all these techniques amount to the same thing. As response techniques, they strip away the carrier and detect only the modulation; they demodulate the signal. In a more general sense, these techniques represent decoding of an analog coded signal. As with any coded signal, there is the option of stripping away lower levels of coding if the object of the measurement is some aspect of the higher-level representation.

Similarly, when applied to stimulus generation, analogous techniques generate the modulation first, and then encode that upon a carrier. Keep in mind that most techniques can be run either way, with exceptions that I will try to note.

Bandpass Sampling

Sometimes the sampling theorem is stated in such a way that we are led to think the Nyquist rate is some hard limit, like the speed of light. But this isn’t the case. It’s not that frequencies higher than half the sampling rate aren’t allowed, or aren’t possible. Rather, it’s that frequencies above this point are aliased down to apparently lower frequencies.

The way this aliasing happens is quite well understood and predictable. In fact, it is so predictable that it can be used as a method for stripping the carrier from a high frequency bandpass signal, digitizing just the envelope. This method is called bandpass sampling.
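A small numeric illustration of that predictability, with invented numbers: a carrier well above half the sample rate lands at a known, much lower apparent frequency after sampling.

```python
fs = 100e6                       # sample rate: 100 MS/s
fc = 930e6                       # bandpass carrier: 930 MHz

k = round(fc / fs)               # nearest harmonic of the sampling rate
f_alias = abs(fc - k * fs)       # apparent carrier after sampling: 30 MHz (< fs/2)
```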


Sampling is a form of multiplication of signals, and the reason the bandpass sampling technique works is a result of a mathematical property of multiplying sine wave signals. Any time you multiply or “mix” two sine waves, the result will be a shifting of signal frequencies from the original frequencies to the sum and difference. This mixing process is sometimes called heterodyning.

Figure 7-6. Mixing: a multiplier (mixer) forms cos ω1t · cos ω2t = 1/2 [cos(ω1 + ω2)t + cos(ω1 − ω2)t]

When sampling or mixing a signal in practice, the signal is never multiplied by a pure sine wave. But by virtue of the idea of the Fourier series [B5], it is possible to think of sampling as multiplying by the sum of a series of sine waves, each at a harmonic of the sampling rate.

Figure 7-7 illustrates how this works. Start with a bandpass signal at carrier frequency Fc, with spectrum as shown in (A). The sampling rate is Fs and is illustrated in (B), with harmonics of Fs shown as impulses out at Fs multiples: Fs, 2Fs, 3Fs….

When the bandpass signal is sampled, it is aliased down to an apparent baseband spectrum as illustrated in Figure 7-7. The result is exactly the same bandpass spectrum, and the same envelope, but the carrier frequency is now much, much lower. It’s exactly as if the spectrum has been slid down, or shifted, in frequency.

When using the bandpass sampling technique, care must be taken in the choice of sampling rate relative to the carrier. If, for example, you choose a different sampling rate as in (D), the resulting alias at baseband folds over and results in a spectrum (and envelope) that is now irreparably altered as shown in (E).



The folding over has an interesting interpretation from a signal coding viewpoint. You may recall that a bandpass signal encodes two independent analogs onto the carrier (one as carrier amplitude, one as carrier phase), whereas a baseband voltage can only encode one analog (the voltage). When the bandpass sampling process shifts the bandpass signal down to baseband, the amplitude information and phase information get folded together into the voltage waveform. To avoid this folding, you need to keep the signal away from DC, still keeping it on a carrier so that it can still have amplitude and phase as separate things.


Figure 7-7. Bandpass sampling example


When a mixer shifts a signal down to some lower frequency, but still above DC, the new frequency is called an intermediate frequency, or IF. Given the constraints that the bandpass signal cannot "touch" DC for fear of being folded over, and cannot contain content above half the sampling rate, Figure 7-8 shows graphically that the IF giving the most room is 1/4 the sampling rate.

Figure 7-8. IF at 1/4 the sampling rate (Fs = 4 FIF)

Once more, you see the factor of two represented by the amplitude and phase pair encoded into a bandpass signal. In a sense there are two channels of information multiplexed onto one stream of samples. Furthermore, another (perhaps simpler) way to look at it is that bandpass signals have double-sided bandwidth, so you need twice the sample rate relative to baseband single-sided bandwidth.

In practice, there may be difficulty getting a codec to do this bandpass sampling magic unless a very high bandwidth sampler is used. Fortunately, as of this writing, commercial samplers with bandwidths up to 100 GHz are becoming available on the market. These devices permit the single CCC architecture to digitize baseband signals up to half the continuous sampling rate: as of this writing, up to 1–2 GHz for some devices. The same sampler can be used to capture bandpass signals up to the bandwidth of the sampler (100 GHz) using bandpass sampling.
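As a sketch of how such a rate might be chosen in practice, the fragment below (my own, with a hypothetical helper name and made-up numbers) picks the fastest rate under a sampler's limit that places a given carrier at an odd multiple of Fs/4, so the carrier aliases down to exactly Fs/4.

    def fs_for_quarter_rate_if(fc, fs_max):
        """Largest Fs <= fs_max with fc an odd multiple of Fs/4, so fc aliases to Fs/4."""
        n = 1
        while 4.0 * fc / n > fs_max:
            n += 2                      # keep fc = n * (Fs/4) with n odd
        return 4.0 * fc / n

    fs = fs_for_quarter_rate_if(fc=2.4e9, fs_max=500e6)
    print(fs / 1e6, "MHz")              # ~457.1 MHz; the 2.4 GHz carrier aliases to Fs/4 ~ 114.3 MHz

    # The signal's double-sided bandwidth must still fit within Fs/2, and for some odd
    # multiples the aliased spectrum comes out inverted, which the DSP must undo.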

Image Rejection

An implicit assumption one makes in the bandpass sampling process applied to measuring a bandpass signal response is that there is only one narrowband signal in the response being captured. Although this is the case for a large number of real, practical situations, it is not the case universally.

For example, if the DUT is a filter and the object is to measure its frequency response, a single sine wave is the stimulus, and the system measures a single sine wave response. In this case, there is only one narrowband signal in the response. On the other hand, if the object is to measure the frequency response of an amplifier, although the major component of the response will be a single narrowband signal, there will also be noise and harmonic distortion at other frequencies. In a more extreme case, consider that the DUT is an antenna on an open range and the system is measuring an ordinate that is the response to an incident, open-air stimulus. When measuring the response from an antenna, it will contain the desired response component, but it may also contain any number of interfering signals picked up from the aether.

If bandpass sampling is used to measure a response that contains more than just one bandpass signal, every signal will fold into the response. The spurious responses that fold on top of the desired responses are called images and they can represent a serious problem in certain measurement scenarios when bandpass sampling is used.

Interference and Images

Any radio engineer is familiar with the problem of images and knows the way to solve it is to use something called a preselector filter that rejects any energy at image frequencies, allowing only the desired signal to pass. A more sophisticated, powerful solution, also well known by radio engineers, is a proper superheterodyne down-conversion system in which, through a cascade of preselectors, mixing stages, and IF filters, a desired bandpass signal is converted to IF without corruption by images and other spurious responses.

[Figure: a preselector filter passes the desired signal at FRF to a mixer driven by a local oscillator at FLO; a bandpass filter selects FIF = FLO − FRF, while the preselector rejects the image at FRF + 2FIF.]

Figure 7-9. Preselector signal conditioner


In a synthetic instrument, a simple preselector or a full-blown superheterodyne down-converter prior to the codec (or up-converter after the codec in stimulus) can be viewed as just a kind of signal conditioner. Alternatively, it can even be viewed as a form of DUT interface or switch matrix, since the preselector selects the response signal of interest from several that are frequency multiplexed at the output of the DUT.

I/Q Sampling

An alternative to bandpass or IF sampling that can address the factor of two represented by amplitude and phase is the so-called I/Q technique, which uses two independent channels running in phase quadrature in order to retain both phase and amplitude information. This approach works even when the sampling mixes the bandpass signal down to baseband.

The mathematical underpinning for I/Q techniques is the use of a complex number to represent the envelope of a modulated signal. In polar coordinates, the magnitude of the complex number represents the amplitude of the modulation, and the argument of the complex number represents the phase modulation. In rectangular coordinates, so-called in-phase (I) and quadrature (Q) components express the same information on a different basis. Hence, these techniques are called I/Q detection, I/Q decoding, or I/Q demodulation (or modulation/coding for stimulus).
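A brief numerical sketch (mine, with made-up modulation) shows the two representations carrying the same two analogs.

    import numpy as np

    fs, fc = 1e6, 50e3                           # sample rate and carrier, arbitrary for illustration
    t = np.arange(0, 2e-3, 1 / fs)
    A = 1 + 0.5 * np.cos(2 * np.pi * 1e3 * t)    # amplitude modulation
    phi = 0.8 * np.sin(2 * np.pi * 2e3 * t)      # phase modulation

    env = A * np.exp(1j * phi)                   # complex envelope, polar form
    I, Q = env.real, env.imag                    # rectangular form: I = A cos(phi), Q = A sin(phi)

    # The same bandpass signal either way: A*cos(2*pi*fc*t + phi) = I*cos(...) - Q*sin(...)
    x = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)

    # Round trip: amplitude and phase are recovered from I and Q.
    assert np.allclose(np.hypot(I, Q), A) and np.allclose(np.arctan2(Q, I), phi)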

The relative merits of I/Q techniques versus IF or bandpass sampling techniques are beyond the scope of this book. However, there is a relevant point with respect to hardware implementations of I/Q detection or modulation in a synthetic measurement system.

An I/Q codec at baseband requires two A/D or D/A channels as compared to the single channel found in the CCC synthetic instrument hardware I have described. In that sense, I/Q techniques implemented in hardware would alter the generic architecture I have been discussing. Whether this alteration is merited is another question. Hardware I/Q detection does not make the hardware more specific, but in fact makes it more general, giving it the capability to down-convert modulated signals directly to baseband. In that sense, I/Q detection is in line with the spirit of generic measurement hardware. The downside is that the enhanced


generality is paid for with increased complexity and redundancy. I/Q detection requires a dual-channel codec. The second channel represents a redundancy that is not present in IF techniques.

Software I/Q detection is another matter. At some level, the convenience of a complex number representation of a signal is merited for almost any measurement involving bandpass signals. The mathematical processing advantages of this representation are many. For that reason, I would expect DSP in the controller to work with I/Q representations, perhaps exclusively for internal processing.

Broadband Periodic Signals

The bandpass sampling technique described in the last section shows how a bandpass signal at some high carrier frequency can have the carrier stripped off. In this way, a CCC synthetic instrument can measure bandpass signal envelopes without needing to sample at twice the frequency of the carrier.

But what about broadband signals, like digital pulses, that are not modulated on a carrier? In modern, high-speed digital circuits, pulses with significant energy at frequencies in excess of 10 GHz are not uncommon. These signals are not bandpass. Their energy is spread across the entire wide bandwidth. Can these broadband signals be digitized without resorting to 20-GHz digitizers? Yes, in certain circumstances, they can.

For many years now, oscilloscopes have used time equivalent sampling techniques to capture waveforms with far wider bandwidth than the maximum sampling rate realized in the scope. These techniques require that the signal to be digitized is periodic. That is to say, the signal must repeat with some pulse repetition frequency or PRF.

When a signal is periodic, it may be decomposed into a sum of sine waves at discrete frequencies, all multiples of the PRF. This decomposition is the signal's Fourier series. The Fourier series coefficients give the amplitude and phase of each harmonic.

As I explained previously, periodic sampling is also related to the Fourier series. Sampling is like multiplying by a sum of harmonics that are multiples of the sampling rate.


Combining these two ideas, by proper choice of the sampling frequency relative to the PRF, a time equivalent sampler system can sample and time expand a broadband periodic signal. The effect is not unlike a vernier scale.

Figure 7-10. Time equivalent sampling spectra

[Figure: time and frequency views of a periodic pulse train with period PRI = 1/PRF and pulse width τ. The sample period is slightly longer than the PRI, so successive samples (0, 1, 2, …, 11) each fall a little later in the cycle and walk through the waveform; in the frequency domain the signal consists of harmonics of the PRF.]

Time equivalent sampling preserves the shape of the waveform, but translates the speed to a much lower rate. In that sense, time equivalent sampling is a similar process to bandpass sampling, which preserves the shape of the spectrum, but translates it to a much lower frequency carrier.

Another way to look at time equivalent sampling is as a stroboscopic technique, sampling at only one point in a waveform cycle, but slowly moving that sample point in time so as to trace out the whole waveform. When thought of this way, the technique is often called a sample delay walk technique.
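The following sketch (my own, with invented numbers) illustrates the vernier idea: a 100 ns-wide periodic pulse is sampled roughly once per period, with the sample period deliberately 1 ns longer than the PRI, so each sample falls slightly later in the cycle.

    import numpy as np

    def pulse_train(t, pri, width):
        """A made-up periodic pulse: a raised-cosine bump of the given width, repeating every PRI."""
        tm = t % pri
        return np.where(tm < width, 0.5 - 0.5 * np.cos(2 * np.pi * tm / width), 0.0)

    pri, width = 1e-6, 100e-9      # 1 MHz PRF, 100 ns pulse
    delta = 1e-9                   # sample period exceeds the PRI by 1 ns
    n = int(round(pri / delta))    # samples needed to walk once through the whole period

    k = np.arange(n)
    samples = pulse_train(k * (pri + delta), pri, width)   # actual acquisition instants
    t_equiv = k * delta                                    # equivalent, time-expanded abscissa

    # Plotting 'samples' against 't_equiv' traces the 100 ns pulse shape even though the
    # sampler only runs at about 1 MHz; the waveform has been time expanded by PRI/delta.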



Table 7-1. Sampling techniques

Signal Type    Method
Baseband       Direct digitization
Bandpass       Bandpass sampling / harmonic down-conversion; superheterodyne down-conversion; I/Q sampling
Periodic       Time equivalent sampling; delay walk


Chapter 8: Calibration and Accuracy

Calibration and metrology is an immense topic with numerous aspects. Calibration is particularly vital with respect to synthetic instruments because the SI is the new kid on the block and will be scrutinized carefully by expert and tyro test engineers alike. During this scrutiny, proponents of the new approach want to make sure the evaluations are fair, unbiased, and based on a valid scientific methodology.

Metrology for Marketers and Managers

When somebody decides they want to buy a measurement instrument, the only thing they may know for sure right off the bat is that they want to pay as little as possible for it. But if this were the only specification for a measurement instrument, it's doubtful anything useful would be purchased.

Sometimes instrument shoppers will specify that the instrument they want to buy should be able to do whatever it does exactly like some old instrument they currently have. But assuredly, if "doing exactly what X does" is the only performance criterion, the system that best performs just like X will be X itself. That's not likely to be what they wanted, or they wouldn't be shopping for a replacement for X.

Intelligent people shopping for measurement instruments will think about the measurements the old instruments made and make a list of the measurements they still want. They will present this list of their requirements to various vendors. In this way, they allow suppliers freedom to offer them something other than what they already have. But they don't want to give the vendors too much freedom, so they need to specify the accuracy they need for the measurements. Customarily they get the


accuracy and measurement capabilities from the specifications of X. If the new system meets these abstract specifications gleaned from the specifications of their old system X, they figure they will get something at least as good as X. Sadly, the concepts of measurement and accuracy are often misunderstood, misstated, obfuscated, lied about, or otherwise disguised to the extent that instruments supposedly meeting them end up useless anyway.

For that reason, the most intelligent people will take the time to learn a little about metrology before they go about specifying a measurement system. I feel it’s worthwhile for everyone buying instruments (especially with someone else’s money) to at least know these basics.

It’s beyond the scope of this book to discuss metrology and the scientific method in due detail, but I will include some of the basics in the hopes that it will help some readers cross the line into the smartest group.

Measurand

In the science of metrology, the measurand is the thing you are trying to measure.

Some might want to gild the lily in the sense of Plato's allegory of the cave and talk about "a true value of the measurand" as something separate, ideal, and unattainable relative to the needle deflections we observe with our human senses. However, metrologists think this "true measurand" terminology is redundant at best. I think it should be sufficient to merely talk about the value of the measurand in the context of a particular measurement method. It's OK to leave out the "true" adjective, ontological subtleties notwithstanding. NIST and other metrology authorities agree with this approach.

The central point is that a measurand is specified in the context of a measurement method. That's very important. Saying what you want to measure necessarily involves saying how you are going to measure it. It can be meaningless to talk about some parameter you want to measure without specifying a method.1

1 Not to say that things we have no method for measuring are meaningless, just that it may be meaningless to talk about such things in any quantitative way.


Unfortunately, it's sometimes difficult to come up with something that people will agree constitutes a valid description of how a measurement is made without including references to measurement-specific hardware. Obviously, this will be a problem when specifying synthetic instruments, instruments that are implemented by software using measurement-generic, nonmeasurement-specific hardware.

We, as synthetic instrument designers, want to define measurand this way: Whatever the map measures, that is the measurand. All the context you should need about how a measurement is made is the precise stimulus response measurement map definition of the map in question. In this book, I give you a way to precisely express that in XML. You don't need to know anything about the hardware. The answer to the question, "How is this measurand measured?" should be, "It's measured with this SRMM description specification as expressed in this XML document using the resources of the available hardware."

This answer from a synthetic instrument designer means that in order to decide what it is you are measuring, what you really need to do is exactly specify the maps. You need to decide what the abscissas are, what the ordinates are, how they are sampled, what the calibration strategy is, define any compound ordinates, stipulate the axis ordering, specify what the post processing is, and so on. As you do this, it should become clear that you are really saying "how" the measurement is done, except that the "how" is now broken up and refactored in a way that may seem unfamiliar. It's object-oriented, not procedural, and that confuses people.

Some may object that fundamentally, someplace, you need to say how you are measuring something the old-fashioned procedural way. For example, someplace you will inescapably get down to an ordinate that measures some fundamental quantity, like mass, time, or voltage. This ordinate will be allocated to a specific measurement asset, loaded a certain way, and applied to the DUT in the context of the abscissa. When you zoom in this close, you will be forced out of the abstraction of the measurement map and have to say how you are measuring that fundamental measurand in a procedural way.

That may be so, but it doesn't show that there was anything left out in specifying the measurement by defining the measurand as a map at the highest object-oriented level. Some details are implicit rather than


explicit, yet they are still in there. Sometimes it's necessary to zoom in, but you do that only when you really need to.

Accuracy and Precision

Ironically, the word accuracy as used in common technical discussion is often used inaccurately from a metrology perspective. In fact, the misuse is so widespread and pervasive that it has become the norm in informal technical parlance. Since proper usage of "accuracy" is now in the minority, any attempt to set the record straight may be viewed as nit-picking. People will say, "of course you know what I mean," referring to their use of the word "accuracy." But do we know what they mean?

According to CIPM, NIST, and other metrology standard setters, accuracy is the "closeness of the agreement between the result of a measurement and the value of the measurand." This is a qualitative concept. Accuracy has nothing to do with numbers. It's OK to say: "This instrument is very accurate." Or, you can say: "This instrument is more accurate than that instrument." But when someone says: "The accuracy of this instrument is 0.001 units," what they mean is unclear.

The word precision, also a qualitative concept, is defined by ISO 3534-1 as "the closeness of agreement between independent test results obtained under stipulated conditions." This standard sees the concept of precision as encompassing both repeatability and reproducibility (see subsections D.1.1.2 and D.1.1.3) since it defines repeatability as "precision under repeatability conditions," and reproducibility as "precision under reproducibility conditions." Nevertheless, precision is often taken to mean simply repeatability.


Figure 8-1. Precision versus accuracy


Both precision and accuracy are qualitative terms. Therefore, to talk quantitatively about the qualitative term accuracy, or, for that matter, precision, repeatability, reproducibility, variability, and uncertainty, one needs to use statistical theory. There really is no other choice. Only with a rigorous statistical framework supporting you can you make precise quantitative statements like "the standard uncertainty of this measurement is 0.001 units" and have your exact meaning be perfectly clear.

However, if the speaker doesn’t know what standard uncertainty is, there may be a fresh communication problem. You can’t just plug “standard uncertainty” in where you use “accuracy” and expect everything to be hunky-dory. The two terms are very different. For example, increased standard uncertainty means decreased accuracy, so it should be obvious that the two are not interchangeable.

The phrase standard uncertainty has precise quantitative implications. There really is no choice but to learn what the term means if you want to have any chance of using it correctly.

NIST has an excellent reference document online (http://physics.nist.gov/ccu/uncertainty) that quite nicely describes the precise, quantitative meaning of standard uncertainty along with related terms. Anyone interested in making quantitative statements about accuracy should study it as soon as possible.

Test versus Measurement

Test and measurement are two words that are often used interchangeably, but they actually are two very different things.

Measurement describes the act of acquiring a numeric value that represents quantitatively some physical aspect of a device. Perhaps it's a measurement of length, or temperature, or mass, or voltage output of some device. The defining feature of a measurement is its numeric result with associated physical units.

A test, in contrast, describes the act of making a decision about some physical aspect of a device. A test has a qualitative result. For example, is the device long enough, cool enough, heavy enough, or does it have enough output? The defining feature here is that a test is the act of answering a question, making a decision, or rendering a pass or fail determination.


The distinction between these two words matters because the methodology for synthetic instrument design that I discuss in this book is primarily related to measurement: how generic hardware can make specific measurements.

Outside the measurement, the process is basically the same for synthetic and nonsynthetic systems, although it is true that the synthetic instrument, particularly if its structure is strictly object-oriented, will likely relate to, or contribute to, the test in ways that involve the measurement.2

For example, a synthetic instrument may contribute to how measurements are specified within a test, providing units, ranges, and validation. Or it may support how measurement results are stored, analyzed, and presented within a test, offering formats, analysis, and presentation objects. A synthetic instrument could even go so far as to include methods for performing pass/fail determinations. As such, the instrument has now become a tester, with a standalone ability to perform tests based on the measurements it performs.

Figure 8-2. Test versus measurement

2 This isn't really a characteristic unique to synthetic instruments. A classical instrument with a proper OO software driver could also contribute to a test in the same way.


Introduction to Calibration

Now that I have explained some of the basic concepts of metrology, I can begin to talk about the different aspects of calibration.

Reference Standards

Since instruments need to produce results with true units, the results need to be acquired by means of a measurement process that is traceable to international metrology standards.

Therefore, any synthetic instrument needs standards, possibly several, within its architecture. These standards may be for things like frequency, power, attenuation, voltage, time, delay, temperature, and so on. This list is by no means complete. The best standard to use for a particular measurement depends on how that measurement is done.

One calibration philosophy that has certain advantages when applied to synthetic instruments is to employ modular (plug replaceable) standards that allow all the calibration standards in a "sealed rack" instrument to be quickly removed and replaced. This minimizes downtime as the removed standards can be sent as a unit to a calibration lab while the system remains in service with a previously calibrated standards set.

Typical rack-em-stack-em or modular instruments do not use this calibration philosophy. Instead, they calibrate the multiple instruments that comprise the system as individual instruments. The system must be taken out of service to be calibrated. The calibration lab may even be brought to the system in the form of a mobile calibration cart.

Uncertainty Analysis

In order to investigate and predict the uncertainty of a measurement made by a synthetic instrument, the investigator needs to establish the calibration process. Industry practice breaks calibration into three main areas:

Primary (Standards) Calibration

Operational Calibration

Calibration Verification


Primary calibration is the methodology by which international metrology standards are transferred to components in the system. Operational calibration is the methodology of conducting a measurement and applying calibration information to raw data. Calibration verification is a process, separate from self-test or system functional test (SFT), that seeks to determine, within some defined confidence limits, if the system is currently calibrated. Once the overall calibration methodology is established for the instrument, you can then move on to defining other calibration and accuracy related requirements, like measurement uncertainty, bias, calibration and accuracy analysis, drift, and aging.

Stimulus Calibration

Stimulus calibration is the process by which the stimulus to the DUT is applied at the appropriate value for the measurement. For example, a power supply will stimulate the DUT by applying a voltage. It is often crucial that this voltage be exactly correct.

Stimulus calibration is related to inverse maps. The goal of stimulus calibration is to determine how to control the stimulus system so that it generates the required output. In point of fact, the output of the stimulus system (as measured by some response system) is the independent abscissa, and the system needs to determine the dependent control commands (ordinate) that generate that desired output. Since it is reversing cause and effect, this is an inverse map.

There are two common ways to express the intent to set a stimulus to a predetermined value. Either you want the expected value of the stimulus to be at the intended value, with some uncertainty expressed by a known probability density; or you might want the stimulus to fall within some intended interval of values, with some confidence probability. An example of the first case would be to state that the supply produces an expected 1 volt, with a Gaussian uncertainty of variance 0.01 volt. An example of the second case would be to specify that the stimulus voltage be 1 volt ± 0.01 with a 90% confidence level.


Overall Strategy for Stimulus Calibration

The strategy adopted for stimulus calibration depends entirely on the stimuli involved. In general, however, it represents a good example illustrating how to handle inverse maps and the related concept of stimulus ordinates. I recommend seeking a compromise between knowing the stimuli accurately and the more difficult alternative of controlling the stimuli precisely.

Performing stimulus calibration of a system involves establishing a relationship between the state of the stimulus subsystem and the true absolute stimulus output parameter. Often this is done by creating a calibration map with the following algorithm.

1. Adjust some internal parameter in the stimulus system to a set of arbitrary "state" points (often a grid of approximately uniformly spaced values).

2. Using a calibrated response system, measure the stimulus generated for each of these states.

By means of the calibration map created, the measurement system can accurately know the stimulus value at each of those states. Calibration strategy algorithms can then invert the map to find the states required to generate any desired stimulus value. Normally this inversion is performed by offline interpolation, but it could be done with a real-time feedback-leveling loop.
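A minimal sketch of that inversion, assuming a one-dimensional, monotonic calibration map and using stand-in numbers (a real system would use its measured calibration data and whatever interpolator it prefers):

    import numpy as np

    # Step 1: drive the stimulus hardware through a grid of internal "state" settings.
    state = np.linspace(0, 100, 11)

    # Step 2: measure the true output at each state with a calibrated response system.
    # (Stand-in data here; a real system would store what it actually measured.)
    measured_dBm = -30 + 0.28 * state + 0.0004 * state ** 2

    def state_for_output(target_dBm):
        """Invert the calibration map by interpolation: desired output -> required state."""
        # np.interp needs an increasing x array; this stand-in map is monotonic.
        return np.interp(target_dBm, measured_dBm, state)

    print(state_for_output(-10.0))   # state setting estimated to produce -10 dBm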

Acquisition of an accurate calibration map does not guarantee that stimulus can be controlled with precision. In general, there will be some quantization and repeatability effects that limit the precision of stimulus control to that which the stimulus hardware can achieve, particularly in an open loop configuration.

Using Interpolation to Invert a Map

Interpolation allows the calibration map (or any map, for that matter) to be inverted. Another good reason to use interpolation is to maximize measurement speed by minimizing the number of measurements taken. The question then arises: what is the minimum number of points required


for a given degree of accuracy when interpolation is used? There are many possible forms of interpolation: linear, polynomial, least squares, and many others. Often, it is good to choose an interpolating function that matches the function that underlies the real process being approximated. A large body of literature is devoted to this problem[B2].

A favored form of interpolation for continuous functions with continuous derivatives is cubic spline interpolation. Splines have the advantage of passing through specified points and matching the derivatives at those points smoothly. Experience has shown that cubic spline generally provides optimum accuracy in practice when compared with either lower or higher order methods. In this discussion, therefore, I will focus only on cubic spline interpolation as a widely useful practical technique.

The theory of the cubic spline provides the following expression for interpolation error:

f(x) − s(x) ≈ (h^4/24) θ^2 (1 − θ)^2 f^(iv)(xk) + O(h^5)

in any interior subinterval [xk, xk+1], where θ = (x − xk)/h.

The expression says that the error is proportional to the fourth power of the sample interval, h. Since it’s a fourth power relationship, decreasing the sample interval by a factor of 2 decreases the error by a factor of 16. In practice, this means there is a sharply defined threshold below which it makes no sense to acquire any additional data points.

I can't overemphasize the importance of this observation. Anyone designing high-speed measurement systems should be aware of the fourth power convergence of the cubic spline interpolation technique, as well as the convergence properties of other, alternative interpolation methods. Why? Because if you aren't aware of this property, you may design systems that are slower than they need to be, or you may not understand one of the serious issues that affects the speed of your measurements. A fourth power convergence sneaks up on you faster than you might expect.
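The convergence is easy to check numerically. This sketch (mine; any sufficiently smooth function would do) splines a cosine at three sample intervals and prints the worst-case error, which should fall by roughly a factor of 16 each time the interval is halved.

    import numpy as np
    from scipy.interpolate import CubicSpline

    f = np.cos
    x_fine = np.linspace(0, np.pi, 10001)          # dense grid for evaluating the error

    for h in (np.pi / 8, np.pi / 16, np.pi / 32):
        x = np.arange(0, np.pi + h / 2, h)         # sample points at spacing h
        err = np.max(np.abs(CubicSpline(x, f(x))(x_fine) - f(x_fine)))
        print(f"h = {h:.4f}   max interpolation error = {err:.2e}")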


Often it is the case that instrument systems are designed to take measurements as quickly as possible at some specified accuracy. Basic operation is to make a set of measurements in a vector field over a certain abscissa domain. The time it takes to measure the map is, simplistically, the number of points in the domain manifold multiplied by how long each of them, on average, takes to acquire. Excess points slow the measurement. The best thing to do is to take just as many points as you need to meet your accuracy goal.

Should the convergence error of an interpolation be much smaller than the specified accuracy of the measured points, then meaningless extra data points are being taken and the measurement is slower than it needs to be.

Interpolation Example

A specific example will make this clearer. Consider an output versus input power measurement. In this sort of measurement, the input power into an amplifier is varied as the level of output power is recorded. Input power is the abscissa, output power is the ordinate. Very simple. In numerical simulation, I modeled a saturating amplifier by a sinusoid of amplitude A clipped at magnitude ±1 as is shown in Figure 8-3.

Figure 8-3. Clipped cosine


I then computed the exact value of output fundamental power from the Fourier series as a function of amplitude. That result is plotted in Figure 8-4. Notice how the curve bends downward as the amplifier saturates.

Figure 8-4. Fundamental power transfer

Finally, I pretended that I had "measured" the output power ordinate only at 1 dB step intervals of increasing input power. With a cubic spline interpolation, I estimated the points in between those measured steps. The error between the actual values and my interpolated values was tiny. If my interpolated graph were plotted on top of the ideal graph shown in Figure 8-4, it would exactly overlay the ideal graph. To better see how little error there is, look at the difference between the two plots, which is seen in Figure 8-5.

Even at 1 dB steps, the interpolation error is insignificant, peaking at little more than a mere 0.02 dB.

Figure 8-5. Interpolation error (estimate – ideal)
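The experiment is easy to approximate in a few lines. The sketch below is my own reconstruction of it (the fundamental is extracted numerically with an FFT rather than from the analytic Fourier series, and the input power range is a guess), so the error it reports need not match the 0.02 dB figure exactly.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def fundamental_dB(a_dB):
        """Fundamental output power (dB) of a cosine of amplitude a (dB) clipped at +/-1."""
        th = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
        out = []
        for amp in 10 ** (np.atleast_1d(a_dB) / 20.0):
            y = np.clip(amp * np.cos(th), -1.0, 1.0)
            c1 = 2 * np.abs(np.fft.rfft(y)[1]) / len(th)   # fundamental amplitude
            out.append(20 * np.log10(c1))
        return np.array(out)

    steps = np.arange(-10.0, 10.0 + 0.5, 1.0)        # "measured" every 1 dB of input power
    dense = np.linspace(steps[0], steps[-1], 2001)   # abscissa points to be interpolated

    spline = CubicSpline(steps, fundamental_dB(steps))
    err = spline(dense) - fundamental_dB(dense)
    print(f"peak interpolation error: {np.max(np.abs(err)):.4f} dB")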


The error convergence properties of the spline method establish a minimum sampling requirement, beyond which the error rapidly drops into insignificance. Unfortunately, this threshold can't be established numerically unless the fourth derivative of the function in question is known. In the case of a real-world measurement, one will never know this derivative analytically. It is possible, however, to make a statistical estimate of the next derivative by using so-called predictor-corrector and adaptive sampling methods. With these methods, one can estimate where this convergence occurs and adjust the data taking to maximize speed without sacrificing accuracy.

Sampling Interval versus Resolution Confusion

For some reason, people don’t like interpolation. They think it’s not legitimate in some way, that it manufactures data. A synthetic instrument that leverages interpolation for performance gains is somehow cheating.

I think part of the problem one can have with interpolation has to do with a fundamental misunderstanding. There is a lot of confusion between the term "resolution" and the terms "quantization" or "sampling interval," with many people using the word resolution erroneously to describe quantization in various contexts. In this section, I hope to set the record straight.

Let me begin with two definitions:

Fundamental Definitions

Abscissa Resolution

The minimum interval in an abscissa between which the measurement system, when applied to a DUT, can produce statistically independent ordinate measurements, given a maximum intermeasurement correlation or crosstalk specification.

Abscissa Quantization Interval

The minimum interval in an abscissa between which the measurement system, when measuring a DUT, can produce unequal ordinate measurements.


The subtle difference between these definitions should be evident. They are similar in some ways. A consequence of this similarity is the confusion between the two that I have often seen. However, despite the similarity in the definitions, the concepts are totally distinct.

Resolution is about independence in ordinate sampling. How close can the system take two samples in abscissa and have the corresponding ordinates be independent? Not just different, but fully independent. Independent, in basic statistical terms, means that the ordinates each can be anything at all within the expected range regardless of what the other was.

Consider a measurement of the temperature versus time of some part in a jet engine. The abscissa is time; the ordinate is temperature. How close together in time can the system make two temperature measurements such that one temperature has no influence on the other? Let's say the range is 0–500°C. The system measures 205.6°C. How long must it wait before taking another measurement such that it can get an ordinate in the 0 to 500 degree range that isn't influenced by the fact that the temperature was just 205.6°C? Let's say we want to measure the engine temperature running, and then cold. How long do we need to wait for it to cool off, with the influence of the recent 205.6°C reduced below some acceptable threshold? That's a question about resolution of measurements.

Obviously, the answer to this question depends on more than just the measurement system. It depends on the thermodynamic characteristics of the DUT. It also depends on some criterion for what it means to "have no influence." This criterion is the correlation or crosstalk specification I mention in the definition.

The word "resolution" is often used in optics to refer to the resolving power of a telescope or microscope. This form of resolution is measured by specifying how close together, angularly, two image components can fall before they merge into one and become indiscernible, based on criteria given differently by Rayleigh, Dawes, Abbe, and Sparrow. Angle is the abscissa, image intensity is the ordinate, a criterion is specified, and thus the optics definition of resolution correctly corresponds to the definition I have given.


In contrast, the idea of quantization is not the same thing at all. Abscissa quantization, or sampling interval, specifies how close together in abscissa the system can take measurements and possibly have ordinates that are different at all. In the temperature versus time example, how close together in time can you make two potentially different temperature measurements? Perhaps the system has internal processing or the sensor has constraints that prevent measurements from being acquired more often than once per second. If you try to get data sooner than this, you must necessarily get the same answer. If you wait longer than this, the measurement of temperature may well be different to some tiny degree.

How is this related to interpolation? Well, if you understand the difference between abscissa quantization and abscissa resolution, you should be able to see that abscissa quantization can be made arbitrarily small through the use of interpolation. That is to say, you can achieve as fine quantization in abscissa as you want without actually taking any more data or altering the measurement hardware. Abscissa quantization is purely an artifact of processing, and you can choose it to be whatever you want based on the interpolation post-processing you do. You can even change your mind after the measurement and synthesize finer abscissa quantization should you need it.

But you cannot synthesize finer resolution after the measurement. If the ordinate samples are all correlated across the abscissa, you cannot in general make them magically become the same number (or more) uncorrelated samples. Resolution says something immutable about the measurement process. Quantization does not.3

The reason this distinction is of great importance for understanding synthetic instruments is that resolution is a specification given on physical hardware measurement systems that says something about the quality of the hardware. If the meaning of resolution intended by the specification

3 Actually, the statement that you can't improve abscissa resolution in post processing is false. For certain kinds of measurements, super resolution techniques exist for synthesizing finer abscissa resolution beyond what seems possible given the correlation of the measured data. These techniques trade off ordinate precision (SNR and repeatability) for abscissa resolution. Although not nearly without drawbacks, super resolution techniques have many real-world practical applications and should not be forgotten when abscissa resolution is at issue.[B8]


is actually quantization, the specification does not necessarily say anything about the hardware in a synthetic instrument, since in many cases the synthetic quantization can be anything you want it to be.

On the other hand, I'm not trying to say quantization is not of any value or importance. Far from it. It is all too often the case that insufficiently fine abscissa quantization is provided despite the fact that it is easy to synthesize more. The reason for this may be that well-meaning, scrupulous designers fear being accused of the immorality of interpolation; namely, they don't want to make true abscissa resolution seem finer than it is by cheating with interpolation post-processing.

Yet this is misguided thinking. It results in an error in design that is potentially more serious than what it seeks to avoid. Consider the classic "spike" effect visible in uninterpolated FFTs.

If you perform a moderately windowed FFT that results in an impulse-like spectrum, but you fail to zero-pad or use a chirp-Z interpolation to provide quantization finer than the resolution, you will see, as in Figure 8-6, a deceptive spectral plot that shows a lone spike.


Sometimes the spike sits atop a dome, not unlike an early WWI German spiked war helmet. In all cases, the spike appears suspiciously without sidelobes.

If, however, you use some prudent amount of interpolation, the sin(x)/x structure becomes evident, as you see in Figure 8-7.

Neither plot has more resolution, but I submit that the first plot is misleading, while the second plot is clear and plain. It may be that interpolation gives no additional information in a statistical sense, but it can vastly improve the readability and usefulness of data for humans.

Figure 8-6. "Spiked" FFT

Figure 8-7. Interpolated FFT (same data)
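A tiny sketch of the effect (mine; the tone frequency and window are arbitrary): the same 64 windowed samples are transformed twice, once at the native quantization and once zero-padded by 16x, which adds no new data but exposes the structure the coarse plot hides.

    import numpy as np

    fs, n = 1000.0, 64
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * 200.3 * t) * np.hanning(n)   # a windowed tone between FFT bins

    coarse = np.abs(np.fft.rfft(x))          # quantization equals resolution: a lone spike
    fine = np.abs(np.fft.rfft(x, 16 * n))    # zero-padded: same resolution, finer quantization

    print(len(coarse), "coarse bins versus", len(fine), "interpolated bins")
    # Plotting 'fine' against its frequency axis shows the window's mainlobe and sidelobes;
    # no new information has been added, only a less deceptive display of the same data.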


Ordinate Quantization and Precision

Abscissa quantization is probably the quantity most often misnamed as resolution, but ordinate quantization is also sometimes called resolution. I prefer to keep the matters separate by never speaking of "ordinate resolution," but rather referring to the precision of an ordinate. Thus, I can also talk about the distinction between ordinate quantization and ordinate precision.

Fundamental Definitions

Ordinate Precision

The minimum statistically significant interval possible between two unequal ordinate measurements by a measurement system when measuring a DUT given a specified confidence level.

Ordinate Quantization Interval

The minimum interval possible between two unequal ordinate measurements by a measurement system when measuring a DUT.

When speaking just about an ordinate in isolation, resolution can be defined as the statistical blurring caused by measurement uncertainty. In that sense, ordinate resolution is a specification of the precision in the ordinate. It's a parameter that includes repeatability and random noise that prevents us from being able to decide if two different measurement results are different because they are measuring two different true values of the measurand, or because random errors have produced the difference.

One can never know for sure if two unequal measurement results are just caused by dumb luck or if they are “really” different, but one can say with some quantified confidence level that the difference wasn’t a fluke. For



example, I might be able to measure a distance to within 1% precision given a confidence level of 99%. That is to say, if two measurements are more than 1% apart from each other, then 99% of the time this difference represents truly different distances. The other 1% of the time they are different based on dumb luck.

This sort of reasoning is related to hypothesis testing and is part of the realm of the science of statistics. If you want to talk quantitatively about measurements, you need to do your statistics homework.[B9]

Ordinate quantization, on the other hand, is the minimum interval between any two different ordinate measurements. It says nothing about the precision of the measurement. All it says is how many digits are recorded. As any elementary science student knows, adding more digits to your answer does not necessarily make your answer any more precise. Only the significant digits count, and those digits represent the true precision of the ordinate.

As with abscissa quantization, ordinates can be quantized more finely after the fact by applying interpolation techniques. Ordinate precision, on the other hand, says something fundamental about the quality of the measurement process and in general cannot be changed merely with post-processing.4

De-Embedding Calibration Objects

Systems often are required to provide for the general concept of de-embedding, which is a generalization of any kind of correction factor applied to an ordinate for the purpose of removing the effects of the physical location of sensors in the measurement. De-embedding effectively moves the measurement from the sensor to some place in the DUT. In support of

4 Actually, the statement that you can't improve ordinate precision in post processing is false. For certain kinds of measurements, averaging and decimation techniques exist for synthesizing finer ordinate resolution beyond what seems possible given the random component of the measured data. These techniques trade off abscissa resolution for ordinate precision. In a sense they are the opposite of super resolution techniques that allow us to perform trade-offs in the other direction. Although not nearly without drawbacks, averaging and decimation techniques have many real-world practical applications and should not be forgotten when ordinate precision is at issue.


de-embedding, a synthetic measurement system will accept de-embedding calibration objects from the user or controlling software.

To understand how the de-embedding calibration object is applied, consider the simple case of a single y ordinate measured at a single x abscissa. This is a single real number in some units. The de-embedding calibration object for this measurement specifies how to transform the ordinate from the measured value to the virtual value estimated to exist at some specified measurement point of interest. For example, in measurement of temperature, there may be some known relationship between the temperature at the sensor location and the temperature at some significant location inside the device being measured. In the case of an integrated circuit die, for example, you may be able to measure or control the electrical power (watts) into the die, and you may also know the thermal resistance (°C/watt) between the die and a location where you can measure temperature. The thermal resistance along with the power gives you a de-embedding factor from which you can infer the die temperature.
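In code, that de-embedding factor amounts to a one-line correction. The sketch below simply restates the thermal model just described, with made-up numbers.

    def die_temperature(measured_temp_C, power_W, theta_C_per_W):
        """De-embed a sensor reading to the die: T_die = T_sensor + P * theta (simple model)."""
        return measured_temp_C + power_W * theta_C_per_W

    print(die_temperature(measured_temp_C=55.0, power_W=2.0, theta_C_per_W=12.5))   # 80.0 C at the die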

Figure 8-8. De-embedding applied to temperature measurements

[Figure: an integrated circuit die mounted on a substrate and carrier, with a temperature sensor on the carrier. The electrical power into the die (known) and the thermal resistance between the die and the sensor location (known) allow the die temperature to be calculated from the measured temperature.]

In general, the de-embedding calibration factor for a particular ordinate will be a function of all the abscissa variables and all the ordinates. Moreover, each ordinate can have a different de-embedding calibration factor. Thus the ordinate de-embedding calibration object is a map, and therefore can have the same data structure as the measurement object itself.



De-Embedding Dimensionality and Interpolation

Although the measurement data from ordinates and the ordinate de-embedding calibration data can be represented by an object with the same overall structure, the de-embedding ordinate calibration object does not need to have exactly the same number of dimensions, dimension lengths, or domain separability as the ordinate measurement object it serves to calibrate.

In the case where the dimensionality is different, it is assumed that the de-embedding calibration factors are constant along those axes. In the case where the dimensions are not equal, it may be necessary to interpolate or extrapolate. How the interpolation and extrapolation is best done depends on the ordinate. In general, linear interpolation and least-squares extrapolation are often appropriate.

Abscissa De-Embedding

De-embedding calibration objects can be provided for abscissas as well. These objects must be available to the system before the abscissa sequence tables are built. They allow stimuli levels to be more accurately and conveniently controlled. As with ordinate de-embedding objects, the application, interpolation, and extrapolation methods are abscissa dependent.


Chapter 9: Specifying Synthetic Instruments

When an ATE designer is first faced with an ATE problem that they want to fit to some synthetic instrument-based solution, the first step is to capture the requirements for the solution. These requirements should specify the synthetic instrumentation from a high level, giving the properties and elements for the synthetic solution.

There are many ways to do this. Engineers have attempted to specify synthetic instruments in all sorts of ways, and I personally have seen this process attacked from many different directions. However, so far as I am aware as of this writing, there is no standard way to do this. I do know that there are many bad ways.

One quite common bad approach is the following: measurement system specifications are written in the vaguest of terms and delivered to the designer. Habitually, this consists of a list of stimuli to be provided and responses to be recorded, and maybe some accuracies are specified for the recorded values, often based on marketing brochures from legacy products.

At that point, the first mistake is made. The designer begins to specify hardware modules that accomplish the measurement tasks. Already they have left the path to a synthetic solution. The designer goes down the list of stimuli and picks modules that make each stimulus. Maybe, if they're lucky, they can find a module that does two of the stimuli, but in general the idea is to make a distinct mapping between required stimuli and required stimulus modules. Specifications on accuracy of a stimulus are applied directly onto the associated module.


Following that, a similar misguided process results in a list of required response modules corresponding to the required responses to be recorded. Specifications on accuracy of a response measurement are reiteratively levied directly onto the associated module.

This results in a modular system design with easy to understand requirements traceability. Block diagrams are drawn. Specification compliance matrices are compiled. There's only one small problem. The system isn't synthetic! Instead, specific hardware is assigned to specific measurement tasks. It's our old rack-em-stack-em friend in disguise in modular clothing. Sort of a "bookshelf" system, where each book represents a specific measurement.

Figure 9-1. Bookshelf modular

At this point, the designer can try to “back door” a synthetic solution by looking at some of the modules that have been specified. Maybe they can design them with a synthetic approach and in that way call the whole design synthetic. So the designer writes some vague requirements and attempts a synthetic design of the modules themselves. Here the process repeats and the same mistakes are made again.

Synthetic Instrument Definition and XML

A significant observation about the wrong-headed approach described in the previous section is that the system software that makes these modules do the job (a job as yet not clearly defined) is held implicit during the requirements capture and analysis stage. Later on, once the hardware is defined, ATE programmers may be invited to generate software requirements



based on the original measurement specifications using this modular hardware. The software requirements that result will inevitably lead to a procedural test program. Maybe in a few spots some synthetic concepts can briefly appear, but the overall focus is on procedural test executive functionality.

How do we fix this? How do we avoid being led down the garden path to ingrained ATE design methods? The answer is that we need to set off in a consistent and beneficial direction toward synthetic design if we want to end up specifying synthetic instruments. We must take a formally rigorous approach that focuses on the measurement and fits the problem to a concrete synthetic solution. Only then will we find that our system design is fully synthetic. Only then will we end up realizing all the efficiencies and benefits of a synthetic design.

To provide the consistent and beneficial direction toward synthetic design during the capture of requirements, I have developed an XML-based method of specifying synthetic instruments. I submit that it is a right way to specify synthetic instruments. I believe this XML-based method can lead us more consistently toward a true synthetic design. Whether or not this approach is eventually adopted by the community as the right way, for the purposes of this book it will serve to illustrate a "right thinking" method for specifying synthetic instruments that propels the designer closer to the goal.

Why XML?

Any popular new software technology has a gee-whiz factor that draws designers to it like so many moths to a flame. Designers are attracted regardless of whether the new technology is appropriate to the task at hand, often resulting in their demise. So long as a technology is new and interesting, with lots of cryptic acronyms, designers will use it, appropriate or not. XML is certainly popular today and has tons of related acronyms, but at first glance XML's roots as a text document markup language may seem inappropriate for specifying ATE. Why then, exactly, should we use it?

You should use XML because XML has several compelling features that make it ideal to use in the context of synthetic instruments. These compelling features make XML appropriate as a mechanism for describing both synthetic instrument hardware and software and the two working together as a system.


XML has the following distinct advantages in the context of synthetic instrumentation:

Hierarchical

XML provides a convenient way to describe hierarchical things. In fact, an XML document must be strictly tree structured. If you use XML to describe a synthetic instrument, the description will inevitably be tree structured. This is a good thing. Synthetic instruments must be designed in this way. Using a tool like XML with a strict hierarchical structure keeps you on the right road when you design your measurement system.

Extensible

XML is an eXtensible Markup Language. That means it can be extended with a new schema of tags that serve the needs of new contexts. You are free to define an <Abscissa> tag and give it semantic meaning specific to your synthetic instrument application. Extensibility in XML is a formalized and expected part of the way XML works. In XML, you define a document type definition (DTD) or an XML schema that sets down the syntactic rules for your document.

Abstract

An XML description of a measurement is a pure and abstract description that exists outside of any particular hardware context. Often, synthetic instruments developed as replacements for legacy instruments make the mistake of using the legacy instrument specifications as a specification for the new synthetic instrument. This mistake can be largely avoided by abstracting the pure measurement capabilities of the old instrument away from its original hardware context.

Standards-Based

XML is a growing open standard derived from SGML (ISO 8879). Because XML is a standard, that means there are many open source and commercial tools available for authoring, manipulating, parsing, and rendering XML[B6]. The standardization of XML


allow these tools to interoperate well. Because of its wide applica-bility, it’s easy to find training to get up to speed. The bookstores are filled with introductory XML books, classes are offered in colleges, and even training videos are available. Note, too, that SGML is a popular standard within the U.S. government, and the U.S. government is the largest market today for synthetic instru-ments.

Programming Neutral

XML is not a programming language. It is a markup language. It’s goal is to describe the logical structure of something, not to directly define algorithms in detail. XML does not replace the use of full-featured programming languages in describing detailed pro-cedures or data structures. This is important since no one wants to give up what has already been achieved with traditional pro-gramming. Nobody will be re-coding FFTs in XML. Instead, XML will assist in the generation of program code by encapsulating the hierarchical structure of a synthetic measurement system in a way that can guide and control and even automate the programming.

Portable

XML is not tied to any one computing platform, operating system, or commercial vendor. It flourishes across the spectrum of computing environments. Moreover, there are standards such as the Document Object Model (DOM) and the Simple API for XML (SAX) that give programs a platform-neutral way to interact with XML. The portability of XML is advantageous. No bias or restriction is placed on the hardware and software options based on the description methodology chosen.

I've decided to define a system that uses XML to describe and specify synthetic instrumentation. XML is somewhat of a blank slate with regard to test and measurement, and therefore it will be mostly paradigm-neutral when applied. I think this is a great advantage. It will help focus the discussion toward what I am trying to explain about defining synthetic instruments and away from widely addressed questions surrounding traditional instruments.

ATML

Automated Test Markup Language (ATML) is a cooperative effort by members of the ATE industry to define a collection of XML schemas that allows ATE and test information to be exchanged in a common format adhering to the XML standard. The work in this book is independent of that effort.

Based on the work I have seen on ATML to date, the XML techniques described in this book are more complementary to ATML than they are redundant or conflicting. Because of the measurement focus required by synthetic instrumentation descriptions, I address a narrower scope of issues with XML technology than the ATML group is addressing.

I'll go out on a limb and predict that, like many other ATE-related software tools and techniques, ATML will gravitate toward the routine, procedural, instrument-oriented measurement paradigm. To counterbalance this inevitable gravitation, I would encourage everyone involved with ATML to use the blank slate of XML, as much as possible, as an opportunity to do something truly new and better, rather than to simply translate the same old methods into a new syntax.

Why Not SCPI, ATLAS,…?

Before I get into the use of XML, I need to address an objection that I know will be raised by some people. Some folks might think that it would be reasonable to use something else for describing synthetic measurements.

One possibility I've heard suggested is SCPI. After all, the Standard Commands for Programmable Instrumentation (SCPI) defines a standard set of commands to control programmable test and measurement devices in instrumentation systems. That seems, at first glance, an appropriate standard. Automated instrumentation developers already know SCPI. In fact, the whole purpose of SCPI was to provide a standardized lingua franca for programmable instrumentation, particularly over the IEEE-488 bus, but over other interfaces (for example, VXI) as well.

No doubt, SCPI can be used quite readily within a synthetic instrumentation system as a way to talk to the individual physical instruments. SCPI has a tree-oriented structure, not unlike XML. It's also true that SCPI has a related data interchange format (DIF) for recording output data. And it certainly can be used to talk to synthetic instrument systems as a whole and to define interfaces to new synthetic instruments.

Unfortunately, SCPI has certain aspects that make it somewhat problematic for use in describing synthetic instruments, both here in this book, and possibly in a wider context.

First, SCPI provides a standard for an instrument communication interface, not a controllably extensible method for describing synthetic measurements. As such, SCPI really doesn't exactly fit the bill for the purpose I intend.

Second, listing the set of functions an instrument performs based on the commands it responds to does, to some degree, tell us what measurements can be made. I could still use SCPI to describe measurements, at least SCPI syntax, which is tree oriented, exactly like XML. In this way, SCPI would do double duty alongside its role as an interfacing language. However, I believe this would inevitably lead us to a mixed model, leading us away from a synthetic approach in the design. In its effort to allow for all the diverse functionality of the wide range of automated instrumentation, SCPI provides a rich facility that can be used to describe the interface to virtually any instrument, existing or imagined. That's not to say it has no limitations. Any practical system must have limitations. Rather, I'm saying that SCPI provides too much flexibility, and thereby allows the designer the ability to design measurements and talk to instruments with any paradigm, synthetic or not.

Figure 9-2. Example of SCPI code

Problems with Other Legacy Software Approaches

There really is no point in belaboring the issue by listing all possible legacy software choices available and explaining why they can't be used or aren't appropriate to synthetic instruments. To do so leads us into a religious war. I indulged in one crusade, my prediction regarding ATML and my argument against SCPI, and that should be enough.

Notwithstanding the advantages already stated that argue for the use of XML, in point of fact, there is no fundamental reason why one can’t define and manipulate synthetic instruments and synthetic measurement systems using any system or combination thereof that strikes one’s fancy: SCPI, ATLAS, FORTH, BASIC, Java, SQL, FORTRAN, C, or any other reasonable programming tool or environment. They’re just not as cool as XML.

Introduction to XML

As I alluded to earlier, one might think of XML only as a markup language for documents, where documents are text-processing things that get displayed on web pages or printed in books. In fact, as I write this book, I'm writing the text with XML markup. XML is in some sense a subset of SGML, the Standard Generalized Markup Language defined by ISO 8879.

But a document is really any data containing structured information. Myriad applications are currently being developed that make use of XML documents in contexts that are far removed from text processing. There are already an amazing number of XML document type definitions. Any kind of structured information is amenable to description by an XML-based format.

Since XML can describe structured information, it can be used to describe synthetic instruments. However, just because something is possible doesn't mean it's necessarily a good idea. Why is the application of XML to the structured description of synthetic instrumentation a good idea?

Automatic Descriptions

XML can be applied practically as a description language in a fully automated context. That is to say, it would be practical to start with an XML description of a synthetic instrumentation system and turn it into a real, high-speed instrument using nothing but automated tools.

XML is easy to use with modern compiler tools. Part of this facility stems from the fact that it is possible to express the syntax of XML in Extended Backus-Naur Form (EBNF). If you’ve never heard of EBNF before, don’t be frightened. EBNF is merely a system for describing the valid syntax of a grammar like XML. It’s a way to describe what can and can’t be said in a purely mechanical way.

EBNF comprises a set of rules, called productions. Every production rule describes a specific fragment of syntax. A complete, syntactically valid program or document can be reduced to a single, specific rule, with nothing left over, by repeated application of the production rules. Because the syntax of XML can be expressed in Extended Backus-Naur Form, modern compiler tools that take EBNF and turn it into compilable, high-speed computer code can be applied to it. Any modern compiler book[B0] will explain how this works.

Actually, there's really no need to work at the level of EBNF if you don't want to. XML has associated with it a large collection of parsers, formatters, and other tools that allow designers to easily attach semantic functionality to XML documents. In most cases, these XML-specific tools are better at doing this than generic compiler tools.

Notwithstanding attempts by some companies to patent-encumber various XML applications, thus far XML remains a relatively free and open technology. XML parsing libraries are freely available across many operating systems, and will be found integrated into many development tools. Similarly, there is a wide collection of tools that can be used to write XML. Constructing, for example, a GUI that generates well-formed XML is a simple task given all the help available.

Note the statement that XML can be turned into compilable computer code, as compared with interpreted code. Either is possible. The difference between the two is this: Should an XML measurement description be compiled, that implies that all the work of parsing the description and recasting it into a form that can be executed at high speed is accomplished once, up front, before the measurement is ever run. Interpreted code, in contrast, is not processed beforehand. It is processed while the measurement is being run.

Because the processing of interpreted code occurs at measurement time, it has the potential of slowing down the measurement. It's my opinion that interpreted scripting should be avoided for this reason. Measurement descriptions must be compiled into high-speed, isometrologically optimized state machine descriptions of a canonical map before they are run.

Admittedly, given the ever-increasing speed of computers, this distinction seems less of an issue. Given a fast enough CPU, you can interpret and optimize your measurement every time it runs with no real penalty. Still, I feel that faster CPUs should not be an excuse for slower software or skipping the optimization step. There will always be situations where the maximum possible speed is needed, and you shouldn’t give that capability away for no reason.

Not a Script

The use of XML in automated test applications is nothing new. I have already mentioned the ATML effort. Another application of XML to ATE appears in Johnson and Roselli[C2], where XML is used as a flexible, portable test script language. Although clever and useful in certain circumstances, scripting is, I believe, an inappropriate use of XML in ATE. XML syntax is clunky for detailed procedural programming, and a clunky script language is not what you want from XML in the context of synthetic instrumentation.

Scripting implies sequential execution of a procedural design. Even if object-oriented (OO) techniques are used, and even if the script is compiled, the result is still not as map-oriented and OO as you want; it works at much too low a level; it can't be optimized effectively. Scripting results in too much freedom, and thereby doesn't constrain the design approach sufficiently to allow for the best performance.

There are better things to use for scripting than XML. This point is true notwithstanding some of the advantages to the use of XML for scripting that are pointed out by Johnson and Roselli. In scripting, some hierarchical structure is definitely used (subroutines, loop blocks, if-else), but the basic flow is top-down and event oriented. For the most part, therefore, XML just dirties up the syntax of what could be a clear procedural script if rendered with a procedural syntax, like one finds in C, Perl, Java, or Forth.

XML is better used as a descriptive language. It shines in its ability to mark things up in a hierarchical or tree-structured manner. When you mark up something, like a text document, the markup adds attributes to the text content at the lowest level. It also allows higher-level logical structure (paragraphs, tables, sections, chapters, and so on) to be built up with hierarchical layers of markup.

Some applications of XML have what is called mixed content, with the hierarchical XML markup intertwined with raw text data at every level. In other cases, there is no content other than structural elements that nest downward at lower levels, possibly with raw data at the very bottom of the hierarchy. In this latter situation, tags and attributes are applied to lower-level tags and attributes, down and down till you reach the atoms of content that represent fundamental things that do not apply to anything lower.

XML Basics

Let’s dive right in and look at an example XML document.

Example 9-1. Simple XML document

<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>

Study Example 9-1 and see if you can discern the structure of XML syntax. If you are familiar with HTML, this should look very understandable to you. This is a great advantage of XML, by the way. It's human-readable, and it leverages the understanding of HTML already possessed by millions of people.

If you aren’t familiar with tagged markup, like HTML, here’s a brief tutorial.

First of all, you need to know that the angle brackets "<>" are special characters that enclose something called a tag. XML consists of these tags, which delimit elements. A start tag begins the enclosed area of text, known as an element, according to the tag name. The element defined by the tag ends with the end tag. An end tag starts with a slash.

One difference between XML and HTML is that in XML, a start tag like <foo> must be followed by an end tag </foo>. The end tag is not optional. However, in XML something called an empty tag is allowed. These have a slash before the closing angle bracket.

Here is an example of a start and end tag enclosing an item:

<foo> this is an item </foo>

This is an empty tag:

<bar/>

Empty tags don't enclose anything, so they have no associated item. But that doesn't mean empty tags are, well, "empty." Empty tags, like items with no embedded tags, represent a kind of "leaf" of an XML syntax tree.

As with HTML, XML tags may include a list of attributes consisting of an attribute name and an attribute value separated by an equals sign. An example would be <foo bar="asdf">, where "bar" is the attribute name, and "asdf" is the attribute value.
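Putting these pieces together, and reusing the same throwaway names (plus a made-up <baz/>), a fragment with a start tag carrying an attribute, some enclosed content, a nested empty tag, and the matching end tag looks like this:

    <foo bar="asdf">
      some enclosed content
      <baz/>
    </foo>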

For the moment, that's it. (That wasn't hard, now was it?) As you can see, XML has extremely simple syntax. In a sense, XML is a way of writing fancy parentheses to enclose and nest things in a tree structure with handy places to assign attributes at each branching of the tree. There are more things to talk about with regard to XML. I refer you to the many fine books on the subject that are now available[B6].

Synthetic Measurement Systems and XML

There are many different ways XML fits into the description of a synthetic instrument, but they can basically be divided into these interrelated duties:

Describing the Measurement

Describing the Measurement System

Describing the Measurement Results

The first of these duties is to provide an abstract description of the measurement to be performed. Since synthetic instruments do their work on generic hardware, this first duty is most important. There needs to be a way to describe measurements in a way separate from hardware. XML is exactly that way.

Even though it's most important, capturing the measurements to be performed is still only part of the picture. The available hardware must be defined so that these hardware resources can be allocated to the measurement tasks at hand. The description of the hardware suite is best done relative to some anticipated fixed yet abstract model so as to structure the description in a way that allows it to be most effectively used. XML can provide exactly this framework.

When the measurement is brought together with the hardware description, the synthetic instrument is generated, loaded into the hardware, and run. The result of the run is a set of measurement data. This resulting data needs to be captured, stored, analyzed, visualized, and possibly augmented or reduced by postprocessing. Again, XML can serve quite nicely as a data language, encapsulating hierarchical data in a way that can be easily manipulated both by a human and by a computer.

Describing the Measurement with XML

Let's begin with a simple example in order to show how XML is applied to the task of describing a measurement. Consider a synthetic oscilloscope instrument that measures voltage as a function of time. Here is a simple XML description of the single measurement done by this instrument.

Example 9-2. Simple oscilloscope

<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>

Let’s take a different view of Example 9-2. Figure 9-3 is a diagram of its tree structure.

At the highest level, I have defined an "Instrument" called an Oscilloscope. Enclosed in that instrument is one "Measurement," which I named a "Trace". The measurement consists of the "time" abscissa and the "voltage" ordinate. In Example 9-2, I chose to use empty tags for abscissa and ordinate, with a simple attribute "name=" to give them an identifying title.

Figure 9-3. Tree structure of XML code example: the Instrument (Oscilloscope) contains a Measurement (Trace), which in turn contains an Abscissa (time) and an Ordinate (voltage).

I think Example 9-2 is simple enough that you probably understand what I have described using XML, and how I went about it. Admittedly, this is a very superficial description. It is also true that I could have used XML in quite different ways to make the same description. I could have structured the description in XML this way instead:

Example 9-3. Alternative XML structure

<Instrument>
  <Name>Oscilloscope</Name>
  <Measurement>
    <Name>Trace</Name>
    <Abscissa>
      <Name>time</Name>
    </Abscissa>
    <Ordinate>
      <Name>Voltage</Name>
    </Ordinate>
  </Measurement>
</Instrument>

As you can see, what were once attributes have been replaced by a deeper nesting of tags. Is there a reason to prefer one approach over the other? Should a tag attribute be used for “name” rather than a child tag?

In many cases, the answer to this question of XML style is unclear. It could be done either way without much difference in this case. One approach would be to use attributes for things that are clearly and tightly associated with the specific tag itself, and no other, rather than something, possibly reusable, that the tag describes or otherwise comprises. In Example 9-3, you should see how the tag <Name> is reused at different levels. Any reusable entity that might apply to different things is probably best expressed as a tag rather than an attribute. On the other hand, extremely generic attributes like "name=" are so common that an argument for syntactic simplicity could be made, suggesting that these common things should be attributes, saving us typing, at the sacrifice of complicating reuse somewhat.

There is one situation where there is no decision, where you must use a deeper tag nesting rather than an attribute. Multiple occurrences of an attribute are not permitted. Specifically, it would not be acceptable to make "Abscissa" an attribute of "Measurement" since a measurement could have multiple abscissas. For example, the map description of an image scanner might look like this:

Example 9-4. Flatbed scanner

<Instrument name="Scanner">
  <Measurement name="Image">
    <Abscissa name="HorizontalPosition"/>
    <Abscissa name="VerticalPosition"/>
    <Ordinate name="RedIntensity"/>
    <Ordinate name="GreenIntensity"/>
    <Ordinate name="BlueIntensity"/>
  </Measurement>
</Instrument>

Horizontal and vertical position are the two abscissas, and RGB ordinates describe the color image data. It would not be acceptable to list the two abscissas and three ordinates as attributes of the measurement. They must be listed as nested child tags.

Defining an Instrument

Let's do another example, a little more worked out, to illustrate some further points. This example describes a synthetic instrument that can do two measurements: one called Reflection and the other Transmission. RF engineers will recognize these as the main measurements of a vector network analyzer, like the Agilent 8510.

Example 9-5. Network analyzer

<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument name="Network Analyzer">
  <Measurement name="Reflection">
    <Stimulus>
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Constant value="-10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <List>
            <ListItem>1.000</ListItem>
            <ListItem>2.000</ListItem>
            <ListItem>3.000</ListItem>
          </List>
        </Abscissa>
      </Port>
    </Stimulus>
    <Response>
      <Port>
        <Constant value="input"/>
        <Ordinate name="Return Loss">
          <Units name="dB"/>
        </Ordinate>
      </Port>
    </Response>
  </Measurement>
  <Measurement name="Transmission">
    <Stimulus>
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Constant value="-10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <UniformSteps start="1.000" increment="1.000" count="3"/>
        </Abscissa>
      </Port>
    </Stimulus>
    <Response>
      <Port>
        <PortGroup>
          <ListItem>input</ListItem>
          <ListItem>output</ListItem>
        </PortGroup>
        <Ordinate name="Gain">
          <Units name="dB"/>
        </Ordinate>
      </Port>
    </Response>
  </Measurement>
</Instrument>

The tree structure here is again evident and I have introduced some new nesting elements: Stimulus, Response, and Port.

Figure 9-4. Detailed example tree structure: the Instrument (Network Analyzer) contains two Measurements, Reflection and Transmission. Each Measurement contains a Stimulus and a Response, each of which contains a Port. The stimulus Ports hold the Power and Frequency abscissas; the response Ports hold the Return Loss and Gain ordinates, respectively.

The <Stimulus> and <Response> elements tell us whether the enclosed elements are associated with stimulating the DUT or with measuring some response. Abscissas or ordinates may be defined as canonical only as a stimulus or only as a response. Thus the stimulus and response nesting will decide what axes in the measurement map must be inverted prior to data acquisition.

The <Port> element is a deceptively simple way to say what physical DUT ports are associated with what parameters of the enclosed elements. Abscissa and ordinate port parameters within are assumed to refer to the listed ports. In Example 9-5, all the abscissas and the Reflection ordinate refer to the port named input. That name serves to uniquely identify a particular port. It would be assigned in the measurement system description, or possibly as a measurement parameter.

In contrast, the "Gain" ordinate does not refer to a particular port; it refers to a <PortGroup>, which is a group of ports referred to collectively. In this case, the port group comprises the ports named input and output. Gain is measured once for this port group. A port group is different from a list of ports, where the ordinate would be measured once for each port in the list. Gain requires two ports to be specified in order to make a single measurement. If you wanted several gain measurements, you would need to give a list of PortGroups.

Ports are, deep down, really just more abscissas. <Port> is defined as an element that contains abscissas and ordinates so as to clarify the parameter passing and grouping issues, but otherwise <Port> will act like any other abscissa. You must set its value explicitly.

You also must say explicitly what the other abscissa values should be for the measurement. Remember, an abscissa is an independent variable, so it needs to be set independently. A Port or an abscissa can be set to a constant value, or it can vary. The "Frequency" abscissa is an example of one that varies. It is given as an enumerated list in the first measurement and by a <UniformSteps> tag in the second. Both result in the same actual abscissa points.
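Pulled out of Example 9-5 and set side by side, the two equivalent ways of writing that Frequency abscissa are:

    <Abscissa name="Frequency">
      <Units name="MHz"/>
      <List>
        <ListItem>1.000</ListItem>
        <ListItem>2.000</ListItem>
        <ListItem>3.000</ListItem>
      </List>
    </Abscissa>

    <Abscissa name="Frequency">
      <Units name="MHz"/>
      <UniformSteps start="1.000" increment="1.000" count="3"/>
    </Abscissa>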

The <Units> element allows us to name the units for each abscissa and ordinate, as well as to attach attributes to them. This allows for automatic unit conversions and tracking.

The ordinates specified in Example 9-5 would probably both be compound ordinates in a real instrument. This concept is described in the section titled "Canonical Maps." It means that there is a calibration strategy specified that will give a schema to rewrite the map into canonical form, with only atomic ordinates specified. Let's look at what that simple calibration strategy definition might look like for "Gain".

Calibration Strategy Example

Measurement systems can't measure gain directly. It's a relative measurement. Gain can be defined as the ratio of output power to input power of some signal passing through a DUT. The units of gain are customarily expressed in dimensionless decibels (dB), which is 10 log of the power ratio. Unlike gain, power is something that a measurement system can often measure directly; it can be a canonical ordinate. If it can measure power canonically, it can break up the compound ordinate "Gain" into two copies of the canonical ordinate "Power", one measured at the input port and the other measured at the output port of the DUT.

Since input and output power are measured at different ports, I will expand the map with a port abscissa, then specify postprocessing to collapse the map back down along the port axis.

Example 9-6. Compound ordinate

<Ordinate name="Gain">
  <Units name="dB"/>
  <Measurement>
    <Response>
      <Port name="inout">
        <List>
          <ListItem name="in">input</ListItem>
          <ListItem name="out">output</ListItem>
        </List>
        <Ordinate name="Power">
          <Units name="Watts"/>
        </Ordinate>
      </Port>
    </Response>
    <Collapse axis="inout">
      <Units name=""/>
      <Algebraic>Power[out]/Power[in]</Algebraic>
    </Collapse>
  </Measurement>
</Ordinate>

Study Example 9-6 carefully. There are several interesting elements that deserve some comment. First of all, at the highest level in the tree, I have just an ordinate named "Gain" with units specified just as in the network analyzer example. Instead of ending there as it did before, the ordinate now also contains a map for a new (unnamed) internal measurement. The new measurement has no stimulus section, only response. The abscissa of this new measurement is actually the <Port> element (remember I said that ports were really abscissas in disguise). This is a canonical abscissa that specifies how to interact with the DUT. Within the port abscissa, you see that I give an enumerated list of response ports where the system will measure the ordinate, "Power". This isn't a port group, it's a list of ports.

The response port named "input" is really a loopback measurement of stimulus power. I assume that the measurement system hardware allows a loopback measurement of the stimulus at the DUT input. If it doesn't, then the response port is not canonical over this part of the abscissa domain, and would need to be broken down further, possibly in terms of a stimulus power abscissa. Alternatively, I could define gain itself directly in terms of a stimulus power abscissa, instead of loopback response measured at the DUT input. This alternative approach would not work well on systems that did not have loopback capability. In general, it's best to say what you really want at the highest level, and break things down till you have what the system can actually do (or at least what some standard set of interface maps can do). Don't try to make it easy on the system by precanonicalizing based on what hardware you know you have. If you try to do this, you will sacrifice portability.

The <Collapse> element is new. This tells us that the "Port" abscissa is to be collapsed by means of the algebraic equation given. Since there is only one ordinate and it's a scalar, and the abscissa is an enumerated list, I can use a simple, scalar algebraic equation to perform the collapse. The syntax for the algebraic is given as the simplistic "Power[out]/Power[in]", which expresses the calculation of Gain as a ratio of two Powers. The "Power" manifold is herein treated as an array indexed by the previously named values "in" and "out" along the axis (named "inout") that I have specified for collapsing.

I call the algebraic syntax simplistic, not because it doesn't work for this case, and many others, but because one may want to use something more complex here in general. For example, MATLAB syntax could be used, or J, or Perl, or C if you must. You could even link to external code here. Calibration strategy axis collapses need to be able to express all sorts of multidimensional array manipulations, so it would be good to pick something that worked well with that sort of thing.

Here's an example of a more sophisticated axis collapse one may need. I have assumed power measurement was canonical, but what if it isn't? If the hardware does not have some fundamental power-measuring system like a wattmeter, power would need to be measured by doing a mean-square integral of voltage, current, or some other calculation (for example, FFT) based on a block of data. The canonical ordinate in that case might be an array of voltage samples. The map collapse in that case would be summing the squares of the data array, perhaps with a primitive as an algebraic expression, perhaps with a full-blown script within a <Script> element, as sketched below.
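As a hedged sketch only, such a collapse might look something like the following. The <Script> element, the "sample" axis name, the choice of MATLAB syntax, and the 50-ohm termination are all illustrative assumptions, not part of any defined schema:

    <Collapse axis="sample">
      <Units name="Watts"/>
      <Script language="MATLAB">
        % mean-square voltage across an assumed 50-ohm termination
        Power = mean(Voltage.^2) / 50;
      </Script>
    </Collapse>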

Note how units are specified for the result collapse. The units are empty, and therefore dimensionless linear by default, with the conversion to dB left implicit by the fact that the enclosing compound "Gain" ordinate is specified as "dB". Scale and unit conversions should be performed automagically based on what unit is specified for each axis. This is facilitated by using standard string identifiers for unit names and scales and a separate unit conversion schema that can also be nicely specified in XML.
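A hedged sketch of what such a unit conversion schema might look like follows; the tag names and the idea of expressing each unit's conversion to and from a base unit are purely illustrative:

    <UnitConversions>
      <Unit name="dB">
        <BaseUnit name=""/>
        <FromBase>10*log10(x)</FromBase>
        <ToBase>10^(x/10)</ToBase>
      </Unit>
      <Unit name="dBm">
        <BaseUnit name="Watts"/>
        <FromBase>10*log10(x/0.001)</FromBase>
        <ToBase>0.001*10^(x/10)</ToBase>
      </Unit>
    </UnitConversions>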

Another point to notice is the way I introduced identifier references. This is the first time I have used the idea of reference, and it represents a watershed in this XML method of describing measurement maps. I have named an axis and some of its elements to facilitate wiring them to the algebraic function. Identifiers and references lead us to the idea of measurement map parameters.

Functional Decomposition and Scope

When I defined a compound ordinate like Gain in terms of atomic ordinates, I did something quite similar to the classic functional decomposition that is used with programming languages like C. You start with a complex function, and then partially factor it into several subroutines. Some of these subfunctions are factored again, and those again, and so on, down till you have functions that don't call any subroutines.1 Each function in such a decomposition represents a node in a tree, much in the same way as the <Measurement> elements represent nodes in the XML tree I have presented for describing measurement maps.

1 Assuming, of course, that you do not recurse.

In a classic functional decomposition, each function can have parameters, and optionally return values. This allows us to pass information downward and upward in the functional tree. In the measurement map and calibration strategy schema I have outlined, information flow is implicit between the atoms and the compounds, relying on the fact that their interfaces fit. While it would be nice to believe that this fitting would happen spontaneously on its own, realistically I need a way to specify an interface.

I have used "name=" identifiers to label things within the measurement. Let's add an explicit <ParameterList> element to say what internal parameters are passed into the measurement from outside. The names associated with these parameters become placeholders for the value passed in. Here's the calibration strategy map for "Gain" restated with an explicit parameter list.

Example 9-7. Parameter list

<Ordinate name="Gain">
  <Units name="dB"/>
  <Measurement>
    <ParameterList>
      <PortGroup>
        <ListItem name="input"/>
        <ListItem name="output"/>
      </PortGroup>
    </ParameterList>
    <Response>
      <Port name="port">
        <List>
          <ListItem name="in">input</ListItem>
          <ListItem name="out">output</ListItem>
        </List>
        <Ordinate name="Power">
          <Units name="Watts"/>
        </Ordinate>
      </Port>
    </Response>
    <Collapse axis="port">
      <Units name=""/>
      <Algebraic>Power[out]/Power[in]</Algebraic>
    </Collapse>
  </Measurement>
</Ordinate>

In addition to their use as parameter labels, I have used element "name=" identifiers as local variables within the measurement, allowing us to refer to specific abscissa values from the <Collapse> block. It would be reasonable to expect that the implied scope of a local identifier is delimited at the <Measurement>, just like the scope of functional parameters.

Measurement Parameters—A Hazard

The <ParameterList> element allows measurements to have parameters. The reason I introduced this capability is that compounds have dependencies on abscissas for calibration strategy. I needed a way to connect this together in an unambiguous way. Using named parameters seems unavoidable here.

Now that they are introduced, measurement parameters have more possible uses than just the atom-compound interface. You may want a way for the test to interface with the measurement, passing down test parameters into a fixed map with variables, rather than rewriting the map with new constants. You can use the <ParameterList> interface for this if you wish.

But do you want to do that? At this point, I start to wonder: by introducing parameters and variables, aren't I in danger of turning the XML description of measurements into a real programming language? Isn't that a bad thing?

Yes, indeed, this is a very dangerous point. Introducing reference in the form of functional parameters and local variables was a watershed for the XML stimulus response measurement map method. It potentially opens Pandora’s box, setting free all the demons that plague anything that threatens to become “real” programming. Until now, everything was pretty and perfect in a context-free way, but parameters and variables seem to threaten that austere beauty, introducing ugly semantic context.

Don't get me wrong. I'm as much a fan of gnarly old variable naming and scoping as anyone, but you must remember the goal here is to provide a system that focuses on the measurement with a minimum of computer science arcana. I am dangerously close to introducing a whole bunch of issues that are well understood by programmers, but may alienate nonprogrammers (assuming any are even still reading at this point).

Admittedly I am in danger, but XML itself comes to the rescue. Things are not as bad as they may seem. A Turing-strength programming language has virtually infinite freedom. From this freedom springs most of the problems people have with programming. But XML is different. XML allows freedom, but only in strictly regulated ways permitted by the schema and DTD. The clever folks who invented XML have already seen this hazard and have paved many ways around it.

Therefore, the trick to avoiding these dangers, I believe, has two important aspects:

1. Design the XML DTD and schema to enforce strictly unambiguous reference of measurement parameters. For example, if an ordinate requires a PortGroup parameter, make sure it gets one (a DTD fragment sketching this follows the list).

2. With the assumption of unambiguity guaranteed by the schema, allow measurement parameters and functional data flow to remain implicit as much as possible; only introduce them when absolutely necessary, or when they would make sense to a test engineer.
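As a sketch of the first point, a DTD content model can force a required parameter to be present. The fragment below is illustrative only, reusing element names from Example 9-7; it insists that every <ParameterList> contain exactly one <PortGroup>, and an XML schema language could express still finer-grained constraints:

    <!ELEMENT ParameterList (PortGroup)>
    <!ELEMENT PortGroup (ListItem+)>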

This trick is easier than it might seem. After all, look at what I have achieved so far with only very limited use of naming and reference. Furthermore, as I have already noted, test engineers are smart people. They will know, intuitively, that some measurement parameter is missing and be happy to provide it if asked at the right time. Just don't turn them into namespace accountants (i.e., programmers). If you do, they will rebel.

Describing the Measurement System with XML

By now you should have a pretty good idea how to describe a measurement with XML, but what about the measurement system itself? At some point, we need all this XML stuff to interact with real hardware. How does that happen?

A good part of the answer to these questions lies within the map manipulation process I have already described. Within calibration strategy, map canonicalization proceeds until it reaches a map expressed entirely in terms of the set of atomic abscissas and ordinates. That set represents the real hardware, at least from the point of view of the map stance. Therefore, a necessary step in describing the measurement system with XML is to identify the atomic ordinates and abscissas that the system can implement natively. Calibration strategy is then given this list; test engineers can write any measurement map they want; as long as the calibration strategy can find a way to canonicalize their map, real hardware can be told how to measure it.

Another part of the answer is given by defining the ports and modes available from the hardware. As I have described in the section titled "Ports and Modes," ports and modes specify the state of the hardware during measurement. Ports tend to indicate what DUT interfaces the system is stimulating or is measuring from. Modes tend to indicate the internal settings of the measurement system itself. Both ports and modes can be canonical or atomic. Once again, specifying the complete list of atomic ports and modes is necessary to specify the hardware from the point of view of the map stance. The map then "knows" what the hardware can do. All that remains is telling the hardware to do it.

The "telling the hardware" part of an atomic ordinate, abscissa, port, or mode is specific to the hardware implementation. It may be as simple as reading or writing a register, or as complex as you please. There are many established approaches to this problem. There are "plug and play" driver interface standards, and there are proprietary "site file" formats for describing the details needed for hardware interaction.

Any abstract system for describing hardware interactions (setting atomic ports, modes, and abscissas, or reading atomic ordinates) that I present here in this book risks being irrelevant. Hardware vendors tend to like to keep ownership of the set of hardware driver standards they support, picking ones to support that they think will sell the most of their product. They jealously guard the low-level details of interfacing with their products, preventing any other drivers from being developed, preventing any sales to people using other standards, and thus "proving" they chose the right standard to support in the first place. Therefore, I won't bother to introduce yet another standard to be shunned. I will, however, risk giving a very simple example of how an interface description could be accomplished in XML, with no intention to propose it as a generalization.

All that said, I fearlessly consider how an atomic port might be specified with enough detail to effect hardware interaction.

Starting with the trivial case, if the measurement system has just one stimulus port and one response port, and this connects to a DUT with just one gozinta and one gozouta (a.k.a. input and output), there really isn't any work to do. I can always assume that the port in the Stimulus element of the measurement map is the one stimulus port, and the port in the Response element is the one response port. Done.

Suppose now that I have a DUT with multiple inputs and multiple outputs, but I still have the single-stimulus, single-response SMS. Commonly the way people solve this is to use a switch matrix between the DUT and the measurement system. This allows an instrument with a small number of interfaces (in the one-to-many case, only one stimulus and one response interface) to make measurements at numerous DUT ports. In the one-to-many case, the matrix is really just a TDM commutator with a different name. (Multiplexing options were discussed in the section titled "Simultaneous Channels and Multiplexing.")

Figure 9-5. Measurement system, switch matrix, and DUT: the stimulus and response sides of the measurement system (controller, codec, and conditioner) each connect to the DUT through a commutator.

When a commutator-style switch matrix is used for DUT interfacing, all I need to do to specify a port in the measurement map is to somehow set the position of the commutator switch. The proper incantation I must perform to set this switch position depends on the details of the hardware interfacing, but more often than not this involves little more than writing something to a register someplace, or calling an "official" driver function, which secretly writes something to a register someplace.

Under the above set of assumptions about the hardware model, a suitable XML schema to capture the relevant details might be something as trivially simple as this:

Example 9-8. Defining ports

<PortDefinition name="output" role="Response">
  <Write address="0x1234" value="0x5678"/>
</PortDefinition>

Something as simple as that, either maintained separately, or placed within the scope of the Instrument element in the XML measurement map schema I have thus far defined, could bind the logical port named “output” to an explicit action for interacting with hardware.

To extend this schema to the purpose of setting modes and abscissas would require more structure and would get us involved in the concept of parameter reference that I discussed in the section titled "Functional Decomposition and Scope." Still, despite the additional semantic structure, it's likely that I need do little more, hardware-wise, than map referenced parameters to specific values the system writes to certain addresses. For ordinates, I will need to specify that the system should read values from certain addresses, but the concept is otherwise the same.
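As a hedged sketch of how that extension might look, a mode or atomic ordinate definition could map a named parameter onto a register write or read. Every tag name, address, and the {braces} placeholder notation below is hypothetical, in the same spirit as Example 9-8:

    <ModeDefinition name="IFBandwidth">
      <Parameter name="bandwidth"/>
      <Write address="0x2000" value="{bandwidth}"/>
    </ModeDefinition>

    <OrdinateDefinition name="Power">
      <Read address="0x2008"/>
    </OrdinateDefinition>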

Of course, the above discussion is rather simplistic. Some modes and abscissas require a complex algorithm to set. Consider the case of a frequency converter in a signal conditioner. There might be several tunable frequencies that need setting, amplifier and filter band-switches that need controlling, and so on, in order to get the conditioner tuned to the desired frequency abscissa. Similarly, with the response system, reading the digitizer may require a complex algorithm, timing considerations, and other details. Clearly, a lot more could be accomplished here. However, for reasons stated above, I'll leave the rest of the XML schema for hardware interfacing as an exercise for the interested reader, or hardware vendor.

Describing Measurement Results with XML

As discussed in the section titled "Abscissas and Ordinates," measurements are mappings (i.e., vector-valued functions) over separable and nonseparable domain grids. The typical data types for abscissa and ordinate elements are manifolds over the set of integers, real numbers, complex numbers, or even numeric arrays. Therefore, the basic requirement for a measurement results data structure is the ability to efficiently accumulate collections of tables of numeric data of the above types.

Obviously, if you define the measurement in XML, and you define the measurement system in XML, it might seem natural to record the data in an XML format. This is certainly reasonable. It’s even a good idea. Listing the actual data values measured for an ordinate right in the XML map description is a great way to create a self-documenting data structure that can be manipulated with the same set of software tools you used for manipulating the measurement prior to the acquisition of data.
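As a hedged sketch, recorded data for the oscilloscope map of Example 9-2 might look like this; the <Data> element and the particular numbers are illustrative only, not a defined schema:

    <Measurement name="Trace">
      <Abscissa name="time">
        <Units name="s"/>
        <Data>0.000 0.001 0.002 0.003</Data>
      </Abscissa>
      <Ordinate name="voltage">
        <Units name="V"/>
        <Data>0.00 0.31 0.59 0.81</Data>
      </Ordinate>
    </Measurement>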

The problem with jumping to the obvious conclusion that one should use XML as a data recording format is the existence of a plethora of possibilities for data structures and data file formats (for example, Microsoft Excel, MATLAB, DIF, SQL, flat TDF or CSV, and so on) that can accumulate measurement data. Interoperating with these to various degrees, there is a second plethora of report generation and data visualization tools. People like these tools and understand the formats they rely on. Thus, it's not possible to ignore this dual legacy of prior art and go with XML for data storage, regardless of the advantages XML might have.2

Because the options are so diverse, I won't bother surveying the topic beyond some observations regarding the basic data structure requirements of a synthetic instrument. To this end, I will explore only two different basic data structures for the storage of measurement data.

2 On the other hand, as of the date of this writing, numerous vendors of data acquisition, storage, analysis, and visualization tools have either begun implementing support for open XML data formats in their legacy tools, or announced plans to transition their proprietary data files to XML format. Most prominent of these is Microsoft Corporation.

Column and Array Data

Column data is the most general data format. It is equivalent to a typical PC spreadsheet data structure. Consider the following example:

Stim Freq   Stim Power   Supply Voltage   Atm Pressure   Resp Power   DUT Temp
(Hz)        (W)          (V)              (mBar)         (mW)         (°C)
1000        1.0          5.00             985            0.92         22.0
2000        1.0          5.00             985            0.95         22.6
1000        1.1          5.00             985            0.97         22.0

(The first four columns are abscissas; the last two are ordinates.)

Column data tables can represent any number of scalar ordinates and abscissas, including data taken over abscissas that are not separable. If efficiency were not an issue, a column data structure could serve for all measurement map data. That means you can use Excel, or any other spreadsheet format, to store map data.

Column data is inefficient in the case of separable abscissa domains. If the abscissa is separable, it is far more efficient, space-wise, to store only the factored individual scales for the abscissas rather than the outer product. The ordinate data is stored in ordinary array format. Spreadsheets can store arrays too, but tools that specialize in arrays (J, MATLAB, Mathematica) show their power here. Here's an example for two abscissas and one ordinate:

Gain (dB) vs. Frequency and Power

                          Frequency (Hz)
                   200     300     400     500
Power (mW)    1    10.1    10.2    10.3    10.4
              2    11.5    11.7    11.9    12.0
              3    12.6    12.7    12.9    13.2
              4    12.9    13.3    13.5    13.7

The array data format can easily be converted to the column data format, although the reverse is not true, in general.

Self-Documenting Features

Data should always be self-documenting. A flat file of numbers has no meaning once it is separated from the context in which it was acquired.

The requirement to self-document derives from good lab practice in general. All abscissas and ordinates must be traceable to type, title, and units. Calibration offsets and other meta-data may also be of importance, so provision should be made for attaching general attribute data to each abscissa or ordinate axis. This should include the capability for "attribute=value" style attributes or the equivalent.
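A hedged sketch of such an annotation, attached to an ordinate axis with a generic <Attribute> tag (the tag, the attribute names, and the values are illustrative only):

    <Ordinate name="Resp Power">
      <Units name="mW"/>
      <Attribute name="CalOffset" value="0.02"/>
      <Attribute name="Acquired" value="2004-06-01"/>
    </Ordinate>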

One of the great advantages of object orientation is that it facilitates self-documentation. If the measurement object comprises both the stimulus response measurement map description and the SRMM data, together, and all subsequent report generation and visualization draws from this unified object, you will accumulate a completely self-descriptive entity that paints a complete picture of your measurement. The SRMM object is to the measurement what the TPS is to the test.

Figure 9-6. Self-documenting SRMM object: the stimulus response measurement map object bundles the data and units together with scales, instrument state, transforms, algorithms, post processing, calibration, strategy graphs, history, and DUT ID.

Arrays as Elements

Some ordinates are not scalars. For example, a spectrum ordinate is a set of spectral power measurements over some domain of frequencies. Therefore, column data and outer product array data structures must include the capability of handling arrays as elements. In fact, the most general case allows a data element to be itself a column data table or outer product array. With such a hierarchical format, all possible measurement data can be efficiently and naturally represented.

One way to avoid allowing arrays as elements, but still to permit the same hierarchical freedom, is to allow relations to be defined between data sets. For example, instead of storing a list or array at one location in a table, it could be stored in a totally different table, with the two tables linked by a common code number or key. Those of you familiar with relational databases will know that most fields with array contents (other than strings) are stored in separate tables and linked by relations.

SQL Database Concepts and Data Objects

I have been talking about stimulus response measurement map (SRMM) measurements using mathematical terminology from calculus and linear algebra (arrays, vectors, manifolds, functions, mappings). There is, however, an alternative way to conceive SRMM measurements. You can think in database terms (tables, records, fields). The collection of ordinates for a given set of abscissas can be considered a record, with the individual abscissas and ordinates representing fields.

Not only can you think about data structures in database terms, you could actually use a SQL database to store measurement maps. This would allow us to conveniently sort and select portions of a larger dataset using standard SQL commands, and to relate other kinds of contextual data to the measurement: date and time, configuration state of the equipment, identifying information from the DUT, and so forth. The idea of using a real database to store measurements has many positive aspects.

The database viewpoint is particularly relevant when arrays are elements. Although linked structures can be built many ways in order to accommodate this kind of element, the methodology of relational database design can help us immensely here.

On the other hand, the creation of a hashed database can be somewhat wasteful for SRMM measurement data because random access to individual records is not a typical requirement. More often, the data is rotated, subdivided, or abstracted in a sequential process for the purpose of visualizing the data with graphs, charts, or plots. Moreover, the raw data itself tends to be very large, making storage efficiency the paramount requirement over efficiency of random access queries and over sorting and searching. Therefore, although the structure of the data will readily translate into typical database structures, a simpler, directly indexed data format for acquisition storage can often be a better choice.

HDF

I will sing the praises of XML a lot, but one alternative data format that meets most of the requirements for recording map data is the Hierarchical Data Format (HDF). HDF is a multiobject file format standardized and maintained by the National Center for Supercomputing Applications (NCSA) that facilitates the transfer of various types of scientific data between machines and operating systems. Machines currently supported include HP, Sun, IBM, Macintosh, and ordinary PC computers running most any operating system, even Microsoft Windows. HDF allows self-documenting of data content and easy extensibility for future enhancements or compatibility with other standard formats. HDF includes Java and C calling interfaces, and utilities to prepare raw image or data files for use with other software. The HDF library contains interfaces for storing and retrieving compressed or uncompressed 8-bit and 24-bit raster images with palettes, n-dimensional scientific data sets, and binary tables. An interface is also included that allows arbitrary grouping of other HDF objects.

Any object in an HDF file can have annotations associated with it. There are a number of types of annotations: labels are assumed to be short strings giving the name of a data object; descriptions are longer text segments that are useful for giving more in-depth information about a data object; file annotations are assumed to apply to all of the objects in a single file.

The scientific data set (SDS) is the HDF concept for storing n-dimensional gridded data. The actual data in the dataset can be any of the standard number types: 8-, 16-, and 32-bit signed and unsigned integers and 32- and 64-bit floating-point values. In addition, a certain amount of meta-data can be stored with an SDS, including:

The coordinate system to use when interpreting or displaying the data

Scales to be used for each dimension

Labels for each dimension and the dataset as a whole

Units for each dimension of the data

The valid maximum and minimum values for the data

Calibration information for the data

Fill or missing value information

A more general framework for meta-data within the SDS data model (allowing 'name = value' style meta-data) is also possible. There is also allowance for an unlimited dimension in the SDS data model, making it possible to append planes to an array along one dimension.

HDF is an open, public-domain standard. It is mature and well established. It represents a good alternative to XML or SQL databases for the storage and manipulation of synthetic instrument data. The HDF web page is located at http://hdf.ncsa.uiuc.edu/.


Chapter 10: Synthetic Instrument Markup Language: SIML

The intent so far has been to convey the basic concept of using eXtensible Markup Language (XML) as a language for describing measurements in a hardware-independent way. In a spirit of exploration, thus far, I have proceeded intuitively and used XML freely with no regard for any completeness. Nor have I yet become ossified onto a specific standardized approach, or yet worried about validation. For example, I weighed the alternatives of having "name" as an attribute or as a child tag.

Clearly, a real-world implementation would have more detail, and rules of consistency must be established. Before XML can be applied effectively in the complex context of a real system, with the XML source document guiding the automated implementation of measurements, I need to impose some standardization so that people don’t go writing whatever tags they want, spelling them and nesting them willy-nilly, however their mood strikes them. If allowed, the end result of that anarchy would be a document that was worthless as a source for machine processing.

To avoid this failure, I need to create a well-defined synthetic instrument markup language or SIML.

The “X” in XML means eXtensible, and so within the XML realm there is an established way for me to create my own SIML for use in defining synthetic instruments. I do this by establishing the desired document structure and describing it precisely with an appropriate document type definition (DTD) or schema. This creates the SIML schema for the synthetic instrumentation application of XML.


A DTD spells out what tags and attributes are legal for use in a particular XML application, and how those tags interrelate in the tree structure. The DTD allows a particular XML document to be validated for correctness: you must use elements in strict accordance with the DTD if you want your document to parse as valid. A schema is an extension of this idea; it’s based on one of the schema description languages and gives a stronger way to specify the document structure. A valid, well-formed document can then be the input to an automated process that turns the abstract measurement into a real measurement implemented by a synthetic measurement system.
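To make the DTD-versus-schema distinction concrete, here is a rough sketch of how the same Instrument structure from Example 10-1 might be written in W3C XML Schema rather than a DTD. This fragment is my own illustration, not part of the SIML definition developed in this chapter:

<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- An Instrument holds zero or more Measurements and has a required name -->
  <xs:element name="Instrument">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="Measurement" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
      <xs:attribute name="name" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
  <!-- A Measurement holds empty Abscissa and Ordinate elements, each with a name -->
  <xs:element name="Measurement">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Abscissa" minOccurs="0" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="name" type="xs:string" use="required"/>
          </xs:complexType>
        </xs:element>
        <xs:element name="Ordinate" minOccurs="0" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="name" type="xs:string" use="required"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="name" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>

Because a schema language can also constrain data types (a frequency value could be declared as a decimal rather than mere character data), it provides the stronger structural control mentioned above; for the examples in this chapter, however, a DTD is sufficient.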

On the other hand, the description of real-world synthetic measurement systems is a dynamic endeavor. Requirements change. I wouldn’t want to paint myself into a corner with a rigid descriptive standard that stifled innovation. Fortunately, DTD and schema descriptions give XML a good balance of stability and flexibility, allowing construction of a syntactic toolkit that describes measurements consistently in all their complexity. That’s why I picked XML in the first place!

A DTD for Measurement Description

To begin the development of a DTD that gives a basic framework for verifying measurement descriptions, let’s return to my simple oscilloscope example. Here is the XML description of the oscilloscope with a document type declaration added at the top. This represents a complete, valid XML document.

Example 10-1. Complete XML document

<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>

The first line at the top of the document indicates that this document is in XML format and is not standalone. That means that there exists a DTD against which it may be validated.


The second line names this kind of document as “Instrument” and tells us where the DTD can be found. In this case the DTD is a local file, but it could just as easily be a URL, or even be given in-line within the XML document itself. I alluded to the structured freedom DTDs afford, and this is one example. The DTD provides structure, but you are free to use whatever DTD you want. Should you want to extend the range of allowable elements in an “Instrument” document, you don’t need to appeal to any standards committee; you just change the DTD.

What does this DTD look like? It’s a file named Instrument.dtd with the following lines in it:

Example 10-2. Simple DTD

<!ELEMENT Instrument (Measurement*)>
<!ELEMENT Measurement (Abscissa*, Ordinate*)>
<!ELEMENT Abscissa EMPTY>
<!ELEMENT Ordinate EMPTY>

<!ATTLIST Instrument name CDATA #REQUIRED>
<!ATTLIST Measurement name CDATA #REQUIRED>
<!ATTLIST Abscissa name CDATA #REQUIRED>
<!ATTLIST Ordinate name CDATA #REQUIRED>

Each line starting with <!ELEMENT is called an element declaration. Essentially, I am defining each of the possible elements you may use and what other elements they contain.

For example, the first declaration

<!ELEMENT Instrument (Measurement*)>

says that the “Instrument” element contains zero or more Measurement elements (the star after the “Measurement” label indicates the ‘zero or more’ quantifier). The other lines make similar statements about what each element contains. The final two lines say that Abscissa and Ordinate elements are EMPTY. That won’t be true for very long, but it is true for the simple oscilloscope in Example 10-1.

Also illustrated in this example is how attributes are defined with a declaration like this:

<!ATTLIST Instrument name CDATA #REQUIRED>


This declaration says the “Instrument” element has one required attribute named “name” that will contain Character Data (CDATA).
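To see what validation buys, here is a small counter-example of my own. A document like the following is well formed, so a non-validating parser would accept it, but a validating parser checking it against the DTD of Example 10-2 would reject it for two reasons, noted in the comment:

<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument>
  <Measurement name="Trace">
    <!-- Invalid: Instrument is missing its required name attribute,
         and Abscissa is declared EMPTY but contains text content -->
    <Abscissa name="time">seconds</Abscissa>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>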

Now that you’ve seen the basics, let’s try a more complex example. Suppose you wanted to validate the network analyzer example. You can use the following DTD:

Example 10-3. More sophisticated DTD

<!ELEMENT Instrument (Measurement+)>
<!ELEMENT Measurement (Stimulus?, Response?)>
<!ELEMENT Stimulus (Port+)>
<!ELEMENT Response (Port+)>
<!ELEMENT Port ((Constant|PortGroup|List), Abscissa*, Ordinate*)>
<!ELEMENT Abscissa (Units*, (Constant|UniformSteps|List))>
<!ELEMENT Ordinate (Units*, Measurement*)>
<!ELEMENT Constant EMPTY>
<!ELEMENT Units EMPTY>
<!ELEMENT UniformSteps EMPTY>
<!ELEMENT List (ListItem*)>
<!ELEMENT PortGroup (ListItem*)>
<!ELEMENT ListItem (#PCDATA)>

<!ATTLIST Instrument name CDATA #REQUIRED>
<!ATTLIST Measurement name CDATA #REQUIRED>
<!ATTLIST Abscissa name CDATA #REQUIRED>
<!ATTLIST Ordinate name CDATA #REQUIRED>
<!ATTLIST Units name CDATA #REQUIRED>
<!ATTLIST Constant value CDATA #IMPLIED>
<!ATTLIST Constant name CDATA #IMPLIED>
<!ATTLIST UniformSteps start CDATA #REQUIRED
          increment CDATA #REQUIRED
          count CDATA #REQUIRED>

This DTD has been expanded to handle the additional elements. There are also some new structural nuances. Studying Example 10-3, a reader with a sharp eye will see how to express alternation with the | symbol, that is, how an element can contain either one or another of some selection of elements. There are also examples of elements specified as one-or-more (+) or as zero-or-one (?) element wildcards.

Continuing on from here, explaining the detailed syntax of a DTD or XML Schema simultaneously with its application to synthetic instruments is outside the scope of this book. At this point, the reader who wants to take this technology further is well advised to learn more about the details of XML through any of the fine XML books[B6] on the shelf of your favorite coffee-bar-bookstore. Specific resources for XML applied to synthetic instruments are available at the www.synthetic-instruments.com web site.

In the following sections, I will outline some more of the XML detail needed in a typical synthetic measurement system description.

More SIML Details

Our measurement descriptions can be divided into the following major categories:

Global Measurement Elements prepare the system for a measurement and affect the way measurements are performed in a general sense. They allow the client to control the context and properties of measurement execution. They can be used to place limits on a measurement, adjust global properties of the system, or select stimulus and response interfaces to the DUT. Measurement system ports and modes fall into this category as well as being abscissa elements.

Calibration Processing Strategy Elements specify how maps are canonicalized and data is to be calibrated. They affect what data is taken and how data is processed after the measurement. Calibration strategy can alter or completely redefine the actual measurements to be performed by transforming the map.

Abscissa Elements describe the measurement axes, determining the domain of stimuli applied to the DUT. You have already seen abscissa elements in my simple descriptions. The abscissa elements define both the measurement axes and their ordering. Abscissa elements are often stimuli, but may be modes or ports or driven responses based on map inversion. In choosing the ordering of the abscissas, you decide which independent variable is varied most rapidly.

Ordinate elements describe the measurements to be performed and data to be recorded over the domain. The members of the defined list of ordinates represent the actual measurements performed on the DUT. Typically these are responses, but sometimes may be stimuli after map inversion.
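Putting these categories together, a rough sketch of a complete measurement description looks like the following. The element names come from the full SIML DTD in Appendix B; the comments mapping elements to the four categories are my own annotation, and the Collapse usage is purely illustrative:

<Instrument name="Gain Tester">
  <Measurement name="Gain">
    <!-- Global measurement elements: select the DUT interfaces -->
    <ParameterList>
      <PortGroup>
        <ListItem>input</ListItem>
        <ListItem>output</ListItem>
      </PortGroup>
    </ParameterList>
    <!-- Abscissa elements: the domain of stimuli applied to the DUT -->
    <Stimulus>
      <Port>
        <Constant value="input"/>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <UniformSteps start="1" increment="1" count="10"/>
        </Abscissa>
      </Port>
    </Stimulus>
    <!-- Ordinate elements: the data recorded over that domain -->
    <Response>
      <Port>
        <Constant value="output"/>
        <Ordinate name="Power">
          <Units name="dBm"/>
        </Ordinate>
      </Port>
    </Response>
    <!-- Calibration/processing strategy: post-process the acquired map -->
    <Collapse axis="Frequency">
      <Algebraic>mean</Algebraic>
    </Collapse>
  </Measurement>
</Instrument>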

Locked Abscissas

Thus far, I have described abscissas as independent variables. That description pretty clearly implies that abscissas are to be set independently. Indeed, it is true that by default, abscissa domains are rastered through their fully independent outer product.

There are cases, however, when it’s a good idea to restrict this independence. As I discussed in the section titled “Domains,” abscissa domains are sometimes restricted into narrower regions by being locked or banded together.

If an axis is locked, it is no longer varied independently, but rather is varied in tandem with another selected independent axis, possibly with some fixed offset. If instead it is banded to another axis, it is varied over a restricted neighborhood near the other axis. Abscissas can only be locked or banded to an independent axis with the same units.

I will express locking and banding in SIML through some new elements and by another venture into the dangerous realm of names and reference. Typical uses of locking and banding include locking some response parameter to a stimulus parameter; the locking thereby spans both stimulus and response elements. In general, since any two abscissas can be locked or banded together, even across parent elements like stimulus and response, the locking associations may not follow the tree structure thus far defined for SIML. Because locking and banding associations may transcend strictly tree-structured associations, SIML must say explicitly which abscissa is the master of any other abscissa to which it is locked or banded. The only practical way to do this is with an identifier.

The syntax that accomplishes banding and locking can be seen in the SIML definition of a distortion analyzer given in Example 10-4. This instrument makes a measurement of two-tone intermodulation. The stimulus comprises two separate tones of slightly different frequency, but of the same power. A response measurement is made of the power in a distortion product at a known frequency.

One stimulus tone is at frequency f1 and the other is at f2, with their frequency spacing fΔ = f2 – f1. The response to measure is the third-order intermodulation product located at 2f2 – f1. In Example 10-4, I assume that fΔ = 0.1 MHz, which puts 2f2 – f1 (equal to f1 + 2fΔ) at 0.2 MHz above f1.

Example 10-4. Distortion analyzer

<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument name="Distortion Analyzer">
  <Measurement name="Third Order Intermodulation">
    <Stimulus name="Tone1">
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Constant value="-10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <List>
            <ListItem>1.000</ListItem>
            <ListItem>2.000</ListItem>
            <ListItem>3.000</ListItem>
          </List>
        </Abscissa>
      </Port>
    </Stimulus>
    <Stimulus name="Tone2">
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Constant value="-10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>
        </Abscissa>
      </Port>
    </Stimulus>
    <Response>
      <Port>
        <Constant value="output"/>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <Locked stimulus="Tone1" abscissa="Frequency" offset="0.2"/>
        </Abscissa>
        <Ordinate name="Power">
          <Units name="dB"/>
        </Ordinate>
      </Port>
    </Response>
  </Measurement>
</Instrument>

Note the new “Locked” element tag I have introduced:

<Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>

This empty element serves to specify the value of the abscissa. The element attributes unambiguously specify the referenced master abscissa: not only do they say what abscissa type (Frequency) is the master, they also say which stimulus (Tone1) contains that master abscissa. The SIML schema would be in error if there were any ambiguity in the locking specification.

Abscissas can only be locked to other abscissas with compatible units. Trying to lock abscissas with incompatible units (for example, a frequency abscissa to a power abscissa) would be another schema error. In this case, we are locking frequency in MHz to frequency in MHz, so there is no problem with unit conversion or with math. See how I have given a frequency offset for the locked abscissas using the “offset” attribute? When (Tone1, Frequency) is set at 1.0 MHz, (Tone2, Frequency) will be at 1.1 MHz, and the response (unnamed) will be measured at frequency 1.2 MHz.

Banded Abscissas

Banded abscissas work just like locked abscissas, but with an added twist. In addition to an offset, banded abscissas can have an increment and count. The banded abscissa can then step, independently, beginning at the specified offset from the master.

For example, in our distortion analyzer it might be a good idea to dither the power of one of the stimulus tones slightly so as to be sure to find the spot where the two response tone powers are exactly equal. It also makes sense to measure the fundamentals as a response rather than relying on stimulus calibration. Here’s an enhanced distortion analyzer that does this using abscissa banding.

Example 10-5. Enhanced distortion analyzer

<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument name="Distortion Analyzer">
  <Measurement name="Third Order Intermodulation">
    <Stimulus name="Tone1">
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Constant value="-10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <List>
            <ListItem>1.000</ListItem>
            <ListItem>2.000</ListItem>
            <ListItem>3.000</ListItem>
          </List>
        </Abscissa>
      </Port>
    </Stimulus>
    <Stimulus name="Tone2">
      <Port>
        <Constant value="input"/>
        <Abscissa name="Power">
          <Units name="dBm"/>
          <Banded stimulus="Tone1" abscissa="Power" offset="-0.05"
                  increment="0.01" count="10"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>
        </Abscissa>
      </Port>
    </Stimulus>
    <Response>
      <Port>
        <Constant value="output"/>
        <Abscissa name="Frequency">
          <Units name="MHz"/>
          <Banded stimulus="Tone1" abscissa="Frequency" offset="-0.1"
                  increment="0.1" count="4"/>
        </Abscissa>
        <Ordinate name="Power">
          <Units name="dB"/>
        </Ordinate>
      </Port>
    </Response>
  </Measurement>
</Instrument>

In Example 10-5, I used banding for two purposes. The first purpose is to dither the stimulus power over ten steps spaced by 0.01 dB. The second purpose is to measure the power at four response frequencies: the two intermod products along with the two fundamentals.
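To make the numbers concrete: with the master Tone1 power fixed at -10 dBm, the banded Tone2 power steps through -10.05, -10.04, ..., -9.96 dBm (ten steps of 0.01 dB, starting 0.05 dB below the master). Likewise, when Tone1 is at 1.0 MHz (and Tone2 therefore at 1.1 MHz), the banded response frequency takes the four values 0.9, 1.0, 1.1, and 1.2 MHz: the lower intermod product, the two fundamentals, and the upper intermod product, respectively.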

There is one final interesting comment to make about the distortion analyzer map in Example 10-5. I used banded abscissas to dither the stimulus power of one of the tones slightly so as to be sure to find the spot where the two response tones are equal. I could have, instead, specified the response powers as locked abscissas. That would have implied an inverse map for most measurement systems. The measurement system would have to find the necessary stimulus power that made the response powers equal. One way it could find that stimulus setting would be for it to invert the map and dither the stimulus, as I have shown in Example 10-5. The dithered axis would then be collapsed by interpolating the data to find the point where the response powers were equal and returning data for just that interpolated point.


Constraints

Constraints on manifolds are needed for many reasons. Maybe the foremost among these is the need to protect hardware from damage. There are cases where either the DUT or the measurement system is vulnerable to damage should a stimulus or response venture into some forbidden region. The simplest example of this is power supply voltage. Most systems can’t withstand a supply voltage much higher than the one they are designed to operate from. There are other cases where incorrect frequencies, improper control sequencing, or excess signal power levels can lead to catastrophe.

A less dire reason for constraints, but nonetheless an important one, is to guide the process of calibration strategy. I have noted in the section called “Inverse Maps” that a calibration strategy that relies on map inversion can, in some cases, lead to multiple solutions or branches. Proper application of constraints can eliminate ambiguous branches and simplify the problem of canonicalizing and optimizing maps.

The markup presented in Example 10-6 shows how to apply numeric constraints to an ordinate. The <Constraint> element is a sibling to <Units> and naturally is measured in the same units as the element being constrained.

Example 10-6. Constraints

<Response>
  <Port>
    <Constant value="output"/>
    <Ordinate name="Power">
      <Units name="dB"/>
      <Constraint role="hard" max="10"/>
      <Constraint role="soft" max="0" min="-10"/>
    </Ordinate>
  </Port>
</Response>

Constraints are specified as “hard” or “soft” thresholds. If a “hard” limit is exceeded, the measurement immediately aborts. If a “soft” limit is exceeded, the measurement continues after some programmed corrective action is taken by the system. Obviously, other “role” attributes, and other kinds of thresholds, could be defined. Reference identifiers could be used to specify relative constraints (for example, Voltage1 always greater than Voltage2).
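For instance, syntax along the following lines is imaginable. This is purely hypothetical; the Constraint element as declared in Appendix B carries only numeric max and min attributes, so a relative constraint would require extending the DTD:

<Response>
  <Port>
    <Constant value="output"/>
    <Ordinate name="Voltage1">
      <Units name="V"/>
      <!-- Hypothetical relative constraint: abort if Voltage1 drops below Voltage2 -->
      <Constraint role="hard" min="Voltage2"/>
    </Ordinate>
    <Ordinate name="Voltage2">
      <Units name="V"/>
    </Ordinate>
  </Port>
</Response>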

Modulation

In Chapter 9, I described the signal coding hierarchy and how measurement systems can be asked to measure coded aspects of signals. As a hierarchy, signal coding can be expressed in a tree structure, and therefore XML can be used as a way to specify signal coding. I’ve already shown a little of this with the distortion analyzer example. The response tuned frequency is really a mode abscissa for removing the carrier component from a bandpass signal, leaving just the modulation. Other response mode abscissas can be defined in SIML, including matched filtering.

On the stimulus side, we can put together encoded waveforms that are implemented using compound stimulus capabilities the hardware possesses, or, after stimulus map canonicalization, are synthesized with DSP algorithms in the stimulus controller. Consider this example, where I define an FM-modulated stimulus.

Example 10-7. Signal encoding

<Stimulus name="Tone1">
  <Port>
    <Constant value="input"/>
    <Abscissa name="Power">
      <Units name="dBm"/>
      <Constant value="-10"/>
    </Abscissa>
    <Abscissa name="Frequency">
      <Units name="MHz"/>
      <Constant value="1.0"/>
    </Abscissa>
    <Abscissa name="Modulation">
      <Envelope type="FM">
        <Abscissa name="Deviation">
          <Units name="kHz"/>
          <Constant value="5"/>
        </Abscissa>
        <Abscissa name="Frequency">
          <Units name="kHz"/>
          <Constant value="10"/>
        </Abscissa>
      </Envelope>
    </Abscissa>
  </Port>
</Stimulus>

Ordinate Modifiers: Averaging and Statistical Manipulations

Averaging is an option on many conventional instruments, especially those capable of digital storage. This option is applied to an ordinate. The ordinate y(x) measurement is repeated N times and its value averaged with the usual formula:

    \bar{y}(x) = \frac{1}{N} \sum_{i=1}^{N} y_i(x)

The average is used as the resulting ordinate. Unless averaging is an atomic feature of the hardware, I would expect that the usual way to implement averaging is to canonicalize it as an axis of repeated ordinate measurements and then collapse that axis with the above averaging formula.

Here’s how averaging can be specified in SIML.

Example 10-8. Averaging

<Response>
  <Port>
    <Constant value="output"/>
    <Average n="10">
      <Ordinate name="Power">
        <Units name="dB"/>
      </Ordinate>
    </Average>
  </Port>
</Response>
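If the hardware lacks an atomic averaging feature, the canonicalization described above might transform that specification into something like the following sketch. This is my own illustration, using the Collapse and Algebraic elements from the full DTD in Appendix B; the “mean” collapse function shown here is hypothetical:

<Measurement name="Averaged Power">
  <Response>
    <Port>
      <Constant value="output"/>
      <!-- Repetition becomes an explicit axis of repeated ordinate measurements -->
      <Abscissa name="Repetition">
        <UniformSteps start="1" increment="1" count="10"/>
      </Abscissa>
      <Ordinate name="Power">
        <Units name="dB"/>
      </Ordinate>
    </Port>
  </Response>
  <!-- The repetition axis is then collapsed with the averaging formula -->
  <Collapse axis="Repetition">
    <Algebraic>mean</Algebraic>
  </Collapse>
</Measurement>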


CHAPTER 11

Ten Mistakes in Synthetic Measurement System Design

Throughout this book, I have attempted to give you a clear and consistent plan for the design of synthetic instrumentation. After reading my plan, and after considering your own individual application requirements, you should be able to design and develop your own SI that reaps the promised benefits of this design approach.

Unfortunately, it has been my experience that the best-laid plans to develop synthetic instrumentation go oft astray when battered by the erratic yet often powerful forces that beset system development in real-world industry. Designers don’t really want to stray from the plan, but in the fast pace of development, it’s easy to get hit by a latecomer requirement or an unexpected design shortfall and get accidentally derailed as a consequence. By the time you realize what has happened, you have left the road leading to Synthetic Instrument City and are sidetracked, on your way to Modular-ville or Rack-em-Stackopolis with no way to turn back.

Forewarned is forearmed, and in that spirit I will list ten common detours that designers can encounter during development of synthetic instrumentation. My hope is that, knowing in advance about these common diversions, you will be able to navigate through the rough spots and ultimately stay on track.

Fixing Performance or Functionality Shortfalls Exclusively by Adding Hardware

Synthetic instrumentation systems are a hybrid of software and hardware developed to meet some measurement need as expressed by a set of specifications. As the system is developed, performance estimates are made, and at some point the predicted performance is held up next to the required performance and compared. The system designers would have to be living a pretty lucky and enchanted life if all the required performance were met. Normally there is a shortfall somewhere.

Faced with an explicit shortfall in the performance of a hybrid of software and hardware, probably the most common reaction is to modify the hardware in some way so as to address it, rather than modifying the software, the system design as a whole, or the requirement itself. Instead, just hardware is changed; frequently something is added or complexity is increased.

This is a mistake.

The reason this is a mistake is that any system-level requirement shortfall is a system-level design shortfall. As such, it should be addressed at the system level in the best manner possible. In some cases this may involve changing hardware, but an unbiased evaluation must be made. Unfortunately, software engineers are often not even invited to consider solutions to things that are “obviously” hardware oriented at the system level. I believe this is wrong and represents a primary source of failure and schedule/budget overrun in synthetic measurement system development.

To avoid this mistake, designers should be aware of the solve-with-hardware knee-jerk bias and compensate somehow. For example, software engineers should go to hardware meetings, learn the issues, and solve them as best they can. There should be a sincere effort to develop a software solution to every requirement shortfall, even if hardware seems to be the thing that must change. Sometimes you might be surprised.

“At first I thought we needed an amplifier, but integration solved our sensitivity problem.”

“I was sure we needed another filter, but we coded an adaptive nulling algorithm in the DSP to remove the interference.”

“That spec was impossible to meet with the hardware we were planning, so we convinced the customer that their measurement could be made just as accurately with the system as is.”

“By multiplexing we were able to avoid the need for another channel.”


Those are just a few examples of what system designers might come up with after they pushed through their initial instinct to add hardware to solve system-level problems.

Fixing Hardware Mistakes with Software

It may seem that this is the opposite of the first mistake. And it is. “Fixing it with hardware” and “Fixing it with software” are the Scylla and Charybdis of synthetic measurement system design. It’s difficult to steer a wise and safe course that does not wreck on either peril.

Now I am concerned about an over-dependence on software to provide solutions for correctable mistakes made in hardware. Hardware-oriented system engineers are prone to say “we’ll just fix it in software” when a shortfall appears that “obviously” can be fixed with an appropriate algorithm. The stepping motor that spins backward, the connector pins that are scrambled, the register that can’t be read after being written, the sensor that requires compensation for numerous unrelated parameters (the list is endless) are all examples of things that really should be fixed by the person who made the mistake.

Again, the solution to this side of the dilemma is to keep a systems view of all things. Software team members need to be present when these decisions are made, and need to be familiar with the hardware issues and politically empowered enough to say no to “just fixing it with software.” Perhaps more beneficially, the hardware team members, who are so often oblivious to the innards of the system software, need to get up to speed on these details. A suggestion to “just fix it with software” is not acceptable. Instead the suggestion should be to “fix it in precisely this way, at this exact point, in the software system design.”

Adding Modes or Features Dedicated to Specific Measurements

“It’s so much easier just to add a voltmeter to the system than to add all that signal conditioner stuff and DSP we need to measure boring old DC voltage with our fancy 96 GHz digitizer. After all, this fancy digitizer fundamentally makes a rotten voltmeter. A slower but more precise A/D is more appropriate. So let’s just add the voltmeter module, shall we?”

—Anonymous


OK. That may be an extreme example, but you should get the point. The tendency always is to add a new, conventional-style instrument to an SMS rather than to do the extra work needed to make the synthetic design handle the full generality of measurements that need to be made.

This is a mistake in SMS design, although I’ll be the first one to admit that other factors may play a role here. Time and money may be saved in the short run by abandoning the purist SMS design philosophy. I will not argue that point.

But in the long run, the mistake compounds and threatens everything. Eventually you find traditional instruments doing all the measurements, and only a ghost of a synthetic design lurking in the system. All the redundancy and instrument specificity you tried to avoid is now back.

Designing Synthetic Instruments Procedurally

Object-oriented software design techniques have been widely known for over a decade, but still there is a pervasive bias toward procedural software design. This bias is very strong in the ATE community. After all, if you talk about “test procedures” and “test sequences,” everyone knows what you mean. But if you use the word “object,” the eyes around you glaze over and people start to drift away to refresh their coffee.

Using procedural methodologies to design synthetic instruments is a big mistake. The reason this is such a big mistake is that synthetic instruments are built on a hierarchy that naturally fits with the OO concept of inheritance. An RMS distortion meter is a special kind of RMS voltmeter, which is a special kind of voltmeter, which is a special kind of meter, which is a special kind of instrument. Similarly, maps, abscissas, ordinates, signals, and block acquisitions form another family tree.

Not taking advantage of the natural structure of this hierarchy results in software redundancy, which leads to a maintenance nightmare. If one improves the voltmeter, the improvements are not necessarily reflected in the RMS voltmeter, or the RMS distortion meter, unless somebody redundantly improves them all in the same way.

Failing to orient the design around objects results in related things being redundantly scattered all throughout the system. Units are a classic example of this. Some ordinates are best expressed in terms of a certain unit: volts, amps, dBm, for example. It seems to me that the best place to say what units a given ordinate has is in the ordinate itself. But without object orientation, information about the units that a given ordinate is expressed in is hidden in all sorts of places: reports, test parameters, pass/fail criteria, database field names, graphs, and so on. Ironically, sometimes you look at the code that computes the ordinate itself and, unless you are lucky enough to find a comment, the units being used are anyone’s guess!

It is my recommendation that all synthetic instrument designers learn OOD principles. They should resist the temptation to just put one foot in front of the other, as they have done in the past, and consider where things really should go in order to eliminate redundancy. The limitations imposed by the realities of non-OO tools and legacy applications are fading. Newer tools and applications are more amenable to system level OOD. But no matter how good the tools, SMS design will never turn object-oriented unless the designers force themselves to think that way.

Meeting Legacy Instrument Specifications

Although one of the greatest advantages of synthetic measurement systems is their ability to perform legacy measurements in lieu of some obsolete instrument, there is no more certain way to disembowel a synthetic measurement system design effort than to specify that the synthetic instrument needs to replace a legacy instrument, or to meet the same specifications as some legacy instrument. This approach invariably leads the SMS in directions that are counterproductive. The result is a replacement for the legacy system that is not in any true sense a synthetic instrument.

The reason that this approach goes wrong is that the legacy instrument specifications were chosen in the context of a specific implementation of that old instrument. The legacy specifications reflect that implementation, often more than they reflect the underlying measurement. Therefore, if you try to use the legacy specifications as any sort of guidance for a synthetic implementation, you are likely to be led astray. You end up addressing issues that are irrelevant to the goal of making the underlying measurement.

Consider, for example, a measurement of relative humidity. Maybe you want to replace wet/dry bulb thermometers with a new digital humidity gage based on hygroscopic polymer sensors. Would you use all the specs on the wet/dry bulb system as a specification for the digital system? The size of the liquid reservoir? The minimum air flow for evaporation? Of course not. What you would do is abstract the measurement performed by the wet/dry bulb system and require the digital system to do the same measurement.

To take another specific example, consider an RF spectrum analyzer. This instrument is a form of radio receiver that sweeps across some wide frequency band and plots the ordinate of power versus frequency. In a legacy spectrum analyzer data sheet, you will find all sorts of specifications about the “sweep speed” and the “video averaging,” but in a modern synthetic spectrum analyzer there may be a CCC system that digitizes and computes. There is no sweep; there is no video. To understand them at all, the meaning of those old specifications needs to be recast in terms that are relevant to a DSP implementation. And after you have done the work to figure out what “sweep speed” actually means when nothing is sweeping, what have you gained? Yes, there is some correspondence between a wagon wheel, a car tire, and an oar, but does that correspondence tell us anything of value when it comes to designing these devices? I doubt it.

One might argue that some legacy instrument specifications are relevant to the measurement performed. Obviously, the size, weight, dimensions, interfaces, and power requirements of a legacy instrument are irrelevant, but the accuracy specifications aren’t. It may seem reasonable to believe that it’s possible to select those legacy specifications that have relevance to the underlying measurement, and just use those as specifications for the synthetic instrument.

Unfortunately, there is no such free lunch. Legacy instrument specifications (particularly those that appear on manufacturer’s data sheets) are always tuned to the strengths and weaknesses of a particular hardware implementation and are always colored by marketing considerations. There is an implicit desire to put one’s best foot forward. The specifications that make it to data sheets are chosen in order to look good to customers and sell the product, not to be a specification for manufacturing the product.

Data sheets are poor sources for quantitative information regarding accuracy. Any quantitative information is seldom defined with accuracy estimates in terms of standard uncertainty, but rather is given as a number with no error quantification, or possibly with absolute “accuracy” numbers. Consequently, any quantitative meaning is absent.


You should realize, therefore, that legacy instrument specifications are a historic, qualitative legacy of deprecated technology, not something carved on stone tablets to be preserved in our culture for thousands of years. They represent a distorted snapshot of what was possible in the past with a particular instrument. From them, you should attempt to glean a rough qualitative idea of what the legacy instrument was capable of, measurement-wise and accuracy-wise, keeping in mind that past and present marketing bias colors everything, and quantitative statements of accuracy are a contradiction in terms.

From the qualitative understanding you can gain from the study of legacy specifications, in combination with the present-day test and measurement goals for the new synthetic instrument, you can develop specific measurement requirements that need to be addressed by a new design. You can then produce a design that seems to fit the need. This design can be analyzed, simulated, and prototyped to determine its quantitative performance in the required test scenario. The loop then closes as better requirements are written and the design is revised. The end result is a new instrument that addresses today’s need.

Developing Stimulus Separate from Response

As I discussed earlier, the one unique redundancy that synthetic instrumentation can eliminate is the set of response components dedicated to calibrating stimulus, and the stimulus components dedicated to calibrating response. By using a system-level optimization, you can factor out these sorts of redundancies readily.

If stimulus and response subsystems are developed in isolation, then it becomes impossible to design with the assumption of closure. Therefore, redundant response functions must be added to stimulus, and vice versa. Cost and complexity go up. In general, this is bad.

However, it must be said that there are worse crimes on this list. Sometimes circumstances dictate that a stimulus system must be used in isolation, or at least that there is a firm requirement for it to have the capability of independent operation. In these situations, adding redundant components to close leveling loops and provide calibration signals is simply the cost of meeting the requirement.


Not Combining Measurements

The power of the stimulus response measurement map view of measurements is that it allows multiple measurements to be combined into a single, fast acquisition of a single map. When the system is allowed to combine measurements, acquiring data takes far less time than when the same measurements are made separately. In some cases, orders of magnitude less time.

Unfortunately, there seems to be a reluctance to combine measurements. This reluctance is a result of the way TPSs are constructed: each measurement is separate in a firmly procedural sequence. In the normal worldview, considering that measurements might be thought of as nonprocedural entities borders on insanity. That one could trust an instrument to combine measurements into a high-speed, optimized map is absurd. Inevitably one must specify each switch to throw and each knob to twist for each measurement, mustn’t one? Anything else is madness, isn’t it?

No, it isn’t madness. It is a fact borne out in practice that a combined map does the same set of integrated measurements faster and more efficiently than the same measurements can be done as separate tests. The true madness is to ignore the gain in performance this represents and continue to do measurements separately.

Hardware Modularity as a Distraction

As I have explained, synthetic instruments are not necessarily modular. In fact, the whole idea of running specific measurements on general-purpose hardware tends to discourage modular approaches. After all, the point of modularity is to be able to conveniently plug in the specific hardware you need. If you can do all your measurements with the same CCC cascade, no modules need to be swapped. The hardware can just sit there, happily, doing all sorts of different measurements. All the modularity has been swept into software.

Thus, efforts toward encouraging modularity in synthetic instrument designs can be a sort of false god. These modularization efforts can drive the design away from a pure synthetic approach. The easier it is to put in different hardware modules, the less incentive there is to make one particular cascade of hardware do all the measurement tasks.


On the other hand, the practical reality of realizable hardware may dictate that one CCC cascade cannot do everything you need. Consequently, you will need to switch in a new conditioner, or codec, or controller. You might as well modularize the portion replaced, so long as you do so in a manner that doesn’t undermine the foundation of your synthetic instrument system.

Bad Lab Procedure

This mistake has been alluded to several times and is a contributor to other mistakes in this list. Even so, it deserves to be listed by itself as it is such a grievous and pernicious error.

Anyone who has taken an introductory college-level lab course in any hard science, be it physics or chemistry or biology, has been drilled in proper measurement procedure. You learned how to observe and take data from your observations, again and again. You learned how to use control experiments and other techniques to avoid data tainted by observer bias and wishful thinking. You learned how to tabulate and statistically analyze data, how to calculate sample mean and variance, perform confidence testing, draw X/Y plots with proper divisions, markings, and labeling, and so forth. These are basic metrologic and scientific skills that anyone taking these courses either learned (at least a little) or else failed the course.

Why, then, is evidence of a proper grasp of scientific and metrologic techniques so seldom seen in the operation of modern automated measurement systems?

This problem is not specific to synthetic instruments, but as a relatively immature technology, synthetic instruments show flaws that older technologies have had more time to correct. Thus, it has been my experience that first-generation SI efforts are often blemished with basic lab procedure errors that would have earned someone a “D” in physics lab 101. These are not subtle errors, mind you, but simple things, like not putting units on measurements, not making properly labeled plots, or not doing any rudimentary statistical analysis of results to justify the conclusions being drawn.

All these mistakes fall under the category of bad lab procedure. Despite the fact that modern measurement equipment can allow even the unskilled to make measurements, making good measurements still requires skill. Even so, anybody can make these mistakes, whether they are skilled or not, and whether they use a synthetic instrument or not. However, because synthetic instruments are more automated and more user-friendly, they facilitate shortcuts and consequent blunders. Therefore, it behooves the designers and operators of synthetic instruments not to forget the basic lab procedure that undoubtedly earned them all an “A” back in college.

Fear of Change

Synthetic instruments represent a new way to design instruments. They are different from what came before. They are not a concrete, hardware thing, but rather a software abstraction. The combination of innovation and abstraction loses a lot of people right from the start. They don’t get it. They keep looking for the rack-em-stack-em, or the modules, or even the virtual instruments. They need something familiar to latch on to that isn’t there.

There is a legend about an early inventor of the quartz watch. Supposedly, he took his invention to various mechanical watchmaking companies trying to sell it to them. They looked at his masterpiece and could not see a wristwatch. Where is the spring? Where is the escapement? How does this thing tell time?

They didn’t like it. It wasn’t a real wristwatch. Clearly, he eventually made his point and today, quartz watches dominate. Mechanical watches represent only a small fraction of the total watch sales.

I can tell numerous anecdotes that are basically identical to this story, but with synthetic instruments playing the role of the quartz watch. When you show people a synthetic measurement system, particularly if the measurement software is designed along OO principles, they look at it and can’t see the instruments. They ask, “How does this thing do a test?”

This is one reason that I believe LabVIEW and virtual instruments have been successful: not because it is a superior approach, but because you can see the instruments. The virtual instruments have graphical front panels that evoke the feel of a legacy instrument. These are wired together from the “back panel” with the interconnections and their procedural interactions clearly in evidence, at least in the smaller systems that are deceptively used to sell the approach.


Synthetic instrumentation systems aren’t as concrete as virtual instruments, and as such can be a harder sell to certain people despite their numerous advantages. Therefore, there is often a tendency to make the mistake of concretizing synthetic instruments to placate those who want to see familiar, concrete hardware patterns. Frivolous hardware modularization is a symptom of this disease, as is legacy instrument virtualization on the software side.

The way to avoid this mistake is to focus on the measurements. Express them as maps without any legacy instrumentation context. Think like scientists, metrologists, and statisticians.


APPENDIX A

Acronym Glossary

4GL: Fourth Generation Language
AC: Alternating Current
A/D: Analog-to-Digital Converter
ALU: Arithmetic Logic Unit
AM: Amplitude Modulation
API: Application Program Interface
ARB: Arbitrary Waveform Generator
ASIC: Application-Specific Integrated Circuit
ASP: Analog Signal Processing
ATE: Automated Test Equipment
ATLAS: Abbreviated Test Language for All Systems
ATML: Automated Test Markup Language
BASIC: Beginner's All-Purpose Symbolic Instruction Code
BER: Bit-Error Rate
BSD: Berkeley Software Distribution
BSG: Aeroflex Broadband Signal Generator
CCC: Conditioner, Codec, Controller
CD: Compact Disc
CDATA: Character Data
CDM: Code Division Multiplexing
CDMA: Code Division Multiple Access
CDR: Critical Design Review
CIPM: Comité International des Poids et Mesures
COTS: Commercial Off-the-Shelf


CPU: Central Processing Unit
CRM: Chinese Restaurant Menu
CRT: Cathode Ray Tube
CSV: Comma Separated Values
CW: Continuous Wave
D/A: Digital-to-Analog Converter
DC: Direct Current
DDS: Direct Digital Synthesis
DECL: Differential Emitter Coupled Logic
DIF: Data Interchange Format
DMM: Digital Multimeter
DOM: Document Object Model
DPU: Digital Processing Unit
DSP: Digital Signal Processing
DTD: Document Type Definition
DUT: Device Under Test
EBNF: Extended Backus-Naur Form
EIA: Electronic Industries Alliance
ENOB: Effective Number Of Bits
ENR: Excess Noise Ratio
FDM: Frequency Division Multiplexing
FFT: Fast Fourier Transform
FIFO: First In, First Out
FM: Frequency Modulation
FPGA: Field Programmable Gate Array
FS: Full Scale
GPIB: General Purpose Interface Bus
GPS: Global Positioning System
GUI: Graphical User Interface
HDF: Hierarchical Data Format
HP: Hewlett-Packard
HSDSP: High-Speed Digital Signal Processing
HTML: HyperText Markup Language


IBM: International Business Machines
IEEE: Institute of Electrical and Electronics Engineers
IF: Intermediate Frequency
IMD: Intermodulation Distortion
ISO: International Organization for Standardization
JTIDS: Joint Tactical Information Distribution System
LCU: Local Calibration Unit
LRU: Line Replaceable Unit
LSDSP: Low-Speed Digital Signal Processing
LVDS: Low Voltage Differential Signaling
MIDI: Musical Instrument Digital Interface
MS: Microsoft Corporation
MSK: Minimum Shift Keying
NCSA: National Center for Supercomputing Applications
NIST: National Institute of Standards and Technology
NTSC: National Television System Committee
OMAAT: One Measurement At A Time
OO: Object-Oriented
OOD: Object-Oriented Design
PAL: Programmable Array Logic
PC: Personal Computer
PCI: Peripheral Component Interconnect
PECL: Positive Emitter-Coupled Logic
PLD: Programmable Logic Device
PM: Phase Modulation
PRF: Pulse Repetition Frequency
RAM: Random Access Memory
RF: Radio Frequency
RFMTS: RF Multifunction Test System
RGB: Red, Green, Blue
RMS: Root Mean Square
ROI: Return On Investment
SAX: Simple API for XML


SCPI: Standard Commands for Programmable Instruments
SDM: Space Division Multiplexing
SDS: Scientific Data Set
SFT: System Functional Test
SGML: Standard Generalized Markup Language
SI: Synthetic Instrument
SIML: Synthetic Instrument Markup Language
SMS: Synthetic Measurement System
SNR: Signal-to-Noise Ratio
SQL: Structured Query Language
SRMM: Stimulus Response Measurement Map
SSR: Sustained Sequential Recording
TDF: Tab Delimited Format
TDM: Time Division Multiplexing
T&M: Test and Measurement
TP: Test Program
TPS: Test Program Set
TTL: Transistor-Transistor Logic
UI: User Interface
UPS: Uninterruptible Power Supply
URL: Uniform Resource Locator
US: United States
USA: United States of America
UUT: Unit Under Test
VI: Virtual Instrument
VSA: Vector Signal Analyzer
VSP: Vector Signal Player
VSS: Vector Signal Simulator
VXI: VME bus eXtensions for Instrumentation
WWI: World War One
XML: eXtensible Markup Language


APPENDIX B

Basic SIML DTD

Example B-1. Complete SIML DTD

<!ELEMENT Instrument (PortDefinition*, Measurement+)>
<!ELEMENT Measurement (ParameterList?, Stimulus*, Response*, Collapse*)>
<!ELEMENT ParameterList ((PortGroup)*)>
<!ELEMENT Stimulus (Port+)>
<!ELEMENT Response (Port+)>
<!ELEMENT Port ((Constant|PortGroup|List), Abscissa*, (Ordinate|Average)*)>
<!ELEMENT Abscissa (Units*, Constraint*, (Envelope|Locked|Banded|Constant|UniformSteps|List))>
<!ELEMENT Ordinate (Units*, Constraint*, Measurement*)>
<!ELEMENT Collapse (Units*, Algebraic*)>
<!ELEMENT PortDefinition (Write*)>
<!ELEMENT Envelope (Abscissa*)>
<!ELEMENT Average (Ordinate)>
<!ELEMENT Write EMPTY>
<!ELEMENT Constant EMPTY>
<!ELEMENT Units EMPTY>
<!ELEMENT Constraint EMPTY>
<!ELEMENT Locked EMPTY>
<!ELEMENT Banded EMPTY>
<!ELEMENT UniformSteps EMPTY>
<!ELEMENT List (ListItem*)>
<!ELEMENT PortGroup (ListItem*)>
<!ELEMENT ListItem (#PCDATA)>
<!ELEMENT Algebraic (#PCDATA)>


<!ATTLIST Instrument name CDATA #REQUIRED>
<!ATTLIST Measurement name CDATA #IMPLIED>
<!ATTLIST Abscissa name CDATA #REQUIRED>
<!ATTLIST Constraint role CDATA #REQUIRED>
<!ATTLIST Constraint max CDATA #IMPLIED>
<!ATTLIST Constraint min CDATA #IMPLIED>
<!ATTLIST Locked stimulus CDATA #IMPLIED>
<!ATTLIST Locked abscissa CDATA #IMPLIED>
<!ATTLIST Locked offset CDATA #IMPLIED>
<!ATTLIST Banded stimulus CDATA #IMPLIED>
<!ATTLIST Banded abscissa CDATA #IMPLIED>
<!ATTLIST Banded offset CDATA #IMPLIED>
<!ATTLIST Banded increment CDATA #IMPLIED>
<!ATTLIST Banded count CDATA #IMPLIED>
<!ATTLIST Ordinate name CDATA #REQUIRED>
<!ATTLIST Envelope type CDATA #REQUIRED>
<!ATTLIST Units name CDATA #REQUIRED>
<!ATTLIST Collapse axis CDATA #REQUIRED>
<!ATTLIST Stimulus name CDATA #IMPLIED>
<!ATTLIST Response name CDATA #IMPLIED>
<!ATTLIST ListItem name CDATA #IMPLIED>
<!ATTLIST Port name CDATA #IMPLIED>
<!ATTLIST Constant value CDATA #IMPLIED>
<!ATTLIST Average n CDATA #REQUIRED>
<!ATTLIST Write value CDATA #REQUIRED>
<!ATTLIST Write address CDATA #REQUIRED>
<!ATTLIST PortDefinition name CDATA #REQUIRED>
<!ATTLIST PortDefinition role CDATA #REQUIRED>
<!ATTLIST UniformSteps start CDATA #REQUIRED
          increment CDATA #REQUIRED
          count CDATA #REQUIRED>


Bibliography

Books

[B0] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman, Compilers. Copyright © 1986 Addison-Wesley Pub Co. 0201100886. Addison-Wesley Pub Co.

[B1] A. Bruce Carlson, Communication Systems. Copyright © 1968 McGraw Hill, Inc. 0-07-009957-X. McGraw Hill.

[B2] Elliott W. Cheney, Introduction to Approximation Theory. Copyright © 2000 American Mathematical Society; 2nd edition. 0821813749. American Mathematical Society.

[B3] Daniel C. Dennett, The Intentional Stance. Copyright © 1987 The Massachusetts Institute of Technology. 0-262-54053-3. The MIT Press.

[B4] R. Buckminster Fuller, Synergetics. Copyright © 1975 MacMillan Publishing Company. 002541870X. MacMillan Publishing Company.

[B5] G. H. Hardy and W. W. Rogosinski, Fourier Series. Copyright © 1956 Cambridge University Press. 0521052084. Cambridge University Press.

[B6] Elliotte Rusty Harold and W. Scott Means, XML In a Nutshell. Copyright © 2002 O’Reilly & Associates, Inc. 0-596-00292-0. O’Reilly.

[B7] John L. Hennessy, David A. Patterson, and David Goldberg, Computer Architecture: A Quantitative Approach; 3rd Edition. Copyright © 2002 Morgan Kaufmann. 1558605967. Morgan Kaufmann.

[B8] Steven M. Kay, Modern Spectral Estimation: Theory and Application. Copyright © 1988 Prentice Hall. 013598582X. Prentice Hall.

[B9] E. L. Lehmann, Testing Statistical Hypotheses. Copyright © 1997 Springer Verlag; 2nd edition. 0387949194. Springer Verlag.


[B10] Alan V. Oppenheim and Ronald W. Schafer, Digital Signal Processing. Copyright © 1975 Alan V. Oppenheim and Bell Telephone Laboratories, Inc. 0-13-214635-5. Prentice Hall.

[B11] P.P. Vaidyanathan, Multirate Systems and Filter Banks. Copyright © 1993 Prentice Hall PTR. 0-13-605718-7.

[B12] John M. Wozencraft and Irwin Mark Jacobs, Principles of Communication Engineering. Copyright © 1965 John Wiley & Sons, Inc. 0-471-96240-6.

Periodicals

[P1] Nick Tredennick and Brian Shimamoto, “Go Reconfigure”. IEEE Spectrum (Susan Hassler, ed.), The Institute of Electrical and Electronics Engineers, Inc. 0018-9235. Copyright © 2003 The Institute of Electrical and Electronics Engineers, Inc. 36–40.

Conference Papers

[C1] Robert R. Hatch, Randall Brandeberry, and William Knox, “Universal High Speed RF Microwave Test System”. AutoTestCon 2003, Anaheim, CA, USA, September 2003. IEEE. Copyright © 2003 Institute of Electrical and Electronics Engineers.

[C2] D. J. Johnson and P. Roselli, “Using XML As a Flexible, Portable, Test Script Language”. AutoTestCon 2003, Anaheim, CA, USA, September 2003. IEEE. 0-7803-7837-7. Copyright © 2003 Institute of Electrical and Electronics Engineers.


About the Author

C.T. Nadovich is a working engineer with over 20 years of experience in the design and development of advanced instrumentation for RF and microwave test. He owns a private consulting company, Julia Thomas Associates, that is involved in many electronic automated-test-related design and development efforts at the forefront of the Synthetic Instrumentation revolution.

In addition to his hardware engineering work, Nadovich is an accomplished software engineer. He owns and manages an Internet provider company, JTAN.COM, and is involved with numerous software projects involving network programming.

Nadovich received BSEE and MEEE degrees from Rensselaer Polytechnic Institute in 1981, with a specialty in network theory and numerical analysis.

Since graduation, he has worked in industry for over 20 years, guiding ground-up development of a number of sophisticated signal processing systems, including systems for analog, digital, microwave, and RF automated measurement. This work has given him extensive experience in both electronic automated test hardware and software, including test and measurement from DC to 94 GHz, and real-time DSP software using high-performance digital systems and embedded computers.

While working in industry as an engineer, he was also a competitive bicycle racer. In 1994, Nadovich united his skills as an engineer with his love for bicycle racing when he designed the 250-meter velodrome used for the 1996 Olympics in Atlanta.

C.T. Nadovich currently resides in Sellersville, PA, along with his wife, Joanne, and their two children.


Index

A

Abbe, Ernst, 150
abscissa, 86, 89, 94, 113, 149
  port, 94
  quantization, 149, 154
  resolution, 149
abscissa de-embedding, 156
accumulator, 38
accuracy, 14, 44, 57, 102, 112, 121, 140, 146, 149, 210
Acqiris, 66
adaptive, 49, 50, 57, 101, 102, 103, 104, 149
Aeroflex, 51, 52, 69, 71
alias, 127, 128
alphabet, 65
amplifier, 24, 46, 55, 148
analog, 116, 124
analog baseband, 120
analog signal, 121
analysis, 6
analyzer, 14, 64, 66, 171, 175, 194, 196, 199, 202, 210
  AP240, 66
  distortion, 196, 199, 202
  network, 171, 175, 194
  spectrum, 210
  vector signal, 64
antenna, 131
architecture, 26, 27
array, 186
ATLAS, 162
ATML, 162
atom, 179
atomic, 105, 107, 182
atomic abscissa, 106
attenuator, 48, 102

B

banded, 89, 90, 196, 198, 200
bandpass, 45, 126, 127, 129
bandpass sampling, 134
bandwidth, 26, 122, 124, 125, 130
  double-sided, 130
  single-sided, 130
baseband, 119, 124, 128
battery tester, 10
BER, 117
Berlekamp, Jack, xv
Birurakis, Bill, xv
bit error rate, 117, 118
bookshelf system, 158
Bronfeld, Jeff, xvi
BSD, 41

C

C++, 19
calibration, 75, 76, 104, 105, 107, 111, 137, 143, 144, 145, 174, 175, 181, 195
  object, 155
  operational, 76, 143
  primary, 75
  procedures, 75
  standards, 75
  stimulus, 144, 145, 156
  strategy, 104, 105, 107, 111, 174, 175, 181, 195
  verification, 143
canonicalization, 96, 111
canonical maps, 96
cascade, 21, 29, 35, 55
CCC, 21
Celerity, 51
chickens, 1
child, 109, 170, 171


Chinese restaurant menu, 23
chronometer, 2
CIPM, 140
closure, 12, 42, 64, 73, 211
codec, 22, 36
collapse, 176, 177
commutator, 30, 182, 183
compass, 2
compiler, 165
complexity, 81
compound, 105, 107, 174, 179
compound abscissa, 106
compound ordinate, 106
computer, 3
conditional branch, 40
constraints, 111, 201
conversion, 45
  up, 45
converter, 27, 52, 55, 63, 71, 73, 132
  down, 55, 63, 71, 73
  up, 27, 52, 71, 73
cost, 27
crosstalk, 29, 57
cubic spline, 146

D

damage, 201
data, 185, 186
  array, 185
  column, 185, 186
database, 187
Dawes, William Rutten, 150
de-embedding, 154, 156
decimate, 63
delay, 43, 51
demodulator, 10, 63, 64
detector, 64
digital coded baseband, 121
digitizer, 62, 66, 183
direct real analog baseband signals, 119
distortion, 59, 118, 131
domain, 87, 89, 90, 91
drift, 29
driver, 19, 78
DTD, 191
DUT, 23
dynamic range, 57, 103

E

ENOB, 44, 59
equality, 9
exceptions, 113
extended Backus-Naur form, 165

F

factor, 34
feedback, 101, 103, 104
fidelity, 49, 56, 57, 58, 59, 62
filter, 43, 46, 50, 51, 55, 59, 64, 123, 126, 130, 131
  digital, 59
  matched, 64
  preselector, 131
filtered, 32
flattened, 114
football, xiii
Fourier series, 86, 125, 133
FPGA, 66
Frey, Dan, xvi
Fuller, Buckminster, 18
functional decomposition, 178

G

gain, 48, 56, 57, 93, 98, 105, 174, 175, 177

gate array, 41
generator, 24, 36, 42, 51
  arbitrary waveform, 36
  CS25000, 51
  pulse, 24, 42, 53
  signal, 24
generic, 6, 22, 49, 52, 65, 85, 115
GPIB, 3, 71

H

harmonic, 133
harmonics, 57, 131


HDF, 78, 188
headroom, 60, 61
history, 1, 9
hologram, 34
HTML, 167
hypothesis testing, 154
hysteresis, 99

I

I/Q detection, 132
IEEE-488, 3
image, 131
image rejection, 130
inheritance, 208
instrument, 1, 3, 4, 5, 6, 16, 17, 18, 19, 83, 85
  analog, 19
  analytic, 6
  classic, 16, 85
  modular, 4
  musical, 16, 85
  rack-em-stack-em, 1, 3
  synthetic, 5
  traditional, 1
  virtual, 16, 17, 18, 83, 85
integration, 13
intermodulation, 58, 197
interpolated, 148
interpolating, 200
interpolation, 45, 51, 66, 99, 101, 145, 146, 147, 148, 151, 152, 156
interpreted, 165
inversions, 114
inverted, 145
isometrological, 96
item, 168

K

key, 187

L

LabVIEW, 17, 18, 19
LabWindows, 19
LabWindows/CVI, 71, 77
legacy, 15, 16, 209, 210
length, 2
Lett, Chris, xvi
leveling, 12
linearity, 56, 57
Linux, 41
locked, 89, 90, 196, 198, 200
loopback ordinate, 106

M

manifold, 83, 84, 88, 90, 91
manifolds, 184
map, 96, 97, 98, 100, 102, 103, 104, 107, 108, 111, 115, 145, 174, 177, 181, 191, 200
  calibration, 145
  canonical, 107, 174, 181
  canonicalization, 111
  child, 98
  collapse, 177
  expansion, 97
  flatten, 97
  inverse, 100, 102, 103
  inversion, 97, 108, 145, 200
  manipulation, 104
  optimization, 111
  orthogonalization, 108
  parent, 98
  ravel, 98
  stance, 115, 181
  validation, 111, 192
map canonicalization, 114, 202
map canonical form, 106
map data, 95
map description, 95
map inversion, 195
map manipulation, 181
map optimization, 111
map transformation, 111
map validation, 111
marketing, 27
matched filter, 65
measurand, 138, 139, 153
measurement, 141


measurement, x, xi, 2, 3, 99, 108, 212
  algorithm, 108
  automated, 3
  device, 2
  integration, 212
  manual, 2
  map, x, xi
  relative, 99
measurement integration, 13
measurement map, 86, 88, 91
  inverse, 88
measurement system, 5
  synthetic, 5
menu, 23
microprocessor, 6, 41
mistakes, 55
mixer, 128
mode, 93, 94, 95, 113, 181
modular, 4, 17, 25, 143, 157, 158, 205, 212
modulation, 45, 73, 117, 125, 126, 127, 202
  phase, 73
modulator, 38, 52
multiplexing, 28, 29, 30, 31, 32, 33, 34, 182

N

Newton, Isaac, 86
NIST, 75, 76, 138, 140
noise, 48, 56, 57, 59, 60, 61, 118
null, 57
Nyquist, 124, 125, 127
Nyquist, Harry, 122

O

object-oriented, 18, 139, 166, 208, 209
object orientation, 186
OO, 18
ordinate, 86, 88, 105, 113, 153, 154, 174
  atomic, 105
  canonical, 88
  compound, 105, 174
  precision, 153
  quantization, 153, 154
orthogonalize, 96
oscilloscope, 74, 169, 192, 193
outer product, 89, 186
overload, 61

P

parameter, 25, 65, 93
parameters, 178, 183
parent, 109
peak, 60
periodic, 37, 133
phase, 38, 52, 126
  distortion, 52
  increment, 38
  modulation, 38
phase increment, 38
phase modulation, 73
pigs, 1
playback, 36, 38, 53
playlist, 40
port, 92, 93, 94, 95, 111, 113, 174, 175, 181, 182, 183
  group, 174
positioner, 91
precision, 140
procedural, x, 26, 139, 208, 214
pSOS, 41
PXI, 13

Q

quadrature, 132
quantization, 24, 48, 50, 58, 60, 61, 119, 151, 152

R

rack-em-stack-em, 25, 70, 143, 158, 214
rack-em-stackopolis, 205
rack-mount, 3
ravel, 113
Rayleigh, John William Strutt Lord, 150
Raytheon, 69, 71
recorder, 62


redundancy, 11, 17, 25, 208, 209
reference, 179
requirements, 7, 34
resolution, 44, 150, 151, 152
  super, 151, 154
reuse, 109
rise time, 51
risk, 27
rotate, 96
rotations, 114

S

safety, 55, 201
sampling, 44, 151
  interval, 151
sampling theorem, 123
schema, 191
scope, 177, 180
SCPI, 162, 163, 164
script, 77, 78, 166
separable, 83, 84, 89, 90, 184
sequencing, 39, 53, 68
Shannon, Claude, 122
signal, 12, 19, 27, 46, 115, 116, 117, 119, 120, 121, 125
  analog, 19, 116, 119
  bandpass, 27, 125
  digital, 19, 121
  encoding, 46
  generator, 12
  hierarchy, 117, 120
  stance, 115
signal coding, 202
signal generator, 24
signal processing, 10
  digital, 10
simultaneous, 28, 34, 63
site configuration, 94
slicing, 96
slowdown, 14
soft front panel, 17
Sparrow, ?, 150
speed, 41, 44, 70, 78, 79, 112, 147, 149
spline, 146, 148
spurious, 47, 49, 58, 59
SQL, 187
stability, 103, 104
standard, 143
state machine, 36, 41, 78, 166
state table, 78
stimulus, 27
  compound, 27
stimulus ordinate, 106
structure, 184, 186
  data, 184, 186
superheterodyne, 131
sweet spot, 57, 61
switch matrix, 28, 74, 182
synchronize, 42
synergy, 18
synthesis, 6, 35, 37
  controller, 35
  direct digital, 37

systolic, 40

T

tag, 167, 170
test, x, 14, 55, 70, 77, 78, 81, 141
  engineer, x, 14, 55
  engineers, 81
  executive, 78
  parameter, 77
  program set, 77
  self, 70
  speed, 14
test engineer, 180, 181
test engineering user, x
test procedures, 208
test program, 159
test programs, 79
time, 147
track, 40
trigger, 40, 42, 66, 68
TRM1000C, 71, 77
Turing, 180
Turing, Alan, 40, 108
Turing machine, 40


U

uncertainty, 141, 143, 144, 210
units, 177, 209
up-converter, 39

V

video, 117, 121
virtual, 85
virtual instrument, 83
voltmeter, 94
Von Neumann, John, 40
VU meter, 61
VXI, 4, 13
vxWorks, 41

W

waveform, 36
waveform playback, 36
Windows, 41


ELSEVIER SCIENCE CD-ROM LICENSE AGREEMENT

PLEASE READ THE FOLLOWING AGREEMENT CAREFULLY BEFORE USING THIS CD-ROM PRODUCT. THIS CD-ROM PRODUCT IS LICENSED UNDER THE TERMS CONTAINED IN THIS CD-ROM LICENSE AGREEMENT (“Agreement”). BY USING THIS CD-ROM PRODUCT, YOU, AN INDIVIDUAL OR ENTITY INCLUDING EMPLOYEES, AGENTS AND REPRESENTATIVES (“You” or “Your”), ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, THAT YOU UNDERSTAND IT, AND THAT YOU AGREE TO BE BOUND BY THE TERMS AND CONDITIONS OF THIS AGREEMENT. ELSEVIER SCIENCE INC. (“Elsevier Science”) EXPRESSLY DOES NOT AGREE TO LICENSE THIS CD-ROM PRODUCT TO YOU UNLESS YOU ASSENT TO THIS AGREEMENT. IF YOU DO NOT AGREE WITH ANY OF THE FOLLOWING TERMS, YOU MAY, WITHIN THIRTY (30) DAYS AFTER YOUR RECEIPT OF THIS CD-ROM PRODUCT RETURN THE UNUSED CD-ROM PRODUCT AND ALL ACCOMPANYING DOCUMENTATION TO ELSEVIER SCIENCE FOR A FULL REFUND.

DEFINITIONS

As used in this Agreement, these terms shall have the following meanings:

“Proprietary Material” means the valuable and proprietary information content of this CD-ROM Product including all indexes and graphic materials and software used to access, index, search and retrieve the information content from this CD-ROM Product developed or licensed by Elsevier Science and/or its affiliates, suppliers and licensors.

“CD-ROM Product” means the copy of the Proprietary Material and any other material delivered on CD-ROM and any other human-readable or machine-readable materials enclosed with this Agreement, including without limitation documentation relating to the same.

OWNERSHIP

This CD-ROM Product has been supplied by and is proprietary to Elsevier Science and/or its affiliates, suppliers and licensors. The copyright in the CD-ROM Product belongs to Elsevier Science and/or its affiliates, suppliers and licensors and is protected by the national and state copyright, trademark, trade secret and other intellectual property laws of the United States and international treaty provisions, including without limitation the Universal Copyright Convention and the Berne Copyright Convention. You have no ownership rights in this CD-ROM Product. Except as expressly set forth herein, no part of this CD-ROM Product, including without limitation the Proprietary Material, may be modified, copied or distributed in hardcopy or machine-readable form without prior written consent from Elsevier Science. All rights not expressly granted to You herein are expressly reserved. Any other use of this CD-ROM Product by any person or entity is strictly prohibited and a violation of this Agreement.

SCOPE OF RIGHTS LICENSED (PERMITTED USES)

Elsevier Science is granting to You a limited, non-exclusive, non-transferable license to use this CD-ROM Product in accordance with the terms of this Agreement. You may use or provide access to this CD-ROM Product on a single computer or terminal physically located at Your premises and in a secure network or move this CD-ROM Product to and use it on another single computer or terminal at the same location for personal use only, but under no circumstances may You use or provide access to any part or parts of this CD-ROM Product on more than one computer or terminal simultaneously.

You shall not (a) copy, download, or otherwise reproduce the CD-ROM Product in any medium, including, without limitation, online transmissions, local area networks, wide area networks, intranets, extranets and the Internet, or in any way, in whole or in part, except that You may print or download limited portions of the Proprietary Material that are the results of discrete searches; (b) alter, modify, or adapt the CD-ROM Product, including but not limited to decompiling, disassembling, reverse engineering, or creating derivative works, without the prior written approval of Elsevier Science; (c) sell, license or otherwise distribute to third parties the CD-ROM Product or any part or parts thereof; or (d) alter, remove, obscure or obstruct the display of any copyright, trademark or other proprietary notice on or in the CD-ROM Product or on any printout or download of portions of the Proprietary Materials.

RESTRICTIONS ON TRANSFER

This License is personal to You, and neither Your rights hereunder nor the tangible embodiments of this CD-ROM Product, including without limitation the Proprietary Material, may be sold, assigned, transferred or sub-licensed to any other person, including without limitation by operation of law, without the prior written consent of Elsevier Science. Any purported sale, assignment, transfer or sublicense without the prior written consent of Elsevier Science will be void and will automatically terminate the License granted hereunder.


TERM

This Agreement will remain in effect until terminated pursuant to the terms of this Agreement. You may terminate this Agreement at any time by removing from Your system and destroying the CD-ROM Product. Unauthorized copying of the CD-ROM Product, including without limitation, the Proprietary Material and documentation, or otherwise failing to comply with the terms and conditions of this Agreement shall result in automatic termination of this license and will make available to Elsevier Science legal remedies. Upon termination of this Agreement, the license granted herein will terminate and You must immediately destroy the CD-ROM Product and accompanying documentation. All provisions relating to proprietary rights shall survive termination of this Agreement.

LIMITED WARRANTY AND LIMITATION OF LIABILITY

NEITHER ELSEVIER SCIENCE NOR ITS LICENSORS REPRESENT OR WARRANT THAT THE INFORMATION CONTAINED IN THE PROPRIETARY MATERIALS IS COMPLETE OR FREE FROM ERROR, AND NEITHER ASSUMES, AND BOTH EXPRESSLY DISCLAIM, ANY LIABILITY TO ANY PERSON FOR ANY LOSS OR DAMAGE CAUSED BY ERRORS OR OMISSIONS IN THE PROPRIETARY MATERIAL, WHETHER SUCH ERRORS OR OMISSIONS RESULT FROM NEGLIGENCE, ACCIDENT, OR ANY OTHER CAUSE. IN ADDITION, NEITHER ELSEVIER SCIENCE NOR ITS LICENSORS MAKE ANY REPRESENTATIONS OR WARRANTIES, EITHER EXPRESS OR IMPLIED, REGARDING THE PERFORMANCE OF YOUR NETWORK OR COMPUTER SYSTEM WHEN USED IN CONJUNCTION WITH THE CD-ROM PRODUCT.

If this CD-ROM Product is defective, Elsevier Science will replace it at no charge if the defective CD-ROM Product is returned to Elsevier Science within sixty (60) days (or the greatest period allowable by applicable law) from the date of shipment.

Elsevier Science warrants that the software embodied in this CD-ROM Product will perform in substantial compliance with the documentation supplied in this CD-ROM Product. If You report a significant defect in performance in writing to Elsevier Science, and Elsevier Science is not able to correct same within sixty (60) days after its receipt of Your notification, You may return this CD-ROM Product, including all copies and documentation, to Elsevier Science and Elsevier Science will refund Your money.

YOU UNDERSTAND THAT, EXCEPT FOR THE 60-DAY LIMITED WARRANTY RECITED ABOVE, ELSEVIER SCIENCE, ITS AFFILIATES, LICENSORS, SUPPLIERS AND AGENTS, MAKE NO WARRANTIES, EXPRESSED OR IMPLIED, WITH RESPECT TO THE CD-ROM PRODUCT, INCLUDING, WITHOUT LIMITATION, THE PROPRIETARY MATERIAL, AND SPECIFICALLY DISCLAIM ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

If the information provided on this CD-ROM contains medical or health sciences information, it is intended for professional use within the medical field. Information about medical treatment or drug dosages is intended strictly for professional use, and because of rapid advances in the medical sciences, independent verification of diagnosis and drug dosages should be made.

IN NO EVENT WILL ELSEVIER SCIENCE, ITS AFFILIATES, LICENSORS, SUPPLIERS OR AGENTS, BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, WITHOUT LIMITATION, ANY LOST PROFITS, LOST SAVINGS OR OTHER INCIDENTAL OR CONSEQUENTIAL DAMAGES, ARISING OUT OF YOUR USE OR INABILITY TO USE THE CD-ROM PRODUCT REGARDLESS OF WHETHER SUCH DAMAGES ARE FORESEEABLE OR WHETHER SUCH DAMAGES ARE DEEMED TO RESULT FROM THE FAILURE OR INADEQUACY OF ANY EXCLUSIVE OR OTHER REMEDY.

U.S. GOVERNMENT RESTRICTED RIGHTS

The CD-ROM Product and documentation are provided with restricted rights. Use, duplication or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraphs (a) through (d) of the Commercial Computer Restricted Rights clause at FAR 52.227-19 or in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013, or at 252.211-7015, as applicable. Contractor/Manufacturer is Elsevier Science Inc., 655 Avenue of the Americas, New York, NY 10010-5107 USA.

GOVERNING LAW

This Agreement shall be governed by the laws of the State of New York, USA. In any dispute arising out of this Agreement, you and Elsevier Science each consent to the exclusive personal jurisdiction and venue in the state and federal courts within New York County, New York, USA.