
SECTION 10
PROCESS CONTROL IMPROVEMENT∗

James F. Beall IV, Eastman Chemical Company, Longview, Texas

William L. Bialkowski, EnTech Control Engineering Inc., Toronto, Ontario, Canada

Jimmy G. Converse, Sterling Chemicals, Inc., Texas City, Texas

Mark T. Coughran, Fisher Controls International, Inc., Marshalltown, Iowa

John Edwards, Process NMR Associates, LLC (Foxboro), Danbury, CT 06810

Gregory Gervasio, Process Analytical Technology, Solutia Inc., St. Louis, Missouri

Tony Harding, Spectrace (Division of Thermo Instrument Systems), Fort Collins, Colorado

J. B. Klahn, Applied Automation, Elsag-Bailey, Bartlesville, Oklahoma

K. K. Konrad, INTEK Corporation, Houston, Texas

Paul Luebbers, Solutia Inc., Cantonment, Florida

Gregory K. McMillan, Solutia Inc., St. Louis, Missouri

Michael J. Pelletier, Spectroscopy Products Group, Kaiser Optical Systems, Inc., Ann Arbor, Michigan

R. J. Proctor, GAMMA-METRICS, San Diego, California

Joseph P. Shunta, P. E., E. I. du Pont de Nemours, Wilmington, Delaware

Joseph Zente, Epsilon Industries, Austin, Texas

∗ Persons who authored complete articles or subsections of articles, or who otherwise cooperated in an outstanding manner in furnishing information and helpful counsel to the editorial staff.


WORLD-CLASS MANUFACTURING
INTRODUCTION
MODEL FOR IMPROVING PROCESS CONTROL TO ACHIEVE BUSINESS BENEFITS
ANALYSIS TO IDENTIFY CONTROL IMPROVEMENTS
    Identify Key Product Properties and Process Variables
    Identify the Need for Improved Measurement and Control
    Statistical Metrics
    Assessment of Measurement and Controls
    Estimate Benefits for the Improvements
    Quality Stake
    Yield Stake
    Throughput Stake
SUSTAIN THE BENEFITS
    Performance Metrics
CONCLUSIONS
REFERENCES
PLANT ANALYSIS, DESIGN, AND TUNING FOR UNIFORM MANUFACTURING
INTRODUCTION
PROCESS VARIABILITY
    The Process as a Network
    Steady-State Plant Design
    Economic Opportunity
    Sources of Variability
    Operating Constraints, Uptime, Efficiency
    Solid versus Nonsolid Product
    Process Example—Paper Making
    Manufacturing Objectives and Process Dynamics
    Process Control Strategy—Paper Machine Blending
    Variability Audit Procedure
    Variability Examples
    Diagnostic Principles
TIME SERIES ANALYSIS
    Time Series Analysis Tools
    Real-Time Data Acquisition
    Sampling Theory—Time Series Data
    Selecting Sampling Rates
    Data Aliasing
    Antialiasing Filters
    Statistical Analysis
    Normal or Gaussian Distribution
    Stochastic Data Structures and Ideal Signals
    Histogram
    Spectral Analysis
    Fourier Transform
    Fast Fourier Transform
    Power Spectrum
    Cumulative Spectrum
    Spectral Analysis Plotting Methods
    Spectral Analysis Windowing and Detrending
    Cross-Correlation and Autocorrelation Functions
CONTROL LOOP PERFORMANCE FOR UNIFORM MANUFACTURING
    Identifying the Manufacturing Requirements—Fine Paper Machine Example
    Ziegler–Nichols Tuning
    Coordinated Loop Tuning Based on Operational Requirements
    Rule of Thumb for Maintaining Constant Ingredient Ratios
    Rules of Thumb for Process Interaction
    Rules of Thumb for Cascade Loops
    Rules of Thumb for Buffer Inventory Storage Level Control
    Tuning Rules of Thumb for Uniform Manufacturing—Summary
CONTROL LOOP PERFORMANCE, ROBUSTNESS, AND VARIABILITY ATTENUATION
    Resonance and Bode's Integral
    Lambda Tuning Concept
    Controller Types
    Industrial Controllers
    Common Lambda Tuning Rules
    Impact of Dead Time
    Control Loop Robustness and Stability Margins
    The Control Loop Performance-Robustness Envelope—Speed of Response versus Robustness
    Identifying Plant Dynamics—Open-Loop Step Tests
CONTROL ACTUATOR AND TRANSMITTER DEFICIENCIES
    Control Valve Nonlinearities
    Control Valve Dynamic Specification
    Variable-Speed Drives
    Transmitter Deficiencies
INTEGRATED PROCESS DESIGN AND CONTROL-PUTTING IT ALL TOGETHER
DEFINING TERMS AND NOMENCLATURE
REFERENCES
CONTROL VALVE RESPONSE
STATIC PERFORMANCE MEASURES
DYNAMIC PERFORMANCE MEASURES
VALVE SIZE AND INHERENT FLOW CHARACTERISTIC
VALVE FLOW STEADINESS
FRICTION AND DRIVE SHAFT DESIGN
SELECTION OF ACTUATOR AND ACCESSORIES
DYNAMIC TEST SIGNAL AMPLITUDE AND SHAPE
PROCESS VARIABLE AS THE ULTIMATE OUTPUT
SUMMARY: CONTROL VALVE PRACTICES FOR BEST RESPONSE
REFERENCES
PROCESS IMPACT
PLANT PROGRAM FOR CONTROL OPTIMIZATION
RESULTS OF CONTROL OPTIMIZATION—CASE STUDIES
    Distillation Tower
    Pilot Plant Reactor
    High-Pressure Reactor
COMMON PROBLEMS
    Improper Tuning on Level Loops
    Dead Band in Control Valves
    Selection of Control Valve Trim Characteristics
    Excessive Dead Time in Control Valve Response
    PID Implementation
    Control Valve Performance Specification
    Control Scheme
    Other Problems
KEY POINTS
REFERENCES
BEST PRACTICES, TOOLS, AND TECHNIQUES TO REDUCE THE MAINTENANCE COSTS OF FIELD INSTRUMENTATION
BUSINESS IMPACT
ENGINEERING PRACTICES
PROBLEMS AND CAUSES
    Causes of Measurement Errors and Failures
    Causes of Control Valve Errors and Failures
SELECTION
    Best Practices for Instrument Selection
INSTALLATION
    Best Practices for Instrument Installation
MAINTENANCE PRACTICES
    Best Maintenance Practices to Reduce Maintenance Costs
INSTRUMENT-KNOWLEDGE-BASED DIAGNOSTICS
    Diagnostic Techniques Used or Proposed by Instrument Manufacturers
PLANT-KNOWLEDGE-BASED DIAGNOSTICS
    Tools and Techniques that Use Plant Knowledge
KEY POINTS
RULES OF THUMB
REFERENCES
NEW DEVELOPMENTS IN ANALYTICAL MEASUREMENTS
PERSPECTIVE
NEW DEVELOPMENTS IN ANALYZER MEASUREMENTS
    Sample Preparation Methods and Hardware
MULTIPLE-WAVELENGTH NEAR-INFRARED (NIR) ANALYZER
FOURIER TRANSFORM INFRARED
FOURIER TRANSFORM SPECTROSCOPY
    Advantages of FTIR
    Instrument Operation
    Resolution
    Apodization
    Phase Correction
    Trading Rules Relating Resolution, Noise Level, and Measurement Time
    FTIR Interferometer Design [1]
THE RESOLUTION–ENERGY–TIME DILEMMA [2]
    Examples
    Changes Made by the User
THE MEASUREMENT PRINCIPLE
REFERENCES
MASS SPECTROMETER
    Dynamic Mass Analyzers
    Ion Cyclotron Resonance Mass Analyzer
ULTRAVIOLET/VISIBLE ANALYZERS
    Filter Isolation of Discrete Hollow Cathode Lamps [1]
    Diode Array Process Spectrometer [2]
REFERENCES
RAMAN SCATTERING EMISSION SPECTROPHOTOMETERS
    Raman Analyzers
    Capabilities and Limitations
    Examples of Raman Analyzer Applications
SUMMARY
REFERENCES
NUCLEAR MAGNETIC RESONANCE
    New Developments in NMR Measurement
    Rules of Thumb
    Theory of NMR (Conventional versus FTNMR)
    Applications
FUNDAMENTALS OF THE QUANTITATIVE NMR
X-RAY FLUORESCENCE (XRF)
    X-Ray Fluorescence
    Sulfur in Fuel
    Monitoring Catalyst Depletion
    XRF Theory
PRODUCTION OF X-RAY EMISSION AND ACQUISITION OF XRF SPECTRA
    X-Ray Excitation
    Interaction of X Rays with Matter
    X-Ray Detectors and Supporting Electronics
CONCLUSIONS
SONIC AND MICROWAVE ANALYZERS (ULTRA, INFRARED, RESONANCE)
    Microwave Spectroscopy
    Instrument Bandwidth Differences
    Guided Microwave Spectrometry
    Interpretation of GMS Spectrums
    Typical Software Approach to GMS Spectrum Measurement
    Process Effects on Measurement
NEUTRON ACTIVATION ANALYZERS
    Prompt Gamma Neutron Activation
    Signal Processing
    Signal Normalization
    Sensitivities
    Instrumentation
    Calibration Process
REFERENCES
ATOMIC EMISSION SPECTROMETERS
LIQUID CHROMATOGRAPHY
THE IMPROVEMENT OF ADVANCED REGULATORY CONTROL SYSTEMS
PROBLEMS AND CAUSES
    Causes of Poor Performance
    Best Practices
CLOSED-LOOP TUNING METHOD
SHORTCUT TUNING METHOD
SIMPLIFIED DAHLIN OR LAMBDA TUNING METHOD
BEST PRACTICES TO IMPROVE PERFORMANCE
REFERENCES
MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION
WHAT ARE CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION?
    Basic Concepts of Constrained Multivariable Predictive Control
    Basic Concepts of Real-Time Optimization
MATHEMATICAL FOUNDATIONS OF CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION
    Linear Systems
    Process Representations
    Predictive Control
    Move Suppression
    Extension to the Multivariable Case
    Constraint Handling and Economic Optimization
JUSTIFICATION OF CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION
PROCESS MODELING GUIDELINES FOR CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL
    Impulse Response Modeling
    Time Series Analysis
    Process Modeling Rules of Thumb
CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROLLER TUNING AND CONSTRUCTION GUIDELINES
REAL-TIME OPTIMIZATION GUIDELINES
APPLICATIONS OF CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION
MAINTENANCE ISSUES OF CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL AND REAL-TIME OPTIMIZATION
DEVELOPMENTS AND FUTURE DIRECTIONS OF CONSTRAINED MULTIVARIABLE PREDICTIVE CONTROL
REFERENCES
NEURAL NETWORKS
WHAT IS AN ARTIFICIAL NEURAL NETWORK?
HISTORICAL DEVELOPMENT
CLASSIFICATION OF ARTIFICIAL NEURAL NETWORKS
THE MULTILAYER ERROR BACKPROPAGATION PERCEPTRON
APPLICATION CLASSES AND TYPICAL ARCHITECTURES OF MULTILAYER ERROR BACKPROPAGATION PERCEPTRONS
    Pattern Recognition
    Interpolation/Function Approximation
    Parameter Estimation and System Identification
    Control Applications
APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS
    Virtual Sensors
    Neurocontrollers and Process Optimization
    Other Artificial Neural Network Applications
SELECTION OF AN ARTIFICIAL NEURAL NETWORK TOOL
PRACTICAL GUIDELINES FOR BUILDING ARTIFICIAL NEURAL NETWORKS
RECENT DEVELOPMENTS AND FUTURE DIRECTIONS FOR ARTIFICIAL NEURAL NETWORKS
    Recurrent Network Architectures
    Genetic and Evolutionary Training
    Fuzzy Logic/Neural Network Methods
REFERENCES


WORLD-CLASS MANUFACTURING

by Joseph P. Shunta, P. E.∗

INTRODUCTION

Companies competing in a global market are constantly under pressure to reduce costs and improve quality. One way to gain a competitive edge is to increase the productivity of manufacturing operations. This is an area to which process control can bring substantial benefits by improving product quality, increasing yields, production rates, and uptime, and decreasing cycle time.

However, this does not happen automatically by simply installing the most modern control equipment, as we experienced when distributed control systems became available in the 1980s. Many companies installed distributed control systems expecting improved performance, only to find later that it was not significantly better than before. One of the reasons was that the control strategies had not been improved; they simply duplicated the old analog systems. What is needed is to take advantage of the power of the new digital systems and upgrade the control strategies to gain concrete business benefits, not just to ensure stable operation of equipment. Companies are now going back, reevaluating how process control is applied, and looking for ways to increase productivity through improved control (Fig. 1). This section presents a methodology for analyzing a process and identifying where process control can be improved in order to achieve world-class performance in manufacturing.

MODEL FOR IMPROVING PROCESS CONTROL TO ACHIEVE BUSINESS BENEFITS

Figure 2 is a model for how process control should be applied or improved to gain business benefits. Starting with a manufacturing process, product is made and shipped to a customer. The customer can be either external or internal. The customer uses the product and feeds back how well the product performed. The feedback may be in terms of the quality, price, or availability of the product.

Customer feedback is then transformed into internal business metrics that will be used to monitor and drive improvements in process performance. Examples of these metrics are the process capability and process performance indices (Cp and Pp, respectively) for quality, first-pass, first-quality yield, throughput, percentage of uptime, and cycle time; these are defined in Figure 3. These metrics must be achieved in order to meet the customer and business requirements. Technical programs, including process control improvement programs, should be aimed at achieving these goals.

However, these metrics are at too high a level to apply directly to process control, so another transformation must take place. This transformation identifies the key product properties and process variables along with their allowable ranges of variability. The idea is that, by controlling at the required targets and within the specified ranges of variability, the business metrics are achieved and ultimately the customer needs are met as well. The measurements and control strategies are assessed to determine how well the controls achieve these operating goals, and improvements are identified when the controls fall short.

∗ Principal Consultant, E.I. DuPont de Nemours & Co., Wilmington, Delaware 19898.


FIGURE 1 Evolution of control capability.

FIGURE 2 Process control model.

ANALYSIS TO IDENTIFY CONTROL IMPROVEMENTS

The first step in Fig. 2 is to receive feedback from the customer and transform it into specific business metrics or goals in terms of product quality, yield, throughput, uptime, and so forth. This task requires a close working relationship with the customer and falls largely on operations management, but could include technical specialists as well. The subsequent tasks are, however, clearly within the realm of the control engineer. A methodology is shown in Fig. 4 that identifies what control improvements are necessary to achieve the business goals. The benefits of following a disciplined approach are shown in Fig. 5.


FIGURE 3 Key business drivers.

FIGURE 4 Process control analysis.

FIGURE 5 Benefits of process analysis.

Identify Key Product Properties and Process Variables

In the first step of the analysis, the business metrics are transformed into key product properties and process variables (flows, pressures, temperatures, etc.) that must be measured and controlled to ensure that

• product quality meets the customer specifications
• production rates, uptime, and cycle time are achieved so that product orders are filled on time
• first-pass, first-quality yields are achieved to minimize rework and costs

In other words, the key product properties and process variables are the ones that have the greatest impact on meeting the business goals. Prime examples of process variables are reactor temperature, reactor feed ratios, and distillation column temperatures. Product properties are things like denier and viscosity and may be measured and controlled on line, or may be measured in the laboratory and not controlled directly. In the latter case, we have to identify the process variables that have the greatest impact on the product property. In addition to identifying the key variables, the associated steady-state targets, or set points, and the allowable ranges of variability must be determined. It is then the job of the controls to maintain the key variables at their respective targets within the prescribed ranges of variability.

Often the key variables are fairly obvious, but there are some tools that can be applied when the key variables are not so obvious. Two useful tools are modeling and design of experiments. Various types of steady-state modeling can be used:

• first-principle models
• empirical models, such as statistical (regression) models or neural networks

First-principle models describe mathematically the chemistry, physics, material, and energy balances of the process. Variables can be selectively varied to see the effect on the product, yield, etc. Empirical models are based on operating data and relate how changes in the variables affect the process outputs. Design of experiments is a procedure for testing the operating plant to see the actual effects of changing variables. This process is often complex and time consuming and may result in off-quality product being generated during the testing. However, the payback gained in increased process understanding and improved control is often well worth the effort.
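Where the key variables are not obvious, an empirical model can help rank candidates. The following minimal sketch fits a least-squares regression of a product property on standardized process variables and ranks them by effect size; the variable names and data values are hypothetical, not from the handbook, and a real screening study would use plant historian data and proper validation.

# A minimal sketch of using an empirical regression model to screen candidate
# key variables. Data and variable names are hypothetical.
import numpy as np

# columns: reactor temperature (deg C), feed ratio (-), column temperature (deg C)
X = np.array([
    [150.2, 1.02, 88.1],
    [151.0, 0.98, 87.6],
    [149.5, 1.05, 88.9],
    [152.3, 1.01, 87.2],
    [150.8, 0.97, 88.4],
    [151.7, 1.04, 87.9],
])
y = np.array([41.2, 39.8, 42.5, 40.1, 39.5, 41.0])   # measured product viscosity

# Standardize so the regression coefficients are comparable across units.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(len(y)), Xs])           # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

names = ["reactor temperature", "feed ratio", "column temperature"]
for name, c in sorted(zip(names, coef[1:]), key=lambda t: -abs(t[1])):
    print(f"{name:20s}  standardized effect = {c:+.2f}")
# The variables with the largest standardized effects are candidates for the
# key process variables that deserve improved measurement and control.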

Identify the Need for Improved Measurement and Control

The second step in the analysis identifies which of the key variables are in need of improved control. This is done by assessing variability, since reducing variability is the basis on which we will achieve improved performance, as shown in Fig. 6. There are basically three ways we can reduce variability, as shown in Fig. 7; these are explained below.

First, an understanding of the types of variability and their sources is necessary to determine how to reduce it. Special causes of variability are gross upsets in the process. Examples are large changes in raw-material composition, operating blunders, and equipment malfunctions. It is the job of statistical process control (SPC) to identify when a special cause has occurred so it can be identified and removed. Automatic process control (APC), on the other hand, compensates for these upsets by adjusting a process variable to counteract the effect. What actually occurs is that the controls transfer the variability from where it matters to where it does not matter. For example, to maintain a reactor temperature, the controls will transfer the variability to reactor coolant flow and temperature. If feasible, chronic special causes should be removed rather than relying on control to compensate for them. However, the analysis described here identifies the improvements in process control that are needed to compensate for the special causes.

FIGURE 6 Impact of process variability on business goal.

FIGURE 7 Ways to reduce variability.

FIGURE 8 Special-cause and common-cause variability.

The other type of variability is called common-cause variability and is inherent or built into the process. This kind of variability is frequent, random, and generally small, so it is sometimes referred to as process noise. Some common causes are turbulence, machine vibration, steam supply pressure variations, and small random variations in raw-material composition. Process control is not recommended to compensate for common-cause variability because it may result in overcontrol and create instability. A better approach is to identify the cause, if it cannot be ignored, and make a process change to correct for it. For example, changing the process design, replacing equipment, and adding damping or filtering to the measurement to remove the variability are often remedies. Figure 8 graphically illustrates the difference between special-cause and common-cause variability.

Statistical Metrics

Identifying where process control can be improved is a matter of assessing the magnitude of the special causes of variability. The common statistical metric for variability is the standard deviation. One form of the standard deviation measures variability from all sources, special and common. This is called the total standard deviation, Stot (Fig. 9). Another form measures only common-cause variability and is called the capability standard deviation, Scap (Fig. 10). Scap gets its name because the variability remaining after all the special-cause variability has been removed is the minimum variability the process can achieve, that is, the capability of the process. Note that Stot takes differences between the mean and each data point, whereas Scap takes differences between adjacent data points. This form of Scap is the mean-squared successive difference (MSSD) formula. If the process had no special causes occurring, Stot would be equivalent to Scap. Statisticians refer to a process in which only common-cause variability is present as being in the state of SPC, or being stable. Note that the meaning of stability to a control engineer is different: stability means that variations in a process are not increasing without bounds.

FIGURE 9 Standard deviation and mean:

    S_{tot} = \sqrt{ \frac{(X_1 - \bar{X})^2 + (X_2 - \bar{X})^2 + \cdots + (X_n - \bar{X})^2}{n - 1} }, \qquad \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}

FIGURE 10 Mean-squared successive difference (MSSD):

    S_{cap} = \sqrt{ \frac{(X_2 - X_1)^2 + (X_3 - X_2)^2 + \cdots + (X_n - X_{n-1})^2}{2(n - 1)} }

FIGURE 11 Process capability index.

FIGURE 12 Process performance index.
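As a quick check of the two formulas in Figs. 9 and 10, the sketch below computes Stot and Scap from a series of measurements; the data values are made up for illustration.

# Minimal sketch of the Stot and Scap (MSSD) calculations of Figs. 9 and 10.
# The data series is hypothetical.
import numpy as np

x = np.array([100.2, 100.5, 99.8, 100.1, 100.4, 103.0, 102.8, 100.3, 99.9, 100.2])

n = len(x)
x_bar = x.mean()

# Total standard deviation: differences from the mean, all sources of variability.
s_tot = np.sqrt(np.sum((x - x_bar) ** 2) / (n - 1))

# Capability standard deviation: mean-squared successive difference (MSSD),
# differences between adjacent points, i.e., common-cause variability only.
s_cap = np.sqrt(np.sum(np.diff(x) ** 2) / (2 * (n - 1)))

print(f"Stot = {s_tot:.3f}")
print(f"Scap = {s_cap:.3f}")
# A large gap between Stot and Scap indicates special-cause variability that
# improved control (or removal of the special cause) could eliminate.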

FIGURE 13 Histogram comparing Cp’s.

Standard deviation has units of degrees, pounds per square inch, pounds per hour, etc., depending on the process variable being measured. To assess variability, it is more convenient to normalize the variability to remove the engineering units. This allows us to compare the variability of temperature, pressure, etc., on the same basis. The statistical metrics for normalizing variability are the process capability indices Cp and Cpk (Fig. 11) and the process performance indices Pp and Ppk (Fig. 12). The standard deviations are related to the allowable variability ranges (specifications) that are in the numerator of the index. The range may be a product specification range for a quality variable or the allowable variability range for a process variable. Again, the ranges are chosen to meet the business goals. Cp and Pp assume that the mean of the variable (also called the average) is at the midpoint of the range. Cpk and Ppk are used when the variable is not midrange and it is important to reflect that, or when the specification is one sided, that is, there is either a high specification or a low specification.

The desired value for the indices depends on the needs of the customer, but as a rule a value of 2.0 is considered world class. A value above 1.5 is also considered very good. Values of approximately 1.0 are just adequate, and values below 1.0 are poor. Figure 13 illustrates graphically the effect of the numerical value of the index. A value of 1.0 means that 99.73% of the data are within the specification if the mean is at midrange, but there is no room for error in the process. A higher value allows some leeway in the position of the controller set point and allows some variation in the process without violating specifications.

FIGURE 14 The capability and performance matrix.

The capability index is compared with the performance index to identify where control can be improved. The indices are calculated for each key product property and process variable and then compared, as per Fig. 14:

• If the capability index is below the goal, a process change is needed.
• If the performance index is less than capability, a process control improvement is needed.
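The index formulas of Figs. 11 and 12 are not reproduced in this excerpt, so the sketch below assumes the conventional two-sided definitions, with the capability indices based on Scap and the performance indices based on Stot, and then applies the two comparison rules above. The limits, goal, and data values in the example call are hypothetical.

# Sketch of the capability/performance comparison (assumed conventional
# two-sided index definitions; hypothetical numbers).
def indices(mean, s, lsl, usl):
    """Return (centered, one-sided) index pair for a given standard deviation."""
    centered = (usl - lsl) / (6.0 * s)
    one_sided = min(usl - mean, mean - lsl) / (3.0 * s)
    return centered, one_sided

def assess(mean, s_tot, s_cap, lsl, usl, goal=1.5):
    cp, cpk = indices(mean, s_cap, lsl, usl)   # what the process could achieve
    pp, ppk = indices(mean, s_tot, lsl, usl)   # what it actually achieves
    actions = []
    if cp < goal:
        actions.append("process change needed (capability below goal)")
    if pp < cp:
        actions.append("process control improvement needed (performance < capability)")
    return {"Cp": cp, "Cpk": cpk, "Pp": pp, "Ppk": ppk,
            "actions": actions or ["meets goal"]}

print(assess(mean=100.7, s_tot=1.05, s_cap=0.35, lsl=98.0, usl=103.0))

In this made-up case the process is capable (Cp well above the goal) but its actual performance is poor, which points to a control improvement rather than a process change.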

Assessment of Measurement and Controls

The next step in the analysis is to assess the measurements and controls for the product properties and process variables to identify specific improvements or fixes. The kinds of questions to ask in making these assessments are shown in Figs. 15 and 16. This calls for a team effort, a team that includes process engineers, operators, instrument technicians, quality-control specialists, and process control engineers. By using process diagrams that show the main control loops, the team discusses measurement and controls for each of the key variables needing improved control and arrives at a conceptual design for the improvement. Detailed designs are worked out later.

Estimate Benefits for the Improvements

The final list of improvement opportunities may be large enough to require a prioritization to make sure funds and resources are used effectively. One method of prioritization is to rank candidate improvements according to stake and feasibility (Fig. 17). The control assessment just completed may suffice to assess cost feasibility and technical feasibility. What may be needed in addition is to estimate the economic benefits to the business for each improvement.

FIGURE 15 Assessing measurements.

FIGURE 16 Assessing control strategies.

FIGURE 17 Prioritizing opportunities.

Quality Stake

Since quality is such a fundamental product parameter, one would expect that an estimate of the stake for improved quality, in terms of increased sales, would be easy to obtain. However, that is usually not the case. What is easier to estimate is the effect of quality on rework, yield, or throughput. The following is a generic stake calculation for improved quality, in terms of increased earnings, that can be applied in several possible cases:

Q = (annual production rate in units/yr) × (% increase in earnings/100%) × (earnings in $/unit of product)

Yield Stake

Yield is related to the amount of raw materials that get lost, either through the formation of second-grade or waste byproducts in chemical reactions or lost physically in vents, cleanouts, waste streams, etc. First-pass, first-quality yield describes the amount of raw materials turned into first-grade products without blending or rework. Rework introduces yield losses and, in a sold-out market, has the additional penalty of tying up equipment that could be making new product instead of reworking old product.

The following two equations are used to estimate the stakes for reducing physical and chemical losses by improved control.

Y1 = (flow rate of lost stream in units/h) × (% valuable components/100%) × (% reduction in loss/100%) × ($/unit) × (h/yr)

Y2 = (raw-material flow rate in units/h) × (% recoverable yield loss/100%) × (% reduction/100%) × ($/unit) × (h/yr)


Recoverable yield loss is the difference between the current yield and the theoretical maximum yield. This is an important distinction, since there are usually physical or chemical limits that prevent a manufacturing unit from achieving 100% yield.

Throughput Stake

Throughput improvements can be achieved by avoiding operating conditions that result in downtime due to interlocks, plugging, corrosion, etc., and by operating closer to equipment constraints, like flooding in a distillation column, thereby increasing capacity. Improved control in batch operations that results in reduced cycle times also increases throughput. Improved throughput has the biggest impact in a sold-out market, in which control improvements leading to increased throughput usually have an overwhelming effect on earnings, a much larger effect than just about anything else:

P = (current production rate in units/yr) × (% increase in throughput/100%) × (% time sold out/100%) × (earnings in $/unit of increased production)

The percentage of time sold out is the percentage of the time the plant needs to run at maximum production to meet demand. Presumably, when the plant is not sold out, there is time to make up for lost production and the production stake is not important.
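The four stake equations above translate directly into a small calculation. The sketch below implements them as written; the plant numbers in the example calls, and the assumed operating hours per year, are hypothetical.

# Generic stake calculations (Q, Y1, Y2, P) as defined above. Percentage
# arguments are entered as percentages (e.g., 2.0 for 2%); all example values
# are made up.
def quality_stake(annual_rate_units, pct_increase_in_earnings, earnings_per_unit):
    return annual_rate_units * (pct_increase_in_earnings / 100.0) * earnings_per_unit

def yield_stake_physical(lost_stream_units_per_h, pct_valuable, pct_reduction,
                         value_per_unit, hours_per_yr=8400):
    return (lost_stream_units_per_h * (pct_valuable / 100.0)
            * (pct_reduction / 100.0) * value_per_unit * hours_per_yr)

def yield_stake_chemical(raw_material_units_per_h, pct_recoverable_loss, pct_reduction,
                         value_per_unit, hours_per_yr=8400):
    return (raw_material_units_per_h * (pct_recoverable_loss / 100.0)
            * (pct_reduction / 100.0) * value_per_unit * hours_per_yr)

def throughput_stake(current_rate_units_per_yr, pct_increase_in_throughput,
                     pct_time_sold_out, earnings_per_extra_unit):
    return (current_rate_units_per_yr * (pct_increase_in_throughput / 100.0)
            * (pct_time_sold_out / 100.0) * earnings_per_extra_unit)

print(f"Q  = ${quality_stake(100_000, 1.0, 50.0):,.0f}/yr")
print(f"Y1 = ${yield_stake_physical(2.0, 80.0, 25.0, 300.0):,.0f}/yr")
print(f"Y2 = ${yield_stake_chemical(10.0, 3.0, 30.0, 300.0):,.0f}/yr")
print(f"P  = ${throughput_stake(100_000, 2.0, 60.0, 40.0):,.0f}/yr")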

SUSTAIN THE BENEFITS

After the improvement projects have been implemented, the original team or a subset of it should go back and audit the results to determine how much of the estimated stake has been achieved. The results can be used to refine the estimating techniques for future use and to discover any additional needs that were not addressed the first time. Ideally, the plant will institutionalize the process just described so that control improvements will be addressed periodically. If the process is not repeated, chances are the benefits will be lost over time because of changes to the process, equipment deterioration, changing demands, turnover of people, etc.

Performance Metrics

Another useful tool for sustaining the benefits is on-line performance metrics. The purpose of metrics is to discover any poor operation of the control system quickly so that the appropriate action can be taken before large losses can occur. There are two kinds of metrics:

• utilization metrics that indicate when key measurements or control strategies are not being properly utilized
• variability metrics that indicate when the controls are not functioning properly

Figure 18 shows examples of utilization metrics. One metric indicates when an analyzer is out of service and needs to be recalibrated or repaired. Another metric indicates when a key control loop is not being operated in automatic mode. This condition might be caused by poor tuning, a failed transmitter, or just poor design. The metrics are applied only to the areas in which not fully utilizing the measurement or control strategy will result in off-quality product, low yield, reduced rates, or other penalties.

FIGURE 18 Utilization metrics.

FIGURE 19 Variability metrics.

Figure 19 illustrates a sample of a variability metric report. Again, this metric is applied to key variables for which we want to minimize variability. A report of this type contains a lot of information about the health of the controls. For example, if the average does not conform to the limits (which are set to ensure meeting the business goals), it may be just sloppy operation (not adhering to the specified operating conditions) or signal that a special cause of variability has occurred that the controls are not able to compensate for. The standard deviation Scap indicates what should be achievable with perfect control, and Stot shows the actual standard deviation. If there is a big difference between the two [1], the controls need attention. Likewise, Cpk and Ppk indicate how far from world-class performance the plant is operating at that moment. If Stot is approximately equal to Scap but Ppk is small [2], it may indicate that the average is too close to one of the limits. If Cpk is small, it indicates that the process should be modified in some way to eliminate the common-cause variability [3], or it may indicate operation too close to the limits [4].
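A minimal sketch of how the diagnostic reasoning in this paragraph could be automated for one row of a variability metric report is shown below; the flag thresholds (the Stot-to-Scap ratio and what counts as a "small" index) are assumptions for illustration, not values from the handbook.

# Hypothetical variability-metric report-row diagnostics, following the
# reasoning above. Thresholds are assumptions.
def diagnose(s_tot, s_cap, cpk, ppk, goal=1.5, ratio_limit=1.3):
    notes = []
    if s_tot > ratio_limit * s_cap:
        notes.append("Stot much larger than Scap: controls need attention [1]")
    if s_tot <= ratio_limit * s_cap and ppk < goal:
        notes.append("Stot ~ Scap but Ppk small: average may be too close to a limit [2]")
    if cpk < goal:
        notes.append("Cpk small: process change needed, or operating too close to the limits [3][4]")
    return notes or ["no action indicated"]

for note in diagnose(s_tot=1.05, s_cap=0.35, cpk=2.2, ppk=0.7):
    print(note)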

CONCLUSIONS

This section presented a disciplined approach to improving the performance of a manufacturing plant by identifying process control improvements aimed at bringing business value, not just ensuring smooth operation. This approach should not be applied only once, but used for continuous improvement on a periodic basis. This is the only way to ensure that the benefits from process control are sustained over the long term.


REFERENCES

1. Shunta, J. P., Achieving World Class Manufacturing Through Process Control, Prentice-Hall, Englewood Cliffs, NJ, 1995.

2. Shunta, J. P., "Business Metrics in Control," InTech, April 1996.

PLANT ANALYSIS, DESIGN, AND TUNING FOR UNIFORM MANUFACTURING

by W. L. Bialkowski∗

INTRODUCTION

This section focuses on plant analysis, design, and controller tuning issues that relate to the efficient manufacture of uniform product in continuous process plants, such as paper, plastics, rubber, chemicals, hydrocarbons, pharmaceuticals, food, and others. Joseph P. Shunta, in his book World Class Manufacturing through Process Control [1], provides an excellent balanced treatment of the application of process control and where this fits into the big picture of world-class manufacturing. The central theme is the reduction of process variability through better process control, and in some cases better process design, so that a more uniform, higher-quality product that better conforms to customer specifications can be manufactured more efficiently. These goals link directly with corporate goals of gaining market share, reducing manufacturing costs, and increasing shareholder value. There are many issues involved, both technical and organizational, that must be overcome for a company to become a world-class manufacturer. This section embraces Shunta's work completely where it addresses the big issues in process control application. There are, however, a number of key technical issues, such as process variability, its origins and sources, time series analysis, model-based controller tuning, and the impact of control valve nonlinearities, that are covered in detail in this section. This section is based on two decades of intensive work in process variability reduction in the pulp and paper industry, from which, of necessity, a slightly different focus has emerged. The work has been published in both the control [2] and the pulp and paper industry literature [3–5]. In a nutshell, the differences between the chemical industry experience and the paper industry experience can be summed up as follows: slow chemical processes making liquid product on the one hand versus fast hydraulic processes making solid product on the other. In the chemical industry case, the variability caused by the fast control loops and control valve limit cycles has in the past often been ignored, since the dominant process time constants are typically measured in fractions of an hour. Many chemical industry process control practitioners have ignored this fast variability and have focused on the upper-level optimizing controls, such as Dynamic Matrix Control, in an attempt to achieve higher yield and throughput. In the paper industry, the fast lower-level control loops are directly linked with the final product uniformity.

∗ EnTech Control Engineering Inc., 16 Four Seasons Place, Toronto, Ontario, Canada M9B 6E5.


Ignore these loops, and the process may not be able to manufacture salable product at all. The pulp and paper process control practitioner has had to focus on reducing this fast variability just to keep production going at the budgeted rate. This focus on the fast process variability has brought a number of key techniques into play. These include the following:

1. plant analysis to measure process and product variability by use of time series analysis techniques, plant auditing procedures designed to identify the causes of process variability, an interpretation of the results in both the time and the frequency domains, and use of spectral analysis for both diagnostics and design

2. the use of model-based controller tuning, such as internal model control (IMC) concepts and Lambda Tuning, for both plant design and controller tuning

3. using a tuning strategy to achieve coordinated dynamics of a process area by preselecting the closed-loop time constants for each control loop

4. understanding the performance-robustness envelope of a control loop

5. understanding the impact of actuator nonlinearities on control performance

6. understanding the variability propagation pathways through a complex process

This section sets out to cover this ground at the conceptual level. Theory is presented only to aid in making the concepts easier to understand. References are made to other, more detailed works. Pulp and paper process examples are used for illustration, as these data were available to the author. A more detailed treatment of this material is given in Ref. 2. The objective is to learn how to measure and characterize the variability in the final product, how to identify the causes of the variability in the process, how to eliminate these causes where possible, how to tune control loops so that they attenuate variability as effectively as possible, and to understand how process variability propagates through a process.

PROCESS VARIABILITY

The Process as a Network

All continuous manufacturing plants consist of an interconnected network of reactors, tanks, vessels, and process equipment that connects the raw-material supply sources to the final product storages. The process unit operations are of many different types: reaction kinetics, separation processes such as distillation, hydraulic separation processes such as screening and centrifugal cleaning, hydraulic transport and mechanical operations such as pulp refining, and thermal equilibrium vessels such as boilers. The unit processes can be interconnected in series, in parallel, or in recycle configurations. The process dynamics are often complicated by the presence of long dead times, material or heat integration, nonlinearities, inverse response, and severe interaction. There may be many parallel paths for the variability to travel through the process network. As a result, it is sometimes very difficult to determine precisely how the variability reached a certain point in the process.

There are many control loops in a typical process area—from 20 or 30 to over 1000. The control loops have nonlinear control valves, they interact with each other, and they change their dynamics with operating point. Even though the control loops were ideally designed and installed with the aim of eliminating process variability, at best they can only attenuate variability at certain frequencies. Unfortunately, they also often increase variability at other frequencies because of their natural tendency to cycle and resonate. When loops cycle, they can become inadvertent sources of variability in their own right. This tendency to cycle can result from process dead time, aggressive tuning, multiloop interaction, or nonlinearities present in actuators.


Steady-State Plant Design

In almost all cases plants have been designed to meet steady-state design criteria only, with little if any consideration having been given to the dynamics of the process being designed. In the latter stages of plant design, the major equipment has been selected and the lines have been sized. At this point the control valves can be sized and located, and the transmitters can also be specified and located. Once this is done, the control loops can be configured by the pairing up of individual transmitters and actuators. Typically, before start-up, the control loops are equipped with default tuning parameters, such as a gain of 1.0 and a reset time of 1 min. Next, start-up occurs, and the start-up team works round the clock. In most cases they will have time to tune only those loops that clearly do not work during the start-up phase. Further tuning is seldom done after start-up in most plants. As well, analysis of plant process variability is seldom if ever carried out in the majority of plants.

Economic Opportunity

Yet the raw materials entering the process are laden with variability, control loops cycle and increase variability in automatic mode, and the product uniformity and plant efficiency suffer. The opportunity to increase operating efficiency by use of the techniques described in this section is estimated to represent at least a 5% increase in manufacturing efficiency.

Sources of Variability

The raw materials are never perfectly uniform; hence variability enters the manufacturing process with the raw-material streams. The dynamics of the manufacturing process and control strategy are complex and often inadvertently add to the variability that is already present because of the raw materials. The variability emanates from many point sources, such as raw-material streams, cycling loops, rotating equipment, and so on. The net result is that variability is present throughout the process, and in particular in the final product. For the final product, the variability is the sum of the effects of all of the point sources, as well as the variability attenuation characteristics of the variability pathways that connect the sources to the final product. The variability pathways are dynamically complex. They include the effects of mixing or agitation, control loops, and various combinations of series and parallel interconnections of these effects. Mixing or agitation tends to smooth out fast variability. On the other hand, control loops regulating to a fixed setpoint tend to smooth out slow variability. Unfortunately, control loops can also resonate and amplify certain frequencies.
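The frequency-dependent behavior described here can be made concrete with a small numerical sketch. Using an illustrative first-order mixing volume and an illustrative PI loop on a first-order-plus-dead-time process (all parameters are assumptions, not values from the handbook), the code below evaluates how much of a disturbance each pathway passes at different disturbance periods.

# Illustrative only: a first-order mixing volume attenuates fast variability,
# while a PI control loop attenuates slow variability and can amplify in a
# band near its closed-loop resonance. All parameters are assumptions.
import numpy as np

tau_mix = 12 * 60.0                          # mixing time constant, s (blend-chest scale)
Kp, tau_p, theta = 1.0, 10.0, 2.0            # process gain, time constant (s), dead time (s)
lam = 10.0                                   # chosen closed-loop time constant, s
Kc, Ti = tau_p / (Kp * (lam + theta)), tau_p # illustrative Lambda-style PI settings

def mixing_attenuation(w):
    # |1 / (j*w*tau + 1)| : fraction of disturbance amplitude passed by mixing
    return abs(1.0 / (1j * w * tau_mix + 1.0))

def loop_sensitivity(w):
    # |S(jw)| = |1 / (1 + G*C)| : fraction of disturbance amplitude left by the loop
    s = 1j * w
    G = Kp * np.exp(-theta * s) / (tau_p * s + 1.0)
    C = Kc * (1.0 + 1.0 / (Ti * s))
    return abs(1.0 / (1.0 + G * C))

periods = [3600.0, 600.0, 60.0, 20.0, 5.0]   # disturbance periods, s
for T in periods:
    w = 2.0 * np.pi / T
    print(f"period {T:6.0f} s: mixing passes {mixing_attenuation(w):5.2f}, "
          f"loop passes {loop_sensitivity(w):5.2f} of the disturbance")
# Mixing passes slow disturbances and strongly attenuates fast ones; the loop
# does the opposite, and values above 1.0 mark the band where the loop
# amplifies variability rather than attenuating it.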

Operating Constraints, Uptime, Efficiency

Equally important is the potential that the variability sources and pathways may impose operating constraints on the process. For instance, the variability in applied chemical dosage may cause a property such as product strength to vary, which in turn forces the operation to compensate by running the reactor at a slower production rate. In this case, reducing the variability in applied chemical dosage can be directly linked to an increase in production rate. In effect, the variability in chemical dosage has imposed an effective production rate constraint. High variability in fiber delivery to a paper machine headbox will result in a high break frequency on the paper machine. Breaks cause downtime and have a serious impact on manufacturing efficiency. The paper machine operator will run the paper machine at a slower speed in order to keep the occurrence of sheet breaks at a tolerable level. Again, the variability in fiber delivery has imposed a constraint on production rate. As a general rule, process variability can have an impact on manufacturing in two ways:

• High process variability will have an impact on process uptime by causing trips, breaks, plugging, etc.
• High process variability will reduce manufacturing efficiency as a result of
    • lower product yield
    • lower production rate
    • higher energy consumption
    • lower product quality
    • more product downgrade

Solid versus Nonsolid Product

Depending on the industry, the product being made can be a solid product, for instance paper, rubber, or wire, or a nonsolid product such as gasoline, distillates, ethylene oxide, or butane gas. Solid products capture all of the variability that is present in the process and preserve this variability for as long as the product remains in use. The customer will experience the impact of this variability as the product is used in their secondary manufacturing plants. For example, the paper rolls manufactured in the paper mill will be used to feed the customers' printing presses. If the paper is not very uniform, this can easily cause the printing press to experience paper breaks, hence curtailing production and lowering the efficiency of the customers' operations. Most newspaper pressrooms rate their paper suppliers on the basis of the number of paper breaks that occur per hundred rolls used in the pressroom. Paper manufacturers can be disqualified from supplying paper to a given pressroom once their rating is too high. This is a simple expression of customer dissatisfaction.

In contrast, many liquid products are stored in one of the various tanks of a tank farm. Once the production run for a product grade is over, the new product will probably stay in the tank farm for some time before being shipped to the distribution system. In this period of time the product will likely equilibrate; hence the contents of a given tank will become fairly uniform. However, the tanks that were filled during the grade run will exhibit tank-to-tank variability. Even though the contents of the tanks may not conform to the customer's specification exactly, it is still possible to blend the products of several tanks in order to ensure that the customer's specifications are met for the whole shipment. Adding to this, the final consumer is seldom aware of deviations in the product being consumed. Can the consumer of gasoline tell that the actual octane rating in the last tank of gas was below 97? The answer is no. Clearly, in the manufacture of many liquid or gas products that involve intermediate storage in a tank farm, the need to focus on variability is not very critical. In contrast, the manufacture of solid product is completely unforgiving: the variability is there to stay, and all variability from very slow to very fast can be measured through testing. Also, the consumer is often critically aware that the variability is present. In the case of paper, pressroom breaks, photocopier jams, or dirt specks present in the sheet are obvious reminders to the consumer that variability is present in the product they have purchased.

Process Example—Paper Making

Before any attempt is made to analyze variability or to tune control loops, it is important to clearly understand the manufacturing process. This understanding should include:

1. the process flow diagram from raw material feed through to final product storage

2. each unit operation including its intended role in the overall process

3. the process dynamics of each unit operation and the impact it should have on process variability.

Finally, the manufacturing goals for both the process and the product should be clearly understood. As well, the specific operating objectives of each unit operation should be clearly understood in the context of how these are supposed to satisfy the overall manufacturing goals. This level of understanding is needed before the dynamic objectives for each unit operation, and for each control loop within the overall control strategy, can be set.


FIGURE 1 A paper machine schematic (high-density hardwood and softwood chests, refiners, blend chest, machine chest, white-water chest, cleaners, screen, silo, headbox, wire, presses, dryers, calender, and reel).

To illustrate these concepts, a paper mill process flow sheet is shown in Fig. 1. The example illustrates a fine paper machine for manufacturing photocopy paper at a production rate of ∼500 tons per day. The term fine paper refers to paper for books, photocopiers, laser printers, and other publication grades of paper. The primary raw materials typically include two grades of wood pulp: hardwood and softwood. In the example, the paper machine is using 70% hardwood pulp and 30% softwood pulp. The hardwood pulp is made from hardwood trees, such as maple and oak. These have a short fiber that allows a smooth paper sheet surface, suitable for high-quality printing, to be achieved. In Fig. 1 the hardwood fiber is pumped from the hardwood high-density (HD) chest, where the fiber slurry is stored. It is then diluted with water to reduce the consistency, or slurry concentration, before being passed through a rotating machine called a refiner, where the fiber surface area is increased by mechanical cutting action. After the refiner, the pulp is fed to the blend chest, where it is blended with the other raw materials. The other main raw material for fine paper is called softwood pulp. It is made from softwood trees, such as pine or fir. These have a relatively long fiber, which provides sheet strength. Like the hardwood, the softwood pulp is diluted, refined, and pumped into the blend chest. In the blend chest the two ingredients are mixed thoroughly, and from there the blended stock is pumped, after further dilution, to the machine chest. The machine chest provides a final opportunity to remove process variability through agitation and mixing. From the machine chest the stock is pumped to the paper machine headbox after further dilution, cleaning, and screening have occurred. The headbox is a pressurized vessel that ejects the diluted stock from a rectangular opening, known as the slice, onto a moving fabric mesh known as the wire or the former. The slice and wire are as wide as the paper machine, which can be up to 10 m or 30 ft. The wire is moving at the speed of the paper machine, which can be well over 1000 m/min, or 3300 ft/min. The water drains through the wire mesh, leaving the fiber to form a wet sheet of paper. This sheet is continuously removed from the wire, pressed to remove water, dried by steam-heated dryer cylinders, and compressed to form a smooth finish by means of a calender stack consisting of highly polished steel rolls. The final sheet is wound up on a reel, from where it is further cut into rolls or sheets before shipment to the customer. This has been a very brief description of pulp and paper science; for more details see Ref. 6.


Manufacturing Objectives and Process Dynamics

The manufacturing goals are to make uniform paper to tight tolerances (typically ±1%) in final properties such as its basis weight, or mass per unit area [measured in grams per square meter (gsm)], moisture content, caliper (thickness), color, brightness, and others. Paper properties are measured by on-line sensors, and paper samples are also tested in the lab. Basis weight is an important final product variable and is useful for illustration purposes.

The manufacturing objectives for each unit operation can be stated as follows:

Refiners: Uniform refining requires uniform mass flow of fiber through each refiner and uniform application of refining energy to the fiber. In turn this requires the pulp flow and consistency fed to the refiner to be as uniform as possible. Any variability present in either of these process variables will cause variability in fiber bonding and potentially in sheet strength. The refiners are low-volume hydraulic units with a residence time of less than a second. As a consequence of the extremely fast dynamics, the refiners do not offer any ability to attenuate variability.

Blend Chest: The mass ratio of the raw-material streams entering the blend chest should be as constant as possible, as this determines the fiber composition of the sheet. This requires that the consistencies of both streams be uniform and that the flow controllers for each stream be adjusted in unison. The blend chest is agitated and provides an opportunity for process mixing. The residence time is typically 12 min.

Machine Chest: The purpose of the machine chest is to provide a final opportunity to mix the pulp furnish before it is used on the paper machine. The residence time is typically 12 min.

White-Water Silo: This is the tank in which the water that has drained from the sheet is collected for reuse. This water contains some fiber. The residence time is ∼20 s; however, the silo is not agitated and may have nonuniform consistency. From a variability standpoint, the silo provides a material recycle path and tends to prolong disturbances that occur in the recycle path. At the discharge of the silo the new pulp stock is mixed with the recycled white water to form the feed to the paper machine headbox. Any variability present in either stream is injected into the headbox.

Headbox Feed: The feed to the headbox is passed through centrifugal cleaners and pressure screens. Both devices are intended to remove dirt, oversized particles, and contaminants. In both cases the units have very short residence times and are essentially flow splitters. They both have a reject flow, which typically runs at ∼5% of the feed flow. These units do not offer any potential for attenuation of variability. On the contrary, should their reject flow rate vary, some of this variability will be passed on to the accepts flow.

Headbox and Wire: The headbox is where the pulp slurry is converted to a sheet product. The headbox discharge velocity is a direct function of the pressure in the headbox. The velocity and the consistency determine the fiber mass in the paper sheet. Since the headbox consistency is very low (typically less than 1%), a large volume of water must drain through the wire. Process variability in this area will drive variability in sheet moisture. The dynamics of the headbox are very fast, since the residence time is less than a second.

Presses, Dryers, Calender: The residence time of the sheet in this section of the paper machine is typically ∼20 s. There is no opportunity to reduce variability through mixing, since the fiber in the paper sheet is being permanently set. The variability in water content typically follows the variability in fiber content. The ability to remove water is provided mechanically in the presses and thermally in the dryers. The dynamics of the dryers are relatively slow, as the thermal mass of the dryer cylinders is large, and the typical thermal time constant is ∼2 min. As a result, using the dryers to control the moisture content can attenuate only the very slow components of moisture variability.

Process Control Strategy—Paper Machine Blending

FIGURE 2 The blend chest area of a paper machine.

Figure 2 shows a process and control diagram for the blend chest area of the fine paper machine of Fig. 1. The two pulp slurries are pumped out of their respective HD storage chests at ∼5% consistency (mass percentage fiber slurry concentration). The hardwood stock is pumped out of the hardwood HD chest and is diluted to 4.5% consistency at the suction of the pump by the addition of dilution water. The dilution water is supplied from a common dilution header (pump and pipe) for this part of the paper machine. In the example of Fig. 2, the dilution water addition is modulated by the consistency control loop, NC-104 [a typical loop tag based on Instrument Society of America (ISA) terminology], in order to regulate the consistency to a setpoint of ∼4.5%. The consistency sensor is located after the pump and has a measurement transport delay, or dead time, of 5 s. After consistency control, the hardwood stock is pumped through the hardwood refiner, where the fiber surface area and fiber bonding properties are enhanced. The refining process is sensitive to the mass flow of fiber passing through the refiner. After the refiner, the pulp stream is flow controlled by flow controller FC-105. The softwood line is identical to the hardwood line and includes consistency controller NC-204 and flow controller FC-205. The two flow controllers FC-105 and FC-205 are part of a cascade control strategy, and their setpoints are adjusted together in order to maintain the blend chest level at the desired setpoint (typically 70%) while maintaining the desired blend ratio for each stock (70% hardwood and 30% softwood).

Variability Audit Procedure

Process variability audits have been carried out in pulp and paper mills since the early 1980s [2–4]. The process variability audit is intended to:

1. determine the level of variability in the product and the acceptability of the product for the customers' intended purpose, given the variability present

2. determine the level of variability in the process and how this process variability affects the product variability as well as the manufacturing efficiency of the process

3. identify the sources of the process variability, in particular those that have an impact on product variability or process manufacturing efficiency

4. if possible, eliminate, or recommend how to eliminate, the sources of variability that have been identified.

The procedure involves collecting real-time data while the process is operating. It is advantageous to collect data from final product variables first. This provides a potential signature of the variability being generated by upstream variables. It may then be possible to identify which upstream variable is causing the variability noticed downstream. As an example, if an oscillation in the final product with a period of 1 min has been detected, the hypothesis is that some process variable upstream is cycling with a period of 1 min. Once a potential match has been found and a loop that cycles with a period of ∼1 min has been identified, one can prove the hypothesis by altering the behavior of the loop and observing the effect in the final product. For instance, if the cycling loop is placed in manual mode, it is likely that the cycle will stop. Should the oscillation in the final product also stop after the loop is placed in manual, then the hypothesis has been proven conclusively—the product oscillation is caused by the loop in question. Then the remaining task is to uncover why the loop has a tendency to cycle. This task is relatively easy, since it involves the loop itself and possibly some adjacent variables.
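The period-matching step described above is easy to automate once time series data are available. The following is a minimal sketch, assuming NumPy and uniformly sampled, time-synchronized records; the signal names, the 2.3-h record length, and the 2-s sample interval are hypothetical values chosen only to mirror the 46-min cycle discussed later in this article.

```python
import numpy as np

def dominant_period(signal, sample_interval_s):
    """Return the period (s) of the largest spectral peak, ignoring the mean."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    power = np.abs(np.fft.rfft(x)) ** 2              # power at each harmonic
    freqs = np.fft.rfftfreq(x.size, d=sample_interval_s)
    peak = np.argmax(power[1:]) + 1                  # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Hypothetical audit data: a product variable and a candidate upstream loop, both
# carrying a 46-min (2760-s) cycle, collected for 2.3 h at a 2-s interval.
rng = np.random.default_rng(42)
t = np.arange(0, 8280, 2.0)
product = np.sin(2 * np.pi * t / 2760) + 0.2 * rng.standard_normal(t.size)
level_loop = np.sin(2 * np.pi * t / 2760 + 1.0) + 0.3 * rng.standard_normal(t.size)

print(dominant_period(product, 2.0))     # ~2760 s, i.e., ~46 min
print(dominant_period(level_loop, 2.0))  # same period -> a candidate source to test in manual
```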

Variability Examples

Product Variability. Figure 3 shows variability in the final product. The data are presented by a commercially available software package [7] designed for plant process variability analysis. Two data collection runs are shown, with basis weight measured in gsm. The upper plot shows a short 25-s run, while the lower plot shows a long 4.9-h run. The data presented show two key pieces of information about each data run:

• The time series graph shows the variable of interest plotted against time. This represents key historical information that can be used to develop insight and intuition about the dynamic behavior.


• The time series statistics are shown below the time series graph. These include
  • the mean value (e.g., 63.7897 gsm)
  • the two standard deviations (2Sig), which represent the ±95% confidence limits around the mean value, assuming a normal distribution (e.g., 2Sig = 1.256 gsm)
  • the two standard deviations (2Sig) expressed as a percentage of the mean value (e.g., 3.27%). This is a useful unitless way of expressing process variability.

FIGURE 3 Paper machine product weight variability. Upper left: fixed-point basis weight over 25 s, variability = 3.27%; upper right: its power spectrum, with a cycle at ∼5 s; lower left: scan-average basis weight over 4.9 h, variability = 1.96%; lower right: its power spectrum, with a cycle at ∼46 min.

The 25-s run collected data at a sample rate of 0.1 s. For this data collection, the basis weight sensor was taken out of its normal mode of scanning the full sheet width and placed in a fixed position on the sheet. The variability in basis weight is 3.27% over 25 s. This is fairly high compared with that of other fine paper machines; hence it should raise concern about the market acceptability of this product. A clear cycle is visible with a period of ∼5 s. The likely cause is somewhere in the white-water, cleaner, screen, and headbox area of the process, since the dynamics of this part of the process are all very fast.

The 4.9-h run collected data at a sample rate of 40 s. For this data collection, the basis weight sensor was in its normal mode of scanning the full sheet width. The scan time to traverse the sheet width was 40 s, and each data point collected was an arithmetic average of the basis weight across the sheet. As a result, each data point has had all of the variability faster than the averaging time removed. The variability in scan-average basis weight is 1.96% over 4.9 h. This is fairly high compared with that of other fine paper machines, the best of which have shown basis weight variability of 1%; hence it should raise concern about the market acceptability of this product. A cycle is clearly visible with a period of ∼2750 s, or 46 min per cycle. The likely cause is somewhere in the blending area of the process, since the dynamics of this part of the process are comparatively slow.

Process Variability—Typical Examples. Figure 4 shows variability in the two pulp streams entering the blend chest. All the data were collected simultaneously at a sample rate of 2 s for 2.3 h. Figure 4 shows the hardwood consistency (NC-104) and flow (FC-105), as well as the softwood consistency (NC-204) and flow (FC-205). All of the variables appear to cycle with a period of ∼46 min. Both flows appear to be in phase with each other. The consistencies are also in phase, although in the softwood consistency the cycle is not as pronounced as in the others.

Figure 5 shows variability in the blend chest level (LC-301) and the consistency coming out of the blend chest (NC-302). All the data were collected simultaneously at a sample rate of 2 s for 2.3 h. Both the blend chest level and the consistency appear to cycle with a period of 46 min as well. The level cycle appears to lead the flow cycle by ∼90°. This suggests that the level control is the likely cause. The level control is the upper loop of the cascade strategy that adjusts the setpoints of the flow controllers FC-105 and FC-205. The cycle in the blend chest consistency appears to be a delayed version of the cycle in the blend chest inlet consistencies. The apparent time delay is ∼5 min, which is less than the residence time in the blend chest.

When the level controller LC-301 is placed in manual, the 46-min cycle stops everywhere, including in the final product. This proves that the 46-min cycle is caused by the blend chest level controller LC-301 and that this is also the source of the cycling of all of the loops.

FIGURE 4 Process variability in blend chest consistency and flow variables over 2.3 hours. Upper left: hardwood consistency variability = 2.57%; upper right: hardwood flow variability = 3.75%; lower left: softwood consistency variability = 1.32%; lower right: softwood flow variability = 5.04%.


FIGURE 5 Process variability in blend chest level and outlet consistency over 2.3 hours. Left: blend chest level (LC-301) variability = 2.62%; right: blend chest consistency (NC-302) variability = 1.17%.

Figure 6 shows variability of the screen reject flow in automatic mode. The screen reject flow is a flow controller operating at a fixed flow setpoint. Both the process measurement and the controller output are shown. This example shows a typical control-valve-induced limit cycle caused by what appears to be ∼2.2% dead band in the control valve. The period of the limit cycle is ∼1.6 min.

Diagnostic Principles

The variability shown in this example is typical of continuous plants that operate flow, concentration (consistency), and level controllers. Even though the data were taken from a paper mill, the diagnostic principles are applicable universally. The principles used in the diagnostic process depend on the following:

1. A sinusoidal disturbance will be transmitted through a linear system at the same frequency. This means that, by observation of downstream behavior, the upstream disturbance sources can be identified. This is not strictly true if the system is nonlinear, as all processes are; however, in most cases the nonlinearities are not severe enough to invalidate the premise.

2. The amplitude of a sinusoidal signal that has been transmitted from upstream to downstream will depend on the effective coupling gain that exists in the transmission path at the given frequency. This gain may be subject to attenuation at certain frequencies and amplification at others. The net result is that the amplitude of a transmitted variability signal is uncertain and is best determined by carrying out bump tests from source to destination.


FIGURE 6 Process variability in screen reject flow due to control valve dead band.

TIME SERIES ANALYSIS

Time series analysis involves techniques for analyzing data collected at equal intervals of time [8]. It forms the basis for performing plant variability audits, in which the objective is to correlate the real-time data, give the data physical significance, and ultimately uncover the causes of plant operational problems. Although this seems simple enough as an objective, care must be taken to ensure that high-fidelity real-time data are acquired for meaningful analysis to occur, hence avoiding the garbage-in–garbage-out syndrome. In most cases plant data are stochastic, or random, in nature, and their analysis over a wide frequency spectrum can provide important clues regarding sources of plant variability. However, the amount of data that can be practically analyzed is limited by computing power. For instance, Fourier analysis of a large number of data points (typically more than 32,000) can be unwieldy or impractical on some computers. In turn this means that the sampling rate at which the data are collected may have to vary from very fast to quite slow in order to meet specific analysis requirements. When long-term data are needed to measure slow or low-frequency variability, a sparse sampling interval may be required, which in turn may cause data aliasing to occur.

Time Series Analysis Tools

Plant variability analysis usually involves collecting multiple data channels simultaneously, so that multivariable dynamics and variability can be studied. All of these data should be collected at the same sampling interval and in time synchronism with each other. Most modern plants have distributed-control-system (DCS)-based control systems, with plantwide data archiving systems in place, and it is tempting to consider that these real-time and historical data are ideal for conducting plant variability audits based on time series analysis. Unfortunately, at the time of writing, this is seldom the case, especially for plants that manufacture a solid product, such as pulp and paper mills, in which high-frequency data are important. A detailed treatment of digital data requirements is given in Ref. 9. Some of the more commonly occurring reasons for the unsuitability of these data include:

1. relatively slow sampling of DCS inputs

2. lack of antialiasing filters

3. use of data compression and report-by-exception algorithms, especially for the archived data.

The tools needed to perform effective time series analysis of plant data include:

1. a real-time data acquisition system capable of collecting time-synchronized, alias-free plant data at least 10 times faster than the fastest time constants of interest

2. statistical analysis including the calculation of mean, variance, and standard deviation

3. histograms

4. spectral analysis including power spectrum and cumulative spectrum

5. cross-correlation and autocorrelation functions.

Real-Time Data Acquisition

The requirements for high-fidelity real-time data involve ensuring that

1. each sensor or transmitter, which is the source of the real-time data, is correctly installed, calibrated, and measuring properly

2. the measurement dynamics and internal filtering characteristics of the sensor or transmitter are known and accounted for (the measurement dynamics will alter the apparent dynamics and bandwidth of the resulting information)

3. the input data are sampled with adequate resolution (minimum of 12 bits or 1 part in 4096)

4. the input data are collected and sampled by means of properly designed antialiasing filters to ensure alias-free information.

Sampling Theory—Time Series Data

To understand real-time data acquisition, it is important to grasp a few essential aspects of sampling theory that pertain to in-plant data collection and analysis. Continuous manufacturing plants contain hundreds or thousands of continuous process variables, such as flows, pressures, pHs, concentrations, and temperatures. These variables are continuous, or analog, in nature, can be described mathematically as continuous functions of time y(t), and can be modeled by differential equations in continuous time. These signals are measured by sensors and transmitters and are then inevitably digitized. The digitization can occur at the DCS input terminals, in a smart transmitter, or in any data acquisition system provided to collect the data. Digitization involves both quantization and sampling. Consider an analog signal y(t) sampled at a given sample interval Ts over a given number of readings N. The result is a discrete time signal yt, the time series data, which represents a data record NTs seconds long. The important terms and concepts are the following:

Ts = sampling interval

TN = 2Ts = Nyquist period. This is the period of the fastest sinusoid that can be represented at a sample interval of Ts (e.g., if Ts = 1 s, TN = 2 s).

fN = 1/TN = Nyquist frequency. This is the fastest frequency that can be represented at a sample interval of Ts (e.g., if Ts = 1 s, fN = 0.5 Hz). The Nyquist frequency is based on Shannon's sampling theorem, which states that the fastest frequency that can be represented by sampled data has a period of double the sampling interval.

ωN = 2π/TN = Nyquist angular frequency. This is the fastest angular frequency that can be represented at a sample interval of Ts (e.g., if Ts = 1 s, ωN = 3.142 rad/s). The angular frequency is useful as it is determined by the inverse of a time constant (e.g., a transmitter filter time constant of 2 s corresponds to a filter angular cutoff frequency of 0.5 rad/s, or a cutoff frequency of 0.0796 Hz).

Selecting Sampling Rates

Collecting data for plant analysis requires that sampling rates be chosen carefully. There are two reasons for data collection: analysis of step tests, or "bump tests," for controller tuning purposes, and analysis of process variability in order to identify variability sources. When step tests, or bump tests, are carried out, the sampling interval should be chosen to be fast compared with the expected process time constant. Ideally, the sample interval should be 10 times faster; in the worst case it should be at least 3 times faster. For a flow loop with a time constant of 3 s, the slowest sampling interval should be 1 s, and ideally it should be 0.3 s. For bump test analysis purposes, additional filters, such as antialiasing filters, are not necessary, as these would skew the measured dynamics.

The second reason for data collection is to measure the variability and identify cause and effect. Now the objective is to focus on the frequency content of the signal; hence antialiasing filters will be needed to ensure signal integrity. The sampling interval is chosen in order to capture and analyze a significant portion of the frequency spectrum. From a practical point of view there is some maximum number of data points that can be analyzed. A typical number might be 16,000. If very high-frequency data are needed, then the data should be collected at the fastest sample interval available. Suppose this is 50 ms. This means that the length of the data collection run can practically be set at ∼800 s, at which point 16,000 data points would have been collected. On the other hand, if the focus is on low-frequency behavior, then slower sampling rates will be needed. Suppose a 10-h run is needed to investigate slow variability. Ten hours represents 36,000 s; hence the sample interval should be 2.25 s in order not to exceed 16,000 data points in the record.
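The trade-off described above is easily captured in a pair of helper functions. This is a minimal sketch in Python; the 16,000-point budget is the example figure used in the text, and the function names are illustrative only.

```python
MAX_POINTS = 16_000   # example analysis budget from the text; adjust to the tool in use

def max_run_length_s(sample_interval_s, max_points=MAX_POINTS):
    """Longest data run (s) that stays within the point budget at a given interval."""
    return sample_interval_s * max_points

def required_sample_interval_s(run_length_s, max_points=MAX_POINTS):
    """Smallest sample interval (s) that keeps a run of this length within the budget."""
    return run_length_s / max_points

print(max_run_length_s(0.050))            # 800.0 s at a 50-ms interval
print(required_sample_interval_s(36_000)) # 2.25 s for a 10-h run
```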

Data Aliasing

Data aliasing is a phenomenon that occurs when a signal containing high-frequency content is sampled too slowly. Figure 7 shows an example of aliasing in which a sinusoidal input signal with a period of 0.85 s (frequency of 1.18 Hz) is sampled once per second. The result is a fictitious, or "aliased," sinusoid with a period of ∼6 s. The aliasing phenomenon occurs whenever the frequency content of the signal being sampled is faster than the Nyquist frequency of the sampling operation. In the example shown in Fig. 7, the frequency of the input signal is ∼1.18 Hz, while the Nyquist frequency of the 1-s sampler is 0.5 Hz. The resulting alias frequency is always slower than the Nyquist frequency. Data aliasing in a digital control system is potentially dangerous, as the resulting low-frequency signal content will allow the control loop to chase this apparent variability. In turn this will introduce actual, yet unnecessary, variability into the process. The possibility of this happening in a typical DCS installation is real, especially for fast process variables, such as hydraulic pressures. Sampling rates for DCS variables are often chosen in order to keep digital processor loading manageable, and sample rates as slow as several seconds are common. Hydraulic pressures can have measurement time constants faster than a second. Such an example would guarantee that aliasing is going to occur. The presence of aliasing can be detected only by sampling the input signal at progressively faster sampling rates and comparing the results. Each time that the sampling rate is increased, a different result will be seen. Once adequately fast sampling rates are used, further increases in sampling rate will show the same signal without change.

FIGURE 7 Example of data aliasing: a 0.85-s input sine wave sampled every second produces an apparent 6-s aliased signal.
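The Fig. 7 scenario is easy to reproduce numerically. The following is a minimal sketch, assuming NumPy; the 256-sample record length is an arbitrary illustrative choice.

```python
import numpy as np

# A 0.85-s-period (~1.18-Hz) sine is sampled once per second, well above the
# 0.5-Hz Nyquist frequency of the sampler, so the record must contain an alias.
true_period = 0.85                      # s
fs = 1.0                                # sampler rate, samples per second
n = 256
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * t / true_period)

# Locate the apparent (aliased) frequency of the sampled record.
power = np.abs(np.fft.rfft(samples - samples.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
apparent = freqs[np.argmax(power[1:]) + 1]

print(f"apparent frequency = {apparent:.3f} Hz, period = {1 / apparent:.1f} s")
# The 1.18-Hz input appears at about 0.18 Hz, an apparent period of close to 6 s,
# always slower than the 0.5-Hz Nyquist frequency.
```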

Antialiasing Filters

In most cases the sampling rate for a digital system is chosen without any knowledge of the frequency content of the signal to be measured. Good signal processing design would prevent aliasing by ensuring that the input signal is filtered before it is sampled. Such a filter is known as an antialiasing filter, and its purpose is to attenuate the signal amplitude at the Nyquist frequency of the sampler to the point that the resulting alias component will be so small that it will not matter. How much attenuation is needed to achieve this goal depends on the level of noise present at the Nyquist frequency, as well as on an acceptable threshold. In absolute terms it would be desirable to attenuate any signal alias down to the least-significant bit of the analog-to-digital (A/D) conversion. If a 12-bit A/D converter is used, this means that attenuation down to 1:4096, or −72 dB, is required. In practice it may be argued that the signal-to-noise ratio should also be considered; it can be argued that only −52 dB is needed if the signal-to-noise ratio is 10.

It often happens that signals are sampled several times. The first sampling occurs at the A/D converter, but then the signal is sampled again for other purposes. In a DCS, the first sampling occurs at the A/D converter; however, the controller often runs at a slower rate. To conform to good design practice, a second filter is needed before the signal is used for control. This is seldom the case in most DCSs. Another example involves data collection for time series analysis, as shown in Fig. 8 below. The example is based on a commercially available plant data acquisition package [10, 11]. In this case the analog signal is filtered by an analog filter before it is sampled by the A/D converter. Because the analog filter has a given cutoff frequency, the sampling rate of the A/D converter must be selected so that the filter provides sufficient attenuation at the Nyquist frequency of the sampler. After the data are sampled at T1 seconds, they must be sampled again in order to allow analysis at another sampling rate T2. This requires that the signal be passed through another antialiasing filter to allow the second sampler to operate without aliasing. The second antialiasing filter is digital, is implemented in software, and is executed at the A/D execution rate. Third-order filters with three equal time constants are used in this example for illustrative purposes only; higher-order filters would normally be used. Assume that a third-order antialiasing analog filter with a time constant of 0.159 s is used.

FIGURE 8 Anti-aliasing filter scheme for multiple sampling rates.

The transfer function for the filter is

Gfilter(s) = 1/(τs + 1)³,  with τ = 0.159 s

The cutoff frequency of the filter is

1/[2π(0.159)] = 1 Hz

Suppose data collection at a sample interval of 2 s is desired. Hence the Nyquist period is 4 s andthe Nyquist frequency of the second sampler is 0.25 Hz. Suppose −40 dB of attenuation is consideredadequate for the second filter, and a third-order filter is to be used for this purpose. A third-order filterachieves −40 dB of attenuation at a frequency 4.5 times the cutoff frequency (corner frequency).Hence the corner frequency must be 0.25 Hz divided by 4.5 or 0.056 Hz. This can be achieved witha time constant of 2.86 s.

Statistical Analysis

Standard statistical analysis of time series data is important. Consider N readings of a variable yt. Important statistical measures are the mean, variance, and standard deviation. These are calculated as follows:

Mean: Mathematically the mean is defined as the expectation of yt. This is written as

E(yt) = µy    (1)

The mean value of yt can be approximated as

Y = (1/N) Σ_{t=1}^{N} yt    (2)

Variance: The variance is defined as the mathematical expectation

E(yt − µy)² = σy²    (3)

The variance of yt can be approximated as

σy² ≅ (1/N) Σ_{t=1}^{N} (yt − Y)²    (4)

The variance is expressed in the units of (yt)². This is usually thought of as being in power units, to borrow from electrical engineering (power = V²/R). In process variability analysis the concept of variance is important because the amount of variability present, or the amount that can be reduced, is in variance units.

Standard Deviation: The standard deviation is

σy = √(σy²)    (5)

The standard deviation is the square root of the variance, and hence is expressed in the units of yt.


Normal or Gaussian Distribution

An important concept in statistical theory is the central limit theorem, which holds that, as many independent random effects combine, the resulting probability distribution tends toward the normal, or Gaussian, distribution with its familiar bell-shaped curve. The normal distribution is symmetrical about the mean value, with essentially all readings falling within ±3σ of the mean. Confidence limits of ±2σ capture 95% of all readings.

It is common to assume that all probability distributions are normal. In process variability analysis it is important to remember that this assumption is not always a good one. For instance, the distribution of a sine wave is bimodal and definitely not normal. When control loops cycle, they tend to appear as noisy sine waves; hence the notion that all variables have a normal distribution is generally not correct.

Two-Sigma. A common way of expressing variability is by calculating two standard deviations, commonly referred to as two-sigma, or 2σ. This has the property that it represents the 95% confidence limit. The units of 2σ are the same as those of yt. For this reason 2σ is a natural way of expressing the amount of variability about the mean value. In statistical process control (SPC) terminology, three standard deviations, or 3σ, are used frequently.

Variability—Two-Sigma Percentage of Mean. Another way of expressing variability that has come into common usage in plant process variability auditing is 2σ expressed as a percentage of the mean value:

% variability = 100(2σ)/mean    (6)

This is simply double the coefficient of variation in statistical notation. It has the useful property of being expressed as a percentage, as opposed to a specific set of units. This makes it possible to compare the percent variability of upstream and downstream variables on a common basis.
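Equations (2), (4)–(6) translate directly into a few lines of code. A minimal sketch, assuming NumPy; the basis-weight numbers are made up purely for illustration.

```python
import numpy as np

def variability_summary(y):
    """Mean, two-sigma, and two-sigma as a percentage of the mean [Eqs. (2), (4)-(6)]."""
    y = np.asarray(y, dtype=float)
    mean = y.mean()                        # Eq. (2)
    two_sigma = 2.0 * y.std()              # Eqs. (4) and (5), population form (divide by N)
    percent = 100.0 * two_sigma / mean     # Eq. (6)
    return mean, two_sigma, percent

# Hypothetical basis-weight record centered near 64 gsm.
rng = np.random.default_rng(0)
basis_weight = 64.0 + 0.6 * rng.standard_normal(2048)
mean, two_sigma, percent = variability_summary(basis_weight)
print(f"mean = {mean:.2f} gsm, 2Sig = {two_sigma:.2f} gsm ({percent:.2f}%)")
```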

Stochastic Data Structures and Ideal Signals

Stochastic data are in some sense random by nature. In spite of this random nature, stochastic data do have structure [8]. Figure 9 shows four important examples: white noise, white noise plus a sine wave, filtered white noise (also known as autoregressive noise), and integrating-moving-average noise.

White Noise. White noise is the name given to a signal that is absolutely random. The example shown in Fig. 9 is ideal white noise with a normal distribution. White noise derives its name from white light, which contains all of the colors of the visible spectrum. In the same way, white noise contains equal power at all frequencies. As a result, white noise has a perfectly flat power spectrum. It also has an autocorrelation function that is zero for all lags except lag 0. White noise is the mathematical abstraction of a purely random signal. The concept is used extensively in stochastic signal processing as a source for generating other signals.

White Noise plus a Sine Wave. This is a simple yet realistic example of a signal that occurs in practice. It often happens that a signal consists of a dominant sine wave with noise added.

Filtered White Noise. This is another simple yet realistic stochastic signal, which has the appearance of a real signal from a plant. This signal is generated when white noise is passed through a first-order filter. This type of noise is known as autoregressive noise. Passing white noise through a filter of some type is the standard method used to describe the characteristics of stochastic signals. Such a filter is known as a noise-shaping filter.


FIGURE 9 Examples of stochastic signals. Upper left: ideal white noise. Upper right: white noise plus a sine wave. Lower left: filtered white noise. Lower right: integrating-moving-average noise.

Integrating-Moving-Average Noise. This type of noise is characterized by a tendency of a random signal to drift, with white noise superimposed. This type of signal is quite characteristic of normal plant behavior.
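The four ideal signals of Fig. 9 can be generated in a few lines, which is often useful for testing analysis tools against data of known structure. A minimal sketch, assuming NumPy; the filter constant a, the moving-average weight theta, and the 25-sample sine period are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048

# 1. White noise: equal power at all frequencies, zero autocorrelation beyond lag 0.
white = rng.standard_normal(n)

# 2. White noise plus a dominant sine wave.
sine_plus_noise = np.sin(2 * np.pi * np.arange(n) / 25) + 0.5 * white

# 3. Filtered white noise (autoregressive): white noise through a first-order filter.
a = 0.9                                  # first-order filter constant
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = a * ar[t - 1] + white[t]

# 4. Integrating-moving-average noise: a drifting signal driven by white noise.
theta = 0.5
ima = np.zeros(n)
for t in range(1, n):
    ima[t] = ima[t - 1] + white[t] - theta * white[t - 1]

# Each series can now be fed to the histogram, spectrum, and correlation tools below.
```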

Histogram

The histogram is a statistical frequency-distribution plot. Figure 10 shows two examples. The upper plot shows white noise with a nearly normal distribution. The mean value is 3.101. The two-sigma is 0.0999, or 3.22%. The standard deviation is ∼0.05; hence three-sigma is 0.15. As a result a normal distribution should extend from 2.95 through 3.25. This agrees fairly well with the histogram, which does have a few outliers.

The lower plot of Fig. 10 shows eight and a half cycles of a sine wave. The period is 25 s, and the amplitude is 0.1. The mean is 3.101, and the two-sigma is 0.1417, or 4.57%. The histogram shows a strong bimodal distribution extending from 3.0 up to 3.2. This agrees with the time series plot, which extends from 3.0 to 3.2. The bimodal nature of the histogram is due to the nature of the sine wave, which spends most of the time at the extremities of the sinusoid and relatively little time near the mean value. It is interesting to note that the two-sigma is 0.1417. The normal interpretation would be that 95% of all readings of this signal would be contained within ±0.1417 of the mean and that a further 5% of all readings would be outside this range, assuming a normal distribution. In fact, the sine-wave amplitude is exactly 0.1; hence no readings extend as far as ±0.1417. The two-sigma of 0.1417 can be explained as follows. The variance of a sine wave is

σ² = (1/2)A²

where A is the sine-wave amplitude:

A = 0.1,  σ² = 0.005,  σ = 0.0707,  2σ = 0.1414

This agrees well with the result shown in Fig. 10.

FIGURE 10 Examples of histogram plots. Upper left: random (white) noise time series; upper right: its histogram (approximately normal distribution); lower left: sine wave (25-s period) time series; lower right: its histogram (bimodal).
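A quick numerical check of the sine-wave statistics discussed above, assuming NumPy; any sample spacing works provided whole cycles are covered.

```python
import numpy as np

# Sine wave of amplitude 0.1 about a mean of 3.1, 25-s period, sampled every second
# over exactly eight cycles.
t = np.arange(0, 200)
y = 3.1 + 0.1 * np.sin(2 * np.pi * t / 25)

print(2 * y.std())                    # 0.1414 = 2 * (0.1 / sqrt(2)), as derived above
counts, _ = np.histogram(y, bins=10)
print(counts)                         # largest counts in the outermost bins: a bimodal shape
```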

Spectral Analysis

Spectral analysis is a key component of plant process variability analysis and is used to determine the frequency content of signals. The key techniques include the Fourier transform, the fast Fourier transform (FFT), the power spectrum, and the cumulative spectrum. Spectral analysis is helpful in diagnosing variability cause-and-effect relationships and also in analyzing control loop behavior. The techniques are based on the work of the French mathematician Jean Baptiste Fourier (1768–1830). The basic idea is that any signal can be expressed as the sum of a series of sine and cosine waves at different frequencies, from very fast to very slow. These fast and slow sine waves are called harmonics. Although Fourier developed the mathematics for the continuous Fourier series—he was analyzing the vibration of violin strings—the discrete Fourier series is the tool needed to analyze time series data. Consider a time series signal yt containing N equally spaced readings taken every Ts seconds. There are n = N/2 harmonics present, and yt can be expressed as the discrete Fourier series:

yk = (1/2)A0 + Σ_{m=1}^{n−1} [Am cos(2π fm Ts k) + Bm sin(2π fm Ts k)] + (1/2)An(−1)^k    (7)

for k = 0, 1, . . . , N − 1, and there are n harmonics, with

fm = m/(N Ts)    (8)

for m = 1, 2, . . . , n − 1. The lowest-frequency harmonic is called the fundamental and is

f1 = 1/(N Ts)    (9)

and has a period of T1 = N Ts. The highest-frequency harmonic is called the Nyquist frequency and is

fN = 1/(2Ts)    (10)

and has a period of TN = 2Ts. The mth harmonic is given by

hm,k = Am cos(2π fm Ts k) + Bm sin(2π fm Ts k)    (11)

Figure 11 illustrates the concept. Shown is one cycle of a square wave, which is represented by 16 data points; hence N = 16 and n = 8. Note that square waves, because of their symmetric nature, contain only odd harmonics. Hence only the first, third, fifth, and seventh harmonics have nonzero values. The Am and Bm Fourier coefficients are listed in Table 1.

For all 16 values of k in Fig. 11, the sum of the harmonics equals the original data exactly. It is important to appreciate that the number of harmonics is fixed and is equal to one half of the number of data points. The fastest harmonic is the Nyquist, and its frequency is fixed by the sampling interval. The slowest harmonic is the fundamental, and its frequency is determined by the length of the data run. All of the other harmonics have fixed frequencies determined by Eq. (8). The Fourier transform treats these fixed frequencies as the bins of a histogram and attempts to represent the power at the actual frequencies as best as it can at bins on either side of the actual frequency. This process is known as leakage between bins.

TABLE 1 Fourier Harmonics and Coefficients

Harmonic   Period   Frequency   Am     Bm
1          16       0.0625      0.25   1.257
2          8        0.125       0      0
3          5.33     0.188       0.25   0.374
4          4        0.25        0      0
5          3.2      0.313       0.25   0.167
6          2.67     0.375       0      0
7          2.29     0.438       0.25   0.05
8          2        0.5         0      0
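Table 1 can be reproduced with a standard FFT routine. The sketch below assumes NumPy and assumes that the Fig. 11 square wave is +1 for its first eight samples and −1 for its last eight (the exact waveform is not reproduced here, so this is an assumption); with that waveform the Am and Bm coefficients of Eq. (7) come out as listed in Table 1.

```python
import numpy as np

# Assumed Fig. 11 square wave: +1 for the first 8 samples, -1 for the last 8 (N = 16).
y = np.concatenate([np.ones(8), -np.ones(8)])
N = y.size
X = np.fft.rfft(y)

# Map the complex DFT onto the A_m, B_m coefficients of Eq. (7).
A = 2.0 * X.real / N
B = -2.0 * X.imag / N
for m in range(1, N // 2 + 1):
    print(f"harmonic {m}: period {N / m:5.2f}  Am = {A[m]:5.2f}  Bm = {B[m]:6.3f}")
# Odd harmonics give Am = 0.25 and Bm = 1.257, 0.374, 0.167, 0.050, as in Table 1;
# the even harmonics are zero.
```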


FIGURE 11 One cycle of a square wave and its harmonics.

Fourier Transform

The Fourier transform transforms the time series signal into the frequency domain by calculating the A and B Fourier coefficients for each harmonic, which in turn represent the frequency content of the signal. The A and B coefficients can also be represented in magnitude and phase notation. In either case, the Fourier transform is a transformation of the amplitude information of the time-domain signal into the frequency domain. Once the coefficients are known, the inverse Fourier transform allows the reconstruction of the original time series data.

Fast Fourier Transform

The calculation of the Fourier transform is computationally intensive. The number of calculations is proportional to N². For a large number of data points the calculation of the Fourier transform can therefore be slow. The FFT is a computationally optimized algorithm that calculates the exact transform. The only limitation is that the number of data points must conform to a power of 2 (hence 512, 1024, 2048, 4096, etc.). The number of calculations is proportional to N log₂ N; hence the calculation is much faster. The only complication arises if the number of data points is not equal to a power of 2. Most FFT algorithms will simply do the calculation on the basis of the first points that conform to a power of 2. Hence, if 1023 data points are available, the FFT will perform the calculation on the first 512 points. The consequence is that only approximately one half of the data is used. For this reason, most data collection algorithms attempt to collect a power-of-2 number of data points. Other algorithms attempt to "pad" the data so that the record length conforms to a power of 2. For instance, if only 1023 data points are available, one additional point is added; typically its value is set equal to the average for the data run. Naturally, the more padding that is done, the more error is introduced.

Power Spectrum

The power spectrum is a calculation of the distribution of the variance over the frequency spectrum. It is the most useful of the spectral analysis tools and the most frequently used tool in plant data analysis. The calculation of the power spectrum relates to the Fourier coefficients as follows. For each harmonic, the variance is

Vm = (1/2)(amplitude)² = (1/2)(Am² + Bm²)    (12)

The power spectrum for the square wave of Fig. 11 is shown in Fig. 12. There are eight points in the spectrum, one corresponding to each harmonic. From Parseval's theorem it is known that the variance of the time series data is equal to the sum of the variances of all of the harmonics. Hence, the power spectrum is a variance, or power, versus frequency plot and is sometimes known as a periodogram.

FIGURE 12 Power spectrum of Figure 11 square wave.


Cumulative Spectrum

The cumulative spectrum shows the percentage of the total variance occurring at and below a given frequency. This is a very useful diagnostic tool that helps determine whether a cycle at a particular frequency contributes a significant amount of variance. The cumulative spectrum is simply a plot of the cumulative percentage variance contribution, from 0% to 100%, as the frequency increases from the fundamental to the Nyquist.
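A minimal sketch of Eq. (12) and the cumulative spectrum, assuming NumPy and reusing the square wave assumed above for Fig. 11; the helper name is illustrative. The per-harmonic variances sum to the total variance (Parseval's theorem), and the cumulative curve shows how much each harmonic contributes.

```python
import numpy as np

def power_and_cumulative_spectrum(y):
    """Per-harmonic variance Vm [Eq. (12)] and the cumulative spectrum in percent."""
    y = np.asarray(y, dtype=float)
    N = y.size
    X = np.fft.rfft(y - y.mean())
    A = 2.0 * X.real / N
    B = -2.0 * X.imag / N
    V = 0.5 * (A ** 2 + B ** 2)
    if N % 2 == 0:
        V[-1] /= 2.0                      # the Nyquist term appears only once in Eq. (7)
    V = V[1:]                             # drop the (zero) mean term
    cumulative = 100.0 * np.cumsum(V) / np.sum(V)
    return V, cumulative

y = np.concatenate([np.ones(8), -np.ones(8)])     # the assumed Fig. 11 square wave
V, cumulative = power_and_cumulative_spectrum(y)
print(np.sum(V), np.var(y))               # Parseval: both are ~1.0
print(np.round(cumulative, 1))            # the fundamental alone carries ~82% of the variance
```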

Spectral Analysis Plotting Methods

There are many ways of plotting power spectra, and it is important to identify how each method alters the usefulness of the information. Three methods are shown in Fig. 13.

1. Variance versus frequency (Fig. 13, upper right plot). This is the most common method of plotting spectral information. It is used by most software packages and is common in certain types of plant analysis, such as vibration analysis. For the purpose of plant process variability analysis this method is the least useful. It tends to deemphasize the low-frequency information, which is often of great interest, while the high-frequency end of the spectrum often contains relatively little information, as the sensor has filtered it out.

2. Variance versus period (Fig. 13, lower left plot). This is an unusual method of plotting and is also illustrated in Figs. 3, 5, and 6. This method provides an intuitively obvious interpretation of data that have strong cyclic content. It is useful for human interpretation in a plant production environment, where there may be little formal training in frequency response methods. A disadvantage of the method is that it skews the frequency scale and overemphasizes the low-frequency content.

3. Log variance versus log frequency (Fig. 13, lower right plot). This method is especially useful for data structure identification and modeling, as the plot has a strong similarity to a Bode plot. In Fig. 13 (lower right) the plot looks very much like a Bode plot with a corner frequency of ∼1 Hz. This would suggest that the data could consist of white noise passed through a first-order filter with a time constant of ∼0.159 s.

FIGURE 13 Three ways of plotting a power spectrum. Upper right: power vs. frequency; lower left: power vs. period; lower right: log power vs. log frequency.

FIGURE 14 Power spectra and cumulative spectra for the Figure 13 data. The 6-s cycle represents about 40% of the total variance.

Figure 14 shows power and cumulative spectra for the same data shown in Fig. 13. The plots include both log–log and period presentations. The 6-s cycle evident in the data represents ∼40% of the total variance, as is evident from the cumulative spectrum. If it were possible to remove this cycle completely, the variance would be reduced by 40%, while the standard deviation would fall to √0.60 ≈ 0.77 of its original value, a reduction of just over 20%.

Spectral Analysis Windowing and Detrending

The Fourier transform attempts to fit harmonics to the time series data exactly. Because the time series data are sampled, and in many cases the data are also nonstationary (their character changes over time), methods are needed to smooth the spectral information and increase the confidence in the variance estimated at any given frequency. Two methods of smoothing are available: windowing and overlapping. Windowing produces a smoothed power spectrum; some windowing techniques achieve this by tapering the time series data before the Fourier transform is taken, while others smooth the power spectrum itself. Common windowing techniques include square (no windowing applied), Daniell, Parzen, Welch, and Hanning [8]. Another method of smoothing involves breaking up the time series data into overlapping sections, performing spectral analysis of each section independently, and averaging the final results. The overlapping process can be controlled by specifying the number of overlapping segments to be used in the calculations. Overlapping affects the resulting spectral analysis by shortening the period of the fundamental frequency, since each segment is shorter than the full data record.

When data contain a tendency to drift, the resulting power spectrum will interpret the drift as a strong low-frequency component. This low-frequency component may "swamp" the remaining spectral information. The detrending option is useful to avoid this tendency. It involves preprocessing the original time series data through a first-order backward difference in order to remove the trend.

The power spectral plots shown in Figs. 3, 5, 6, and 14 include the legend "Win = Sqr., Detr = N, Ovr = 0." In this time series software package [7], this means that a square window (no windowing) was selected, detrending was not selected, and overlapping segments were not selected either.

Cross-Correlation and Autocorrelation Functions

Time series data can also be analyzed in the time domain, and the two functions that are particularly useful are the autocorrelation and cross-correlation functions. Equation (4) gives the calculation for variance. For two time series variables xt and yt, the cross covariance is calculated as follows:

C²xy(k) = (1/N) Σ_{t=1}^{N−k} (xt − X)(yt+k − Y)    (13)

This calculates the correlation between variables xt and yt for different time lags k. The cross covariance can be normalized by dividing by the standard deviations of xt and yt to produce the cross-correlation function:

rxy(k) = C²xy(k) / (σx σy)    (14)

The cross-correlation function is a plot of the correlation coefficient, between +1.0 and −1.0, for different time lags between the data sets. The interpretation is that +1.0 represents perfect positive correlation, 0.0 represents no correlation, and −1.0 represents perfect negative correlation.

Cross-Correlation Function Use. The cross-correlation function is useful for providing insight into the relationship between two variables. In particular, it is used to investigate whether one variable is related to another through a time delay.
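A minimal sketch of Eqs. (13) and (14) used to recover a transport delay, assuming NumPy; the 150-sample delay, the autoregressive upstream signal, and the noise level are hypothetical values chosen only for illustration.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """r_xy(k) of Eq. (14) for lags k = 0..max_lag (y lagging x by k samples)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    N = x.size
    r = []
    for k in range(max_lag + 1):
        c = np.sum(x[: N - k] * y[k:]) / N           # cross covariance, Eq. (13)
        r.append(c / (x.std() * y.std()))            # normalize, Eq. (14)
    return np.array(r)

# Hypothetical case: a downstream consistency follows an upstream flow after a
# 150-sample transport delay, with measurement noise added.
rng = np.random.default_rng(2)
e = rng.standard_normal(4000)
upstream = np.empty(4000)
upstream[0] = e[0]
for t in range(1, 4000):
    upstream[t] = 0.9 * upstream[t - 1] + e[t]       # autoregressive upstream signal
downstream = np.roll(upstream, 150) + 0.5 * rng.standard_normal(4000)

r = cross_correlation(upstream, downstream, max_lag=300)
print(int(np.argmax(r)))                             # ~150: the transport delay in samples
```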

Autocorrelation. The autocorrelation function for a time series signal yt is based on the autocovariance function, which is calculated as

C²yy(k) = (1/N) Σ_{t=1}^{N−k} (yt − Y)(yt+k − Y)    (15)

The autocorrelation function then is

ryy(k) = C²yy(k) / σy²    (16)


The autocorrelation function is a plot of the regression coefficient for a time series signal regressed with a time-lagged version of itself. The function varies from +1.0 to −1.0 and always has a value of +1.0 for a time lag of zero. It represents an effective way to test a signal for degrees of randomness, as a completely random signal (white noise) has an autocorrelation function that is zero for all lags except lag zero. Here "zero correlation" really means inside the confidence limits. The confidence limits for an autocorrelation function are calculated as ±2/√N, where N is the number of data points. Hence for 1024 data points the confidence limits are at ±0.0625.
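A minimal sketch of Eqs. (15) and (16) with the ±2/√N confidence limits applied to a white-noise test signal, assuming NumPy; the 20-lag range is an arbitrary illustrative choice.

```python
import numpy as np

def autocorrelation(y, max_lag):
    """r_yy(k) of Eq. (16) for lags k = 0..max_lag."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    N = y.size
    var = np.sum(y * y) / N
    return np.array([np.sum(y[: N - k] * y[k:]) / N / var for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
noise = rng.standard_normal(1024)                 # ideal white noise
r = autocorrelation(noise, max_lag=20)
limit = 2.0 / np.sqrt(noise.size)                 # +/-0.0625 for N = 1024

print(r[0])                                       # always exactly 1.0 at lag zero
print(np.sum(np.abs(r[1:]) > limit))              # expect roughly 1 of the 20 lags (~5%)
                                                  # to fall outside the confidence limits
```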

Autocorrelation and Minimum Variance Control. The autocorrelation function also serves an important purpose as a control performance benchmark. From minimum variance control theory [12] we know that the lowest variance a control loop can have is that obtained when minimum variance control is achieved. Under this condition, the resulting controlled variable would be perfect white noise, as the controller would remove all nonrandom variability. In the presence of dead time in the loop, perfect white noise cannot be achieved, and under this condition the autocorrelation function of the controlled variable would be zero for all lags except those less than the dead time plus one sample.

CONTROL LOOP PERFORMANCE FOR UNIFORM MANUFACTURING

How should loops be tuned to enhance, and not hinder, the efficient manufacture of uniform product? To answer this question the manufacturing requirements for each unit operation in the process must be identified and translated into dynamic performance requirements. A controller tuning strategy must be formulated that translates these into a desired speed of response for each loop and coordinated dynamics for all of the control loops in the plant. At the heart of this concept lie model-based controller tuning methods, such as IMC and Lambda tuning [13–15], all of which require the selection of the closed-loop time constant—often called Lambda—for each loop before the tuning takes place. Lambda tuning is discussed in the next section. It achieves closed-loop responses that are first order by nature and have a selectable closed-loop time constant. It differs from earlier tuning methodologies such as those of Ziegler and Nichols [16], Cohen and Coon [17], and the integral performance criteria methods [18], all of which achieve a closed-loop performance that is in some sense as fast as possible and is also oscillatory. Uniform manufacturing imposes two key criteria. First, oscillatory loop tuning is unacceptable, as it induces resonance and amplifies variability. Second, the speed of response of the control loops should be selected to suit the manufacturing requirements. This requires the selection of the closed-loop time constant for each loop. This choice affects the robustness of the control loop—the faster the loop, the less robust; this issue is dealt with in the next section. In this section, the focus is on how to determine the closed-loop time constant in such a way that manufacturing is enhanced. The fine paper machine example is used to illustrate this tuning process.
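Lambda tuning itself is developed in the next article; as a preview of how a chosen closed-loop time constant turns into controller settings, the sketch below uses the widely published IMC/Lambda PI rule for a first-order-plus-dead-time process. This is an assumption for illustration only: the formula shown here is the commonly cited textbook form, not necessarily the exact procedure given in the following section, and the loop parameters are hypothetical.

```python
def lambda_pi_tuning(process_gain, time_constant_s, dead_time_s, lambda_s):
    """Commonly published Lambda (IMC) PI rule for a first-order-plus-dead-time process:
    Kc = tau / (Kp * (lambda + dead_time)),  Ti = tau."""
    kc = time_constant_s / (process_gain * (lambda_s + dead_time_s))
    ti = time_constant_s
    return kc, ti

# Hypothetical flow loop: gain 1.8 %/%, time constant 3 s, dead time 1 s.
# Choosing lambda = 9 s (three times the open-loop time constant) gives a slow,
# non-oscillatory closed-loop response suited to uniform manufacturing.
print(lambda_pi_tuning(1.8, 3.0, 1.0, 9.0))   # Kc ~ 0.17, Ti = 3 s
```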

Identifying the Manufacturing Requirements—Fine Paper Machine Example

Consider Fig. 2, which shows the blend chest area of the fine paper machine example. Fine paper is manufactured from two raw materials—hardwood pulp and softwood pulp. Hardwood fibers are short and provide the surface smoothness needed for fine paper grades. Softwood fibers are long and are needed to provide sheet strength. In a typical operation, a 70%-to-30% hardwood-to-softwood blend is required. In the example shown in Fig. 2, hardwood pulp is pumped from the hardwood chest. After consistency control, it is passed through a refiner to improve the fiber bonding properties. It is then flow controlled into the blend chest. Exactly the same procedure is used for the softwood pulp. The blend chest level controller cascades its output to the setpoints of the two flow controllers in order to maintain level while at the same time maintaining the correct blend ratio. The blended pulp is pumped out of the blend chest and is consistency controlled as shown in Fig. 15 before being pumped into the machine chest under both flow and level control. From the machine chest, the pulp is diluted, cleaned,


FIGURE 15 Examples of auto and cross correlation functions.

screened, and pumped to the headbox. From the headbox it discharges onto the wire and forms a sheet of paper.

Ziegler–Nichols Tuning

The oldest and best-known tuning method is the one developed by Ziegler and Nichols in 1942 [16]. This method is used for illustration purposes in this example, although any of the earlier methods [17, 18] could also have been used as effectively. If the Ziegler–Nichols tuning method were used to tune the blend chest flows, then the setpoint responses of the hardwood (FC-105) and softwood (FC-205) flow loops might look like those shown in Fig. 16. The figure shows the response of both loops to a step change in blend chest level controller LC-301 output in manual mode. Both responses have a quarter-amplitude-damped decay ratio, as suggested by the Ziegler–Nichols tuning method. They also have different settling times as a result of differences in open-loop dynamics that are due to differences in pumps, line dimensions, flow meters, and control valves. The proponents of fast oscillatory tuning argue [18] that the positive half cycles of the transient response are averaged out by the negative half cycles. However, in the case of solid product manufacture such as paper, this does not translate into acceptable product. Figure 17 shows the impact of the oscillatory tuning on the fiber blend ratios entering the blend chest. The blend ratios vary by 5% with a period of ∼20 s throughout the transient response. Assuming a paper machine speed of 1000 m/min and with downstream mixing neglected (there is no reason to require effective mixing to overcome the effects of badly tuned loops), the Ziegler–Nichols tuning will result in paper with smoothness and strength properties that alternate


FIGURE 16 Blending problem with flow loops tuned using Ziegler-Nichols.

every 170 m. This will cause most of the product to be unsatisfactory, as the changing demand from the level controller will cause the flow loops to cycle continuously.

Clearly, the oscillatory responses and unequal speeds of the two loops have resulted in a significant, yet unnecessary, disturbance being created in the blend ratio. Oscillatory tuning offers no beneficial effect for uniform manufacturing. This example serves to illustrate the need for coordinated tuning of these two flow loops from a manufacturing standpoint. Clearly, the requirement is for the hardwood and softwood flows to respond in a nonoscillatory manner, with exactly the same speed of response. If both loops were tuned to have first-order responses with equal closed-loop time constants, this would achieve the setpoint change without causing an upset to the blend ratio. This is illustrated in Figs. 18 and 19, which show the impact of Lambda tuning of both loops to a closed-loop time constant of 20 s. The transient lasts ∼80 s, while the blend ratio is maintained absolutely constant. This is only one example of how the selection of a specific speed of response for multiple control loops has an impact on uniform manufacturing.
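The blend ratio argument can be checked numerically. The brief sketch below assumes that each flow loop responds to the cascaded setpoint change as a first-order closed loop and that the level controller asks for the same fractional change from both loops; the flow values and the 10% step are illustrative assumptions, not data from the example.

```python
import numpy as np

def flow_response(f0, f_final, lam, t):
    """First-order closed-loop setpoint response with time constant lam (seconds)."""
    return f0 + (f_final - f0) * (1.0 - np.exp(-t / lam))

t = np.linspace(0.0, 200.0, 2001)            # seconds
hw0, sw0 = 700.0, 300.0                      # 70/30 hardwood/softwood flows (arbitrary units)
step = 1.10                                  # level controller asks for +10% total flow

for lam_hw, lam_sw in [(20.0, 20.0), (10.0, 30.0)]:
    hw = flow_response(hw0, step * hw0, lam_hw, t)
    sw = flow_response(sw0, step * sw0, lam_sw, t)
    ratio = hw / (hw + sw)
    print(f"lambda_hw={lam_hw:>4}, lambda_sw={lam_sw:>4}: "
          f"hardwood fraction ranges {ratio.min():.4f} to {ratio.max():.4f}")
# Equal time constants hold the fraction at exactly 0.7000; unequal ones disturb it.
```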

Coordinated Loop Tuning Based on Operational Requirements

The previous example serves to illustrate that the fastest possible speed of response for each loop does not in any way guarantee uniform manufacturing of a product in a continuous plant. In fact, the example clearly shows the need to carefully select the desired closed-loop time constant in order to avoid unnecessary disturbances. The paper machine blend chest of Fig. 2 will be used to illustrate other rules for making the control loop speed of response decisions. Collectively these rules can be used to evolve a tuning strategy for uniform manufacturing.


FIGURE 17 Blend ratios vary due to Ziegler-Nichols tuning of flow loops.

Rule of Thumb for Maintaining Constant Ingredient Ratios

The paper machine blend flow example can be generalized to the feed of multiple ingredients to any reactor that requires the ratio of the ingredients to be maintained constant. Assuming that all ingredient flows are controlled by a common cascade control strategy, all of the flow loops should be tuned to have the same closed-loop time constant.

Rules of Thumb for Process Interaction

Control loops often interact because of process coupling. If both loops are tuned for the same closed-loop time constant, they will probably cycle as a result. A good rule of thumb is to select the loop that is most important from an operational point of view and tune it to be as fast as possible, in keeping with the need to maintain robustness and avoid resonance. The next most important of the other interacting loops should then be tuned 5–10 times slower than the loop already tuned. As an example, the consistency loops and the flow loops for both pulp streams are known to interact as a result of process coupling. A change in stock flow immediately results in a need to change dilution water if the consistency is to be maintained constant. On the other hand, a change in consistency will alter the fluid friction in the piping system and, as a result, will cause the flow to change. The operational requirement is to deliver a constant mass flow of fiber to the refiners. The refiners are critically important as they determine the degree of fiber bonding and hence the strength of the paper sheet. In view of the need, the consistency loops are probably more important than the flow loops. A reasonable tuning strategy might be to tune the consistency loops to be as fast as possible, subject to


FIGURE 18 Blending problem with flow loops tuned for Lambda = 20 seconds.

considerations about resonance that is due to the dead time present in the consistency loops. In turn, it would be reasonable to tune the flow loops to be 5 or 10 times slower than the consistency loops, so as not to inadvertently upset them dynamically as the flow changes. At the same time, the flow loops must be tuned to have identical speeds of response to maintain the blend ratio.

Rules of Thumb for Cascade Loops

Another way that control loops can interact is through cascade control, with the output of one loop becoming the setpoint of another. In general, the 5–10-times rule used for solving process interaction can also be applied to the inner (faster) and outer (slower) cascade loops. When a level controller is the outer loop, the rule should be more stringently set at 10 times faster, because of the sensitivity of level controllers to slow inner-loop dynamics. For the paper machine example, the flow loops should be at least 10 times faster than the blend chest level loop.

Rules of Thumb for Buffer Inventory Storage Level Control

The blend chest is a buffer inventory storage, which by design is intended to decouple the downstream manufacturing requirements of the paper machine from the upstream pulp blending process. As such, the blend chest level controller should be tuned as slowly as possible. The blend chest has a residence time of typically 15 min. The operational need for the blend chest level controller is to ensure that the blend chest never overflows and that the level never sinks below some safe level with respect to the agitation zone. The lower limit on the speed of response of the flow loops is at least 10 times faster


FIGURE 19 Blend ratios are constant due to Lambda tuned flow loops.

than the blend chest level controller. In order to allow the blend chest to work effectively as a buffer inventory, the level control should be tuned as slowly as possible while ensuring that the production rate changes expected on the paper machine will not cause overflow and underflow problems. A simple rule of thumb to achieve this is to set the closed-loop time constant for the level controller equal to the residence time of the tank. More detailed considerations should take into account the level setpoint, relative to the total tank volume, as well as a priori knowledge of the expected downstream flow demand changes that must be accommodated. These considerations could well result in the selection of a closed-loop time constant longer than the residence time of the tank.
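A short worked sketch of the arithmetic behind these rules of thumb (the 15-min residence time and the 10-times cascade factor are from the text; the rest is illustration):

```python
# Lambda selections for the blend chest example.
residence_time_s = 15 * 60                  # blend chest residence time, 15 min
lambda_level = residence_time_s             # rule of thumb: level lambda = tank residence time
lambda_flow_max = lambda_level / 10.0       # flow loops at least 10 times faster than the level loop
print(f"blend chest level loop: lambda = {lambda_level} s")
print(f"hardwood/softwood flow loops: lambda <= {lambda_flow_max:.0f} s (and equal to each other)")
# The 20-s closed-loop time constant used in Figs. 18 and 19 satisfies this comfortably.
```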

Tuning Rules of Thumb for Uniform Manufacturing—Summary

The tuning requirements illustrated with the blend chest problem above can be generalized as follows:

1. Loops should never be tuned to cycle—control should be smooth.

2. Each loop should be tuned for a specific speed of response as dictated by a plant coordination tuning strategy.

3. The tuning method should be applicable to all loops.

4. The tuning method should identify process dynamics and actuator nonlinearities.

5. Tuning should be robust for all operating conditions.

6. All reactor ingredient feed controls should have equal speeds of response so as not to upset the ingredient ratio—feed flows into a blend tank, chemical feed flows, boiler fuel, and air flows.


7. Key loops should be 5–10 times faster than other loops coupled through process interaction.

8. Inner cascade loops should be 5–10 times faster than outer loops (10 times faster for level control).

9. Inventory storage tank level controls should be as slow as possible while satisfying production needs.

CONTROL LOOP PERFORMANCE, ROBUSTNESS, AND VARIABILITY ATTENUATION

FIGURE 20 Control loop structure.

Before loops in a plant are tuned with the hope of reducing process variability, it is important to understand the control loop performance-robustness envelope, as well as the ability of a control loop to attenuate process variability. Figure 20 shows a control loop block diagram in which GC is the controller transfer function, GP is the process transfer function, ysp is the setpoint, and y is the measurement. Process disturbances enter the loop at d, which can be modeled as white noise n passed through the disturbance transfer function Gd. This in turn can be shaped to fit realistic conditions. The loop responds simultaneously to both the setpoint ysp and the disturbance d. The setpoint response is

given by the forward, or transmission, transfer function T(s):

$$T(s) = \frac{y}{y_{sp}} = \frac{G_C G_P}{1 + G_C G_P} \qquad (17)$$

The disturbance or load response is given by the sensitivity function S(s):

$$S(s) = \frac{y}{d} = \frac{1}{1 + G_C G_P} \qquad (18)$$

S(s) determines what a control loop can or cannot do regarding variability attenuation under the assumption of linear dynamics.
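A brief numeric sketch makes Eqs. (17) and (18) concrete. The process and controller values below are illustrative assumptions (the PI settings happen to follow the Lambda rule derived later, with λ = 20 s); the point is that |S| describes what the loop can and cannot attenuate, and that S + T = 1 at every frequency.

```python
import numpy as np

Kp, tau = 2.0, 10.0          # assumed process gain and time constant (seconds)
Kc, Tr = 0.25, 10.0          # assumed PI settings (Lambda rule with lambda = 20 s)

def loop(s):
    Gc = Kc * (1.0 + 1.0 / (Tr * s))     # PI controller, standard form
    Gp = Kp / (tau * s + 1.0)            # first-order process
    return Gc * Gp

for w in (0.001, 0.05, 1.0):             # rad/s: well below, at, and above 1/lambda
    L = loop(1j * w)
    T = L / (1.0 + L)
    S = 1.0 / (1.0 + L)
    print(f"w={w:6.3f}  |S|={abs(S):.3f}  |T|={abs(T):.3f}  |S+T|={abs(S + T):.3f}")
# |S| is small at low frequency (disturbances attenuated) and approaches 1.0 at high
# frequency (disturbances passed through); S + T = 1 at every frequency.
```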

Resonance and Bode’s Integral

An important property of the sensitivity function that was initially reported by Hendrik Bode in 1945 and is known as Bode's integral [19] (also known as the area formula [20] in more modern work) states that

$$\int_0^\infty \log|S(j\omega)|\, d\omega = 0 \qquad (19)$$

for all cases with stable (left half-plane) poles, as long as

$$G_C G_P \neq \frac{k}{s} \qquad (20)$$

What Eq. (19) states is that the log of the amplitude ratio (AR) integrates to zero over all frequencies. Practically, this means that control loops do not in fact reduce variability; they only redistribute the variability over the frequency spectrum. This is really bad news for achieving variability reduction through control, as the ability to attenuate variability at low frequencies is offset by the tendency of the control loop to resonate and amplify variability at higher frequencies. Inequality (20) provides the only exception to this rule and offers a ray of hope for variability reduction. Inequality (20) excludes the special case in which the loop transfer function—the product of the controller and the process—results in a pure integrator. Loops that achieve this loop transfer function have the ability to reduce


variability at low frequencies without amplification at high frequencies. In theory this can be achieved only by complete pole-zero cancellation, in which the controller cancels out all of the process dynamics except for one pure integrator. This is precisely what the Lambda tuning technique attempts to achieve.
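This exception can be checked numerically. The short sketch below (an illustration, with an assumed λ of 20 s) evaluates |S(jω)| for a loop transfer function that is a pure integrator, 1/(λs), and confirms that low frequencies are attenuated while the magnitude never exceeds 1.0 at any frequency.

```python
import numpy as np

lam = 20.0                                    # assumed closed-loop time constant, seconds
w = np.logspace(-4, 2, 2000)                  # rad/s
L = 1.0 / (1j * lam * w)                      # pure-integrator loop transfer function
S = 1.0 / (1.0 + L)

print(f"max |S(jw)| = {np.abs(S).max():.4f}")                 # never exceeds 1.0
i = np.argmin(np.abs(w - 0.1 / lam))                          # a frequency 10x below cutoff
print(f"|S| one decade below 1/lambda = {np.abs(S[i]):.4f}")  # about 0.1 (strong attenuation)
```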

Lambda Tuning Concept

The term Lambda tuning applies to all controller tuning methods that require the closed-loop time constant to be specified before the tuning is applied [2, 3, 13–15, 21, 22]. The initial concept is attributable to Dahlin [14]. Lambda tuning attempts to achieve a first-order closed-loop response of time constant λ. This can be represented as

$$T(s) = \frac{y}{y_{sp}} = \frac{G_C G_P}{1 + G_C G_P} = \frac{1}{\lambda s + 1} \qquad (21)$$

Since the process transfer function can be determined from plant testing, as described below, it follows that

$$G_C = \frac{1}{G_P}\,\frac{1}{\lambda s} \qquad (22)$$

It also follows that, in theory, the loop transfer function would be

$$G_C G_P = \left(\frac{1}{G_P}\,\frac{1}{\lambda s}\right) G_P = \frac{1}{\lambda s} \qquad (23)$$

This does obey the special case of Bode's integral as specified by inequality (20), which allows the control loop to attenuate frequencies slower than the loop cutoff frequency at 1/λ without amplifying higher frequencies.

Let us consider a first-order process and a PI controller. The process transfer function is

$$G_P = \frac{K_P}{\tau s + 1} \qquad (24)$$

When Eq. (22) is applied, the controller type and its parameters can be determined:

$$G_C = \frac{1}{G_P}\,\frac{1}{\lambda s} = \left(\frac{\tau s + 1}{K_P}\right)\frac{1}{\lambda s} \qquad (25)$$

This controller has one integrator and one transfer function zero. Hence it has the form of a PI controller:

$$G_{C_{PI}} = K_C\left(1 + \frac{1}{T_R s}\right) = K_C\left(\frac{T_R s + 1}{T_R s}\right) = \left(\frac{\tau}{K_P \lambda}\right)\left(\frac{\tau s + 1}{\tau s}\right) \qquad (26)$$

Lambda tuning can hence be achieved by the following PI controller settings:

$$T_R = \tau, \qquad K_C = \frac{1}{K_P}\,\frac{\tau}{\lambda} \qquad (27)$$

The Lambda tuning concept hence consists of the following steps:

1. identifying the process dynamics

2. using the process dynamics to determine the controller type needed to control this process

3. determining how to calculate the controller settings to achieve the desired λ.
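As a minimal sketch of step 3 for this first-order case, the Eq. (27) settings can be computed directly (assuming a first-order process with no dead time; the dead-time forms appear later in Table 3):

```python
def lambda_pi_first_order(Kp: float, tau: float, lam: float):
    """Return (Kc, Tr) for Lambda tuning of a first-order process Kp/(tau*s + 1)."""
    Tr = tau                       # reset time cancels the process time constant
    Kc = (1.0 / Kp) * (tau / lam)  # Eq. (27)
    return Kc, Tr

# Illustrative numbers: Kp = 1.5, tau = 5 s, desired closed-loop time constant 20 s.
Kc, Tr = lambda_pi_first_order(Kp=1.5, tau=5.0, lam=20.0)
print(f"Kc = {Kc:.3f}, Tr = {Tr:.1f} s")   # Kc = 0.167, Tr = 5.0 s
```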


TABLE 2 Process Transfer Functions and Controller Types

Process Model                                            Controller Type
Pure gain                                                I or PI
First order                                              PI
Second-order overdamped                                  PI or PID
Second-order underdamped                                 PID
Second order with lead                                   PI or PID.F
Second order with lead and overshoot                     PI or PID.F
Second-order nonminimum phase (wrong-way response)       PI
Integrator                                               PI or P
Integrator with first-order lag in series                PI or P
Integrator with first-order lead                         PI or P
Integrator with nonminimum phase (wrong-way response)    PI or P

Controller Types

Lambda tuning is based on the IMC concept and produces controllers that are nonoscillatory and have first-order responses with a closed-loop time constant λ. The controllers are approximate inverses of the process transfer function. There are many process transfer functions that may occur in practice, and, as a result, many different controllers are involved [2, 3, 13, 15]. One tuning software package [23] allows for 11 process models (all of which may have dead time) and the controllers needed for each of these cases, as shown in Table 2.

As illustrated in Table 2 above, the type of process dynamics dictates the controller structure. The PID.F controller referred to consists of a PID controller with a first-order filter in series. This type of controller results from the IMC concept when the process has a transfer function zero, such as a lead.

Industrial Controllers

The problem of tuning is complicated further by the fact that each industrial controller has implemented the PID algorithm in a slightly different way. One tuning software package [23] recognizes over 70 different industrial controllers, each one with its own structure, parameter limits, and idiosyncrasies. For instance, some use gain, others proportional band; some use reset time, others reset rate; some use minutes, others seconds; etc. In general, however, most of the PID forms fall into three main categories: ISA standard form, classical form, and parallel form. It is critically important that the person tuning a control loop be thoroughly familiar with the form of the controller, the meaning of the tuning parameters, and their ranges. Some controllers cannot be tuned with the Lambda tuning concept as a result of parameter limits or internal structure peculiarities, so it is important that these issues be understood in detail. A controller dynamic specification [24] provides more detail.
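The following small sketch shows the kind of parameter translation involved. The conversions (proportional band = 100/gain, reset rate = 1/reset time, minutes to seconds) are generic textbook relationships and are not tied to any particular vendor's controller.

```python
def gain_to_proportional_band(Kc: float) -> float:
    """Proportional band (%) from controller gain."""
    return 100.0 / Kc

def reset_time_to_reset_rate(Tr_minutes: float) -> float:
    """Reset rate (repeats per minute) from reset time (minutes per repeat)."""
    return 1.0 / Tr_minutes

# Example: Kc = 0.5 and Tr = 0.5 min translate to PB = 200% and 2 repeats/min;
# a controller that expects seconds needs Tr = 30 s instead.
print(gain_to_proportional_band(0.5), reset_time_to_reset_rate(0.5), 0.5 * 60.0)
```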

Common Lambda Tuning Rules

The two most common Lambda tuning rules are listed in Table 3 and apply to first-order and integrator processes with dead time. Collectively, these represent more than 80% of all process dynamics in continuous plants.

Impact of Dead Time

Dead time is the most destabilizing dynamic effect possible. It is important to note that inequality (20) cannot be satisfied in the presence of dead time, as dead time cannot be canceled (even dead-time


TABLE 3 PI Tuning Rules for First-Order and Integrating Dynamics

First-order plus dead time: $G_P(s) = K_P e^{-sT_d}/(\tau s + 1)$; settings $K_C = \tau/[K_P(\lambda + T_d)]$, $T_R = \tau$.

Integrator plus dead time: $G_P(s) = K_P e^{-sT_d}/s$; settings $K_C = (2\lambda + T_d)/[K_P(\lambda + T_d)^2]$, $T_R = 2\lambda + T_d$.

compensation algorithms do not cancel dead time [22]). Dead time causes resonance according to Bode's integral. When dead time is present, good process design practice should attempt to reduce the dead time to a minimum wherever possible.

Let us consider a simple first-order process with dead time and apply Lambda tuning by means of a PI controller. The process is

$$G_P(s) = \frac{K_P e^{-sT_d}}{\tau s + 1} \qquad (28)$$

From Table 3, a PI controller is needed for Lambda tuning, and the settings are

$$T_R = \tau, \qquad K_C = \frac{\tau}{K_P(\lambda + T_d)}$$

The resulting loop transfer function is

$$G_C G_P = \left[\frac{\tau}{K_P(\lambda + T_d)}\right]\left[\frac{\tau s + 1}{\tau s}\right]\frac{K_P e^{-sT_d}}{\tau s + 1} = \frac{e^{-sT_d}}{(\lambda + T_d)s} \qquad (29)$$

Note that the loop transfer function is not a pure integrator, as required by the Bode integral if resonance is to be avoided. The result is that a control loop with dead time will resonate, and as a result it will amplify high-frequency variability.

Control Loop Robustness and Stability Margins

Control loop robustness refers to the ability of the control loop to deliver consistent and predictable performance. The danger to be avoided is that the loop may become unstable under certain conditions. Unstable means that the loop will oscillate with increasing amplitude, thereby endangering the process. Once the loop has been tuned, the need for robustness results from the fact that the process dynamics may change with time, operating point, or product grade. As the process dynamics change, the process gain, dead time, and time constants may all vary. The loop gain determines the stability of the loop and is the product of the controller gain and the process gain. For example, the slope of the control valve characteristic determines the process gain for a flow loop. The slope of an equal-percentage valve characteristic can easily vary by a factor of 5 or more. Similarly, the slope of the titration curve determines the process gain of a pH loop. This can vary by a factor of 10 or more. The degree of robustness of a loop can be measured by the gain margin and phase margin. These are based on the Nyquist stability criterion, which defines a loop to be unstable at frequencies for which the loop gain exceeds 1.0 and the phase shift is 180°.

Gain Margin. The gain margin expresses the ability of the loop to absorb changes in the loop gain resulting from changes in process gain. A gain margin of 4 means that the loop gain can change by a factor of 4 before the loop becomes unstable. For a flow loop with an equal-percentage valve characteristic, a gain margin of 5 or more would be advisable.


FIGURE 21 Sensitivity function (load frequency response) for a loop with dead time Td = 5 s. Cases: (a) λ = 5 s, cutoff period 63 s, resonance 4 dB; (b) λ = 8 s, 82 s, 3 dB; (c) λ = 10 s, 94 s, 2.6 dB; (d) λ = 15 s, 126 s, 2 dB; (e) λ = 25 s, 188 s, 1 dB.

Phase Margin. The phase margin expresses the ability of the loop to absorb changes in parameters such as dead time and time constant. A phase margin of 60° means that the combined effect of these changes can cause an additional phase lag of 60° before the loop becomes unstable. Generally, loops start to become oscillatory as the phase margin is reduced below 60° or so.

The Control Loop Performance-Robustness Envelope—Speed of Response versus Robustness

The speed of response λ determines the cutoff frequency (or cutoff period), the resonance, and the robustness of the loop. It is a strong function of dead time. Clearly, the faster the speed of response, the greater the resonance and the wider the bandwidth (shorter cutoff period). Figure 21 shows the sensitivity function, or load frequency response Bode plot, for a control loop with 5 s of dead time, tuned for λ varying from 5 s (equal to the dead time) through to 25 s. Table 4 shows the speed of response

TABLE 4 Speed of Response versus Robustness and Resonance

Speed of Response λ    Cutoff Period    Gain Margin    Phase Margin (deg)    Resonance (dB)    Resonance (AR)    Amplification (%)
5Td                    12πTd            9.4            81                    1.0               1.12              12
3Td                    8πTd             6.3            76                    2.0               1.26              26
2Td                    6πTd             4.7            71                    2.6               1.35              35
1.6Td                  5.2πTd           4.1            68                    3.0               1.41              41
Td                     4πTd             3.1            61                    4.0               1.59              59


FIGURE 22 Open-loop step tests, or bump tests (controller output and process variable for Screen Rejects Flow FC-724 in manual mode, over approximately 260 s).

in terms of dead-time ratio. It suggests that closed-loop time constants faster than approximately two dead times are probably unacceptable from a robustness point of view. For a closed-loop time constant as fast as two dead times, the gain margin is 4.7 and the phase margin is 71°. These are reasonable values for a loop in an industrial environment. However, faster tuning tends to reduce the margins to values that may be dangerously low. Similar conclusions can be reached from a resonance point of view, since the amplification at the resonant frequency grows to over 40% as the speed of response is increased beyond a closed-loop time constant of two dead times.
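The Table 4 entries can be reproduced from the loop transfer function of Eq. (29). The sketch below (an assumed analysis, not the original study) computes the gain margin and phase margin analytically and searches for the sensitivity peak numerically for several λ/Td ratios.

```python
import numpy as np

def margins_and_resonance(lam_over_td: float, Td: float = 1.0):
    lam = lam_over_td * Td
    a = lam + Td
    # Gain crossover |L| = 1 at w = 1/a; phase margin = 90 deg minus the dead-time lag there.
    pm_deg = 90.0 - np.degrees(Td / a)
    # Phase crossover (-180 deg) at w = pi/(2*Td); gain margin = 1/|L| at that frequency.
    gm = a * np.pi / (2.0 * Td)
    # Resonance: peak of |S(jw)| = |1/(1 + L(jw))| over a frequency grid.
    w = np.logspace(-3, 2, 20000) / Td
    L = np.exp(-1j * w * Td) / (1j * a * w)
    peak = np.abs(1.0 / (1.0 + L)).max()
    return gm, pm_deg, peak

for r in (5.0, 3.0, 2.0, 1.6, 1.0):
    gm, pm, peak = margins_and_resonance(r)
    print(f"lambda = {r:>3}*Td :  GM = {gm:4.1f}   PM = {pm:4.0f} deg   "
          f"peak |S| = {peak:.2f} ({(peak - 1) * 100:.0f}% amplification)")
# For lambda = 2*Td this reproduces the Table 4 values: GM ~ 4.7, PM ~ 71 deg, ~35%.
```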

This control loop performance-robustness issue is discussed in more detail in Refs. 2, 21, and 22.

Identifying Plant Dynamics—Open-Loop Step Tests

Lambda tuning requires that the process dynamics be identified. This is best done by placing the loop in manual mode and carrying out a series of step tests, known also as bump tests. Bump tests involve making small step changes in the controller output while collecting data on both the controller output and the process variable. Figure 22 illustrates the procedure. It is important to work very closely with the process operator during this time, as the process is operating and making product for a customer. The operator must at all times be aware of what is happening and should be aware of the potential consequences of each test. From a testing point of view, the objective is to carry out a series of tests—about 10 or more, if possible—that typify the process response. Because the process is nonlinear, the size of the steps should be varied over some practical range from small to large. Of course, the process operator must determine the maximum size of the steps that can be carried out.

Figure 22 shows a series of seven bump tests carried out on a paper machine screen rejects flow loop, FC-724, which were collected by a commercial tuner software package [23]. Not all of the bump tests show responses. This is because the control valve actuation nonlinearities limit the valve travel resolution. This aspect is discussed in the next section. The larger bumps appear to produce responses that are first order in character. The identification process is completed by somehow fitting transfer


FIGURE 23 Fitting a process transfer function to a bump test (bump 06 at t = 207 s; first-order fit: KP = 1.56, Td = 3.09 s, τ1 = 1.65 s).

function models to this data. Figure 23 illustrates how this can be done. In this software package [23], the bump tests are first windowed. This involves breaking the whole data record into sections, or windows, with each one concentrating on the relevant data for each bump. Figure 23 shows the details of bump 06, which occurred at second 207 in the data record. By fitting a transfer function model to each bump, with the user adjusting the goodness of fit, the user generates a set of parameters for each bump test. In the case in point, the model being fitted is first order. The process gain is 1.56% of span/% valve. This suggests that the control valve is slightly oversized (the ideal process gain should be 1.0). The dead time is 3.09 s. Ideally, there should be no dead time for an incompressible flow loop. The dead time that is present is most probably due to the control valve actuator and positioner. The time constant is 1.65 s. This is likely due to the damping adjustment in the flow transmitter.
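For readers without access to the commercial package [23], the following rough sketch shows one simple way to extract first-order-plus-dead-time parameters from a single windowed bump; the fraction-of-final-value method and the noise-band threshold are assumptions for illustration, not the package's fitting algorithm. Here t, co, and pv are assumed NumPy arrays of time, controller output, and process variable for one bump, starting at the step.

```python
import numpy as np

def fit_fopdt(t, co, pv, noise_band=0.01):
    """Estimate (Kp, Td, tau) from one open-loop step (bump) test."""
    dco = co[-1] - co[0]                       # size of the output step
    pv0, pv1 = pv[0], np.mean(pv[-10:])        # initial and settled PV
    dpv = pv1 - pv0
    Kp = dpv / dco                             # process gain, % span / % output
    # Dead time: first time the PV leaves a small noise band around its initial value.
    departed = np.abs(pv - pv0) > abs(dpv) * noise_band
    Td = t[np.argmax(departed)] - t[0]
    # Time constant: time to reach 63.2% of the change, measured from the end of Td.
    t63 = t[np.argmax(np.abs(pv - pv0) >= 0.632 * abs(dpv))]
    tau = t63 - t[0] - Td
    return Kp, Td, tau
```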

The parameters for each bump will vary. For the seven bumps shown in Fig. 22 the statistics are listed in Table 5.

TABLE 5 Bump Test Parameter Variability

Parameter         Average Value    Parameter Variability (%)
Process gain      0.644            143
Dead time         1.87             100
Time constant     3.33             57


Knowledge of this parameter variability is invaluable in determining the robustness environment in which the loop is being tuned and should be taken into account.

CONTROL ACTUATOR AND TRANSMITTER DEFICIENCIES

Effective process control depends absolutely on the operation of control actuators and transmitters. Some of the more serious process control problems occur in plants as a result of nonlinearities in the final elements and primary elements. When these are severe, they can make effective control impossible.

Control Valve Nonlinearities

Control valve nonlinearities can wreak havoc and create variability where none existed before. Control valve nonlinearities have been identified as the largest single source of variability generation in the pulp and paper industry, based on plant surveys [4]. Consider Fig. 6, which shows a strong cycle of a paper mill screen reject flow loop, FC-724. The flow controller is on local automatic, running to a fixed setpoint of 1578 usgpm. The flow is cycling by ±2.3%, with a period of ∼100 s. The controller output is cycling by ∼2.5% and is following a sawtooth pattern. This is a classic control-valve-induced limit cycle. A limit cycle is a type of cycle caused by the nonlinear dynamic elements of a control loop. Limit cycling behavior cannot be cured by loop tuning, which can only alter the frequency of the cycle. The amplitude can be corrected only by removing or reducing the offending nonlinearities through maintenance or replacement of the control valve.

The cycle is initiated because the control valve has a tendency to stick once it comes to rest. Unfortunately, if the process variable is below setpoint, the controller integral action will slew the controller output in the increasing direction. Eventually the valve input signal will change by an amount big enough to cause the valve to move. In Fig. 6, this amount is ∼2.2%. The valve will then move, but unfortunately it will move too far. In turn, this causes the flow to overshoot the setpoint, and the cycle will start all over again in the opposite direction.

Control valve nonlinearities include dead band and resolution [25]. Dead band is the amount by which the input signal must change in order to cause a reversal of movement of the control valve. Dead band is caused by backlash in linkages, shaft windup in the drive train, and many other potential causes in the actuator and positioner. Resolution [25] is a term used to represent the smallest change that a control valve can execute while traveling in the same direction. It is caused by the differences in static and dynamic friction, sometimes referred to as stick-slip, or stiction. In the case of Fig. 6, the dead band is ∼2.2%. Figure 22 shows a series of open-loop bump tests carried out on this flow loop. There are seven bumps shown. It can be seen that there is no response for bumps 3 and 5. In the case of bump 3, the change in output was −1.5%. The previous bump was −5% in the same direction. There was a response for bump 2. There was no response for bump 3 because the valve was stuck, and the −1.5% step change was not sufficient to overcome static friction. In the case of bump 5, which was +1.4%, there was no response because this was the first bump in the positive direction and the size of the bump was less than the dead band.
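The limit cycle mechanism can be reproduced with a very simple model. The sketch below uses a one-parameter stick-slip valve (the valve holds its position until the input signal moves more than a stick band away, then slips to the new input) driven by a PI flow controller; all numbers are illustrative assumptions rather than data from loop FC-724.

```python
import numpy as np

def simulate(stick_band=2.0, Kc=0.5, Tr=3.0, tau=2.0, dt=0.1, t_end=300.0):
    """PI flow controller driving a sticking valve (simplified stick-slip model)."""
    n = int(t_end / dt)
    sp = 50.0                              # setpoint, % of span
    valve = pv = 49.0                      # start 1% below the position giving setpoint flow
    integral = valve                       # bumpless start: integral term carries the output
    out_u, out_pv = np.zeros(n), np.zeros(n)
    for k in range(n):
        e = sp - pv
        integral += Kc * e * dt / Tr       # integral action keeps slewing the output
        u = np.clip(Kc * e + integral, 0.0, 100.0)
        if abs(u - valve) > stick_band:    # valve stays stuck until the signal change
            valve = u                      # exceeds the stick band, then slips to u
        pv += dt * (valve - pv) / tau      # first-order flow response (unity process gain)
        out_u[k], out_pv[k] = u, pv
    return out_u, out_pv

u, pv = simulate()
half = len(u) // 2                         # inspect the second half, after start-up
print(f"flow cycles between {pv[half:].min():.1f} and {pv[half:].max():.1f} % of span; "
      f"sawtooth controller output spans {u[half:].max() - u[half:].min():.1f} %")
```

With these assumed values the flow settles into a sustained cycle of roughly the stick-band amplitude while the controller output sweeps back and forth in a sawtooth, which is the qualitative behavior described for Fig. 6.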

Analysis of the seven bump tests showed that the dead time varied from 1.5 s to 3.3 s. This long dead time and its tendency to vary are results of the inability of the positioner to initiate valve motion quickly and in a consistent manner. This valve does not meet the requirement of the EnTech Control Valve Dynamic Specification [26].

Control Valve Dynamic Specification

The EnTech Control Valve Dynamic Specification [26] requires that a control valve be tested in such a way that changes in the input signal actually cause changes to occur in the flow in the pipe. It specifies that the dead band be kept below 1%. It specifies the maximum dead time, speed of response, and


overshoot for various valve sizes. It also specifies that the control valve be sized so that the resulting process gain is close to 1.0.

Since this specification was issued in 1994, significant changes have occurred in control valve performance. Many paper companies have adopted the practice of using the specification when ordering control valves. The specification has also been widely used by the control valve manufacturers, who have built control valve test facilities to test valve dynamics with fluid flowing under realistic conditions [27]. As a result of this attention, control valve performance has improved in recent years. The specification has also been used in other industries, such as oil and gas, hydrocarbons, energy, and food. The specification has also drawn the attention of the ISA, which has formed the S75.25 subcommittee charged with drafting a new control valve standard [25].

In 1998, Version 3.0 of the EnTech Control Valve Dynamic Specification [30] was issued to harmonize terminology and concepts with the emerging ISA S75.25 standard.

Variable-Speed Drives

In recent years electric variable-speed drives have become attractive alternatives on prime movers, such as pumps and fans. In principle, variable-speed drives can be exceptional actuators, being very fast, nearly linear, and highly repeatable. There are, however, a few cautions that must be taken into account.

1. The variable-speed-driven pump or fan can be used to control only one variable. In applications in which multiple streams are to be controlled from a common pressure source, control valves offer the only solution.

2. Variable-speed drives have the potential for response times in the subsecond range. This applies for acceleration, provided the current limit is not exceeded. When the current limit is exceeded, the response becomes velocity limited. Care should be taken in setting up the current limit so that it is not exceeded by the small speed changes needed for good regulation. The dynamics of deceleration may be governed by the inertia of the rotating machine and can potentially be quite slow. Installing a regenerative braking system can provide fast dynamics for deceleration.

3. When replacing a control valve with a variable-speed drive, one must take care to ensure that shutdown conditions are considered. Shutdown may require a shutoff valve. As well, care must be taken to ensure proper sizing of the pump and pressure drops in the fluid system, which will be significantly different if the control valve is no longer throttled back.

4. In some cases it may be possible to use an existing motor when installing a variable-speed drive. Care should be taken to ensure that this motor will be able to operate reliably in this new environment.

Transmitter Deficiencies

Assuming that the transmitter is correctly installed and has been calibrated, two kinds of dynamic deficiencies may render it unsuitable for effective control. First, it may be installed in such a location that the effective process dead time is too long to allow effective control. Second, there may be one of a number of issues that degrade the quality of the resulting signal. This may be true even if the transmitter is a digital smart transmitter. Some of the key issues that govern signal quality are the following.

1. Signal resolution should be at least one part in 4000, or 12 bits. Report by exception should not be used for control purposes. Should it be part of the data transmission design, the resolution should be at least equivalent to 12 bits.

2. Signals should be sampled at an adequately fast rate, at least three times faster than the process time constant. Hydraulic flow and pressure loops should be sampled at least every 0.25 s.

3. Nonlinear and adaptive filters should not be used for control.


4. The transmitter should not have latency or dead time greater than one sample.

More detailed recommendations are given in the EnTech Digital Measurement Dynamic Specification for Process Control [9].
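A quick arithmetic check of the first two guidelines (the 12-bit, three-times, and 0.25-s figures are taken from the list above; the example time constant is borrowed from the flow transmitter discussed earlier and is otherwise an assumption):

```python
counts = 2 ** 12                            # 12 bits = one part in 4096 (at least one part in 4000)
print(f"12-bit resolution = {100.0 / counts:.3f} % of span")          # about 0.024 %

tau = 1.65                                  # s, assumed process/transmitter time constant
interval = min(tau / 3.0, 0.25)             # 3x faster than tau, and flow loops at least every 0.25 s
print(f"sample interval  <= {interval:.2f} s")
```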

INTEGRATED PROCESS DESIGN AND CONTROL—PUTTING IT ALL TOGETHER

Most process plants have been designed with steady-state principles only. The instrumentation is typically chosen after the piping design is complete, and the control loops are configured before start-up. This design process has produced the plants of today, in which process variability is often far in excess of potential, as is typically uncovered when plant variability audits are carried out. Is it possible to design a low-variability plant from the outset? The answer to this question is yes; however, this requires an integration of the process and control design disciplines. How this integration might take place is the subject known today as integrated process and control design, which was initiated by Downs and Doss [28]. Control algorithm design is a very general methodology and is largely based on linear dynamics. When thinking about control loop performance, the engineer pays no attention to the actual behavior of the process. For instance, an important phenomenon in the pulp and paper industry concerns pulp slurries and the transport of fiber in two- or three-phase flow. The physics that govern these phenomena involve the principles of Bernoulli and Reynolds and are very nonlinear. The linear transfer function is a necessary abstraction to allow the control engineer to perform linear control design—the only analysis that can be done well. Yet in the final analysis, a control loop only moves variability from one stream to another, where it is hoped it will be less harmful. Yet the process design has defined the streams without any consideration for the resulting dynamics.

Integrated process and control design must take a broader view of control strategy design. Control provides only high-pass attenuation; hence, it can attenuate only slow variability. Often even this process is flawed, as it is accompanied by the amplification of higher-frequency variability. Process mixing and agitation provide low-pass attenuation of variability. Yet both of these techniques attenuate variability by only so many decibels. Yet the customers demand that the absolute variability of the product be within some specified limit in order to meet the manufacturing needs of their processes. Surely the elimination of sources of variability provides the best method to ensure that no variability will be present. These issues are in the domain of process design and control strategy design. Control strategy design does not lend itself to elegant and general analysis. Each process is different and must be understood in its specific detail. Nonlinear dynamic simulation offers a powerful tool to allow detailed analysis of performance trade-offs and is the only available method for investigating the variability impact of different design decisions.

In the blend chest example, for instance, all of the consistency control loops share a common dilution header, fed by a common pump. In the example, this header is not pressure controlled. As the upstream consistency control loops attempt to reduce their variability, they cause pressure variability, which causes consistency variability to be created downstream after the blend chest and the machine chest. One design alternative is to provide the blend chest and machine chest consistency loops with their own sources of dilution water so that they will not be affected by the upstream variability. Other process design changes, which may be attractive for the blend chest example, could be (1) to eliminate all flow control valves and use variable-frequency pump drives (this will eliminate all control valve nonlinearities); (2) to redesign the existing blend chest to have two compartments, each one separately agitated (this will convert the existing agitation from a first-order low-pass filter to a second-order filter and provide better high-frequency attenuation); (3) to control the pressure of the dilution header with a variable-frequency pump drive. Each of these design alternatives should be evaluated by dynamic simulation before significant funds are committed. The alterations proposed above vary in capital cost from $10,000 to $1,000,000. Hence the simulation must have high fidelity in its ability to represent the phenomena of importance. In addition, network analysis techniques may be useful in determining how variability spectra may propagate through a given process and control strategy. Changes in process and control strategy design alter these pathways by creating new streams. These concepts are discussed in more detail in Ref. 29.

Page 59: SECTION 10 PROCESS CONTROL IMPROVEMENTftp.feq.ufu.br/Luis_Claudio/Segurança/Safety... · PLANT ANALYSIS, DESIGN, AND TUNING FOR UNIFORM MANUFACTURING 10.17 INTRODUCTION 10.17 PROCESS

1.8P1: FJB/FCU P2: FFG

31 August 1999 22:57 CHAP-10.tex chap10.sgm MH009v3(1999/07/27)

PROCESS CONTROL IMPROVEMENT 10.59

DEFINING TERMS AND NOMENCLATURE

Basis weight = the paper property of mass per unit area (gsm, lb/3000 sq. ft., etc.)
Bump test = step test
β = process transfer function zero time constant
C = ISA symbol for control
Chest = tank
Consistency = mass percentage of solids or fiber content of a pulp slurry (concentration)
dB = decibel, a method of representing attenuation as 20 log10 (AR), or 10 log10 (power ratio)
Dead time = time delay
F = ISA symbol for flow
GC(s) = controller transfer function in the continuous (Laplace) domain
GP(s) = process transfer function in the continuous (Laplace) domain
HD chest = high-density chest, a large production capacity stock tank with consistency typically in the 10%–15% range, with a dilution zone in the bottom
IMA = integrating-moving-average noise structure
ISA tags = ISA tagging convention (e.g., FIC-177 means Flow Indicating Controller No. 177)
KC = controller gain
KP = process gain
L = ISA symbol for level
Lambda tuning = tuning that requires the user to specify the desired closed-loop time constant λ
λ = the desired closed-loop time constant, usually in seconds
N = ISA symbol for consistency
P = ISA symbol for pressure
PI = proportional-integral controller: standard or classical form $G_C(s) = K_C\left(1 + \frac{1}{T_R s}\right)$; parallel form $G_C(s) = K_C + \frac{K_C}{T_R s}$
PID = proportional-integral-derivative controller: ISA standard form $G_C(s) = K_C\left[1 + \frac{1}{T_R s} + \frac{T_D s}{\alpha T_D s + 1}\right]$; classical form $G_C(s) = K_C\left(1 + \frac{1}{T_R s}\right)\left[1 + \frac{T_D s}{\alpha T_D s + 1}\right]$; parallel form $G_C(s) = K_C + \frac{K_C}{T_R s} + \frac{K_C T_D s}{\alpha T_D s + 1}$
PID.F = proportional-integral-derivative controller with series filter
Positioner = control valve accessory that acts as a local pneumatic feedback servo
Pulping = the process of removing individual fibers from solid wood
Refiner = a machine with rotating plates used in pulping to cause the wood chips to disintegrate into individual fibers through mechanical action, and in papermaking to fibrillate the fibers to enhance bonding strength
Stock = pulp slurry
Td = dead time or time delay
TR = controller reset or integral time/repeat
τ1, τ2 = process time constants (τ1 ≥ τ2)


REFERENCES

1. Shunta, J., World Class Manufacturing Using Process Control, Prentice-Hall, Englewood Cliffs, New Jersey, 1995.
2. Bialkowski, W. L., "Pulp and Paper Process Control," in CRC Control Handbook, W. S. Levine, Ed., CRC Press, Boca Raton, Florida, 1996.
3. Bialkowski, W. L., and F. Y. Thomason, "Title of Chapter," in Process Control Fundamentals for the Pulp & Paper Industry, N. Sell, Ed., Technical Association, P & P Industries, Atlanta, Georgia, 1995.
4. Bialkowski, W. L., "Dreams Versus Reality: A View From Both Sides of the Gap," Keynote Address, Control Systems '92, Whistler, British Columbia, 1992, published Pulp Pap. Can. 94(11), 1993.
5. Bialkowski, W. L., B. C. Haggman, and S. K. Millette, "Pulp and Paper Process Control Training Since 1984," Pulp Pap. Can. 95(4), 1994.
6. Kocurek, M. J., Series Ed., Pulp and Paper Manufacture, Joint Technical Association, P & P Industries and Canadian Pulp and Paper Association textbook, Committee of the Pulp and Paper Industry, 1983–1993, Vols. 1–10.
7. EnTech™, "Analyse" time series analysis software package.
8. Box, G. E. P., and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, California, 1976.
9. EnTech™, Digital Measurement Dynamics—Industry Guidelines (Version 1.0, 8/94) (EnTech literature).
10. EnTech™, "Signal Conditioning Module" data acquisition equipment.
11. EnTech™, "Collect" data acquisition software package.
12. Astrom, K. J., Introduction to Stochastic Control Theory, Academic, New York, 1970.
13. Chien, I. L., and P. S. Fruehauf, "Consider IMC Tuning to Improve Controller Performance," Hydrocarbon Processing, Oct. 1990.
14. Dahlin, E. B., "Designing and Tuning Digital Controllers," Instrum. Control Syst. 41(6), 77, 1968.
15. Morari, M., and E. Zafiriou, Robust Process Control, Prentice-Hall, Englewood Cliffs, New Jersey, 1989.
16. Ziegler, J. G., and N. B. Nichols, "Optimum Settings for Automatic Controllers," Trans. Am. Soc. Mech. Eng., pp. 759–768, 1942.
17. Cohen, W. C., and G. A. Coon, Trans. Am. Soc. Mech. Eng. 75, 827, 1953.
18. Lopez, A. M., P. W. Murill, and C. L. Smith, "Controller Tuning Relationships Based on Integral Performance Criteria," Instrum. Technol. 14(11), 57, 1967.
19. Bode, H. W., Network Analysis and Feedback Amplifier Design, Van Nostrand, Princeton, New Jersey, 1945.
20. Doyle, J. C., B. A. Francis, and A. R. Tannenbaum, Feedback Control Theory, Macmillan, New York, 1992.
21. Haggman, B. C., and W. L. Bialkowski, "Performance of Common Feedback Regulators for First-Order and Deadtime Dynamics," Pulp Pap. Can. 95(4), 1994.
22. Bialkowski, W. L., "A Review of Deadtime Compensator and PI Controller Regulation Performance," Pulp Pap. Can. 89, 1997.
23. EnTech™, "Tuner" Lambda tuning software package.
24. EnTech™, Automatic Controller Dynamic Specification (Version 1.0, 11/93) (EnTech literature).
25. ANSI/ISA S75.25, Control Valve Response Measurement from Step Inputs.
26. EnTech™, Control Valve Dynamic Specification (Version 2.1, 3/94) (EnTech literature).
27. Taylor, G., "The Role of Control Valves in Process Performance," presented at the 80th Annual Meeting, Technical Section, Canadian Pulp and Paper Association, Montreal, Canada, 1994.
28. Downs, J. J., and J. E. Doss, "Present Status and Future Needs—A View from North American Industry," presented at the Fourth International Conference on Chemical Process Control, Padre Island, Texas, Feb. 17–22, 1991 (AIChE Proceedings).
29. Tseng, J., W. R. Cluett, and W. L. Bialkowski, "Variability Propagation Through a Stock Preparation System: Implications for Process Control and Process Design," presented at Control Systems '96, Halifax, Nova Scotia, 1996. Published, Pulp Pap. Can. 89(9), T322–T325, 1997.
30. EnTech™, Control Valve Dynamic Specification (Version 3.0, 11/98) (EnTech literature).