

  • Summer 2006 Welcome to the CRC Journal, a publication of ABB Corporate Research Center locations in the United States featuring news, pertinent articles and conference papers prepared by CRC scientists, technical specialists and others involved in CRC research activities. If you would like to be on the distribution list for this electronic document, please e-mail Robert Loeffler at [email protected]

    Research Collaboration with the University of Waterloo on Reactive Power Management

    The ABB U.S. Corporate Research Center in Raleigh, North Carolina, has established strong ties with university programs in power systems engineering throughout North America. The U.S. CRC continuously contacts universities to gain an understanding of their expertise and to see where there may be a shared interest in technology development. As part of its corporate research strategy, CRC specialists look to universities to develop new ideas and concepts that can eventually be integrated into ABB product offerings.

    The University of Waterloo

    ABB's collaboration with Prof. Claudio Cañizares at the University of Waterloo in Waterloo, Ontario, Canada, was initiated in 2005 as a two-year research project to develop and demonstrate new concepts in reactive power, ancillary service, market design, and analysis, which are of great interest to electricity markets around the world. Professor Cañizares, who wrote his doctoral thesis on voltage collapse analysis of AC/DC systems, is an internationally recognized expert in power system stability studies and optimal power flow applications within the context of competitive electricity markets.

    The Challenges

    Maintaining adequate levels of reactive power and voltage support has become a major challenge as power grid operations move to regional structures. Electric energy delivery systems are nonlinear consumers of reactive power. At heavy loading the system consumes a large amount of reactive power that must be injected, while at light loading the system generates reactive power that must be absorbed. Thus, reactive power support is needed to maintain the voltages of transmission systems through reactive power injection or absorption. A recent Federal Energy Regulatory Commission (FERC) staff report concluded that effective reactive power management is not only necessary to operate transmission and distribution systems reliably, but also can substantially improve the efficiency with which power is delivered to customers. Professor Cañizares' research is of special interest to ABB since the company is one of the few vendors of network management systems to the Independent System Operators (ISOs) of electricity markets.

    A Premier Center for Power System Engineering

    The University of Waterloo has one of the top programs in power system engineering in North America. Working with Professor Cañizares (principal investigator) is Prof. Kankar Bhattacharya (co-investigator), a renowned expert in the areas of power system optimization, power system dynamics, and electricity market design and analysis, as well as others specializing in power system technologies. Professor Cañizares and his associates are helping ABB to develop new optimization methods for reliable and efficient reactive power management that could eventually be incorporated into network management products currently provided to system operators such as the IESO (Independent Electricity System Operator) of Ontario. One advantage of working with the University of Waterloo is that collaborative research is highly encouraged in Canada, with provision of matching funds from the government and various science and research foundations. Thus far, this project has been awarded a collaborative R&D grant from the Natural Sciences and Engineering Research Council of Canada. Additional matching funds are expected from the Research Partnerships Program of the Ontario Centres of Excellence. Other university resources could be involved in this project with an extended research scope and duration.

    Work So Far

    The first year of research work has produced good initial output and positive feedback from the IESO. The main accomplishments include: the study of existing and proposed reactive power dispatch and clearing procedures, improvement of the computational performance of chosen reactive power market models, the introduction of system security into existing reactive power dispatch and clearing algorithms, and the identification of possible Energy Management System (EMS) applications. The second year of research work will extend the developed concepts and algorithms to address short-term reactive power dispatch issues in electricity markets. Optimal reactive power dispatch was identified as one of the most needed research areas during joint ABB/University of Waterloo visits to the IESO and ISO-NE (Independent System Operator of New England). Both the IESO and ISO-NE have agreed to review and comment on proposed concepts and to provide their system data to test the developed methodologies. According to the project plan, a prototype of optimal reactive power procurement and dispatch tools will be developed by the end of 2006. CRC engineers have been working closely with Professor Cañizares and his associates to define the scope and content of the research and to ensure that application-specific requirements and general requirements are considered during the development process. Monthly telephone conferences and quarterly project reviews have been maintained for effective communication and progress monitoring. In addition, a representative from the ABB Network Management unit has been actively involved in this university collaboration project, guiding the direction of the research work and participating in meetings arranged by Professor Cañizares with ISOs.

    The CRC will also be hosting the students involved in this project for a three-month internship during the fall of 2006 at its research facilities in Raleigh, North Carolina. The CRC plans to continue strengthening its collaboration with well-known university programs in power systems engineering to help ensure that a range of good ideas and proven concepts is generated. University collaboration also has a positive impact on the company's public image and adds a unique channel for communicating with our strategic end customers.

  • TABLE OF CONTENTS

    "Chatter Analysis of Robotic Machining Process," by Zengxi Pan, Hui Zhang, Zhenqi Zhu, and Jianjun Wang

    "A Model to Evaluate the Economic Benefits of Software Components Development," by Aldo Dagnino, Hema Srikanth, Martin Naedele, and Dennis Brantly

    "Economic Evaluation of Transmission Congestion Relief Based on Power Market Simulations," by Xiaoming Feng, Jiuping Pan, Le Tang, Henry Chao, and Jian Yang

    "Web-Enabled Tools for Asset Management," by Jorgen Hasselstrom, David Lubkeman, Fangxing Li, Yuan Liao, and Zhenyuan Wang

    "Methodology and Algorithm for Ranking Substation Design Alternatives," by K. Koutlev, A. Pahwa, Z. Wang, and L. Tang

  • A Model to Evaluate the Economic Benefits of Software Components Development

    Aldo Dagnino ABB-USETI

    1021 Main Campus Dr Raleigh, NC 27606 US

    [email protected]

    Hema Srikanth College of Engineering

    NC State University Raleigh, NC 27695

    [email protected]

    Martin Naedele ABB-Corporate

    Research Switzerland Daettwil, Switzerland

    [email protected]

    Dennis Brantly ABB-USETI

    1021 Main Campus Dr Raleigh, NC 27606 US

    [email protected]

    Abstract - ABB is a multi-national corporation that is developing a new generation of products based on the concept of Industrial IT. This concept provides a common integration platform for product interoperability. As Industrial IT enabled products are developed across ABB, software reuse must be considered. Component Based Software Development (CBSD) is an effective means to improve productivity and quality by developing reusable components. Measuring the economic benefits and performing sensitivity analyses of CBSD scenarios in the development of Industrial IT products is important to improve efficiency. This paper presents a model that allows project leaders to evaluate a variety of software development scenarios. The model is based on a Goal-Question-Metric (GQM) approach and was developed at the ABB Corporate Research Laboratories.

    1 Introduction

    This paper presents a model that can be used by project managers to evaluate different cost and effort scenarios for the development of a software application using a CBSD approach. A project manager can use this model to determine estimated cost [$] (dollars) and effort [PM] (person-months) scenarios and their expected benefits in a project using CBSD. This paper includes a description of CBSD metrics that can be used to evaluate the benefits of developing an application using CBSD. The GQM approach was utilized for the creation of the CBSD metrics that are fundamental to the sensitivity analysis model described in this paper. A top-down method that stresses the importance of a goal-oriented approach to deriving metrics is presented in [6]. The GQM approach emphasizes the following steps:

    a) List all the major Goals.
    b) Derive Questions that are needed to determine whether the Goals are achieved or not.
    c) Decide what is to be measured in order to answer the Questions.
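    For illustration only, the three GQM steps above can be captured in a small nested structure. The G/Q/M labels mirror the labeling convention used later in the paper, but the structure itself is a hypothetical sketch, not part of the original model:

```python
# Hypothetical sketch: goals map to questions, questions map to the leaf
# metrics that answer them. Labels follow the paper's G/Q/M convention.
gqm = {
    "G-1: reduce cost with CBSD": {
        "Q-1.1: cost of labor per MU?": [
            "M-1: cost of labor per hour [$/HR]",
            "M-2: productivity rate [MU/HR]",
        ],
    },
    "G-2: reduce effort with CBSD": {
        "Q-1.2: effort difference, CBSD vs custom?": [
            "M-3..M-12: size and effort-factor metrics [MU, %]",
        ],
    },
}

def leaf_metrics(node):
    """Collect the leaf metrics: the quantities that actually get measured."""
    if isinstance(node, list):
        return list(node)
    return [m for child in node.values() for m in leaf_metrics(child)]

print(leaf_metrics(gqm))
```

    Walking the structure bottom-up recovers exactly the measurable quantities, which is the point of the top-down GQM derivation.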

    2 Metrics for CBSD

    This section explains the basic algorithm utilized by the sensitivity analysis model. For the purposes of this paper, metrics are determined following the GQM approach: first, Goals are identified, then Questions are established, and finally the associated Metrics are defined.

    2.1 Goals Definition

    Two goals that evaluate the benefits of CBSD are identified below. Goal-1: Evaluate whether there is a reduction in cost in a project as a result of selecting CBSD. Goal-2: Evaluate whether there is a reduction in effort as a result of selecting CBSD in a project.

    2.2 Questions and Metrics

    Once the goals have been identified, questions and metrics can be defined. To define questions and metrics, the concept of a measurable unit [MU] is used.

    2.2.1 Measurable Units [MUs]

    The size of a software project is the key input for the estimation of cost and effort. For the purposes of this paper, project size is expressed in measurable units [MUs]. Two conventional measures for estimating the size of a software project are lines of code (LOC) and function points (FP). LOC is the sum of non-commented lines of code (NCLOC) and commented lines of code (CLOC) [9]. The FP method divides the software into five types of functionality: Inputs, Outputs, Data Files, Inquiries, and Interfaces, which are weighted differently. To make the estimate, the number of instances of each functionality type is counted. This number is multiplied by the corresponding weight to produce the total function-point value [1].
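    As a rough illustration of the FP calculation just described: count each functionality type, multiply by its weight, and sum. The paper does not list specific weights, so the values below are an assumption taken from the classic Albrecht average-complexity table:

```python
# Illustrative unadjusted function-point count. The average-complexity
# weights are an assumption (Albrecht's classic values); the paper itself
# does not specify them.
FP_WEIGHTS = {
    "inputs": 4,
    "outputs": 5,
    "inquiries": 4,
    "data_files": 10,
    "interfaces": 7,
}

def function_points(counts):
    """Multiply the count of each functionality type by its weight and sum."""
    return sum(FP_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical small application:
counts = {"inputs": 10, "outputs": 6, "inquiries": 4, "data_files": 3, "interfaces": 2}
print(function_points(counts))  # 10*4 + 6*5 + 4*4 + 3*10 + 2*7 = 130
```

    The resulting FP total (or a LOC count) would then serve as the project size in MUs for the model below.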

    ___________________________ 0-7803-7952-7/03/$17.00 © 2003 IEEE.

    mailto:[email protected]:[email protected]:[email protected]:[email protected]

  • 2.2.2 Questions and their Associated Metrics

    The discussion below explains the GQM approach to CBSD. The reader is referred to the tree structure presented in Figure 1 to follow the discussion of the model. The questions that have to be answered to satisfy the two goals described in Section 2.1 are defined in this section. The answer to each question can be obtained by asking several sub-questions. The questions on the leaf nodes in Figure 1, which do not have any sub-questions, correspond to the derived metrics. The meaning of each identified metric is explained in this section as well. For clarity, the goals are prefaced by the letter "G", the questions by the letter "Q", and the metrics by the letter "M". The questions associated with each goal are stated below.

    G-1: Evaluate whether there is a reduction in cost in a project as a result of selecting CBSD.

    Q-1: What is the difference in cost between CBSD and customized software development?

    Q-1.1: What is the cost of labor to produce one MU of code? [$/MU]

    M-1: Determine the cost of labor per hour, which differs across organizations. [$/HR]
    M-2: Determine the productivity rate of software development in the particular organization. [MU/HR]

    The labor cost per hour and the productivity rate vary in each organization. The cost of labor per measurable unit can be calculated by dividing M-1 by M-2 [$/MU].

    Q-1.2 (also Goal-2 or G-2): What is the difference in effort required between CBSD and custom (or traditional) SW development? [MU]

    Q-1.2.1: What is the effort required in CBSD? [MU]

    Q-1.2.1.1: What is the effort required to develop some fraction of the software application using a custom approach? [MU]

    M-3: Size, in measurable units, of the piece of code that requires customized development (such as "glue" code) in CBSD. [MU]

    The size of the functionality (LOC or FP) is the prime factor in estimating development effort; for the purposes of this paper, project size is estimated in measurable units (MU). The total effort required when the custom SW approach is used for a fraction of the development is:

    Total effort for new code [MU] = TENC
    TENC = size of the new code in MU

    For example, if the new code has a size of 100 MU, then the estimated effort is 100 MU.

    Q-1.2.1.2: What is the effort required to reuse (ER) components? [MU]

    The total effort to reuse components is the sum of the effort involved in reusing components that require modifications and the effort involved in reusing components that do not require any modifications.

    Q-1.2.1.2.1: What is the effort required to reuse components with modification? [MU]

    The effort required to reuse software components from the components library, or components purchased from a third-party vendor, has to be estimated. The effort involved in reusing a component with modifications varies with the degree of modification. The degree of modification is categorized into two levels: modifications ≤ 25% and modifications > 25%. For example, if there is a component of 600 MU and 140 MU of the component are modified before reuse, the modification level is under 25%, and the component's size contributes to M-4.

    Q-1.2.1.2.1.1: What is the effort required to reuse components that require modifications ≤ 25%? [MU]

    M-4: Accumulated size of all reused components with modification ≤ 25%. [MU] If the size of the entire project is, for example, 1000 MU and a component of 600 MU is reused, the size of this functionality is 600 MU.

    M-5: The ERLM factor for all components reused with less than 25% modification. [%]

    ERLM represents the effort-to-reuse factor for components that require less than 25% modification (low modification, LM). This factor includes the effort required to select, understand, integrate, test, and modify the component. The effort required to reuse a component that requires ≤ 25% modifications is estimated to be 40% of the effort required to write the same component using a custom approach [14]. The project manager can determine the ERLM value from past projects completed within the organization or can use a default value of 0.40. The ERLM factor gives the total effort involved in reusing a component as opposed to producing it using a custom approach. For example, suppose a component of 100 MU with an ERLM factor of 40% is planned to be reused. The effort to produce this component using a custom approach is 100 MU, while the effort to reuse it is only 40 MU, a savings of 60 MU. The 40 MU corresponds to the effort of selecting, understanding, integrating, testing, and modifying the component.

    Total effort to reuse components with modification ≤ 25% [MU] = ER(m≤25%)
    ER(m≤25%) = size of component × ERLM factor (1)

    Similarly, for components with modification > 25% (sizes accumulated in M-6), the ERMM factor (M-7) applies:
    ER(m>25%) = size of component × ERMM factor (2)

    Total effort to reuse components with modification [MU] = TER(m)
    TER(m) = (size of components with modification ≤ 25% × ERLM factor) + (size of components with modification > 25% × ERMM factor)

    Q-1.2.1.2.2: What is the effort required to reuse software components without modification? [MU]

    M-8: Size of all reused components without any modifications. [MU] M-8 corresponds to the size of the functionality of the application that is reused with no modifications made to the components.

    M-9: The ERNM factor for reused components that do not require any modifications. [%]

    ERNM represents the effort-to-reuse factor for components that do not require any modification. The ERNM factor represents the effort involved in selecting, understanding, integrating, and testing the components, as opposed to developing the component using a custom approach. Grady [7] reports an ERNM factor of 25% based on a case study. Lim [10] reports an ERNM factor of 19% in another case study. Margano and Lindsey [11] report an ERNM value of 20% in yet another study. A default value of 0.20 is recommended for the ERNM factor for components that need no modification. An organization can also derive an ERNM factor from historical data from earlier projects completed within the company.

    Total effort to reuse components without modification [MU] = TER(wm)
    TER(wm) = size of component × ERNM factor (3)
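    The reuse-effort quantities above can be sketched as a single function, where the sum TER(m) + TER(wm) is the total reuse effort TER. The ERLM and ERNM defaults (0.40 and 0.20) follow the text; no default survives in the text for the ERMM factor (modification > 25%), so the sketch requires it as an explicit argument, and the example value of 0.60 is purely an assumption:

```python
# Sketch of the reuse-effort model, all sizes in MU.
# Defaults follow the text (ERLM = 0.40, ERNM = 0.20); the ERMM factor for
# components modified by more than 25% has no stated default here, so the
# caller must supply it.
def total_reuse_effort(size_low_mod, size_high_mod, size_no_mod,
                       ermm, erlm=0.40, ernm=0.20):
    er_low = size_low_mod * erlm    # modification <= 25%
    er_high = size_high_mod * ermm  # modification > 25%
    ter_m = er_low + er_high        # effort to reuse with modification
    ter_wm = size_no_mod * ernm     # effort to reuse without modification
    return ter_m + ter_wm           # total effort to reuse, TER

# Hypothetical example: 600 MU lightly modified, 200 MU heavily modified
# (assumed ERMM = 0.60), 300 MU reused as-is.
print(total_reuse_effort(600, 200, 300, ermm=0.60))  # 240 + 120 + 60 = 420.0
```

    The same 1120 MU of functionality would cost 1120 MU of effort if written from scratch, so reuse saves 700 MU in this hypothetical case.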

    Total effort to reuse components [MU] = TER
    TER = effort to reuse components with modification + effort to reuse components without modification, or
    TER = TER(m) + TER(wm) (4)

    Q-1.2.1.3: What is the effort required for writing reusable (EWR) components? [MU]

    M-10: Size of the component written for reuse. [MU] M-10 corresponds to the size of the functionality of the new software component that is written to be reused in the future.

    M-11: The EWR factor. [%]

    The effort involved in writing a reusable component is significantly greater than writing the same functionality using a custom or traditional approach. The added effort comes from conducting domain analysis, generalizing requirements, additional testing, packaging, maintenance, certification, documentation, and managing the reuse repository. All of these efforts are incorporated in the EWR factor. The benefit of writing a reusable component is that it can be reused in the future, at which time the effort involved would be only about 20% of a custom approach. Tracz [16], based on extensive reuse case studies at IBM, gives an EWR factor of 160%. In other words, if the effort for a component using a custom approach is estimated to be 100 MU, then the estimated effort for writing it as a reusable component is 160 MU. The Computer Sciences Corporation reports that writing for reuse on the Federal Aviation Administration's Advanced Automation System cost twice as much as development for one-time use [11]. Lim [10] reports that the effort in developing for reuse is between 111% and 180% of the effort involved in developing using a custom approach. Based on his findings, Reifer [15] reported that an extra effort of 10%-20% is incurred in building a component for reuse. Bardo et al. [5] report that it took 15%-20% additional effort to develop reusable software in avionics simulators and trainers. Poulin [12] reported an EWR factor of 186% based on a case study that lasted three years at Lockheed Martin Federal Systems. Based on these studies, a default value of 1.55 is recommended for the EWR factor. An organization can also derive an EWR factor from historical data.

    Total effort to write a reusable component [MU] = TEWR
    TEWR = size of the component × EWR factor (5)

    Total effort for developing a project using CBSD [MU] = total effort to write some fraction using the custom approach [MU] + total effort for reusing components [MU] + total effort to write reusable components [MU]

    or TE(CBSD) = TENC + TER + TEWR (6)

    Q-1.2.2: What is the effort required in custom SW development? [MU]

    M-12: Size of the product or module developed using the custom SW development approach. [MU] M-12 corresponds to the estimated size of the product or module(s) that will be developed using a customized approach. A significant factor affecting the estimated effort is the size of the product developed. Therefore, if the product size is 1000 MU, then the total effort to develop the project using the custom SW development approach is 1000 MU.

    Total effort for developing a project using custom SW [MU] = TEC
    TEC = size of the product [MU] (7)

    The difference in effort between developing a project using CBSD and a custom (or traditional) approach is evaluated as TEsave:

    TEsave = effort for the project developed using the custom or traditional approach - effort for the project developed using CBSD

    TEsave = TEC - TE(CBSD) (8)

    If TEsave > 0, there are savings using CBSD; if TEsave < 0, there are savings using customized software development.

    Q-1.3: What is the cost of buying components? [$]

    M-13: Cost of buying components. [$] M-13 represents the cost incurred in buying software components from a third-party vendor.

    The oval-shaped nodes in Figure 1 correspond to the parameters specific to the project. The nodes in the shape of a parallelogram are the input parameters (e.g., productivity rate) that are specific to the organization.

    2.3 Evaluate whether there is a reduction in cost when the CBSD approach is used

    The following parameters should be estimated to evaluate the reduction in cost of a software project due to CBSD.

    2.3.1 Cost of labor to produce one MU [$/MU]

    The cost of labor per MU depends on two factors internal to the organization: (a) the labor cost per hour; and (b) the productivity rate per hour. The cost per MU, given the productivity rate and the cost of labor per hour, is as follows:

    Cost/MU [$/MU] = Cost/HR [$/HR] / Productivity Rate [MU/HR] (9)

    2.3.2 What is the difference in effort (Goal-2) required? [MU]

    2.3.3 Cost of buying components [$]

    When the CBSD approach is used, the reused components can be taken from the repository, built for reuse, or purchased from an outside vendor. The cost incurred, if any, to purchase a component from an outside vendor is a key factor in calculating the difference in cost. Whether there is a difference in cost is evaluated as:

    Csave = (Cmu × TEsave) - Cb (10)

    where

    Cmu: cost of labor per MU [$/MU]
    TEsave: difference in effort required between CBSD and the custom approach [MU]
    Cb: cost of buying components [$]
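    A minimal end-to-end sketch of the cost side of the model, combining equations (5) through (10), is shown below. Every numeric input in the example is hypothetical; the default EWR factor of 1.55 is the value recommended in Section 2:

```python
# End-to-end sketch of equations (5)-(10). All numeric inputs in the
# example are hypothetical; the default EWR factor follows the text.
def te_cbsd(tenc, ter, size_for_reuse, ewr=1.55):
    tewr = size_for_reuse * ewr  # eq. (5): effort to write reusable code
    return tenc + ter + tewr     # eq. (6): total CBSD effort

def cost_savings(tec, tenc, ter, size_for_reuse, cost_per_hr, mu_per_hr, cb,
                 ewr=1.55):
    te_save = tec - te_cbsd(tenc, ter, size_for_reuse, ewr)  # eqs. (7), (8)
    c_mu = cost_per_hr / mu_per_hr                           # eq. (9)
    return c_mu * te_save - cb                               # eq. (10)

# Hypothetical scenario: a 1000 MU product; CBSD writes 200 MU of new
# ("glue") code, spends 300 MU of effort reusing components, and writes
# 100 MU for future reuse; labor is $80/HR at 2 MU/HR; purchased
# components cost $5000.
csave = cost_savings(tec=1000, tenc=200, ter=300, size_for_reuse=100,
                     cost_per_hr=80, mu_per_hr=2, cb=5000)
print(csave)  # TEsave = 1000 - 655 = 345 MU; Csave = 40*345 - 5000 = 8800.0
```

    Since Csave > 0 in this scenario, the model would favor CBSD; varying the inputs is exactly the kind of sensitivity analysis the paper describes.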

  • If Csave > 0, CBSD is preferred due to savings; if Csave < 0, customized software development is preferred due to savings.

    3. References

    1. A. J. Albrecht and J. E. Gaffney, Jr., "Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation," IEEE Transactions on Software Engineering, Vol. SE-9, pp. 639-648, 1983.
    2. M. Aoyama, "Componentware: Building Applications with Software Components," J. of IPSJ, Vol. 37, No. 1, pp. 71-79, 1996.
    3. M. Aoyama, "Process and Economic Model of Component-Based Software Development," IEEE SAST (Symposium on Assessment of Software Tools), June 1997.
    4. M. Aoyama, Componentware, Kyoritsu Shuppan, 1998.
    5. T. Bardo, D. Elliot, T. Krysak, M. Morgan, R. Shuey, and W. Tracz, "A Product Line Success Story," Crosstalk: The Journal of Defense Software Engineering, 1996.
    6. V. Basili and D. Weiss, "A Methodology for Collecting Valid Software Engineering Data," IEEE Transactions on Software Engineering, Vol. 10, November 1984.
    7. R. Grady, Practical Software Metrics for Project Management and Process Improvement, Prentice Hall, Englewood Cliffs, NJ, 1992.
    8. I. Jacobson, M. Griss, and P. Jonsson, Software Reuse, Addison-Wesley, 1997.
    9. W. S. Humphrey, A Discipline for Software Engineering, Addison-Wesley, 1995, pp. 69-95.
    10. W. Lim, "Effects of Reuse on Quality, Productivity, and Economics," IEEE Software, 1994.
    11. J. Margano and L. Lindsey, "Software Reuse in the Air Traffic Control Advanced Automation System," Joint Symposia and Workshops: Improving the Software Process and Competitive Position, Alexandria, VA, 1992.
    12. J. Poulin, "Software Reuse on the Army SBIS Program," Crosstalk: The Journal of Defense Software Engineering, 1995.
    13. J. Poulin, Measuring Software Reuse, Addison-Wesley, 1997.
    14. V. Rajlich and J. Silva, "A Case Study of Software Reuse in Vertical Domain," Proceedings of the 4th Systems Reengineering Technology Workshop, Monterey, CA, 1994.
    15. D. Reifer, "SOFTCOST-Ada: User Experiences and Lessons Learned at the Age of Three," Proceedings of TRI-Ada, 1990.
    16. W. Tracz, "Software Reuse Myths," ACM SIGSOFT Software Engineering Notes, 1988.
    17. E. Yourdon, "Software Reuse," Application Development Strategies, Vol. VI, No. 12, pp. 1-16, 1994.

  • Figure 1. Tree Structure to Evaluate the Cost-Effort Path for CBSD. The figure's node labels reconstruct to the following hierarchy:

    Goal-1 / Q-1: Difference in cost [$]
        Q-1.1: Cost of labor per MU [$/MU]
            M-1: Cost of labor per hour [$/HR]
            M-2: Productivity rate [MU/HR]
        Goal-2 / Q-1.2: Difference in effort [MU]
            Q-1.2.1: Effort for CBSD [MU]
                Q-1.2.1.1: New code [MU]
                    M-3: Size of the product [MU]
                Q-1.2.1.2: ER (effort to reuse) [MU]
                    Q-1.2.1.2.1: ER with modification [MU]
                        Q-1.2.1.2.1.1: Modification ≤ 25% [MU]
                            M-4: Size of component [MU]
                            M-5: ERLM factor [%]
                        Modification > 25% [MU]
                            M-6: Size of component [MU]
                            M-7: ERMM factor [%]
                    Q-1.2.1.2.2: ER without modification [MU]
                        M-8: Size of component [MU]
                        M-9: ERNM factor [%]
                Q-1.2.1.3: EWR (effort to write reusable components) [MU]
                    M-10: Size of component [MU]
                    M-11: EWR factor [%]
            Q-1.2.2: Effort for custom SW [MU]
                M-12: Size of the product [MU]
        Q-1.3 / M-13: Cost of buying components [$]


    Abstract – In recent years, a combination of increased bulk powertransactions and insufficient transmission capacity expansionhas resulted in frequent congestion in electric transmissiongrids in the United States. Congestion restricts the transfer ofeconomic power between regional energy markets and efficientmarket operation. It is an important but challenging work torealistically assess the severity of transmission congestion andits economic value of transmission expansion to relieve thecongestion. This paper describes an application of a powermarket simulation program in transmission congestion andtransmission capacity expansion studies. Power marketsimulation programs, such as the one used in this paper,GridView, provide objective and systematic platform fortransmission asset utilization assessment, bottleneckidentification, and benefit analysis for system expansions.Effects of transmission congestion and interface capacityimprovement are discussed through a real regional electricmarket simulation analysis of the NYCA system based onpublicly available transmission, generation, and demand data.

    Index Terms – Transmission congestion, Bottleneckidentification, Power market simulation, congestion cost,Expansion option valuation, Economic impact assessment,NYCA (New York State Control Area).

    I. INTRODUCTION

    ESTRUCTURING is causing fundamental changes in therules of the game for the electric power industry. These

Economic Evaluation of Transmission Congestion Relief Based on Power Market Simulations

Xiaoming Feng, Member, IEEE, Jiuping Pan, Member, IEEE, Le Tang, Member, IEEE, Henry Chao, Member, IEEE, and Jian Yang, Senior Member, IEEE

X. Feng ([email protected]), J. Pan ([email protected]), and L. Tang ([email protected]) are with ABB Corporate Research, Raleigh, NC 27606 USA. H. Chao ([email protected]) and J. Yang ([email protected]) are with ABB Consulting, Raleigh, NC 27606 USA.

I. INTRODUCTION

Recent changes are posing many tough decision-making challenges to all parties involved in the new power marketplace, including policy makers, market designers, consumers, merchant plant/transmission investors, transmission owners, system planners, and power marketers. To address many of the tough issues, it is critical to have an objective and systematic approach for analyzing electric power systems and markets in an integrated environment, capturing both the economic and the engineering aspects of the problems. This paper attempts to address two important and interrelated issues associated with transmission planning under deregulated electric markets: transmission congestion analysis and expansion alternative evaluation.

Transmission constraints have resulted in extensive dispatch of expensive generation, with consumers in some areas paying higher prices for electricity than they used to, and there are increased concerns with the reliability of the nation's energy delivery infrastructure. The impact of electric transmission constraints was investigated in the recent Federal Energy Regulatory Commission study [1]. The importance of identifying major transmission bottlenecks and estimating the economic benefits of mitigating or removing these constraints was featured in the National Energy Policy 2001 and the recent National Transmission Grid Study sponsored by the US Department of Energy [2,3]. Congestion varies over time and location as a function of system conditions; as such, estimating the actual cost of transmission congestion is extremely difficult, if not entirely impossible, without the use of a transmission-constrained or security-constrained optimal power flow program or a market simulation program [4,5]. The use of market simulation models is essential to accurately capture the time-dependent operation and congestion of the transmission system. A comprehensive transmission congestion study should answer the following common questions:
• Where are the bottlenecks in a system?
• How often does congestion occur?
• What are the economic consequences of inadequate transmission capacity?
• Which interface or path constraints are more attractive for improvement?
• What system expansion or upgrade options are available?
• What is the market and system impact of any proposed system expansion project?

Transmission congestion is one of the most important indicators of the need for system expansion. However, the various measures of transmission system congestion produced by market simulations should be carefully interpreted when used to support network expansion decisions. For example, the shadow prices of binding transmission interfaces or circuits integrated over a year are good screening indexes for network expansion needs, but are not appropriate for determining the final ranking of potential candidate expansion projects. Expansion alternative evaluation is a rather complicated study process due to the large number of expansion or reinforcement options and the interactive effects of individual expansion projects on transmission constraints. A comprehensive expansion evaluation process using market simulations can

provide the needed decision-making information with respect to the best choice of candidate expansion projects and determine the economic impact of the respective expansion projects on various market participants [6].

This paper presents a case study of transmission congestion analysis and expansion option evaluation using a power market simulation program on a test system built on the NYCA system. The remainder of this paper is organized as follows. Section II provides a description of the study system, which is based on a real regional energy market in the US. Section III gives a functional overview of the market simulation model and the market simulation program used in this analysis, GridView. Sections IV and V are devoted to using the market simulation program for transmission congestion and expansion studies on this large regional energy market and to the analysis of the results. Concluding remarks are provided in Section VI.

    II. THE STUDY SYSTEM

In this paper, a test system built on the NYCA (New York Control Area) system is analyzed using a market simulation program, GridView. Although the study system is built on NYCA system data available from public sources and the system model used is comparable to the real system in scope and complexity, no claim is made by the authors that the projected system conditions and modeling assumptions are consistent with NYCA projections. The simulation results and analysis are presented for illustration purposes in discussing the issues, procedures, and tools in congestion analysis and expansion studies.

Establishing a credible base case is an important task in any competitive market simulation study. It requires a clearly defined scope of the regional market; quality data for transmission, generation, and demand; and reasonable assumptions for any uncertain information, such as new generation in-service dates and facility maintenance schedules. For illustrative purposes, the GridView market simulation model used in this paper is created primarily from publicly available data sources, including FERC Form 714 and Form 715, RDI's PowerDat database, and NYISO transmission planning study reports.

The study period represents the year 2003. The summer peak and annual energy consumption of the NYCA system are projected as 27,085 MW and 161,163 GWh, respectively, corresponding to an annual load factor of about 68%. The GridView market simulation model for this market includes the major interface transfer capabilities published by NYISO and all planned generation capacity additions in the NYCA system that will be in effect by the end of 2003, with a total capacity of about 6,600 MW. The annual generation maintenance schedule is determined based on maintenance requirements and the criterion of capacity reserve leveling against the NYCA regional load.
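As a quick check, the annual load factor is the annual energy consumption divided by the energy the system would consume if it ran at summer peak all year. A minimal sketch using the figures above:

```python
# Annual load factor = annual energy / (peak demand x hours in a year).
peak_mw = 27_085              # projected NYCA summer peak (MW)
annual_energy_gwh = 161_163   # projected annual energy consumption (GWh)

load_factor = (annual_energy_gwh * 1_000) / (peak_mw * 8_760)
print(f"annual load factor = {load_factor:.1%}")  # 67.9%, i.e. about 68%
```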

One of the difficulties in creating a credible market simulation case is accounting for existing bilateral contracts and economic power exchanges between the study system and the external systems, which, if significant in number and quantity, can considerably impact the results of the market simulation. Depending on the study objective and data availability, three modeling methods for external systems can be implemented in GridView: constant power exchange, scheduled power exchange, and economic power exchange. Constant power exchange holds tie-line flows fixed as defined in the original power flow base case, while scheduled power exchange maintains tie-line flows as defined by individual transactions reflecting seasonal and daily import and export variation patterns. Economic power exchange allows flexible tie-line flows determined by competitive market economics and transmission capabilities. In this study, scheduled power exchanges between NYCA and the surrounding systems (i.e., PJM, New England, and Canada) are modeled.

    III. MARKET SIMULATION MODELS

At the core of a market simulation program is an optimization engine that performs transmission/security-constrained unit commitment and economic dispatch. It is essential for the program to include a detailed transmission network model so that the physical feasibility of power flows can be ensured in the market simulation. Programs with only transportation-model capability are clearly inadequate, and programs with a grossly oversimplified transmission system model, usually referred to as a "bubble model," also suffer from unreliable transmission system representation. This simulation function, used in conjunction with the various data modeling options, such as using bids rather than actual costs or enforcing or ignoring generator operating constraints, can simulate the operation of a power market in different configurations. A simulation program requires data from the following broad categories:
• Supply – generating capacity location, heat rates, fuel costs, operating constraints, and bidding information
• Demand – spatial load distribution over time
• Transmission – load flow model, interface limitations, transmission nomograms, and security constraints
• Market Scenarios – market configuration, different fuel prices, generating capacity additions and/or retirements, and bidding strategies
• Market Rules – transmission tariffs and congestion management policies
• Reliability Performance Data – generation and transmission availability information
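As a mental model, the six input categories listed above can be grouped into a single case container. The sketch below is our own hypothetical structuring for illustration only; field names are not GridView's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical containers mirroring the six input categories listed above.
# All names are illustrative; they are not GridView's actual data model.

@dataclass
class Generator:                 # Supply
    name: str
    bus: str                     # location in the load-flow model
    capacity_mw: float
    heat_rate: float             # MBtu/MWh
    fuel_cost: float             # $/MBtu
    bid: float = 0.0             # used instead of cost when bid-based dispatch is simulated

@dataclass
class Interface:                 # Transmission
    name: str
    limit_mw: float              # interface limitation (possibly nomogram-derived)

@dataclass
class MarketCase:
    generators: List[Generator] = field(default_factory=list)
    interfaces: List[Interface] = field(default_factory=list)
    hourly_load_mw: Dict[str, List[float]] = field(default_factory=dict)  # Demand: bus -> hourly MW
    use_bids: bool = False       # Market Rules/Scenarios switch: bid-based vs. cost-based
    forced_outage_rate: Dict[str, float] = field(default_factory=dict)    # Reliability data

case = MarketCase()
case.generators.append(Generator("unit1", "bus_a", 500.0, 9.8, 4.2))
print(len(case.generators))  # 1
```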

The simulation program mimics the operation of open electricity markets by performing transmission-security-constrained unit commitment and economic dispatch. This is done sequentially in chronological order for a period ranging from a week to a few years, depending on the objective of the application. The output information typically includes:
• Locational market clearing prices
• Generator utilization – dispatch hours, total generation, production cost, and revenues
• Transmission utilization – maximum loading, loading factors
• Transmission bottlenecks – hours of congestion, economic value of expansion
• Reliability indices and other market performance measures


The market simulation program used in this study is GridView, a software tool designed and developed specifically to perform market simulation analysis [7]. GridView derives many of its advantages from its comprehensive functionality, ease of use, and Windows-based, user-friendly graphical user interface. A salient feature of GridView is its transmission-security-constrained unit commitment. This procedure determines the startup and shutdown schedules and the dispatch levels of generators to minimize the total system cost while satisfying the various generation and transmission constraints. The unit commitment is performed using the following information:
• Day-ahead (or week-ahead) load forecast
• Maintenance schedules
• Average full-load production cost
• Generation operating constraints
• Start-up costs
• Load plus operating reserve (spinning reserve and 10–30 minute quick starts)
• Transmission constraints and contingencies

The economic dispatch with transmission security constraints, also known as security-constrained optimal power flow (SCOPF), solves an optimization problem subject to various transmission-related constraints. The objective is to minimize a generalized cost function that, in addition to the cost (or generation bid) of serving the demand, also includes costs for unserved load, transmission tariffs, and penalty costs for transmission limit violations. The cost term for unserved load accounts for emergency load shedding in case of supply shortage or import transmission limitations.
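The generalized cost function described above can be written schematically as follows; the notation is ours, not the paper's:

```latex
% Schematic SCOPF objective (notation is illustrative, not from the paper):
\min_{P_g,\, r_d,\, s_\ell}\;
  \sum_{g} C_g(P_g)                % generation cost (or bid) of serving demand
+ \sum_{d} V_d\, r_d               % cost of unserved load r_d, valued at V_d
+ \sum_{t} \tau_t\, f_t            % transmission tariffs \tau_t on flows f_t
+ \sum_{\ell} \rho_\ell\, s_\ell   % penalty \rho_\ell on limit violations s_\ell
```

subject to power balance, generator limits, and (contingency) flow limits, where $s_\ell$ is the violation of flow limit $\ell$.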

Building a system model for market simulation from scratch can be quite tedious and time-consuming, since much data has to be collected to cover the generation, load, and transmission systems. Fortunately, most of the information required does not change much over time, and only needs to be incrementally augmented and adjusted. For this study, we took advantage of the GridView database for US regional power markets to quickly build the market model for the study system.

    IV. TRANSMISSION CONGESTION ANALYSIS

    A. Congestion Cost Estimation

As indicated in [4,5], the best measure of congestion severity in a network is the change in system production cost due to transmission constraints, with reference to the dispatch results assuming unlimited network capacity. If contingency constraints are also considered, then the congestion cost is the difference between the SCOPF solution and the simulation without any transmission limitations. Table 1 shows the estimated congestion cost of the NYCA system for the year 2003. In this table, the first case refers to a simulation run with unlimited network capacity. The second case is a simulation run with transmission constraints, including all defined interfaces and individual circuits of 230 kV and above. The third case is a SCOPF simulation run, in which a list of critical contingencies of the bulk transmission network (i.e., 230 kV and above) is considered. Contingencies involving components that link NYCA and external systems are not included in this contingency list.

Table 1: Estimated Congestion Costs

Case  Solution Approach              Production Cost   Congestion Cost
                                     (M$/yr)           (M$/yr)    (%)
 1    unlimited network capacity       2451.0              0       0
 2    w/ transmission constraints      2640.6            189.6     7.74
 3    w/ security constraints          2768.4            317.4    12.95

From Table 1, it can be seen that transmission-constrained system dispatch incurs a total congestion cost of 189.6 M$ for the study year, about a 7.74% increase over the least-cost dispatch without network limitations. For security-constrained system dispatch, the total congestion cost is 317.4 M$. In other words, contingency-constrained system dispatch increases the congestion cost by a further 127.8 M$ beyond the level caused by transmission constraints alone. This is due to the more conservative operation of the energy market under contingency constraints.
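The congestion-cost definition behind Table 1 (constrained minus unconstrained production cost) can be illustrated with a toy two-zone merit-order dispatch; all numbers below are invented for illustration and are not from the study:

```python
def dispatch_cost(load_a, load_b, cap_a, cap_b, cost_a, cost_b, tie_limit):
    """Least-cost dispatch of two zones joined by one tie line.

    Zone A holds the cheap unit; whatever it cannot deliver to zone B
    (because of its own capacity or the tie-line limit) must be served
    by the expensive local unit in zone B.
    """
    # Cheap unit serves its own zone first, then exports up to the tie limit.
    gen_a = min(cap_a, load_a + min(tie_limit, load_b))
    export = gen_a - load_a
    gen_b = load_b - export
    assert 0 <= gen_b <= cap_b, "infeasible toy case"
    return gen_a * cost_a + gen_b * cost_b

# Hypothetical system: 100 MWh load in each zone, a 10 $/MWh unit in A
# and a 30 $/MWh unit in B.
unconstrained = dispatch_cost(100, 100, 250, 150, 10.0, 30.0, tie_limit=1e9)
constrained   = dispatch_cost(100, 100, 250, 150, 10.0, 30.0, tie_limit=50)

congestion_cost = constrained - unconstrained
print(congestion_cost)  # the tie limit forces 50 MWh onto the expensive unit: 50 * (30 - 10) = 1000.0
```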

Figure 1: NYCA Zonal LMP Differences
[Bar chart of NYCA zonal average LMP ($/MWh, 0–50 scale) for the transmission-constrained and security-constrained simulation runs, across the Capital, NY City, West, Central, Dunwoodie, Genessee, Hudson, Long Island, Millwood, Mohawk, and North zones]

Transmission congestion results in large differences in market clearing prices across the NYCA system. Figure 1 shows the zonal average LMP values for two market simulation runs: transmission-constrained system dispatch and security-constrained system dispatch. The basic message of the transmission-constrained market simulation is that three load zones in NYCA (NY City, Long Island, and Dunwoodie) see higher LMPs than other locations, with differences of roughly 10–15 $/MWh. Security-constrained system dispatch further limits the utilization of transmission capacity and therefore stretches the LMP differences to a level close to 20 $/MWh. The impact of security-constrained system dispatch must therefore be carefully studied, and its benefits (improved system reliability and reduced outage damage) should be weighed against the increased congestion costs.


    B. Interface Utilization Analysis

NYCA bulk power transfer capabilities are mainly constrained by the capacity limitations of several interfaces. Some of these interfaces are voltage-stability constrained, but most are thermal-loading limited. Table 2 presents the transmission-constrained simulation results with respect to NYCA interface utilization. The interface utilization information reported by the GridView market simulation model includes loading factor, expansion value (i.e., cumulative shadow price), path cost, and congestion hours.

Table 2: NYCA Interface Utilization Analysis

Interface Name    Loading Factor  Expansion Value  Path Cost  Congestion Hours
                  (%)             ($/MW-yr)        (M$/yr)    (Hrs/yr)
CENTRAL EAST        65.10              0              0           0
MOSES SOUTH         29.09              0              0           0
TOTAL EAST          60.30              0              0           0
CONED-LILCO         98.99          67044             65.4      7974
LILCO-IMPORT        87.74              0              0           0
DYSINGER EAST       60.78              0              0           0
WEST CENTRAL        69.51          12822             22.1      1343
SPR/DUNW SOUTH      97.24          27309            107.9      3185
UPNY SENY           77.10           7567             31.6      1264
UPNY CONED          92.74          20204             85.4      2020

Loading factor and congestion hours are good indicators of the status of transmission interface utilization. For the case studied, the most heavily utilized interfaces are CONED-LILCO and SPR/DUNW SOUTH. For example, the CONED-LILCO interface is loaded at 99% for almost the whole study period, and the power transfer on this interface is limited for nearly 8,000 hours. Note that the CENTRAL EAST interface is not congested in the simulation for two main reasons: the more stressed downstream interface limitations assumed, and the proposed generation additions concentrated on the downstream side of the CENTRAL EAST interface.

Both expansion value and path cost are useful indicators of the economic attractiveness of interface expansion. Expansion value is the shadow price of a binding transmission interface integrated over time, representing the total system-wide savings (in $/MW-yr) per incremental increase in the power delivery limit of that interface. Path cost is calculated as the summation over time of the shadow price associated with the binding interface constraint times the flow on that interface. The path cost may be roughly regarded as an indicator of the overall economic value of removing interface congestion, provided interactive effects from other interface constraints are not significant.
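From these definitions, both indexes can be computed directly from hourly simulation output. The sketch below assumes hypothetical hourly arrays of interface shadow price and flow; the data is invented for illustration:

```python
# Expansion value: the binding interface's shadow price integrated over time.
# Path cost: shadow price times the actual interface flow, summed over time.
# (Definitions as stated in the text; the four hours below are invented.)

shadow_price = [0.0, 12.0, 0.0, 8.0]          # $/MW-h; nonzero only when the limit binds
flow_mw      = [400.0, 500.0, 450.0, 500.0]   # interface flow each hour (limit = 500 MW)

expansion_value = sum(shadow_price)                             # $/MW over these hours
path_cost = sum(p * f for p, f in zip(shadow_price, flow_mw))   # $ over these hours
congestion_hours = sum(1 for p in shadow_price if p > 0)

print(expansion_value, path_cost, congestion_hours)  # 20.0 10000.0 2
```

Over a full year the same sums, taken across all 8,760 hours, yield the $/MW-yr and M$/yr figures reported in Table 2.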

The market simulation also indicates that transmission congestion occurs frequently on the NYCA system, not only during system peak hours but also during off-peak hours. Table 3 shows interface congestion statistics broken down by on-peak and off-peak hours. In this case study, the on-peak hours are defined as 7 am to 10 pm (16 hours) on every weekday; the remaining hours are defined as off-peak. From Table 3, it can be seen that the congestion on CONED-LILCO is almost evenly distributed between on-peak and off-peak hours. It is generally true that congestion during peak hours has greater impact because demand is high; however, in some situations, as on the SPR/DUNW SOUTH interface, transmission congestion during off-peak hours is more significant.

Table 3: Interface Congestion Analysis

Interface Name    Peak Hours  Expansion Value  Path Cost  Congestion Hours
                  (on/off)    ($/MW-yr)        (M$/yr)    (Hrs/yr)
CONED-LILCO       on            31372            30.6       3938
                  off           35672            34.8       4036
WEST CENTRAL      on             7293            12.6        845
                  off            5529             9.5        498
SPR/DUNW SOUTH    on             9085            35.9       1014
                  off           18224            72.0       2171
UPNY SENY         on             7235            30.2       1155
                  off             333             1.4        109
UPNY CONED        on            19483            82.3       1935
                  off             722             3.0         85

    V. EXPANSION ALTERNATIVE EVALUATION

Transmission expansion planning under deregulated markets can be categorized by three major objectives: system reliability enhancement, economic efficiency improvement, and generation facility interconnection. Aside from generation interconnection, most transmission expansion projects improve both system reliability and market efficiency. That is, reliability-driven expansion projects may have the added value of improving market efficiency, while economics-driven expansion projects may have the added value of enhancing system reliability. As such, the evaluation of transmission expansion projects requires an integrated system impact study involving both market simulation and reliability evaluation.

Though GridView has reliability assessment capabilities, such as contingency analysis for selected system conditions, this paper addresses expansion evaluation only from the economic efficiency perspective, focusing on the economic value of incremental interface transfer capability improvements as well as of potential major interconnection projects between regional markets.

    A. Expansion Project Identification and Evaluation

Of course, eliminating all congestion on the transmission grid would not make economic sense, even if it were possible. It is therefore important to identify the most economically attractive expansions. As mentioned earlier, the expansion value and path cost information produced by GridView can be used directly to prioritize candidate interface improvement options. Technologies available for improving interface capability include adding new lines, upgrading existing lines, installing voltage and flow control devices, and applying advanced IT solutions.

Congestion of thermally limited interfaces is closely correlated with the loading conditions of individual interface components. Table 4 lists the most heavily loaded components belonging to the SPR/DUNW SOUTH and CONED-LILCO interfaces. Note that three components of the SPR/DUNW SOUTH interface are more frequently congested than the interface itself. Improving a thermally limited interface involves analyzing the possibility of increasing the thermal rating of the most constrained components without new construction, while improving a voltage-constrained interface requires an engineering study to determine the appropriate location and rating of voltage support devices.

Table 4: Heavily Loaded Interface Components

From Bus   To Bus    Congestion Hours  Loading Factor  Expansion Value  Path Cost  Interface Name
                     (Hrs/yr)          (%)             ($/MW-yr)        (M$/yr)
SPRBROOK   W 49 ST      4951             98.76           188182          138.4     SPR/DUNW SOUTH
DUNWODIE   RAINEY       2023             96.48            62885           42.7     SPR/DUNW SOUTH
TREMONT    PARK TR1     5727             99.12            27330            5.6     SPR/DUNW SOUTH
DUNWODIE   DUN NO       3760             97.78            27219            7.9     SPR/DUNW SOUTH
SPRBROOK   REACBUS      5877             89.96            46847           29.4     CONED-LILCO
DUNWODIE   SHORE RD     5639             99.24            43109           24.5     CONED-LILCO

This study assumes the feasibility of increasing the WEST CENTRAL, SPR/DUNW SOUTH, and CONED-LILCO interface limits by 100 MW, 150 MW, and 90 MW, respectively. It is also assumed that the upgrade of the WEST CENTRAL interface can be achieved by installing a properly sized and sited voltage support device, while the thermal limits of the SPR/DUNW SOUTH and CONED-LILCO interfaces can be raised by increasing the thermal limits of the most constrained underground cables with forced-cooling facilities.

Table 5: Interface Improvement Options and Benefits

Option  Interface         Capacity     Production Cost  Congestion Cost Savings
                          Improvement
                          (ΔMW)        (M$/yr)          (M$/yr)   (%)
  1     WEST CENTRAL         100         2638.5           2.14    0.08
  2     CONED-LILCO           90         2632.4           8.21    0.31
  3     SPR/DUNW SOUTH       150         2631.9           8.70    0.33
  4     SPR/DUNW SOUTH       150         2636.9           3.69    0.14
  5     Options 2 + 3      90+150        2625.3          15.31    0.58

Table 5 lists five hypothetical interface capability improvement options with their estimated economic values in terms of reduced congestion costs. In option 3, the interface improvement is achieved by increasing the loading capability of the parallel 345 kV cables from SPRBROOK to W 49 ST (10% for each 345 kV cable). In option 4, the improvement is realized by increasing the loading capability of the parallel 345 kV cables from DUNWODIE to RAINEY (10% for each cable). The different economic values of options 3 and 4 can be anticipated from the expansion value and path cost information associated with these two 345 kV circuits. Option 5 is an expansion alternative involving two expansion actions. Expansion options 2 and 3 are economically promising, each reducing congestion cost by more than 8 M$/yr. This case study clearly indicates that significant economic value can be realized even through incremental interface limit improvements. Congestion information for interfaces and components should be used together to prioritize candidate expansion options and thus reduce the number of options to be analyzed in detail.

    B. Economic Impact Assessment

Transmission planning studies under competitive electricity markets also require a comprehensive economic impact assessment to determine which parties benefit most from the respective expansion projects and which market participants may be negatively affected, seeing increased load payments or reduced generation profits. Table 6 shows the economic impact analysis of implementing expansion option 3; all numbers are changes from the base case.

Table 6: Economic Impact of Interface Improvement Under Option 3 (values shown are differences between the two cases)

Load Area      Total Generation  Generation Revenue  Generation Cost  Load Payment  Average LMP
               (MWh)             (M$)                (M$)             (M$)          ($/MWh)
Capital            174957             6.2                 5.0             2.0          0.13
NY City           -666616           -23.9               -28.8           -21.2         -0.39
West                15819             1.7                 0.4             0.6          0.04
Central            123565             5.9                 3.8             0.8          0.05
Dunwoodie               0             0.1                 0.0            -1.8         -0.42
Genessee             1493             0.3                 0.0             0.3          0.04
Hudson Valley      322329            16.7                 9.6             7.7          0.61
Long Island         20967             1.4                 1.0             1.4          0.08
Millwood                0            40.6                 0.0            10.9          2.48
Mohawk Valley        1937             0.3                 0.1             0.5          0.06
North                5548             0.7                 0.2             0.4          0.06
Total                   0            49.9                -8.7             1.6

As can be expected, New York City customers benefit most from the SPR/DUNW SOUTH interface improvement, reducing their load payments by 21.2 M$. On the other hand, customers in Millwood and Hudson Valley see increased load payments. Note that the overall load payments in NYCA increase by 1.6 M$ even though the total production cost is reduced by 8.7 M$. This is a counterintuitive but entirely possible outcome of LMP-based market operations.
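The phenomenon can be reproduced with a stylized two-zone example (all numbers invented): relieving congestion lets the expensive unit set a single system-wide price, so production cost falls while total load payments rise.

```python
def settle(tie_limit):
    """Two-zone toy market: a cheap unit in A (10 $/MWh, 180 MW cap),
    an expensive unit in B (30 $/MWh); 100 MWh of load in each zone."""
    load_a, load_b = 100, 100
    cap_a, cost_a, cost_b = 180, 10.0, 30.0

    gen_a = min(cap_a, load_a + tie_limit)  # cheap unit serves A plus exports
    gen_b = load_a + load_b - gen_a         # expensive unit covers the rest
    production_cost = gen_a * cost_a + gen_b * cost_b

    # LMPs: zone B is priced by its own unit whenever that unit must run.
    lmp_b = cost_b if gen_b > 0 else cost_a
    # Zone A is priced by the cheap unit while it has headroom; once it is
    # at capacity, the next MW in A would also come from B across the tie.
    lmp_a = cost_a if gen_a < cap_a else lmp_b
    load_payment = load_a * lmp_a + load_b * lmp_b
    return production_cost, load_payment

before = settle(tie_limit=50)    # congested: zonal prices split (10 vs. 30)
after  = settle(tie_limit=100)   # congestion relieved: uniform 30 $/MWh price

print(before)  # (3000.0, 4000.0)
print(after)   # (2400.0, 6000.0): production cost falls, total load payment rises
```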

Further analysis indicates that improving the SPR/DUNW SOUTH interface shifts some congestion hours to the upstream UPNY CONED interface. As a result, customers located between the SPR/DUNW SOUTH and UPNY CONED interfaces pay higher prices for electricity, while generators located in this area benefit from increased profits.

Another exercise in this case study involves two interface upgrades: improving the SPR/DUNW SOUTH interface by 150 MW as in option 3, and improving the UPNY CONED interface by 100 MW as well. This results in overall load payments in NYCA being reduced by 18.6 M$, because of larger load payment savings for NY City and a mitigated increase in load payments for Millwood and Hudson Valley.


    C. Major Interconnection Projects

The economic potential of interconnecting generation resources outside the NYCA system directly into New York City through high-capacity AC or DC links is briefly discussed in this sub-section. We assume two such interconnections: one resulting in a maximum 800 MW injection at the W49 ST station, and another able to deliver 600 MW at the RAINEY station. It is also assumed that the generation resources associated with these interconnections are combined-cycle plants. Table 7 shows the estimated economic values of these two interconnections in terms of reduced congestion costs. The market simulation results also indicate that interconnecting economic generation resources directly into New York City would greatly reduce the congestion on the SPR/DUNW SOUTH interface and the energy prices for customers in New York City.

Table 7: Interconnection Options and Benefits

Option  NY City Injection  MW    Production Cost  Congestion Cost Savings
        Location                 (M$/yr)          (M$/yr)   (%)
  1     W49 ST             800     2551.2          89.40    3.39
  2     RAINEY             600     2571.1          69.50    2.63

    VI. CONCLUSION

This paper presented a case study of transmission congestion analysis and expansion impact evaluation using the market simulation program GridView. The analyses were performed on an illustrative model of the NYCA system based on publicly available information.

Transmission congestion restricts the efficient operation of an energy market, and the congestion costs in a system can be very substantial. For the system studied here, congestion cost amounts to 7.74% of the total production cost when no contingency constraints are modeled and 12.95% when some contingency constraints are modeled. Not all congestion costs, however, can be economically removed. In systems with many congested facilities, loading factors, integrated shadow price values, path costs, and hours of congestion all provide good screening indicators for filtering and prioritizing transmission facilities for further detailed analysis. Due to the complex interactions among power flows and thermal resource re-dispatch, the binding constraints can shift to different portions of the system. Reliable estimates of the benefits and impact of a system expansion project can therefore be made only through comparative analysis of market simulations performed with and without the studied expansion projects included in the system model. The benefits of system expansion projects can be measured by several indexes; for the analysis presented here, the difference in total production cost with and without a project is used. Other indexes, such as changes in generator revenue or customer payment, are not reliable indicators of the economic benefits of system expansion and can lead to confusing, counterintuitive results, a topic that is itself the subject of a separate study.

As shown in this paper, the evaluation of system expansion projects is not complete without analyzing the impact of the expansion projects on the various market participants. The overall market efficiency will improve, but some participants may be worse off while others are better off under a given expansion project. Quantitative analysis of these impacts is possible only with a market simulation program such as GridView.

VII. REFERENCES

[1] Electric Transmission Constraint Study, Division of Market Development, FERC, December 2001.
[2] The National Energy Policy, US Department of Energy, May 2001 (http://www.energy.gov/HQPress/releases01/maypr/energy_policy.htm).
[3] National Transmission Grid Study, US Department of Energy, May 2002 (http://tis.eh.doe.gov/ntgs/reports.html).
[4] Thomas J. Overbye, "Estimating the Actual Cost of Transmission System Congestion", 36th Hawaii International Conference on System Sciences, Hawaii, January 2003.
[5] Narayan S. Rau, "Transmission Congestion and Expansion Under Regional Transmission Organizations", IEEE Power Engineering Review, September 2002.
[6] Willie Wong, Henry Chao, Danny Julian, Per Lindberg, and Sharma Kolluri, "Transmission Planning in a Deregulated Environment", IEEE T&D Conference, New Orleans, April 1999.
[7] Xiaoming Feng, Le Tang, Zhengyuan Wang, Jian Yang, Willie Wong, Henry Chao, and Rana Mukerji, "An Integrated Electrical Power System and Market Analysis Tool for Asset Utilization Assessment", IEEE PES Summer Meeting, Chicago, July 2002.

VIII. BIOGRAPHIES

Xiaoming Feng (M'87) is currently an executive consulting R&D engineer with ABB Corporate Research US. He has over a dozen years of experience applying state-of-the-art simulation and optimization technology to the operational and strategic planning of electric power systems and markets. His research interests include reliability and asset management applications in power systems and new optimization and simulation technologies for power system analysis, monitoring, and control.

Jiuping Pan (M'97) received his B.S. and M.S. in Electric Power Engineering from Shandong University of Technology, China, and his Ph.D. in Electrical Engineering from Virginia Tech, USA. He is currently a principal consulting R&D engineer with ABB Corporate Research. His main research interests include power system planning, reliability assessment, network asset management, and power market simulation studies.

Le Tang (M'95) joined ABB in August 1995. He earned his B.S. in Electrical Engineering from Xi'an Jiaotong University in 1982 and his M.E. and Ph.D. in Electric Power System Engineering from Rensselaer Polytechnic Institute in 1985 and 1988, respectively. Le currently manages ABB's Group R&D operation in the United States and is involved in various research and development activities in the power transmission and distribution areas. Prior to joining ABB, he was a Senior Consulting Engineer at Electrotek Concepts Inc., where he was responsible for a wide range of studies and seminars in the power quality field.

X.Y. (Henry) Chao (M'88) is the VP of Technology with ABB Consulting. He obtained his Ph.D. degree from Georgia Tech, USA. Dr. Chao has 20 years of experience in all aspects of electric utility planning and operations. Since joining ABB in 1997 as technology manager for IPP and transmission project development, his main responsibility has been to direct development and consulting using advanced models and technologies. His areas of activity include generation and transmission asset evaluation and optimal utilization, transmission reliability assessment, merchant energy project development, and competitive power marketing and risk management.

Jian Yang (M'96, SM'02) received his B.S. and M.S. in Electrical Engineering from Tsinghua University, China, and his Ph.D. in Electrical Engineering from the University of Missouri-Rolla, US. After graduation, he worked at GE Power Systems Energy Consulting for more than two years on market simulation and software development. He is now a program manager at ABB Inc., responsible for ABB's market analysis and simulation program, GridView. His main areas of interest include electric industry deregulation, market assessment, and transmission planning.

WEB-ENABLED TOOLS FOR ASSET MANAGEMENT

    Jorgen Hasselstrom, ABB Inc. David Lubkeman, ABB Inc.

    Fangxing Li, ABB Inc. Yuan Liao, ABB Inc.

    Zhenyuan Wang, ABB Inc.

Introduction

Utilities are under considerable pressure to reduce costs while still maintaining adequate levels of reliability. As a result, there has been an increased focus on improving asset management processes. Typical improvements include introducing new diagnostic techniques, moving from time-based to reliability-centered maintenance, and introducing computerized maintenance management systems. As an aid in rigorously optimizing O&M and capital expenditures, predictive reliability assessment tools have also been introduced that enable utilities to predict customer reliability based on circuit topology and component reliability information. However, to date the predictive analysis has not been tied to actual component condition. As a result, it has not been possible to use network reliability analysis to effectively guide equipment maintenance activities.

The purpose of this paper is to describe a process that has been developed for linking collected equipment condition data to network reliability analysis and maintenance planning. This is facilitated by an asset evaluation suite of tools that includes component failure rate prediction models based on condition data and a maintenance/testing advisor expert system. The output of this tool set drives a reliability analysis that ranks components by their criticality to the operation of the network and their likelihood of failure. Based on this ranking, a maintenance/testing planning tool employs a cost/benefit analysis to prioritize possible preventive maintenance and/or diagnostic testing actions. A web-based architecture has also been developed to host these tools, making use of the web service model supported by the new Microsoft .NET integration platform.

Asset Management Evaluation Process

The management of distribution assets to both optimize economic benefits and manage related risks is a continuous process. ABB's view of this process is shown in Figure 1.
Information is gathered from the field on asset condition and then used to drive model-based network reliability analysis. By making the link between a component's criticality in the network and its maintenance condition, utilities can start to apply a variety of reliability-centered maintenance (RCM) practices. RCM practices result in maintenance, testing, and replacement schedules that attempt to optimize O&M and capital expenditures with respect to reliability targets. After these changes have been instituted, follow-up performance analysis and asset assessment are made, and the whole asset management process is repeated.

[Figure: a continuous asset management cycle — assess asset condition → build network model → reliability analysis → plan changes → maintain network / replace or repair → operate network → performance analysis → back to assessing asset condition.]

Figure 1 - Asset Management Cycle

Utilities are starting to take advantage of recently developed distribution analysis methodologies that use component failure rates to predict reliability indices such as SAIFI and SAIDI [1]:

SAIDI – System Average Interruption Duration Index, measured in minutes per year.
SAIFI – System Average Interruption Frequency Index, measured in interruptions per year.

For this type of network analysis, each component and/or subcomponent in the network must be characterized by a failure rate per year. Network models then use simulation techniques to predict the impact of each component's failure characteristics on total system reliability. The sensitivities of the system reliability indices to individual component failure rates can then be used to make decisions regarding O&M and capital expenditure priorities. However, one problem with applying this type of analytical approach is that the failure rates are not directly tied to the actual condition of the components. Often, industry-standard figures are used for substation and feeder components, and overhead failure rates are calibrated to match existing feeder failure rates. Hence any resulting recommendations involving specific assets are not a true reflection of each asset's actual condition.

ABB is exploring how to integrate network reliability analysis into a utility's asset management planning processes so that asset condition is accurately represented. The basic planning process being explored is shown in Figure 2. The process is driven by component condition data stored in a common database and collected via inspections, real-time monitoring, or diagnostic testing. The condition data is then converted into component failure rates by a Component Reliability Prediction module. The failure rates are then combined with network models to perform a reliability analysis that assigns to each component a criticality index relating the impact of its failure to total network reliability. Once the component criticalities are available, a generated list of possible maintenance and diagnostic activities can be ranked according to cost-benefit ratios to reach desired reliability targets.
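To illustrate how per-component failure rates roll up into these indices, here is a minimal sketch for a hypothetical radial feeder, where each failure is assumed to interrupt a fixed set of customers for the mean repair time. All data is invented; tools such as PDO use full network simulation rather than this closed-form shortcut.

```python
# Sketch: SAIFI/SAIDI prediction from component failure rates on a small,
# hypothetical radial feeder. Assumes each component failure interrupts a
# known number of downstream customers for the mean repair time, with no
# switching or restoration modeled.

N_CUSTOMERS = 10_000  # customers served by the system

# (component, failures per year, mean repair hours, customers interrupted)
components = [
    ("feeder breaker",    0.02, 3.0, 10_000),
    ("main line section", 0.10, 4.0,  6_000),
    ("lateral fuse",      0.05, 2.0,    800),
]

# SAIFI: expected interruptions per customer per year
saifi = sum(lam * cust for _, lam, _, cust in components) / N_CUSTOMERS

# SAIDI: expected interruption hours per customer per year
saidi = sum(lam * hrs * cust for _, lam, hrs, cust in components) / N_CUSTOMERS

print(f"SAIFI = {saifi:.3f} interruptions per customer-year")
print(f"SAIDI = {saidi * 60:.1f} minutes per customer-year")
```

The per-component terms inside the sums are exactly the SAIFI and SAIDI "contributions" by which the paper later ranks components.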

[Figure: field data collection populates the AM database; a Component Reliability Prediction step uses reliability models to produce failure modes and failure rates; these feed a network-based criticality analysis built on the distribution network model, followed by critical-component sensitivity analysis and maintenance/inspection project generation; CapEx and O&M optimization then weighs maintenance and inspection costs against financial and reliability goals to produce the maintenance/diagnostic plan.]
Figure 2 - RCM-Based Planning Utilizing Actual Component Data

The three key software components of this RCM-based planning process are described below: the "Asset Evaluation Modules", the "Asset Criticality Module", and the "Maintenance Prioritization Module". Their interaction is shown in Figure 3. The AM database can be stand-alone, or it can be linked to an equipment monitoring software package; ABB uses its own commercially available, web-based Asset Sentry package for this purpose. The criticality analysis is conducted using the web-based Power Delivery Optimizer (PDO) software package, a tool specially designed for distribution reliability analysis.

[Figure: component inspection/test results and histories flow from the AM database into the Asset Evaluation Modules (Asset Sentry), which supply failure rates (FR) and MTTF values to the Asset Criticality Module (Power Delivery Optimizer); the resulting SAIDI/SAIFI criticalities, together with cost information and customer preferences, feed the Maintenance Prioritization step, whose component-specific results (Δλ/$) yield a maintenance ticket prioritization list, a recommended maintenance/diagnostic action list, and a cost/benefit index for each maintenance/diagnostic action.]

Figure 3 - Interaction between Software Modules

Asset Evaluation Modules

The asset evaluation modules include the condition-based network component failure rate model (FR Model), the recommendation action cost/benefit analysis model, and the knowledge-based asset advisor reasoning engine, as shown in Figure 4. These three modules draw on two groups of data: the network component data and the maintenance/diagnostic rule data. They generate a recommendation list of maintenance, diagnostic, replacement, and retrofit actions based on internal ABB equipment knowledge and industry standards [2-4].

[Figure: network component specification, condition, maintenance, diagnostic, and economic data, plus a knowledge base and rule base for maintenance/diagnostics recommendations, feed the asset advisor reasoning engine, the condition-based failure rate (FR) model, and the recommendation action cost/benefit analysis model; together these produce the maintenance, diagnostic, replacement, and retrofit recommendation action list.]
Figure 4 - Asset Evaluation Modules and Their Relationships

The condition-based failure rate model takes into account the current and historical measurable conditions (features) of the network component and adjusts the industry-standard or utility-specified component failure rate to reflect the actual probability that the component will fail. Figure 5 shows the structure of such a model. The computed failure rates are stored in an asset database and used as the input to the criticality analysis described later in the paper.
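The structure shown in Figure 5 can be sketched as follows. The features, weights, and the linear mapping from condition score to failure rate multiplier are invented for illustration; the actual model's transforming, combining, and mapping functions are equipment-specific.

```python
# Sketch of a condition-based failure rate model in the spirit of Figure 5.
# Hypothetical features and weights: each sub-component's measured condition
# features are combined into an adjustment index that scales its baseline
# (industry-standard) failure rate; sub-component rates sum to the component FR.

def adjustment_index(features, weights):
    """Combine normalized condition features (0 = like new, 1 = worst)
    via a weighted average, then map the score to a failure rate
    multiplier between 1.0 and 3.0 (assumed linear mapping function)."""
    score = sum(w * f for w, f in zip(weights, features)) / sum(weights)
    return 1.0 + 2.0 * score

# (sub-component, baseline failures/yr, condition features, feature weights)
sub_components = [
    ("bushings",    0.010, [0.6, 0.3], [2.0, 1.0]),  # e.g. power factor, visual
    ("tap changer", 0.015, [0.8],      [1.0]),       # e.g. contact wear
    ("main tank",   0.005, [0.2, 0.1], [1.0, 1.0]),  # e.g. DGA, oil quality
]

component_fr = 0.0
for name, base_fr, feats, wts in sub_components:
    adjusted = base_fr * adjustment_index(feats, wts)
    component_fr += adjusted
    print(f"{name}: {adjusted:.4f} failures/yr")

print(f"component failure rate: {component_fr:.4f} failures/yr")
```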

[Figure: each measured feature passes through a transforming function; combining and mapping functions merge the transformed features into an adjustment index for each sub-component; the adjusted sub-component failure rates (FR) are then summed to give the component FR.]
Figure 5 - Structure of the Condition Based Failure Rate Model

The recommendation action cost/benefit analysis model takes the failure probabilities before and after the action as inputs and links them to the utility's long-term savings; the cost of the action and the cost of failure are therefore also inputs to the model. The model produces a cost/benefit index as its result. The asset advisor reasoning engine examines the network component data and applies the maintenance/diagnostic rules, either automatically or under operator control. Combined with the cost/benefit analysis results, it produces a list of maintenance, diagnostic, replacement, or retrofit recommendation actions.

Asset Criticality Analysis

This section describes an approach to assessing the criticality of distribution system components. The criticality analysis and ranking form the starting point for an effective RCM program. A criticality analysis is conducted using a predictive reliability assessment program; ABB utilizes its own proprietary Power Delivery Optimizer (PDO) software. PDO is a planning tool that enables alternative configurations and maintenance strategies to be compared in order to engineer a network to acceptable reliability performance. Figure 6 shows an example of a reliability assessment for a small US city; segments shaded in red indicate areas with poor reliability.

  • Figure 6 – Example of a Predictive Reliability Analysis for a US City

Figure 2 shows how the criticality analysis fits into the overall process. As a first step, the network model is downloaded into PDO. Each component and/or subcomponent mean time to failure (MTTF) and mean time to repair (MTTR) is then identified. In addition, the cost of failure may be entered if a value-based rather than reliability-centered approach is selected. The component data can be obtained from a number of different sources; in the present approach, the MTTF and MTTR are obtained from the Asset Evaluation Modules and the AM database, respectively. For maintenance planning purposes this approach is superior to using generic failure frequencies and repair times for components of the same kind. PDO calculates the contribution of each component to, e.g., the system SAIDI. Different objective functions can be used depending on the reliability focus of the network owner: one utility might view SAIDI as twice as important as SAIFI and MAIFI, in which case weighting factors of 2:1:1 would be used when assessing the overall component criticality/contribution. The output of the analysis is a list ranking the contribution of each component and/or subcomponent to, e.g., SAIDI. In this context, SAIDI contribution and SAIDI criticality cannot be used interchangeably: the criticality of a component is obtained by dividing its SAIDI contribution by the likelihood of the component failing.
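To make the contribution/criticality distinction concrete in code, the sketch below uses the same tap-changer numbers as the worked example in this paper (λ = 0.01 failures per year):

```python
# Contribution vs. criticality: criticality = contribution / failure rate.
lam = 0.01                 # tap-changer failures per year
saifi_contribution = 68.0  # customer interruptions per year
saidi_contribution = 45.0  # customer interruption hours per year

saifi_criticality = saifi_contribution / lam  # customers interrupted per failure
saidi_criticality = saidi_contribution / lam  # customer hours per failure

# Average outage duration seen by an affected customer when this part fails
hours_per_customer = saidi_criticality / saifi_criticality

print(f"SAIFI criticality: {saifi_criticality:.0f} customers/failure")
print(f"SAIDI criticality: {saidi_criticality:.0f} customer-hours/failure")
print(f"outage duration:   {hours_per_customer:.2f} hours/customer")
```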

SAIDI contribution is given in "customer interruption hours per year," whereas SAIDI criticality is given in "customer interruption hours per interruption." For instance, assume a system with 10,000 customers. For a certain subcomponent (say a transformer tap changer with λ = 0.01 failures per year) the SAIDI contribution is found to be 45 customer interruption hours per year and the SAIFI contribution to be 68 customer interruptions per year. The SAIFI criticality of this component is obtained as SAIFI contribution / λ = 68 / 0.01 = 6800: every time the transformer tap changer fails, 6800 customers will experience an interruption of power. The SAIDI criticality is similarly obtained as SAIDI contribution / λ = 45 / 0.01 = 4500: every time the tap changer fails, those 6800 customers will experience an interruption of 4500 / 6800 = 0.66 hours. Depending on the objectives and reliability concerns of the utility, the contribution index and/or the criticality index may be used when planning for RCM. The former will be most effective when system SAIDI and SAIFI values are most important, whereas the latter tends to focus maintenance on avoiding major, high-profile outages. Regardless of the method chosen, a predictive reliability assessment forms the basis for understanding which components are critical to overall system performance, and helps manage the maintenance budget to achieve the best reliability possible within the assigned resources.

Maintenance Ranking

As previously discussed, maintenance can reduce the failure rate of a component. However, under ongoing deregulation, utilities may not have enough financial resources to complete all desired maintenance tasks; an approach is needed to achieve the maximum benefit within a limited budget. The asset management process outlined in Figure 2 employs a cost-effective maintenance ranking approach to achieve this goal.
While the cost is the operational cost of a maintenance task, the effectiveness is represented as the improvement of system reliability, or the improvement of component criticality. Hence, the cost-effectiveness of a maintenance task is given as follows.

E = (Criticality before maintenance − Criticality after maintenance) / Cost

As previously mentioned, component criticality is a measure of the importance of a component from the point of view of system reliability. However, it may not represent the efficiency of a maintenance action on that component. For example, assume two maintenance tasks with the same cost: the criticality of the more critical component can be improved by only 10 (per unit), while the criticality of the less critical component may be improved by 30. If only one of the two tasks can be carried out due to a limited budget, the less critical component should be maintained, from the point of view of efficiency.
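The two-task example above, expressed with the cost-effectiveness formula (the criticality values and the shared $1000 cost are hypothetical):

```python
# Cost-effectiveness E = (criticality before - criticality after) / cost.
# With equal costs, the task buying the larger criticality reduction per
# dollar wins, even if it acts on the less critical component.

def cost_effectiveness(crit_before, crit_after, cost):
    return (crit_before - crit_after) / cost

task_on_critical_component = cost_effectiveness(300.0, 290.0, 1000.0)  # -10
task_on_less_critical      = cost_effectiveness(75.0, 45.0, 1000.0)    # -30

assert task_on_less_critical > task_on_critical_component
print(task_on_critical_component, task_on_less_critical)
```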

The prioritizing process is depicted in Figure 7. It should be noted that each component may have several maintenance options; among these, only the option with the best cost-effectiveness should be accepted (provided the budget has not been exceeded), and the remaining options for that component should be skipped. The common stopping criterion for this process is that the budget limit is reached. Another possible criterion is that the reliability target is met before the budget limit is reached, although this is less common; it implies that system reliability is already good enough, and the remaining money may be allocated to other, non-maintenance tasks.
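The loop in Figure 7 can be sketched as a greedy selection (the task list and numbers below are hypothetical): rank all (component, option) pairs by cost-effectiveness, keep at most one option per component, and stop when the budget limit is reached.

```python
# Greedy maintenance prioritization in the spirit of Figure 7.
# (component, maintenance option, criticality reduction, cost in $)
tasks = [
    ("T2",  "clean bushings",      30.0, 1000),
    ("T2",  "new tap changer",     60.0, 3000),
    ("T24", "maintain aux system", 18.0, 1000),
    ("T3",  "clean bushings",       8.0, 1000),
]

budget = 3000

# Rank all tasks by cost-effectiveness (criticality reduction per dollar)
ranked = sorted(tasks, key=lambda t: t[2] / t[3], reverse=True)

plan, spent, done = [], 0, set()
for comp, option, benefit, cost in ranked:
    if comp in done:
        continue  # a higher-ranked task on this component was already taken
    if spent + cost > budget:
        break     # budget limit reached -> stop
    plan.append((comp, option))
    spent += cost
    done.add(comp)

print(plan, spent)
```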

[Figure: for each component, call the Criticality Analysis Module to get the current criticality; list all possible maintenance tasks and identify the cost of each; call the FR Module to identify the expected change in component failure rate for each task; call the Criticality Analysis Module to identify the expected change in criticality for each task; obtain and rank the cost-effectiveness of all tasks. Then, starting from the first task in the ranking list: eliminate any task that has a higher-ranked task operating on the same component; otherwise accept it; stop when the budget limit (or the reliability target) is reached, else select the next maintenance task from the ranking list.]

Figure 7 – Maintenance Prioritizing Process

Web-Based Deployment

This section focuses on the software implementation of the AM concepts and processes presented above; the software architecture and the integration of the different modules and tools are described, and two typical application scenarios are presented. The developed AM system consists of various modules and takes advantage of several existing tools, so the software design must meet certain important requirements. First, the design should allow for easy expansion and upgrading: new modules fulfilling new functionality are likely to be added to the system, and existing modules may require modification to suit future needs. Second, the design should support application interoperability; for example, a tool other than PDO may be used to obtain the reliability indices of the system. Third, the design should facilitate using the developed modules under different scenarios: of the two typical scenarios, one exposes the developed functionality through ASPs (Active Server Pages), the other through XML web services. To meet these requirements, an open software architecture and a modular component design are used. The Microsoft .NET platform was chosen to implement the system since it facilitates developing XML (Extensible Markup Language) web services, which allow easy integration of disparate systems using ubiquitous Internet standards such as XML and HTTP (Hypertext Transfer Protocol) [5-6]. Figure 8 shows the architecture of the system.

[Figure: a layered .NET architecture — the AM database (SQL Server) as data source; data access logic via ADO.NET and the SQL Server managed provider; business logic implementing the AM modules (Asset Evaluation, Asset Criticality, Maintenance Prioritization) hosted under Microsoft IIS and the .NET Framework; web UI logic as ASP.NET active server pages serving Internet Explorer; and XML web services interfaces connecting customer applications, the PDO reliability software, and the Asset Sentry condition monitoring tool.]
Figure 8 - Software Architecture of the System

The figure shows the two typical application scenarios mentioned above. The first allows users to access the functionality through Internet Explorer by sending requests to, and getting results from, the ASPs. The second, indicated by the dotted line, allows users to develop their own customized applications, either web or Windows based, that take advantage of the functionality and data provided by the AM business logic. The architecture distinctly separates the data access logic, business logic, and web UI (user interface) logic modules; such a design facilitates system expansion and maintenance. The data access logic realizes the data exchange between the data source and the business logic. The business logic implements the functional modules such as Asset Evaluation, Asset Criticality, and Maintenance Prioritization. The web UI logic implements the ASPs that interact with users through Internet Explorer. XML web services interfaces enable customers' applications to consume the provided functionality and data. Reliability software such as PDO and condition monitoring tools such as Asset Sentry interact with the AM modules through the web services interfaces, enabling the system to readily use alternative tools with minimum changes.

Example: In the example below, the Criticality and Maintenance Prioritizer Modules are illustrated for the system shown in Figure 6. To keep things simple, only the six distribution transformers on the system are modeled. Each transformer is viewed as built up from five subcomponents: main tank, bushings, tap changer, transformer core, and auxiliary system. In reality, any number of components and subcomponents may be included. Table 1 gives the basic data for the transformers. The failure rate information in the table was obtained from the AM database via the Asset Evaluator Modules; these failure rates are the sum of the failure rates of each subcomponent. Also included in the table are the calculated SAIDI and SAIFI contributions for each transformer.

Table 1 – Component Failure Rates Obtained from the Asset Evaluator Modules

    Name            Sustained     SAIFI Contribution     SAIDI Contribution   Normalized   Normalized   Overall Criticality
                    Faults (/yr)  (interr. x customers)  (hours x customers)  SAIFI        SAIDI        (Normalized)
    Transformer 79  0.072         1086.78                16573.7              200.00       100.00       300.00
    Transformer 4   0.018          488.001                8104.02              89.81        48.90       138.70
    Transformer 3   0.061          100.8                   867.999             18.55         5.24        23.79
    Transformer 29  0.022          297.404                3271.44              54.73        19.74        74.47
    Transformer 24  0.023          825                   11550                151.82        69.69       221.51
    Transformer 2   0.044          334.904                2009.42              61.63        12.12        73.76
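The normalization and weighting behind the last three columns can be reproduced with a short script (first three rows of Table 1 shown):

```python
# Reproducing the normalized contributions of Table 1: each column is scaled
# so the most critical component reads 100, SAIFI is weighted 2:1 over SAIDI,
# and the overall criticality is the sum of the two weighted columns.

rows = [
    ("Transformer 79", 1086.78, 16573.7),   # (name, SAIFI contr., SAIDI contr.)
    ("Transformer 4",   488.001, 8104.02),
    ("Transformer 24",  825.0,  11550.0),
]

w_saifi, w_saidi = 2.0, 1.0
max_saifi = max(s for _, s, _ in rows)
max_saidi = max(d for _, _, d in rows)

overall = {}
for name, saifi_c, saidi_c in rows:
    n_saifi = w_saifi * 100.0 * saifi_c / max_saifi
    n_saidi = w_saidi * 100.0 * saidi_c / max_saidi
    overall[name] = n_saifi + n_saidi
    print(f"{name}: {n_saifi:.2f} + {n_saidi:.2f} = {overall[name]:.2f}")
```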

In the example, weighting factors of 2:1 were used for SAIFI and SAIDI, respectively (shown as the normalized contributions). Since SAIDI and SAIFI have different units, each contribution is normalized with respect to the most critical component (set to 100). In the example above, Transformer 79 has both the highest SAIDI and the highest SAIFI contribution; the normalized SAIFI contributions are doubled to reflect the weighting factors used, and the overall criticality is obtained as the sum of the two weighted contributions. Table 2 shows a selection of maintenance alternatives for the transformers and their corresponding benefits and costs. Each maintenance action relates to one or several subcomponents. For instance, replacing the tap changer will only affect the tap changer failure rate. Each action is associated with a lowering of the failure rate; a lowest possible value is also given, essentially representing replacement of the subcomponent.

Table 2 – Example of Maintenance Actions for Distribution Transformers

    Maintenance Action    Failure Rate  Sub-Component  Lowest       Cost ($)
                          Reduction     Category       Possible λ
    Clean Bushings        0.008         Bushings       0.005        1000
    Replace Bushing       N/A           Bushings       0.003        5000
    Maintain Tap Changer  0.01          Tap Changer    0.008         500
    New Tap Changer       N/A           Tap Changer    0.005        3000
    Oil Rejuvenation      0.005         Main Tank      0.001        8000
    Maintain Aux System   0.002         Aux. System    0.002        1000

Applying the Maintenance Prioritizer Module to the system yields a list of actions ranked according to their cost-benefit in reducing the components' SAIDI and SAIFI contributions; an example is shown in Table 3. Note that since the criticality is normalized to the most critical component, the cost-benefit ratio (the cost of the maintenance action divided by the reduction in criticality) is dimensionless, i.e., it is not given in units such as cost per minute or cost per avoided interruption. Nevertheless, the relative ranking forms the basis for the maintenance activities. The system also allows the planner to easily check the expected improvements in system reliability: for a given set of maintenance actions, the new, improved component failure rates can be downloaded back into PDO for further analysis.

Table 3 – Example of Maintenance Ranking

    Ranking  PDO ID  Name            Action               Cost ($)
    1        2       Transformer 2   Clean Bushings       1000
    2        2       Transformer 2   Maintain Aux. Syst.  1000
    3        2       Transformer 2   New Tap Changer      3000
    4        24      Transformer 24  Maintain Aux. Syst.  1000
    5        24      Transformer 24  New Tap Changer      3000
    6        3       Transformer 3   Clean Bushings       1000
    7        3       Transformer 3   Maintain Aux. Syst.  1000
    8        17      Transformer 17  Maintain Aux. Syst.  1000
    9        2       Transformer 2   Oil Rejuvenation     8000
    10       14      Transformer 29  Maintain Aux. Syst.  1000

Summary

This paper presented an overview of an asset management process developed to support the planning of maintenance and diagnostic activities. The process links asset condition information collected in the field to prioritized maintenance activities through the use of network-based distribution reliability models. The ongoing automation of utility maintenance activities through computerized maintenance management systems, outage management systems, and remote condition monitoring will eventually provide the IT structure for driving this process. While a large amount of effort is required to build a proper failure rate model, there is a large payback in that maintenance activities can then be prioritized according to a dollar value. Having the means to assign a specific benefit value to maintenance will be invaluable as utilities continue to respond to pressure to reduce network operating costs.

References

1. Brown, Richard, Electric Power Distribution Reliability, Marcel Dekker, Inc., 2002.
2. IEEE Std C57.19.00-1991, IEEE Standard General Requirements and Test Procedure for Outdoor Power Apparatus Bushings, September 26, 1992.
3. IEEE Std C57.106-1991, IEEE Guide for Acceptance and Maintenance of Insulating Oil in Equipment, June 27, 1992.
4. IEEE Std C37.10-1995, IEEE Guide for Diagnostics and Failure Investigation of Power Circuit Breakers, 1996.
5. Microsoft Corporation, Microsoft .NET Framework Software Development Kit (SDK), 2001.
6. Microsoft Corporation, Microsoft SQL Server 2000 User's Manual, 2000.


    Methodology and Algorithm for Ranking Substation Design Alternatives

    K. Koutlev, Member, IEEE; A. Pahwa, Fellow, IEEE;

    Z. Wang, Member, IEEE, L. Tang, Member, IEEE

Abstract — The design of an electric power substation is a very complex process that depends greatly on customer functional requirements and existing boundary conditions. Many different solutions are possible to fulfill the substation functional requirements within these conditions; the problem is to select the one that comes closest to fulfilling the customer requirements. The paper presents a methodology and algorithm, implemented in a tool, for evaluating and comparing different substation alternatives. A systematic procedure using a multi-objective value hierarchy is applied to make the final decision. The three major objectives used in the decision-making process are Life Cycle Cost, Substation Performance, and Environmental Factors. The methodology includes customer preferences (weights) as well as an estimation of the values of all the attributes for