


How To

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems

This is the ITER I&C guideline to Fast Controllers. It is part of the Plant System Design Handbook (27LH2V) documentation set.

The document summarizes the various input and output (I/O) bus systems and communication methods between interconnected I/O systems. Potential alternatives are listed. A recommendation is made for the selection of specifications of both electrical and communications protocols to be used in ITER Instrumentation and Control Systems. A short list of suitable systems and ...

Approval Process
Author: Makijarvi P. (08-Feb-2013: signed) IO/DG/DIP/CHD/CSD/CDC
CoAuthor: (none)
Reviewers: Wallander A. (11-Feb-2013: recommended) IO/DG/DIP/CHD/CSD
Approver: Thomas P. (24-Mar-2013: approved) IO/DG/DIP/CHD

Document Security: level 1 (IO unclassified)
RO: Makijarvi Petri

Read Access AD: ITER, AD: External Collaborators, AD: Division - Control System Division - EXT, AD: Section - CODAC - EXT, AD: Section - CODAC, project administrator, RO, LG: CODAC Fast Controllers, LG: PCDH Fast Controllers, LG: CODAC team

IDM UID: 333K4C

VERSION CREATED ON / VERSION / STATUS: 08 Feb 2013 / 2.0 / Approved

EXTERNAL REFERENCE


Change Log

Title (Uid) | Version | Latest Status | Issue Date | Description of Change

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems (333K4C_v2_0) | v2.0 | Approved | 08 Feb 2013 | For PCDH v7

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems (333K4C_v1_3) | v1.3 | Approved | 05 Jan 2011 | For PCDH 6.0 - after internal review and with corrections from the proof reader (J. Poole)

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems (333K4C_v1_2) | v1.2 | Approved | 01 Feb 2010 | After PCDH 5.1 review, approved modifications done, links updated.

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems (333K4C_v1_1) | v1.1 | Signed | 14 Dec 2009 | Layout settings, IDM numbers and references corrected.

Guideline for Fast Controllers, I/O Bus Systems and Communication Methods between Interconnected Systems (333K4C_v1_0) | v1.0 | In Work | 07 Dec 2009 |


1 EXECUTIVE SUMMARY

The ITER Organization has selected PCI Express as the base I/O bus technology for use in ITER Instrumentation and Control System fast controllers.

o The use of Conventional PCI (32-bit parallel bus I/O devices) and of all its spin-offs (CompactPCI, PXI, ...) is facilitated by the backward compatibility of PCI Express.

The main recommended items of the ITER Instrumentation and Control System fast controller catalog are the PXI, CompactPCI and PXI Express I/O boards.

A PXI Express chassis is used to carry the I/O boards.

The PXI Express I/O chassis is separate from the CPU chassis, which will be a standard industrial computer.

o The two systems are interconnected using a PCI Express link.

o The industrial computer contains CPU, disk, memory and network connectivity.

o The configuration with separate I/O chassis and CPU/Network minimizes the ties to single vendors (vendor lock-in) and simplifies maintenance.

The usage of PXI Express chassis and PXI I/O boards creates a strong relationship with National Instruments and other members of the PXI Systems Alliance.

By using an external CPU/Network unit, the vendor lock-in is relaxed for the rapidly changing part of the hardware.

By running Linux and EPICS on external CPU/Network units the ITER Organization emphasizes its position against the use of proprietary software solutions on embedded hardware.

Performance upgrades will be easier because the I/O chassis will never be touched; only the industrial computer with CPU/Networking will be replaced, thus avoiding modifications and manipulation of the I/O cabling.

xTCA (µTCA and AdvancedTCA) is proposed as the platform for diagnostics which require high-end electronics design. This type of diagnostic application is not well covered by the CODAC Catalog of Commodity off the Shelf (COTS) items.

System integration services will be essential to configure and build I&C systems with fast controllers and they should be offered to all industrial partners by the ITER Organization.

o It would be unrealistic to suppose that all plant system manufacturers would be able to purchase, build and configure fast controller systems to comply with ITER CODAC requirements entirely by themselves.

o The ITER Organization should not only publish catalogs but also have industrial partners who would be able to propose system integration services to plant system manufacturers.


Table of contents

1 EXECUTIVE SUMMARY
2 INTRODUCTION
  2.1 PCDH context
  2.2 Objectives
  2.3 Scope of the Document
  2.4 Acronyms
  2.5 References
3 REVIEW OF BUS TECHNOLOGIES
  3.1 VMEbus systems
  3.2 PCI – Peripheral Component Interconnect
  3.3 PCI in Industry and in Science
    3.3.1 PICMG 1.0 and 1.1
    3.3.2 CompactPCI
    3.3.3 PXI
    3.3.4 PCI Express
  3.4 PCI Express in Industry and in Science
    3.4.1 PICMG 1.3
    3.4.2 CompactPCI Express, CompactPCI Plus and CompactPCI PlusIO
    3.4.3 PXI Express
    3.4.4 ATCA
    3.4.6 µTCA
4 I/O INTERCOMMUNICATIONS
  4.1 PCI Express I/O intercommunications
  4.2 Ethernet I/O intercommunications
5 SIGNAL CONNECTIVITY
6 EXAMPLE CONFIGURATIONS
  6.1 General Purpose Fast I/O
    6.1.1 How-to Build a General Purpose Fast Controller
    6.1.2 I&C Grade Fast Controller requiring large number of I/O modules
  6.2 DO’s and DON’Ts with I/O cards based on Conventional PCI
    6.2.1 DO: Integrate a DAQ System requiring 6U CompactPCI digitizers
    6.2.2 DON’T: General Purpose I/O requiring Conventional PCI or direct PCI Express I/O
  6.3 High-End fast controllers for specific projects
  6.4 Alternative solutions for Plasma Control front-end computers
    6.4.1 PCS - Activating the 10 Gb/s Ethernet Option of Basic Fast Controller
    6.4.2 PCS – GPU Processing Expansion
    6.4.3 PCS – Total segregation of pre-processing from front-end computer
    6.4.4 PCS – PCI Express Expansion in ATCA based systems
7 SYSTEM INTEGRATION
  7.1 Opportunities for Industrial System Integrators
  7.2 CODAC I&C Integration Kit
  7.3 How to make new I/O cards compliant with the ITER I&C Standards
8 CONCLUSIONS


2 INTRODUCTION

The ITER project’s long time span and the nature of the instrumentation and control (I&C) procurement procedures for the plant systems require that the ITER Organization defines and follows well recognized standards which are used both by industry and in physics experiments. For I&C manufacturers, one of the most important standards is the computer bus specification which is needed for the selection of input and output (I/O) devices, CPU boards and other components used in plant systems.

However, today’s I/O specifications cover not only the electrical and mechanical aspects of an I/O bus system but also the interconnection and communication protocol issues between sub-systems. The ITER Organization is following a top-down approach in the specification process. In the first stage, the communication protocol specifications and electrical specifications for I/O connectivity will be chosen. The second stage will be the selection of the mechanical form factors and commercial products for the chassis, CPU, I/O and communication modules.

2.1 PCDH context

The Plant Control Design Handbook (PCDH) [1] defines methodology, standards, specifications and interfaces applicable to the whole life cycle of ITER plant instrumentation and control (I&C) systems. I&C standards are essential for ITER to:

o Integrate all plant systems in one control system.
o Maintain all plant systems after delivery acceptance.
o Contain cost by economy of scale.

PCDH comprises a core document which presents the plant system I&C life cycle and recaps the main rules to be applied to the plant system I&Cs for conventional controls, interlocks and safety controls. Some I&C topics are explained in greater detail in dedicated documents associated with PCDH; this document is one of them.


2.2 Objectives

At the current first stage of the standards selection process, the primary objective of this document is to present the ITER Organization’s decision concerning communication protocols and the bus to use with input and output systems. This gives a solid base for selection of the fundamental technologies. The secondary objective is to give guidelines for the second stage of the selection process to allow rapid evaluation of commercial products which could – but not necessarily will – make their way into the ITER CODAC certified hardware catalog. This catalog is to be published in continuous editions, together with the Plant Control Design Handbook (PCDH) [1].

2.3 Scope of the Document

The standards and specifications presented are applicable to all CODAC input and output systems. However, control systems within a plant system containing exclusively slow I/O will be built using Siemens Series 7 PLC devices and are therefore out of the scope of this document. The document mainly addresses the definitions and standards to be used with plant system fast controllers. In exceptional cases, this document is applicable to slow I/O, when the system contains both fast and slow I/O (see PCDH [1] to learn more about slow and fast I/O definitions). The network interconnect is out of the scope of this document; the starting point of the Plant Operation Network interconnect is discussed in the ‘Integration Kit for PS I&C’ satellite document [9].

2.4 Acronyms

AMC Advanced Mezzanine Card – often used in ATCA and in µTCA

ATCA Advanced Telecommunications Computing Architecture (PICMG 3.0)

CPU Central Processing Unit

CODAC COntrol, Data Access and Communication

COTS Commodity off the Shelf

I&C Instrumentation and Control

I/O Input and Output

ISA Computer bus standard found in early PCs

µTCA microTCA – simplified specification from ATCA

PCDH Plant Control Design Handbook

PCI Peripheral Component Interconnect – computer bus standard

PCI Express Serialized communications protocol for computer I/O

PCI-X Extended PCI – computer bus standard

PICMG PCI Industrial Computer Manufacturers Group

PLC Programmable Logic Controller

PMC PCI Mezzanine Card – a piggy-back form factor

PROFINET Industrial Fieldbus communications protocol on Ethernet

PSH Plant System Host – a dedicated CODAC computer for Slow I/O management

PXI PCI Extensions for Instrumentation

PXI Express An evolution of PXI using PCI Express technologies

SAS Serial Attached Small Computer System Interface – computer disk connection


SATA Serial Advanced Technology Attachment – computer disk connection

TCP/IP Also called Internet Protocol Stack – A communication protocol

VLAN Virtual Local Area Network

VME Versa Module Eurocard – computer bus standard

2.5 References

1. Plant Control Design Handbook (27LH2V v7.0)
2. CODAC Catalog of I&C products – Fast Controllers (345X28 v2.0)
3. CODAC Conceptual Design (27LBW6 v1.1)
4. Fast Controller Survey (2M3YGM v2.2)
5. Technology Survey Report for Plant System Simulator project (ITER/CT/09/4100000771) (2NC4LT v2.0)
6. Get CODAC Core System http://www.iter.org/org/team/chd/cid/codac/coresystem
7. Cubicle Products (35LXVZ v2.1)
8. I&C cubicle internal configuration (4H5DW6 v2.0)
9. Integration Kit for PS I&C (C8X9AE v1.0)


3 REVIEW OF BUS TECHNOLOGIES

This chapter gives a short review of the different input and output technologies and communication methods considered during the design phase of the CODAC system. The intention is to give background information on the proposed specifications and standards in a simplified manner. The ITER Organization has conducted technology surveys on plant system fast controllers (see [4] and [5]) which should be consulted for an in-depth analysis of the technologies presented. Those who are already familiar with industrial I/O bus systems and PCI Express can safely skip this chapter.

3.1 VMEbus systems

VMEbus (VME) was created in the early 1980s, mainly as a result of Motorola’s effort to push for an open embedded bus standard based on its popular 680x0 family of processors. It is a parallel, asynchronous bus (in its original form) with one bus arbiter but allowing several bus masters for multiprocessor applications. Data and address lines are separated, with three different address spaces. Various form factors and variations have been produced since the introduction of VMEbus.

VMEbus has been widely successful and a large number of manufacturers have produced a huge selection of processor and I/O boards with excellent interoperability. Many high-energy physics experimental facilities have adopted VMEbus technology for their embedded front-end control and data acquisition systems.

The golden era of VMEbus based systems was in the 1990s, in its 6U height format with P1/P2 connectors and a 32-bit asynchronous bus. Single board computers for VMEbus were mainly 680x0 or PowerPC based.

The ITER Organization does not consider VMEbus based systems suitable for the ITER project for two reasons. Firstly, all attempts to move the VMEbus design from 32-bit asynchronous bus to 64-bit synchronous bus have failed to gain ground commercially. Secondly, there was no serial communications protocol specification for VMEbus available at the time when the PCDH [1] document was created.

Guideline 1-1. VMEbus based systems will not be used in the ITER Instrumentation and Control System or in any of its subsystems.

3.2 PCI – Peripheral Component Interconnect

There is no doubt that PCI is the all-time winner in computer peripheral bus systems, judging from the sheer number of installations and I/O cards available. Its specification is quite unexciting: a synchronous, parallel data bus running at 33 MHz with a 32-bit data width and multiplexed data and address cycles. The theoretical bandwidth for a single card is at most 132 MB/s.


The PCI bus was originally developed by Intel; the PCI-SIG industry group (www.pcisig.com) now maintains the specification. The most important specification, PCI 2.0, was published in 1993. The name has more recently been changed to Conventional PCI to distinguish it from the PCI Express specification.

What makes the PCI-bus a true winner is its capability of device enumeration and identification. This has allowed the creation of today’s plug-and-play enabled operating systems.
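This enumeration is directly visible on a modern Linux host, where the kernel exposes through sysfs the vendor and device identifiers it has read from each card's configuration space. A minimal sketch (assuming a Linux system with /sys mounted; not specific to any CODAC product):

```python
# List PCI devices as enumerated by a Linux kernel, via sysfs.
# Assumes a Linux host with /sys mounted; illustrative only.
from pathlib import Path

def list_pci_devices(sysfs_root: str = "/sys/bus/pci/devices") -> None:
    for dev in sorted(Path(sysfs_root).iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086 (Intel)
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: vendor={vendor} device={device}")

if __name__ == "__main__":
    list_pci_devices()
```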

In real-life systems the performance is hampered by bus arbitration because of the shared connection topology. The number of PCI cards attached to a PCI bridge is limited to five. In practice, systems will contain several PCI bridges connected in series to increase the total number of PCI cards. This will further increase the latency time of the most distant PCI devices.

The classic PC device topology, used on all PC based systems for well over a decade, is built around what are called the North (core system) Bridge and the South (I/O) Bridge. They are part of the CPU chipset (Intel) and to a large extent their quality and characteristics determine the performance and the scalability of the entire system.

There have been several design iterations trying to resolve the most evident performance bottleneck, the shared connection topology, by simply putting in more horsepower. As a result there are 64-bit PCI, 3.3 Volt 64-bit PCI and, last but not least, the PCI-X specification, which reaches 2.1 GB/s at 266 MHz. However, the physical limits of the bus make every step forward extremely painful.
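The peak figures quoted above follow directly from clock rate multiplied by bus width; the short sketch below reproduces the arithmetic (theoretical maxima only, not sustained throughput):

```python
# Theoretical peak bandwidth of a parallel bus: clock rate x bus width.
def parallel_bus_peak_mb_s(clock_mhz: float, width_bits: int) -> float:
    """Peak transfer rate in MB/s (1 MB = 1e6 bytes)."""
    return clock_mhz * (width_bits / 8)

print(parallel_bus_peak_mb_s(33, 32))    # Conventional PCI:  132.0 MB/s
print(parallel_bus_peak_mb_s(266, 64))   # PCI-X 266:        2128.0 MB/s (~2.1 GB/s)
```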

In spite of its evident drawbacks, the Conventional PCI specification has been proposed by the ITER CODAC team because it has such a dominant market position. Further PCI specification variants are studied in the following sections.

Guideline 1-2. Conventional PCI will not be used in the ITER Instrumentation and Control System as a plain parallel bus, nor as a unique mechanical form factor.

Guideline 1-3. The ITER Instrumentation and Control System will contain fast controllers that would allow usage of Conventional PCI cards. Their usage is, however, discouraged.


Guideline 1-4. The ITER Instrumentation and Control System will contain fast controllers that would allow usage of 32-bit and 64-bit PCI-X cards. Their usage is, however, discouraged.


3.3 PCI in Industry and in Science

The PCI bus found its way into industrial products very quickly. The governing specification body for PCI bus industrial applications is the PICMG Interest Group (www.picmg.org).

3.3.1 PICMG 1.0 and 1.1

Two of the most successful specifications are PICMG 1.0 (1994) and PICMG 1.1 (1995). These specifications are still widely used in industrial grade computers. They define a passive PCI/ISA bus backplane on which an industrial grade single board computer can be plugged. Everything can then be wrapped in an industrial grade, rackable enclosure. For example, to build a 4U height industrial computer with 4 ISA slots and 7 PCI slots, the following components would be selected:

o Select motherboard
o Select CPU board
o Select 2U or 4U casing

The ITER Organization does not consider PICMG 1.0 or 1.1 suitable for the CODAC system’s needs for two reasons. Firstly, they are based on the Conventional PCI bus, which is 32-bit technology. Secondly, the Team estimates that although industry’s needs will keep the market alive, there will be no further innovation.

Guideline 1-5. PICMG 1.0 or 1.1 will not be used in the ITER Instrumentation and Control System.

3.3.2 CompactPCI

Other, more rugged industrial standards followed PICMG 1.0. The CompactPCI (PICMG 2.0) specification defines a Eurocard PCI card form factor, assigning a PCI pinout on 2 millimetre hard metric connectors: J1 = 110 pins for PCI; J2 = 110 pins defined in different specifications, for Telecom, Ethernet, ANSI I/O, and so on. 3U and 6U form factors are defined, where the 6U format has additional J3/J4/J5 connectors that are used in many telecommunication and signal processing applications. PICMG 2.1 defines the hot swappable CompactPCI.


CompactPCI is a clever specification which was in direct competition with VMEbus based systems. The limitations of the PCI bus have always hampered the popularity of CompactPCI, while the PC compatibility of the CPU board, operating system and drivers, the massive and reliable extra pin population of the J2-J5 connectors and the excellent mechanical design have been the attractive factors.

The ITER Organization will not consider using CompactPCI as an I/O bus system because the only commercially viable specification is 32-bit and because its long-term market position is questionable due to lack of innovation.

The ITER Organization recognizes the availability and reliability of I/O cards based on the CompactPCI specification. Therefore the Team wishes to promote more advanced I/O bus solutions which maintain backward compatibility with CompactPCI I/O cards.

Guideline 1-6. CompactPCI will not be used as an I/O bus or as a dominant form factor in the ITER Instrumentation and Control System.

Guideline 1-7. The ITER Instrumentation and Control System allows using 3U height CompactPCI I/O cards within other, backward compatible form factors recommended in the CODAC fast controller catalog.

3.3.3 PXI

PICMG 2.8 defines PXI, namely the use of the J2 connector of CompactPCI for instrumentation purposes. National Instruments spun the specification off in 1997, creating the PXI Systems Alliance (www.pxisa.org). The rest is a commercial success story, where the driving force is the National Instruments software flagship, LabVIEW. Although PXI is backward compatible with CompactPCI, the majority of the more than 1000 PXI I/O cards available come either from National Instruments’ own catalog or from their commercial partners who have an interest in seeing their instrumentation solutions integrated into National Instruments’ products. For testing, laboratory instrumentation or similar applications there is a high probability that PXI/LabVIEW can provide a solution to meet the requirements.

Despite PXI’s commercial success, the ITER Organization is not considering defining the use of pure PXI based chassis and CPUs because they are based on 32-bit PCI bus technology.

Guideline 1-8. The ITER Instrumentation and Control System discourages the use of 32-bit, parallel bus PXI based chassis and CPUs.

Guideline 1-9. The ITER Instrumentation and Control System allows using 3U height parallel bus PXI I/O modules within other, backward compatible form factors recommended in the CODAC fast controller catalog.


3.3.4 PCI Express

The ITER Organization has studied the use of serial I/O bus protocols because of their impressive performance and rapid market penetration. PCI Express combines many assets of the old and new technology because it wraps the Conventional PCI specification in a communication protocol specification.

After banging its head against the wall with PCI-X, Intel came up with completely new thinking. The idea was to take a completely new approach to peripheral communications:

o Telecom techniques
o Serial communications
o Communication protocol
o Bus data is the payload
o Switches and routers
o Point-to-point connections
o Backward compatible with Conventional PCI
o No special graphics bus
o Very high bandwidth
o Scalable

The requirements for serial communication and the scalability for different performance levels come together in the PCI Express specification. A connection between two devices is done with lanes. A lane contains two pairs of connection media (copper, fibre): one pair for transmitting and one pair for receiving. Lanes can be added between devices giving a gain in performance. Data transmission multiplexing is done automatically by switching.

For example, a 4x lane device can be operated with just a 1x lane: it still works, but more slowly. This scalability can also be used to move a device hundreds of metres away from the host device, using a single pair of optical fibres carrying a 1x lane of PCI Express.


The PCI Express layers are the following:

o Software layer
o Transaction layer
o Data link layer
o Physical layer (logical and electrical sub-layers)
o Mechanical layer

The data payload is the actual bus data which is, of course, backward compatible with PCI bus data. Therefore it is trivial to create a bridge between a PCI Express switch and a Conventional PCI bus; in fact, all current CPU chipsets provide one such bridge by default, so Conventional PCI bus I/O cards can be connected to any PCI Express enabled computer. The different layers create about a 20% overhead on data transmission efficiency. Despite that, a single PCI Express 2.0 lane running at 5 GT/s (gigatransfers per second) delivers some 500 MB/s of data transfer speed per lane.
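The 500 MB/s figure follows from the signalling rate and the 8b/10b line encoding used by PCI Express 1.x/2.0, which accounts for most of the ~20% overhead mentioned above; throughput then scales with the number of lanes. A sketch of the arithmetic (usable rate per direction, ignoring packet header overhead):

```python
# Usable PCI Express throughput per direction: signalling rate times
# encoding efficiency, scaled by lane count. PCIe 1.x/2.0 use 8b/10b
# encoding, i.e. 8 payload bits for every 10 transmitted bits.
def pcie_throughput_mb_s(gt_per_s: float, lanes: int,
                         encoding_efficiency: float = 8 / 10) -> float:
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e6          # bits/s -> MB/s

print(pcie_throughput_mb_s(5.0, 1))      # PCIe 2.0 x1:  500.0 MB/s
print(pcie_throughput_mb_s(5.0, 4))      # PCIe 2.0 x4: 2000.0 MB/s
```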

Another important feature of PCI Express is virtual channelling, which allows prioritized routing of data packets. This ability to create virtual point-to-point connections is used today to build high speed networks between systems. In fact, the performance of 4x or 8x PCI Express based networks carrying TCP/IP as payload is much better than that of 10 Gb/s Ethernet.

Guideline 1-10. PCI Express will be used in the ITER Instrumentation and Control System and in its sub-systems as the principal I/O Bus System and as an alternative communication method between interconnected systems.

As discussed in section 3.2, Conventional PCI based I/O cards are available in different form factors in abundant quantities and the ITER Organization plans their use in the ITER Instrumentation and Control System and its different sub-systems. Only the pure Conventional PCI bus based form factors will be forbidden.


Guideline 1-11. Conventional PCI form factor cards cannot be used in the ITER Instrumentation and Control System and in its sub-systems while CompactPCI and PXI based cards can be used.

3.4 PCI Express in Industry and in Science

In industry, PCI Express has mainly been pushed by the PICMG interest group. There is a single but big obstacle to the fast propagation of PCI Express in the industrial world: the huge amount of Conventional PCI I/O used in industry. Luckily, the excellent backward compatibility of PCI Express is helping the transition. Instead of immediately building pure PCI Express based systems, manufacturers propose mixed systems so that the mechanical appearance and the number of Conventional PCI / PCI Express slots can be selected according to the end user's needs.

3.4.1 PICMG 1.3

The most obvious transition path from Conventional PCI to PCI Express in industrial applications is provided by the PICMG 1.3 specification. Its extensions even allow running an old PICMG 1.0 system using a PICMG 1.3 PCI Express enabled System Host Board (SHB, a single board computer).

As with PICMG 1.0, an industrial computer integrator and PICMG 1.3 components would be selected according to the particular requirements:

o Select motherboard (here a butterfly model for 2U)
o Select CPU board
o Select 2U or 4U casing (or even a shoebox!)

PICMG 1.3 based systems inherit a mechanical design that is largely inferior to form factors such as CompactPCI. The ITER Organization understands the obvious problems this presents for cabling and the reliability of some types of I/O cards. It is not suggested that the ITER Instrumentation and Control System could be built solely with PICMG 1.3 computers, but PICMG 1.3 based industrial grade computers have the following advantages that make them excellent elements in the CODAC fast controller catalog:

o Large casings with good cooling capacity allow low-cost solutions with high-end processors, such as Intel® Xeon®, with an excellent power/price ratio

o Extreme flexibility to resolve upcoming configuration problems, ranging from systems with PCI Express-only connectivity to systems mixing PCI Express, Conventional PCI and even 64-bit PCI-X

o Longevity and availability proven by PICMG 1.0

Guideline 1-12. PICMG 1.3 based systems will be used in the ITER Instrumentation and Control System.


Guideline 1-13. PICMG 1.3 based systems will be used for their CPU power and PCI Express connectivity in mixed, interconnected systems.

Guideline 1-14. PICMG 1.3 based systems will not be used to host high-end I/O cards which require either front-panel connectors or large, heavy or otherwise complicated connectors.

3.4.2 CompactPCI Express, CompactPCI Plus and CompactPCI PlusIO

CompactPCI Express is an extension of the PICMG 2.0 definition to define PCI Express-ready connectors and a backplane on the CompactPCI standard.

CompactPCI Plus (PICMG 2.30) is a supplement to the PICMG 2.0 CompactPCI standard. The workgroup started its work at the end of 2008 and the aim is to add a standard pin-out for backplane (User I/O) signals for serial high-speed interconnects. This allows four 1x lane PCI Express, four SATA, four USB and two Ethernet serialized connections to “satellite” slots.

CompactPCI PlusIO standardizes the above interfaces at the rear J2 connector of CompactPCI, maintaining full compatibility with the CompactPCI specification and thus allowing a migration path to CompactPCI Plus, which will be the major new standard for CompactPCI.

The ITER Organization welcomes the initiative and finds it an attractive way to modernize the CompactPCI standard while maintaining compatibility with the existing, huge catalog of CompactPCI I/O cards. In particular, the presence of USB 2.0 in the standard is considered very interesting. The fact that PCI Express is limited to a 1x lane only will, however, restrict its use to general purpose I/O.

Guideline 1-15. The ITER Organization will consider adding CompactPCI PlusIO compliant items into the CODAC fast controller catalog when compatible COTS products become available.


3.4.3 PXI Express

The PXI Systems Alliance (www.pxisa.org) has ratified a definition named PXI-5, which is the current definition of PXI Express. As the alliance states, the specification allows increasing a PXI system’s bandwidth from 132 MB/s to 6 GB/s by taking advantage of PCI Express in the backplane.

In fact, the specification is derived from the CompactPCI Express specification, maintaining backward compatibility with it (although not many CompactPCI Express I/O cards actually exist). The specification also maintains backward compatibility with 32-bit CompactPCI and PXI cards, allowing manufacturers (for all practical purposes, National Instruments) to create hybrid chassis that combine different standards within the same chassis.

PXI Express chassis manufacturers have been scarce (practically only National Instruments), although other manufacturers, such as Agilent Technologies, have since come up with their own PXI Express chassis. The ITER Organization considers PXI Express a good platform for use in the ITER Instrumentation and Control System for the following reasons:

o Excellent compliance with the 32-bit PXI and CompactPCI standards, allowing the use of an extremely wide range of COTS I/O cards

o Availability of some high-end PXI Express compliant I/O cards from National Instruments and Agilent Technologies

o Possibility to operate a PXI crate as a PCI Express extension (without National Instruments’ CPU) from another computer, for example from an industrial grade PICMG 1.3 computer

Guideline 1-16. PXI Express chassis hosted I/O sub-systems will be used in the ITER Instrumentation and Control System.

In order to minimize vendor lock-in and to maximize flexibility in the upgrade path regarding the operating system and CPU power, the ITER Organization considers it necessary to avoid the use of National Instruments’ embedded CPU controllers in the PXI Express chassis.


Guideline 1-17. A PXI Express chassis is recommended to be used as a PCI Express extension of another, CPU oriented industrial computer, such as a PICMG 1.3 based system.

3.4.4 ATCA

Advanced Telecommunications Computing Architecture (AdvancedTCA or ATCA) is another of the PICMG consortium specifications (PICMG 3.0 being the base specification). It contains an innovative chassis specification with no bus but a fabric connection between ATCA boards (aka blades). ATCA equipment originally targeted telecom applications where high speed interconnect communications are essential; other criteria were reliability, availability and serviceability. An ATCA chassis is highly manageable, which means that almost every aspect of its operation can be monitored remotely and actions can be taken, for example if an environmental sensor indicates an alert, or if a watchdog reports a non-functional blade. This, together with the hot-swappable electronic units, allows the creation of systems with high availability (minimum 99.999 % – "five nines" – or about 5 minutes of non-availability per year).
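The "five nines" figure translates into allowed downtime as a simple arithmetic check of the 5 minutes per year quoted above:

```python
# "Five nines" availability expressed as allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(0.99999), 2))   # ~5.26 minutes/year
```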

The most common ATCA blade fabric interconnect method is, by far, Ethernet (GbE), specified by the PICMG 3.1 standard. PICMG 3.4 specifies PCI Express use in the ATCA chassis for fabric interconnection. Other communication interconnection methods are available, e.g. RapidIO (PICMG 3.5), which is used for packet switched interconnection at FPGA chip level.

The selection of the interconnection method depends on the application and the data exchange requirements. For example, TCP/IP based communications without high performance criteria are easiest to set up with an Ethernet fabric, which is readily available in commercial products. PCI Express, on the other hand, allows the performance to be increased simply by adding lanes, as seen above. For example, it is possible to create a TCP/IP network connection between two blades simply by passing the TCP/IP as payload data in PCI Express; by using, say, eight lanes, the data transmission performance can easily be multiplied compared to a single link GbE connection between two blades. In general, PCI Express performs better with larger data packets, while RapidIO performs well with small data packets.
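The packet-size trade-off can be pictured with a toy amortization model: every packet carries a fixed header, so small payloads spend a larger fraction of the link on overhead. The header size below is a nominal illustration, not a measured protocol figure:

```python
# Toy model of link efficiency vs payload size: a fixed per-packet
# header is amortized over the payload, so larger payloads use the
# link better. The 24-byte header is illustrative only.
def link_efficiency(payload_bytes: int, header_bytes: int = 24) -> float:
    return payload_bytes / (payload_bytes + header_bytes)

for payload in (16, 64, 256, 1024):
    print(payload, round(link_efficiency(payload), 3))
# Small payloads waste a large share of the bandwidth on headers,
# which is why RapidIO-style protocols target small-packet traffic.
```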

However, only Ethernet based backplane fabrics are supported in most COTS systems on the market.

COTS I/O card availability in ATCA is poor for classic control system needs. There are many in-house ATCA I/O card designs in control systems for physics experiments. They take advantage of the large printed circuit area of the ATCA boards but on the other hand they also have to resolve the connectivity issues, which often lead to other specific design problems when the signals have to be routed through a dedicated ATCA Rear Transition Module (RTM).

Some ATCA blades have dedicated Advanced Mezzanine Card (AMC) or even PCI Mezzanine Card (PMC) carriers. With these cards, the COTS availability is certainly considerably expanded, but only the AMC boards fall into the category of PCI Express enabled devices, and it is precisely on the AMC side that the spectrum of available I/O cards is significantly smaller.

Even when it is possible to find a suitable COTS AMC I/O module, the problem remains of finding a suitable carrier and routing the I/O through an RTM module of specific design (an example design is due to W. Koprek & T. Jezynski, DESY). xTCA Extensions for Physics are currently being created under a PICMG Technical Subcommittee. The PDG.0 R1.0 standard addresses physics clocks, gates, triggers and timing on PICMG 3.0, amongst others. The standard will be available for ATCA in 2013, but the ITER Organization estimates that it will take a long time before its recommendations find their way into COTS products.

AXIe is another standard based on AdvancedTCA with extensions for instrumentation and tests (http://www.axiestandard.org/).

Today’s situation is that ATCA based systems cannot be considered for use as general purpose fast controllers in ITER instrumentation and control systems. However, many ongoing efforts to bring out standards that make ATCA based systems suit high energy physics experiments better look likely to make a breakthrough, especially for “high-end” applications such as plasma diagnostics. This is true in particular where performance is not the only issue and redundancy and high availability are also important.

The ITER Organization is actively prototyping the ATCA platform presented in the fast controller catalogue in different pilot projects related to the data acquisition use cases. When applicable, xTCA Extensions for Physics and AXIe standards are also studied in related projects. Timing and PCI Express interconnect solutions have been made available by external companies.

The above observations and the somewhat uncertain COTS market situation lead the ITER Organization to publish the following, quite generic guideline regarding the use of ATCA based systems in the ITER Instrumentation and Control System.

Guideline 1-18. ATCA with xTCA extensions for Physics (PDG.0 R1.0 or greater) is a recommended platform for high end diagnostics applications, such as DAQ applications participating in the Plasma Control System.


3.4.6 µTCA

Industry has quickly taken advantage of the ATCA specification and the outcome is a simplified µTCA (microTCA) specification from PICMG, the MTCA.0.

A microTCA system is basically a simplified carrier for Advanced Mezzanine Cards (AMC). The chassis management is maintained, although much simplified, by concentrating multiple tasks in a µTCA carrier hub. The hub carries from one to a maximum of twelve AMC mezzanine cards, depending on the mechanical arrangement.

The PCI Express (and Ethernet) connectivity remains in the microTCA zone connectors, but the main job of the µTCA carrier hub is to perform SATA, Ethernet and PCI Express packet switching between the AMC slots.

One of the arguments for using PCI Express is the possibility to use it for system extension. One could extend a µTCA chassis, for example, by connecting it to a PXI Express chassis, which would considerably enlarge the available COTS standard I/O selection. Some attempts have been made, but the general availability of PCI Express extension products for µTCA is non-existent.

The specification for xTCA Extensions for Physics, mentioned above, is more advanced on the µTCA platform. It is now possible to purchase a full width / full height µTCA shelf compliant with the xTCA Extensions for Physics specification MTCA.4, for example from Schroff or ELMA.

Compared to 6U VME and CompactPCI, this platform is particularly interesting with its µRTM (Rear Transition Module). Together with the AMC board, the total board area is considerable and can be used for modular design, for example doing the signal conditioning on the µRTM module.

The ITER Organization is aware of µTCA’s potential to become the “next VME” in terms of interoperability and availability of I/O cards. The problem is that it has not yet reached that stage. As far as industry is concerned, one can only wait until the market situation clarifies and winning technologies emerge. The ITER Organization is closely following the xTCA Extensions for Physics standard and is conducting pilot projects to evaluate the suitability of the µTCA/µRTM platform, especially for diagnostics applications. Timing and PCI Express interconnect solutions have been made available by external companies.

Guideline 1-19. MTCA.4 is a recommended platform for high end diagnostics applications, especially for designs where various signal conditioning requirements are fulfilled on separate µRTM modules while the DAQ FPGA base board on the AMC side remains the same.


4 I/O INTERCOMMUNICATIONS

The preceding chapter explained the transition from parallel bus systems to serial data connections where the actual bus information is wrapped in a communication protocol. This means that geographical limitations of input and output devices are largely relaxed and remote I/O is possible over hundreds of metres using fibre links.

4.1 PCI Express I/O intercommunications

A typical use of PCI Express I/O intercommunications is I/O expansion. An industrial system which is PCI Express enabled can easily interconnect with an I/O chassis of a different form factor, as long as the latter has a PCI Express slot and/or a bridge towards its Conventional PCI bus slots. For example, a PXI Express chassis can be controlled from any PCI Express enabled computer, such as a PICMG 1.3 system or even an office workstation.

A similar expansion card exists in AMC format, which allows bridging an AMC carrier blade in an ATCA shelf with an industrial computer holding Conventional PCI and PCI Express I/O cards. From the CPU and operating system’s point of view, all I/O boards are seen as ordinary PCI devices; there is no geographical notion of cards being remote rather than local.

Guideline 1-20. PCI Express will be used in the ITER Instrumentation and Control System for I/O expansion.

Another way to use PCI Express to interconnect systems is through an external PCI Express switch. The result is a meshed network of computers with a shared memory or reflective memory structure. This structure can be used in many ways, one of which is to create a high performance, real-time network between computers by running a network protocol stack on it.
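To software, such a PCI Express memory zone ultimately appears as an ordinary memory-mapped region. The sketch below conveys the idea on Linux by mapping a PCI device's BAR0 through sysfs; the device address is hypothetical, and real shared/reflective memory products ship with their own drivers and libraries:

```python
# Minimal sketch: memory-map a PCI device's BAR0 through Linux sysfs
# and access it like ordinary memory. The device address below is
# hypothetical; real reflective-memory products provide their own APIs.
import mmap
import os

BAR0 = "/sys/bus/pci/devices/0000:04:00.0/resource0"   # hypothetical device

fd = os.open(BAR0, os.O_RDWR | os.O_SYNC)
try:
    region = mmap.mmap(fd, 4096)                  # map first 4 KiB of BAR0
    region[0:4] = (0xCAFE).to_bytes(4, "little")  # write a test word
    print(hex(int.from_bytes(region[0:4], "little")))
    region.close()
finally:
    os.close(fd)
```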


Guideline 1-21. PCI Express is a recommended communication method between interconnected I/O systems in the ITER Instrumentation and Control System, but only if the latency and jitter requirements exceed those of the ITER CODAC Synchronous Databus Network (SDN).

4.2 Ethernet I/O intercommunications

There are several cases where Ethernet is an adequate communication method between interconnected I/O systems. One example is the Advanced TCA meshed backplane, PICMG 3.1 (see section 3.4.4).

If one takes the commercial reality into account, there is a much higher probability of finding a suitable ATCA shelf product with an Ethernet backplane fabric than with anything else. Direct interconnections between I/O cards could therefore be made using Ethernet; some designs have already exploited this possibility successfully.

Another important use is the various fieldbus protocols on Ethernet, PROFINET being the example here. The diagram below illustrates how fieldbus communication can be organized using VLANs on commercial L2 level Ethernet switches.

[Diagram: PROFINET Ethernet fieldbus traffic segregated by VLANs (IDs 2 and 1002) on commercial L2 Ethernet switches. A PSH provides network services (DHCP by MAC, DNS name/IP, NTP time); a CAS/CAC controller with IOC and a PLC (MPI/Ethernet) in Plant System A, Building and Power Cubicle A (Physical Location #1) communicate with PLC remote I/O modules in Cubicle B.]
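The VLAN segregation shown above relies on IEEE 802.1Q tagging, where a 4-byte tag is inserted after the source MAC address of each Ethernet frame. A minimal sketch of the frame layout (addresses are illustrative; 0x8892 is the PROFINET EtherType, and VLAN 1002 follows the diagram):

```python
# Build an IEEE 802.1Q VLAN-tagged Ethernet frame: the 4-byte tag
# (TPID 0x8100 plus priority/VLAN-ID field) sits after the source MAC.
import struct

def tag_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes,
              vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)        # PCP + VID
    return (dst + src + struct.pack("!HH", 0x8100, tci)
            + struct.pack("!H", ethertype) + payload)

frame = tag_frame(b"\x01\x02\x03\x04\x05\x06", b"\x0a\x0b\x0c\x0d\x0e\x0f",
                  0x8892,            # PROFINET EtherType
                  b"\x00" * 46,      # minimum-size dummy payload
                  vlan_id=1002)      # fieldbus VLAN from the diagram
print(frame.hex())
```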

Guideline 1-22. Switched and meshed Ethernet is a recommended communication method between interconnected I/O systems in the ITER Instrumentation and Control System.


5 SIGNAL CONNECTIVITY

The connectivity between the I/O signals and fast controller I/O cards in many cases resembles the slow controller connectivity problems. The ITER Organization is coordinating the choice of signal connectivity products so that whenever possible, similar signal patch panels and other products can be used with both slow and fast controllers.

Guideline 1-23. The slow and fast controller catalogs published by the ITER Organization will make reference to a common way of building ITER I&C Cubicles, including the cabling and signal interfacing.

For more information please consult the I&C cubicle internal configuration guide (4H5DW6 v2.0) [8].


6 EXAMPLE CONFIGURATIONS

This chapter gives some block diagrams for possible configurations based on the conclusions of the preceding chapters.

6.1 General Purpose Fast I/O

Guideline 1-24. A separated I/O chassis and CPU/Network chassis is the preferred solution in the ITER Instrumentation and Control System.

[Block diagram: catalog I/O cards (PXI, CompactPCI, PXI Express) in a catalog PXIe chassis, connected through a x1 PCI Express bus extension to a catalog 4U PICMG 1.3 chassis with 1 Gb/s Ethernet and an optional 10 Gb/s Ethernet.]

The lifespan of the PXIe chassis and the I/O units will be the lifetime of the ITER project. The CPU/Network chassis’ lifetime is estimated to be 7 to 10 years, after which it will be replaced to provide the latest CPU and networking power for the system and control software.

The “PICMG 1.3 chassis” is an industrial level computer, but not one particularly designed for redundancy. However, the most obvious reliability issue, the spinning hard disk, is resolved by using two RAID-1 configured SSDs (solid state disks). A dual power supply is usually available, except for systems with the most powerful GPU cards.

The ITER Organization estimates that some 60% of fast I/O problems can be resolved with this simple type of configuration.

6.1.1 How-to Build a General Purpose Fast Controller

This section simplifies the detailed instructions which are available in the I&C cubicle internal configuration guide (4H5DW6 v2.0) [8]. Datasheets of the proposed items are pointed to in parentheses by their IDM references. The following items from the PCDH Fast Controller Catalog [2] have to be obtained:

1. Standard ITER cubicle from the Cubicle Products Catalog [7]


a. Accessories: four L-shaped glider shelves (to install two computer chassis) and caged counter nuts to attach them into the mounting rails.

2. I&C Grade PICMG 1.3 PC 4U industrial computer (BKLT5Z)

3. PXI-Express hybrid chassis, for example PXIe-1065 (BW8XTV)

4. PCI-Express connectivity between the industrial computer and the PXI-Express chassis, for example the one PCI-Express lane version (BW8XTV) – 192 MB/s

5. General purpose I&C Multi-I/O board with full CODAC Core System support, for example PXI-6259 (C8S269) – 2.8 MS/s total maximum sampling rate shared by all channels

6. Synchronization and timing board for system synchronization using the PTP protocol, time based triggering and automated time stamping in CODAC Core System (provided by the ITER Organization, together with the I&C Integration Kit [9] for procurement arrangements) (34RRE2 v1.2).

7. Two termination blocks for the PXI-6259 Multi-I/O board, NI SMB-2090A (incorporated in ITER part number for PXI-6259 board kit, see the reference above for the board).

It may be desirable to purchase the following additional item(s) for convenience:

8. A CD-ROM (DVD) disk drive with USB connectivity for the CODAC Core System installation

[Pictures: the example configuration installed into an ITER standard cubicle, with the general purpose fast I/O (PXI-6259) and the SMB-2090A patch panel in the back.]

The following steps are required in order to get the CODAC Core System installed:

1. If the institute or company has not been registered as a CODAC Core System user, the instructions for registration published on the ITER Organization’s web site [6] should be followed.

2. Within a few days, instructions on how to download an ISO image from the internet will be delivered, together with the installation instructions.

3. The ISO image should be used to burn a CD. Connect the CD player to the USB port of the industrial computer.

4. Ensure that the system’s upper network plug is connected to the organization’s network.



a. The network plug must be allowed to connect to the Internet and reach the CODAC Core System distribution server (http://codac-dist.iter.org) – this can easily be verified by connecting a laptop or workstation to the plug and using a browser to connect to the above server.

b. In case of doubt, contact your IT or network personnel and explain what you want to do – they may need to disable the network proxy usage for the port you are using, depending on your corporate network policy.

c. If the system does not boot from the USB CD player, press the Del (Delete) key when you see the boot splash screen. Go to the BIOS settings, Boot section, and verify that the USB device has been detected, recognized and enabled as a boot device.

5. Follow the instructions on the CD’s boot screen to select the appropriate system role (usually, fast controller with development role). When the connection to the CODAC Core System distribution server has been established, the installation will take anything from two to several hours, depending on the network connection speed.

6. Once the system has been installed, log in using the credentials provided by the CODAC Core System support team. Browse through the example applications to find one which makes use of the PXI-6259 multi-I/O board in this configuration.

a. Study the application and make some modifications to learn how to interface hardware and signals with the CODAC Core System.
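As an illustration of what such an application does at the Channel Access level, the sketch below reads and writes process variables with the pyepics client library; the PV names are hypothetical, so substitute the ones published by the example application on your controller:

```python
# Minimal EPICS Channel Access sketch using the pyepics client library
# (pip install pyepics). PV names are hypothetical placeholders.
from epics import caget, caput

PV = "FC1:AI:CH0"              # hypothetical analog-input channel PV

value = caget(PV)              # read the current sampled value
print(f"{PV} = {value}")

caput(PV + ".SCAN", 6)         # set the record scan menu (6 = "1 second")
```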

6.1.2 I&C Grade Fast Controller requiring large number of I/O modules

As the example below illustrates, a very large number of PXI, PXI Express and CompactPCI 3U I/O cards can be interfaced by adding a second 18 slot crate to the system and using the double channel PCI Express connectivity product from the catalog.

What is the total number of PXI Express chassis that can be connected to a single PICMG 1.3 industrial computer? In practice only two or four, depending on the fast controller catalog PICMG 1.3 items, because each high-speed link requires four lanes (x4) of PCI Express, and two chassis therefore require eight lanes (x8). The available number of x8 PCI Express slots in the PICMG 1.3 passive backplane is the limitation. Backplanes exist which can take multiple SHB computers (not in the catalogue), each of which drives its own set of PCI Express extension slots. Using this type of motherboard, a total of twelve PXI Express chassis can be connected to a single PC 4U PICMG 1.3 computer running a total of six SHB computers, each connecting to two PXI Express chassis. Contact the ITER Organization’s CODAC Section ([email protected]) to discuss your problems and obtain more information about this type of configuration.


[Block diagram: catalog I/O cards (PXI, CompactPCI, PXI Express) in two catalog PXIe chassis (one hybrid), connected through 2 x PCI Express x4 bus extensions to a catalog 4U PICMG 1.3 chassis with 1 Gb/s Ethernet and an optional 10 Gb/s Ethernet.]
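The lane budgeting described above reduces to a one-line calculation; a sketch (the real limit also depends on the specific catalog backplane):

```python
# How many PXI Express chassis fit on one PICMG 1.3 host, given x4
# lanes per high-speed chassis link and the number of x8 slots on the
# passive backplane. Figures follow the text above.
def max_chassis(x8_slots: int, lanes_per_chassis: int = 4) -> int:
    return (x8_slots * 8) // lanes_per_chassis

print(max_chassis(1))   # one x8 slot  -> 2 chassis
print(max_chassis(2))   # two x8 slots -> 4 chassis
```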

6.2 DO’s and DON’Ts with I/O cards based on Conventional PCI

As explained in the chapters above, the ITER Organization does not promote or encourage the use of I/O cards based on Conventional PCI, but neither does it forbid them completely. For example, PXI cards and CompactPCI cards can be used in the 3U hybrid PXIe chassis found in the fast controller catalog. There are also other ways to connect some interesting existing I/O cards, such as 6U CompactPCI cards, using PCI Express to PCI bridges and expansion chassis, which are readily available. This section gives some examples of the good and the bad practices.

6.2.1 DO: Integrate a DAQ System requiring 6U CompactPCI digitizers

In the example below, there is an external 9U chassis from a well-known instrumentation manufacturer, carrying 6U CompactPCI digitizers with 80 channels, each sampling at 1 GS/s. Without discussing whether the same performance could be achieved some other way, assume that the chassis is here and needs to be integrated in a fast controller solution from the ITER catalog: select a solution with no embedded CPU from the manufacturer’s catalog, but with a chassis bus extension module making a cPCI to Conventional PCI connection, and put the latter into the PICMG 1.3 industrial computer. The computer now sees both the PXI Express hybrid chassis and the cPCI chassis.

The software integration of the non-standard cPCI chassis into the CODAC Core System software, the EPICS control system framework and the Red Hat Linux operating system still needs to be done. For non-catalog items, the ITER Organization has no ready solutions available, so the integration work remains the user’s responsibility.


[Block diagram: catalog I/O cards (PXI / PXI Express, CompactPCI) in a catalog PXIe hybrid chassis, connected by a PCI Express x4 bus extension to a catalog 4U PICMG 1.3 chassis (1 Gb/s Ethernet, optional 10 Gb/s Ethernet); out-of-catalog I/O (example: 6U CompactPCI digitizers) attaches through a Conventional PCI / cPCI bus extension.]

6.2.2 DON’T: General Purpose I/O requiring Conventional PCI or direct PCI Express I/O

If there are some old, perhaps in-house built, Conventional PCI card designs, or even a modern PCI Express board, the temptation to put such a card directly into the PICMG 1.3 industrial computer should be avoided.

This would be acting against two guidelines: Guideline 1-11 explains that Conventional PCI form factor cards must not be used in the ITER I&C, and Guideline 1-14 explains that PICMG 1.3 based systems are used to host the CPU, networking, disk and chassis interconnection components, not I/O.

[Block diagram: an out-of-catalog I/O card (example: a specific PCI design) plugged directly into the catalog 4U PICMG 1.3 chassis (1 Gb/s Ethernet). DON’T!]


6.3 High-End fast controllers for specific projects

At a later date, the fast controller catalog will certainly contain ATCA CPU blades and a managed ATCA shelf solution as base components for more ambitious electronics projects. The ITER Organization has prototyped some example configurations for data acquisition and pre-processing use cases in Diagnostics Plant System I&C using the ATCA products in the fast controller catalog. Configuration guidelines based on the experience of these pilot projects are available on demand; the CODAC team can point you to the relevant reports in IDM.

6.4 Alternative solutions for Plasma Control front-end computers

It might not be necessary, or even possible, to use a high-end ATCA platform for all diagnostics, and even in the Plasma Control System (PCS) it might be necessary to have alternative, less demanding solutions available. The proposed basic fast I/O solution is expandable. The pictures below illustrate some alternatives for how the pre-processing and the actual data acquisition can be physically separated into different entities.

6.4.1 PCS - Activating the 10 Gb/s Ethernet Option of Basic Fast Controller

Network communication performance can be boosted simply by installing the 10 Gb/s Ethernet option on a standard catalog PC 2U PICMG 1.3 computer.

[Figure: From catalog: a PXIe chassis and a 4U PICMG 1.3 chassis; 1 Gb/s Ethernet plus 10 Gb/s Ethernet (requires x4 PCIe).]

6.4.2 PCS – GPU Processing Expansion

Parallel processing power can be added to a front-end computer by adding GPU processor units (e.g. 2 x 120 GPU processors).

[Figure: From catalog: a PXIe chassis and a 4U PICMG 1.3 chassis connected by a PCI-Express x8 bus extension; 1 Gb/s Ethernet plus 10 Gb/s Ethernet (requires x4 PCIe); a 2 x 120 GPU unit requiring 2x x8 or 2x x16 PCIe.]

6.4.3 PCS – Total segregation of pre-processing from front-end computer

Switched PCI Express techniques can be used to create shared memory zones between the front-end computer and the pre-processing computer (Dolphin, One Stop Systems).

[Figure: PCS front-end (catalog PXIe chassis and 4U PICMG 1.3 chassis, PCI-Express x8 bus extension, 1 Gb/s Ethernet, 10 Gb/s Ethernet requiring x4 PCIe) totally segregated from the PCS pre-processing (2 x 120 GPU unit, SDN 10 Gb/s Ethernet) through an 8 x8-lane PCIe switch; optional point-to-point connectivity solutions also exist.]
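Vendor programming interfaces differ (Dolphin, for instance, ships its own SISCI API), so the following is only a generic sketch of the shared memory pattern, assuming a hypothetical driver that exposes the PCI Express shared window as /dev/pcie_shm0 with a 64 KiB size:

    /* Sketch only: /dev/pcie_shm0 and the window size stand in for a
     * vendor-specific PCI Express shared memory API. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_WINDOW_SIZE (64 * 1024)

    int main(void)
    {
        int fd = open("/dev/pcie_shm0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the shared window; writes become visible to the peer
         * host through the PCI Express switch. */
        volatile uint32_t *win = mmap(NULL, SHM_WINDOW_SIZE,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        win[0] = 0xCAFEF00Du;  /* publish one word to the pre-processing host */

        munmap((void *)win, SHM_WINDOW_SIZE);
        close(fd);
        return 0;
    }

Whatever the actual API, the design point is the same: the pre-processing computer sees the front-end data through a memory window rather than through a network protocol stack.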

6.4.4 PCS – PCI Express Expansion in ATCA based systems

As stated in [4.1], should the Plasma Control System fast controllers be constructed using ATCA, it is necessary to arrange PCI Express connectivity towards the future HPN solutions. Two solutions can be envisaged:

1. An AMC PCI-Express expansion card (picture).
2. A purpose-designed Rear Transition Module (RTM); design work has been ordered and it will be added to the catalogue. This solution is proposed in the hardware catalog [2].

7 SYSTEM INTEGRATION

7.1 Opportunities for Industrial System Integrators

Building fast controllers for plant systems is a demanding task. It is not simply a question of selecting hardware from a catalog: a range of integration skills is needed to make sure that the system fulfills all the requirements.

[Figure: Integration layers of a fast controller - CPU/networking, I/O bus extension, I/O chassis, I/O modules, signal interfacing, operating system(s), CODAC Core System, EPICS and device drivers - bounded on one side by the ITER Instrumentation and Control System Requirements and the PCDH satellite catalogues and documents, and on the other by the Plant System Requirements and the Plant System Design Handbook.]

Obviously a plant system manufacturer has a good understanding of its specific domain and can easily reach conclusions regarding signal interfacing and other issues close to the local control problem. But many other aspects of the integration process require specific knowledge about operating systems, networking, EPICS and the ITER Instrumentation and Control System in general. It is unlikely that all plant system manufacturers could reach a similar level of knowledge simply by reading the documentation provided by the ITER Organization.

Therefore system integrators who can propose complete systems for the specific needs of each plant system are needed.

System integrators would work closely with the ITER Organization and would be certified, both by the ITER Organization and by the Domestic Agencies, to sell complete, CODAC compliant solutions to plant system manufacturers, who could then concentrate on their specific domain of expertise.

7.2 CODAC I&C Integration Kit

The CODAC I&C integration kit is an organizational concept to distribute some of the main components of the previously described control system to the plant system manufacturers. The concept has been created by the ITER Organization to help the Domestic Agencies and their contractors in their work to build CODAC compatible I&C systems. The use of identical CODAC I&C integration kits will enforce the correct level of standardization in the plant systems.

A CODAC I&C integration kit will be loaned to all organizations and entities that have signed a procurement arrangement to build an ITER plant system. It contains all the software tools necessary to develop and manage the construction phase of the plant system I&C, as well as connectivity tools allowing the plant system manufacturer to use the CODAC collaborative tools. These tools are available over the internet, creating a worldwide system to support the I&C software development, configuration, testing, acceptance and integration functions.

Physically, the I&C integration kit is a small set of hardware and it contains:

1. A single industrial computer
2. An industrial grade network switch
3. Network connected cubicle health monitoring unit of ITER design
4. Set of accessories and cables
5. [optionally] ITER High Performance Network interface boards, delivered for some plant systems

The kit allows the rapid commissioning of the 1st I&C cubicle of the plant system, constructed by the plant system manufacturer. Read more in the Integration Kit for PS I&C (C8X9AE v1.0) [9].

[Figure: The 1st I&C cubicle of the plant system, indicating which components come from the I&C Integration Kit and which are not in the kit; the HPN cards and the cubicle monitoring unit are called out.]

7.3 How to make new I/O cards compliant with the ITER I&C Standards

The ITER Organization is fully aware that there can never be a fast controller catalog covering all of the I&C needs, especially those of diagnostics. New I/O cards will appear, and it is in the general interest to integrate them into both the ITER I&C system and the CODAC Core System, and finally to make them appear in the fast controller catalog in order to promote standardization.

The picture below illustrates how, ideally, all solutions would be found in the PCDH catalogues, so that the best integrated solutions would be selected by the plant system designer together with the responsible officers and experts from the ITER Organization. If no solution is available, the COTS market must be searched, with the help of an external expert, for a suitable candidate product to resolve the given problem.

[Figure: Solution selection workflow. A fast I/O or DAQ use case and its requirements are examined by the plant system designer together with the IO responsible officers and IO experts against the PCDH catalogues; if a new solution is required, the COTS market is searched with the help of an external expert, leading to a solution proposal.]

When a suitable candidate product has been found, it must be prototyped. The ITER Organization is willing to help in the prototyping effort. Since all products are ultimately destined for use in the ITER I&C, software items such as Linux device drivers and EPICS Device Support must be developed at the earliest possible moment. The CODAC Core System is the framework for the development and therefore also for the prototyping. Since ITER is not only a scientific project but also a huge industrial and engineering challenge, a successful prototype must be developed further and industrialized.
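The scale of that first software step is modest. A prototype Linux driver can begin as a skeleton like the one below (a sketch only: the proto_io name and the empty read routine are placeholders, and a real driver for a PCI Express board would add device probing, BAR mapping and DMA handling):

    /* Skeleton Linux driver for a prototype I/O board, built on the
     * kernel's miscdevice helper; all names are placeholders. */
    #include <linux/fs.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>

    static ssize_t proto_io_read(struct file *f, char __user *buf,
                                 size_t len, loff_t *off)
    {
        /* Real hardware access (register reads, DMA) would go here. */
        return 0;
    }

    static const struct file_operations proto_io_fops = {
        .owner = THIS_MODULE,
        .read  = proto_io_read,
    };

    static struct miscdevice proto_io_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "proto_io",
        .fops  = &proto_io_fops,
    };

    /* Registers /dev/proto_io on module load, removes it on unload. */
    module_misc_device(proto_io_dev);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Skeleton driver for a prototype ITER I/O board");

Starting from such a skeleton, the matching EPICS Device Support can be developed in parallel inside the CODAC Core System, as required above.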

The picture below illustrates the prototyping phase, which should include the Linux and EPICS software support from the very first step of the project.

[Figure: Prototyping phase. The solution proposal enters HW/SW prototyping, including the Linux driver and EPICS support, and then proceeds to industrialization.]

The industrialization of a prototype I/O board is a major project which reveals the actual total cost of ownership (TCO) of the proposed I/O board when used throughout the lifespan of the ITER Project.

For installation purposes, the rack mounting, cabling, signal termination and EMC issues must be specified, planned and documented. The software support must reach production level quality and be maintained at that level. The procurement of the I/O board must be arranged worldwide and made consistent over time using, for example, ITER part numbers. Calibration, replacement and maintenance must be planned beyond commissioning, over a time span of several years, even decades. The picture below illustrates this continuous process, which is under the control of the ITER Organization, for all items appearing in the PCDH related catalogues.

[Figure: Continuous lifecycle process for PCDH fast controller catalog items. Industrialization feeds software support (Linux driver, EPICS, tests, documents, software maintenance, e.g. for new CODAC versions), hardware support (procurement, ITER part numbers, cubicle integration, obsolescence management), lifecycle management, and SDD hardware and software integration.]

8 CONCLUSIONS

The ITER Organization has selected PCI Express as the base I/O bus technology to be used in the ITER instrumentation and control system fast controllers. The selection is based on its excellent backward compatibility with the various Conventional PCI bus technologies and on the dominant market position of PCI Express in COTS systems compared with other serial I/O bus technologies.

PXI, CompactPCI and PXI Express I/O boards will be used as the main, recommended items for the ITER Instrumentation and Control System fast controller catalog.

A PXI Express chassis is used to carry the I/O boards.

Segregation between the CPU/network resources and the I/O sub-system is promoted by providing PCI Express I/O expansion systems. In this model, an industrial computer with a good price/performance ratio provides the CPU power and the network connectivity, while the I/O is done through a PXI Express chassis connected to the industrial computer, typically over an x8 PCI Express link. Since the CPU/network power requirements evolve more rapidly than the basic I/O requirements, the industrial computer unit can be replaced when necessary without touching the I/O and, most importantly, the cabling.

The ITER Organization is promoting the use of Ethernet based interconnection techniques between I/O sub-systems together with the PCI Express based I/O.

For high-end I/O requirements in diagnostics systems, AdvancedTCA and MTCA.4 (µTCA) systems are considered the strongest candidates, together with high-end PXI Express data acquisition solutions.

The ITER Organization has created a PCDH satellite document, “ITER Instrumentation and Control System Catalog – fast controllers”, to standardize the selection of hardware as far as possible.

The various example configurations given in this document emphasize the flexibility of the technical selections made. The complexity of system selection, purchasing, construction and configuration is recognized, and the role of a system integrator providing this work as a service is suggested.

To maximize the speed of penetration of the CODAC hardware and software standards into the plant system development and manufacturing cycle, the document introduces the concept of the ITER CODAC I&C Integration Kit: a complete, turn-key CODAC Core System development solution with all the necessary hardware and software components.