
Project Documentation Document TN-0102

Revision E

Instrument Control System Design

John Hubbard Software

March 2017


REVISION SUMMARY:

1. Date: January 28, 2010 - July 18, 2011
Revision: Drafts 1-12
Changes: Created. Added ICS sequence diagrams, instrument adapter thoughts and general thoughts on various sections. Multiple design updates. Fixed page printing problem, organized document according to interface layer and SIF. Added deployment diagrams and mock-up ICS GUI. Removed references to Synchrobus. Updated so there are 4 modulator parameters. Corrected data type of modT0. Updated categories of instrument parameters. Updated TRADS, modulator attribute names, headers and VC meta-data. Removed sections on SIF and Database Services.

2. Date: August 2, 2011
Revision: A
Changes: Minor updates regarding modulator attributes, headers and TCS attributes. Final updates prior to release for review.

3. Date: September 1, 2011
Revision: B
Changes: Fixed mistake in compliance matrix, minor update to 3.1.7 (Synchronization) and addition of 3.1.8 (Coordination) based on David's comments.

4. Date: May 2012
Revision: Draft C1
Changes: Updated design to reflect new version of SPEC-0023.

5. Date: February 2014
Revision: Draft C2
Changes: Incorporated Science feedback.

6. Date: July 2014
Revision: C
Changes: Updated after the creation of the ICS DRD. Released for HLS FDR.

7. Date: October/November 2016
Revision: D
Changes: Input from Science and additional updates to reflect ICS zsh-release as-built.

8. Date: March 2017
Revision: E
Changes: Changed tense to reflect as-built; updated UI section.


Table of Contents

1. OVERVIEW
1.1 REFERENCES
1.2 ABBREVIATIONS
2. BASIC OPERATION
2.1 ICS CONTEXT
3. DESIGN OVERVIEW
3.1 OBSERVATION MANAGEMENT SYSTEM
3.2 INSTRUMENT ADAPTER
3.3 RENDEZVOUS
3.4 TIME REFERENCE DISTRIBUTION SYSTEM
3.5 THE ICS USER INTERFACE (GUI)
3.6 HARDWARE REQUIREMENTS
3.6.1 ICS Deployment
3.6.2 ICS Early Performance Numbers
4. OMS DETAILED DESIGN
4.1 OMS INTERFACES
4.1.1 OMS to OCS Interface
4.2 OMS IMPLEMENTATION
5. IA DETAILED DESIGN
5.1 INSTRUMENT ADAPTER INTERFACES
5.1.1 OMS to Instrument Adapter Interface
5.2 IA IMPLEMENTATION
6. RENDEZVOUS DETAILED DESIGN
6.1 RENDEZVOUS SERVICE INTERFACES
6.1.1 Rendezvous Helper API
6.1.2 Helper to Rendezvous Component Interface
6.2 RENDEZVOUS SERVICE IMPLEMENTATION
6.2.1 Rendezvous Component Implementation
6.2.2 Rendezvous Callback Implementation
6.2.3 Rendezvous Access Helper Implementation
6.2.4 Rendezvous Adapter Implementation
6.2.5 Rendezvous Events
7. UI DETAILED DESIGN
8. TESTING
9. UML DIAGRAMS


1. OVERVIEW

This document describes the design of the Instrument Control System. It is written for two different

audiences: reviewers and implementers/maintainers. Both groups of readers should have a grasp of the overall design of the DKIST High Level Software (HLS). This includes knowing what the four principal systems are (OCS, ICS, TCS and DHS), understanding the frameworks on which the principal systems

have been implemented (CSF and Base) and a basic understanding of how Experiments and Observations

will be carried out at the DKIST. If you are not familiar with the above, you are urged to read the

referenced documents before beginning to read this document.

The DKIST Instrument Control System (ICS) is the principal system responsible for the operation of the

DKIST facility instruments. It is a layer of software between the Observatory Control System (OCS) and

the instruments that implements a simple control interface for the instruments and relieves the OCS of the

burden of managing the instruments themselves. The ICS responds to OCS commands and events,

coordinating and distributing them to the various instruments. The ICS presents a single narrow interface

to the OCS, while synchronizing and coordinating the actions of the instrument controllers underneath it.

1.1 REFERENCES

The following documents are used or referenced in this document:

1. SPEC-0013, Software Concepts Definition.
2. SPEC-0022, Common Services Framework Software Design.
3. SPEC-0023, Instrument Control System Specification.
4. SPEC-0036, Operational Concepts Definition.
5. SPEC-0063, Interconnect and Services Specification Document.
6. SPEC-0113, Facility Instrument Interface Specification.
7. SPEC-0122, DKIST Data Model.
8. TN-0064, Observatory Control System Reference Design Study and Analysis.
9. TN-0088, Base Software for Control Systems.
10. TN-0152, The Standard Instrument Framework.
11. ICD 3.1.4-4.2, ICS to OCS.

1.2 ABBREVIATIONS

The following abbreviations are used within this document:

1. ATST – Advanced Technology Solar Telescope
2. BDT – Bulk Data Transport
3. CSF – Common Services Framework
4. DHS – Data Handling System
5. DKIST – Daniel K. Inouye Solar Telescope
6. IA – Instrument Adapter
7. ICS – Instrument Control System
8. OCS – Observatory Control System
9. TCS – Telescope Control System


2. BASIC OPERATION

The interface between the OCS and ICS1 is configuration-based and essentially stateless. Commands

from the OCS, ICS engineering GUI (or any outside actor) are submitted to the ICS top-level controller

(atst.ics), using DKIST CSF Configuration objects. Configurations are used to send the appropriate

observing parameters to the instruments. These configurations contain information about the Experiment

and Observing Program IDs, Observing Task, list of instruments participating in the experiment (the

active instrument list), a list of those instruments who control the pacing of the observation, parameters

for each of the instruments, and information about the state of the telescope. These configurations and the

Actions that they cause are referred to as Task Groups (see OCS/ICS ICD for more details). The ICS

passes the relevant portion of the Task Group to each instrument2. The interface to the instrument is also

an essentially stateless configuration-based interface. After the ICS has submitted the configuration to the

instruments it awaits their completion. As the instruments complete the ICS compares the completed

instruments against the pacing instruments. When all pacing instruments have finished the ICS cancels

the rest. When all instruments have reported completion, the ICS notifies the OCS. While the above is a

good summary of the behavior, the actual logic associated with canceling non-pacing instruments is

somewhat more complicated. It is formally defined in SPEC-0163, the ICS Design Requirements Document; specifically requirements 20070, 20080 and 20090.

The ICS (with help from the Instruments) supports a two-pass approach for carrying out Instrument

Programs. The first pass is intended for setup. The instrument is given an IP along with a flag indicating

that the TCS is not configured. In the first pass the instrument is expected to fetch its script and prepare

for the upcoming observation and return. The assumption is that the instrument will be re-issued the

same IP a second time, but with a flag indicating that the TCS is configured. At this point the instrument

picks up where it left off and begins the data collection phase of the observation. Most of this capability

is provided within Instrument Controller, but the ICS does need to be aware that the pacing instrument list

only applies when the TCS is configured and the instruments are collecting data.

It should be pointed out that while the assumption is that the OCS sends two copies of each Task Group

to the ICS (in the first Configuration tcsConfigured is false and in the second it is true), there is no requirement that the OCS send Task Groups in this manner. Because the entire contents of the Task Group are always sent down, the ICS is always capable of 'doing what it's told' and will never end up in a

broken state. This means that if the OCS were to only send a Task Group with tcsConfigured=true

the ICS would accept such a configuration and pass it along to the instruments for immediate execution.

Conversely if the ICS receives a Configuration with tcsConfigured=false followed by another

Configuration with tcsConfigured=false and a set of different parameters (e.g. different Instrument

Programs) the ICS would queue the IPs from the new TG. After the instruments return from setting up

for the now defunct IPs the ICS will pass IPs from the new Task Group to the instruments. The

instruments are responsible for 'discarding' the pre-configure state from the first Task Group's IP when the

new Task Group's IP arrives.

1 See ICD 3.1.4-4.2, ICS to OCS, for the formal definition of this interface.

2 See SPEC-0113, Facility Instrument Interface Specification, for the formal definition of this interface.
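To make the configuration-based interface concrete, the sketch below models a Task Group as a flat attribute map and shows the two-pass submission (tcsConfigured false, then true). The attribute names, the TaskGroupSketch class and the printed output are illustrative assumptions only; they are not the actual CSF API or the attribute names defined in the OCS/ICS ICD.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: a Task Group modeled as a flat attribute map.
// Attribute names are assumptions, not the ICD-defined names.
public class TaskGroupSketch {

    static Map<String, Object> makeTaskGroup(boolean tcsConfigured) {
        Map<String, Object> tg = new LinkedHashMap<>();
        tg.put("experimentId", "exp-001");                    // observation metadata
        tg.put("obsProgramId", "op-001");
        tg.put("activeInsts", List.of("visp", "vbiBlue"));    // instruments in the experiment
        tg.put("pacingInsts", List.of("visp"));               // instruments that pace the observation
        tg.put("visp:ipIds", List.of("ip-100", "ip-101"));    // IPs passed by reference (IDs)
        tg.put("vbiBlue:ipIds", List.of("ip-200"));
        tg.put("tcsConfigured", tcsConfigured);               // false = TG-Configure, true = TG-Execute
        return tg;
    }

    public static void main(String[] args) {
        // First pass: instruments fetch scripts and set up while the TCS is still moving.
        Map<String, Object> tgConfigure = makeTaskGroup(false);
        // Second pass: the same Task Group, but the TCS has reached its target state.
        Map<String, Object> tgExecute = makeTaskGroup(true);
        System.out.println("TG-Configure: " + tgConfigure);
        System.out.println("TG-Execute:   " + tgExecute);
    }
}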


2.1 ICS CONTEXT

[Figure 1: block diagram of the ICS showing the OCS above the ICS interface layer (ICD 3.1.4/4.2); within the ICS, the OMS, the Rendezvous Controller and one Instrument Adapter (IA) per instrument (VBI Blue, VBI Red, ViSP, DL-NIRSP, Cryo-NIRSP, VTF, ASI, visitor instrument); below, the Instrument Controllers (ICs) behind the SPEC-0113 Facility Instrument Interface; also shown are the Engineering GUI, TRADS and the Summit Database Services.]

Figure 1: A block diagram of the Instrument Control System, illustrating the main components, the Observation

Management System (OMS), the Instrument Adapters (IAs) and the Rendezvous Controller. The diagram also shows the

Instrument Controllers (ICs), the Time Reference and Distribution System (TRADS), and the Database Services. The

database services are provided by the OCS for all high-level software entities at the summit. The Instrument Controllers

are provided by the respective instruments. Also shown in blue is the ICS-provided ICS Engineering Graphical User

Interface.


3. DESIGN OVERVIEW

The DKIST Instrument Control System (ICS) is the principal system responsible for the coordination of

instruments within the DKIST Facility. It is a layer of software between the OCS and the instruments. It

implements a simple control interface for the instruments and relieves the OCS of the burden of managing

them. The ICS responds to OCS commands, coordinating and distributing them to the various

instruments as required. The ICS presents a single narrow interface to the OCS, while synchronizing and

coordinating the actions of the instrument controllers underneath it. The ICS comprises the software and

hardware required to implement these control systems.

The ICS requires no specific knowledge about the instruments to be controlled. All information about the

instruments used in an experiment and their parameter settings is passed by the OCS through the ICS,

which uses the instrument list to extract and forward the parameters to the appropriate instrument

controllers (ICs). All ICs, which do have detailed knowledge about their underlying instruments, use the

same standard narrow interface. This allows new instruments to be added without having to modify the

interface or any existing ICs. If an instrument implements the Facility Instrument Interface it can be

controlled by the ICS. This is true for both facility instruments as well as visitor instruments.

The ICS consists of two main components: the Observation Management System (OMS), which provides

the interface to the OCS, and a set of Instrument Adapters (IAs), which provide a standardized interface

between the OMS and their respective Instrument Controllers. A context diagram illustrating the

structure of the ICS is shown above in Figure 1.

The ICS is constructed using the DKIST Common Software Framework without the need for any

additional third party libraries. This means that the ICS makes use of CSF’s event and connection

services for communications and CSF’s log service to log pertinent state changes, errors and information

necessary for the reconstruction of instrument operations. Because the ICS is built on top of the CSF and

relies on the command-action-response model, the ICS provides a non-blocking interface for command

submission. The ICS makes use of CSF’s Property and Application Databases for application

persistence. As per the CSF standards the ICS uses Ethernet control over the LAN. By using CSF and

following the best practices for Configuration construction the ICS automatically gains configuration

traceability. The ICS relies on its host OS to provide it with time. Because the ICS runs on the standard

CSF Linux distribution this will be UTC. It is constructed following typical Best Software Practices,

including not requiring the system to reboot/restart. The ICS application is able to start in under a minute

as is typical with other CSF applications.

3.1 OBSERVATION MANAGEMENT SYSTEM

The Observation Management System (OMS) manages instruments as part of an ongoing experiment. It

is the role of the OMS to provide an interface between the OCS and the instruments involved in an

experiment. The OMS is a control layer between the experiment management of the OCS and the

individual instrument control systems and the operation of the instrument mechanisms. The OMS is

responsible for organizing the instruments required by an OCS Observing Program, sending instrument

parameters to them, and collecting and returning their completion states. The OMS is also responsible for

canceling some instruments (non-pacing instruments) when others (pacing instruments) have completed

their observations. The OMS also provides a location for canceling/aborting the ICS’s portion of an

Observing Program. See Section 4: OMS Detailed Design, for details about the OMS.

3.2 INSTRUMENT ADAPTER

As shown in Figure 1, there is a Controller called an Instrument Adapter (IA) between the OMS and each

Instrument Controller (IC). Instrument Adapters are responsible for queuing Instrument Programs for

their associated Instrument Controller. The primary reason for IAs being standalone controllers is the


modularity that it provides. Breaking the IA functionality into a separate Controller/Class allows the

OMS to focus on the logic behind completion and canceling non-pacing instruments, while the IA

handles all of the queuing of IPs. The Instrument Adapters also make canceling/aborting an instrument’s

entire IP queue simpler. Finally the Standard Instrument Adapter could be replaced with a custom

adapter to support visitor instruments. The Standard Instrument Adapter is capable of interfacing to any

instrument implementing the Facility Instrument API. This means that for all first light instruments the

same Controller class is used.

See Section 5: IA Detailed Design, for details about the Instrument Adapters.

3.3 RENDEZVOUS

The ICS provides a loose synchronization service for instruments. The Rendezvous Service provides an

API for entities to call to request a ‘common’ time. The entities wishing to rendezvous tell the service

which other entities they want to rendezvous with and then await notification from the service that all

other entities are ready. Supporting this service is a CSF Component (managed by the OMS) that listens

for rendezvous events and provides a rendezvous time when all entities are ready. The Rendezvous

Service is only a service. Instrument Scripts must be developed to use the service in order to actually

achieve loose synchronization.

3.4 TIME REFERENCE DISTRIBUTION SYSTEM

The Time Reference Distribution System is responsible for distributing the common time to all interested

entities within the DKIST telescope. It consists of a Grand Master Clock, which receives a GPS time

reference, and then distributes the time reference throughout the observatory via IEEE-1588 Precision

Time Protocol (PTP), and as an NTP Stratum 1 Server.

3.5 THE ICS USER INTERFACE (GUI)

The ICS Engineering UI consists of two parts. The first part is a standalone GUI through which Task

Groups may be constructed, and submitted to the ICS. It is meant for engineering use only, primarily to

test the ICS in the absence of the OCS and for debug purposes. The second part is a status monitoring

‘panel’ that can be embedded within other GUIs (e.g. the OCS). This panel allows for monitoring of

ongoing observations, and the ability to cancel/abort elements of those observations. The panel is

embedded within the OCS GUI and appears in the ICS standalone GUI. Both elements are implemented

using Java Engineering Screens (JES).

3.6 HARDWARE REQUIREMENTS

The ICS proper consists of a single host located in a rack in the Computer Room (located adjacent to the

Control Room). The host may be either a physical machine, or a virtualized environment. The ICS host

has at least 4 GB of RAM and at least 16 apparent x86_64 COTS processing cores running CentOS Linux 7 or newer (e.g. a 1U ASLab Lancelot 1890 with one Xeon E5-2630 v3 CPU and 4x8 GB RAM). A development machine with a similar configuration has no problems booting in under 4

minutes. Changes to this specification are likely as processor technology and CSF evolve. The

development machine has shown itself to be powerful enough that it never rises above 50% CPU or disk

utilization. The systems are interconnected to the DKIST control network as described in SPEC-0063,

Interconnects and Services Specification Document. Power, cooling, and inter-host communications are

provided by DKIST.

The ICS GUI runs on a single host located in the DKIST Control Room. The host is a physical machine,

consisting of at least 16 GB of RAM and at least 16 apparent x86_64 processing cores running CentOS Linux 7 with the Gnome Desktop Manager (e.g. an ASLab Marquis C531 with a Core i7-5960X).

Changes to this specification are likely as processor technology and CSF evolve. The systems are


interconnected to the DKIST control network as described in SPEC-0063, Interconnects and Services

Specification Document. Power, cooling, and inter-processor communications are provided by DKIST.

3.6.1 ICS Deployment

A deployment diagram for the ICS is shown in Figure 2. The OCS and the Instrument Hosts are shown

for context. All communications between nodes in the diagram use either the CSF connection service

(point-to-point) or the CSF event service (publish-subscribe) over Ethernet (TCP/IP). The OCS and ICS

are shown deployed in separate nodes, but it may be possible to deploy them in the same node. This will

be determined during construction. The OCS and ICS node(s) will be located in the DKIST computer

room. The Instrument Hosts and their Instrument Controllers are shown as separate nodes. It is assumed

that each instrument will have its own host, installed under the Coudé Lab floor but (by virtue of CSF) the

ICS is agnostic to instrument deployment.

[Figure 2: deployment diagram showing the OCS Server (OCS and the CSF Services: DB, Event, Connection), the GUI Station (ICS GUI), the ICS Server (OMS, vbiRedIA, vbiBlueIA, vispIA, vtfIA, dlNirspIA, cryoNirspIA, Rendezvous) and the per-instrument Inst Hosts running their instIC.]

Figure 2: A deployment diagram of the ICS showing the OMS and Instrument Adapters and some of the other systems of interest.

3.6.2 ICS Early Performance Numbers

Early prototyping of the ICS has allowed for the gathering of some performance numbers. These include

things like how long the ICS takes to start up, how quickly it can accept a Task Group, query the

Instrument Program DB, and pass the IP off to the Instrument. It is important to remember that the test

system does not accurately reflect the final environment that the ICS runs in. Thus even though the

system used for testing was likely slower (slow HDD, slow CPU) there were other advantages (e.g. DBs

likely had less load).


ICS Startup: The script used to start the ICS takes about 20 seconds to run. Approximately 5 seconds of

the time is spent starting up the Jython Environment and CSF context that the script runs under. The time

that the startup script actually spends starting the ICS container, and its components, is approximately 15

seconds. This includes deploying and starting the ICS container (starting the JVM process) and

transitioning the ICS proper, 5 instrument adapters, and the Rendezvous component into the running state.

At this point the ICS is ready to begin accepting configurations. Even if the databases were being

particularly hard hit by other systems it seems unlikely that it would impart a 40 second delay into

starting the ICS. Thus the 1 minute startup requirement should be easily met.

ICS TG to IP turnaround time: From a general observational efficiency perspective the most interesting

number is probably the time that it takes the ICS to take a submitted Task Group and perform the

Instrument Program DB lookup, place the IP into a configuration and submit it to the Instrument. Early

testing shows that, given a Task Group with three instruments and two IPs per instrument, the time between the submission of the TG and the receipt of the IP at the last instrument was 61 milliseconds. The IPs used for

testing were trivial, and while a larger IP would take longer to transfer from the DB it should not have a

significant impact on the overall time.

ICS completion detection time: After the time it takes to start the IPs associated with a TG the next most

interesting number is probably how long it takes the ICS to realize that an instrument is done and respond

accordingly. There are three cases here: 1) the time it takes the ICS to detect that an IP is complete and submit the next IP in an IP Sequence to the instrument, 2) the time it takes the ICS to realize that all pacing instruments are done and issue cancel commands to the remaining instruments, and 3) the time it takes the ICS to realize that all instruments are done with all IPs and report on its public interface that it is done.

Using the same TG configuration described above (with a single pacing instrument) it took the ICS 27

milliseconds to detect the completion of an IP and feed the next IP in the sequence to the instrument. It

took the ICS 14 milliseconds to detect that the pacing instrument was complete and issue cancels to the

non-pacing instruments. Finally, it took the ICS 17 milliseconds to detect the completion (canceled) of

the last instrument and report its completion. Note that these tests were carried out using test instruments

and do not take into account any instrument overhead. If an instrument were to take a second to detect its own completion, then the ICS's completion detection would actually come 1 + 0.017 = 1.017 seconds after the IP was truly done.


4. OMS DETAILED DESIGN

This section describes the design of the Observation Management System in more detail, with an

emphasis on the CSF constructs used. The OMS is the interface between the OCS and the Instrument

Adapters, which provide a standard interface to the instruments.

4.1 OMS INTERFACES

The OMS has two major interfaces: the main command interface to the OCS and an interface to each of

the Instrument Adapters. The OMS-OCS interface is discussed in detail in ICD 3.1.4/4.2 Instrument

Control System to Observatory Control System; it should be used as the primary reference for the

interface. The details of the OMS-IA interface are discussed in the MAN-0006 Instrument Control System

Manual appendix. Only a brief overview of the interface is described here.

4.1.1 OMS to OCS Interface

The OMS/OCS interface is formally described in ICD 3.1.4/4.2, Instrument Control System to

Observatory Control System. Only a brief summary of the key concepts of the interface is provided here.

The OMS is the top-level interface of the ICS and is registered in the DKIST connection service under the

name atst.ics. Instrument Controllers under the OMS are given similar hierarchical names, such as

atst.ics.vbiBlue for the Visible Broadband Imager Blue instrument controller.

The OMS is the top-level master controller for the ICS, responsible for coordinating and synchronizing

the actions of the instrument controllers below it. Each instrument in the system is given a name in the

hierarchy of the ICS; for example:

atst.ics # Top-level ICS controller (the OMS)

atst.ics.visp # Visible Spectropolarimeter (ViSP) Instrument Controller (IC)

atst.ics.vbiRed # Visible Broadband Imager Red (VBI red) IC

atst.ics.vbiBlue # Visible Broadband Imager Blue (VBI blue) IC

atst.ics.vtf # Visible Tunable Filter (VTF) IC

atst.ics.dlNirsp # Diffraction-limited Near-IR Spectropolarimeter (DL-NIRSP) IC

atst.ics.cn # Cryogenic Near-IR Spectropolarimeter (Cryo-NIRSP) IC

atst.ics.asi # A Simple Instrument (ASI) IC

The OMS is implemented as a subclass of the DKIST CSF ManagementController, implementing the

full set of technical lifecycle commands and the functional Command-Action-Response model for

commands, as described in SPEC-0022-1, Common Services Framework User’s Manual. This will be

discussed in more detail in the sections that follow.

All communication of commands from the OCS to the OMS is via CSF configurations. The OMS parses

and interprets the configurations, extracting the appropriate portions and sending them to the

corresponding Instrument Controllers, via the Instrument Adapter. The OMS then monitors the

instrument controllers for completion and sends status information back to the OCS.

4.1.1.1 Task Groups

The fundamental unit of work that the OCS sends to the ICS is the Task Group. It consists of a list of

instruments, parameters for those instruments, the telescope status, and some observation metadata. From

the ICS perspective there are two different kinds of Task Groups: Configure, and Execute. Both Task

Groups contain all of the same information, but in a TG-Configure the tcsConfigured flag is false.

This indicates that the telescope has not yet achieved its target state. In a TG-Execute the


tcsConfigured flag is true, indicating that the TCS has achieved its target state, and the observation

may commence.

The general sequence of commands is expected to be similar to the following:

The OCS sends all parameters for a Task Group in one CSF configuration. The OMS extracts the

parameters relevant for each instrument and then sends the parameters (as Configurations) to the

appropriate instruments via the Instrument Adapters. This activity is assumed to begin while the

TCS is still moving into position (i.e. the initial Task Group is a TG-Configure). The OMS

monitors the progress of the Instrument-Program Configurations being executed by the various

Instruments and reports the status, including final completion, to the OCS using the standard CSF

Action/Response callback mechanism.

When the TCS has reached its target state, the OCS sends the corresponding TG-Execute. The

OMS forwards the relevant Task Group information to all active instruments. The instruments

begin their observations as soon as they have individually finished their setups. The OMS

monitors the completion progress of the configurations and reports back the status, including final

completion to the OCS.

After the TG-Execute has been sent, the OCS may send the next TG-Configure to the OMS. The

parameters in these configurations are sent to the appropriate instruments as soon as possible,

allowing the instruments to set up for the next observation as soon as they have completed the current observation. Having the next TG-Configure waiting in the queue is referred to as pre-configuration.

The OMS monitors the completion status of the observation actions started with TG-Execute.

When all of the instruments that are required to complete their observations for this Task Group

(the pacingInsts set) have completed their observing actions, the OMS terminates the

observing activities of the remaining instruments participating in the observation and sends the

final completion callback to the OCS. See the discussion in Section 4.4 of the ICS/OCS ICD

regarding the activeInsts and pacingInsts sets for further details.

Additional observations are executed by repeating the pattern described above. This process

repeats until all the observations for an experiment are completed or the experiment is cancelled

or aborted by the operator.

4.1.1.2 OCS/TCS/ICS Interactions:

In general, the OCS sends configurations to both the OMS and the TCS when setting up for a new

Experiment or Observing Program. The OMS immediately begins configuring all the participating

instruments while the TCS is processing its configuration. After the TCS is configured (in position), the

OCS sends a TG-Execute configuration to the OMS. The OMS forwards relevant parts of the Task

Group to the participating instruments to indicate that they may start their observing activities.

There are two variations to this process depending on whether pre-configuration is requested or not. The

remainder of this section discusses these variations in more detail and presents sequence diagrams and

descriptions for each.

The first example shows a sequence of two Observations without pre-configuration. The sequence

diagram is shown below in Figure 3.

1. At the start, Configurations are sent to both the OMS (cfg1a) and the TCS (cfg1b) (shown in red

lines). The TCS is receiving its demand state. The OMS is receiving a TG-Configure.

2. The OMS assigns the command to one of its action threads (OMS T1 in this case). The action

thread parses the configuration and sends the appropriate portions to the Instrument Adapters of


the participating instruments (the ViSP and VBI in this example). The TCS begins processing its

configuration as well, setting up for this observation.

3. The Instrument Adapters forward the configurations on to their instruments, which begin their configure actions. When the configure actions are complete, the instruments respond to their IAs with a done. The IAs then respond to the OMS with done.

4. Both the TCS and OMS respond with done callbacks to the OCS when their actions are complete,

done cfg1a for the OMS and done cfg1b for the TCS. In this particular case, the OMS completes

before the TCS.

5. When the TCS action completes, the OCS sends a TG-Execute Configuration to the OMS (cfg1x,

shown with blue lines). The OMS assigns this command to the next available action thread

(OMS T1 in this case). The action thread passes the command on to the Instrument Adapters,

which are idle3, and begin processing it immediately.

6. Each instrument starts its observing actions as soon as it receives its part of the TG-Execute.

Upon completion, it responds with a done callback to the IA, which in turn responds to the OMS

with done.4

7. When all instruments have finished their actions (successfully, canceled, or with errors), the OMS

reports the completion status to the OCS (done cfg1x).

8. Once the Observing Task is complete, the OCS sends configurations for the next Task Group to

both the OMS (cfg2a) and the TCS (cfg2b), and the entire process begins again. In this case, the

TCS action is shorter than before and the VBI is still busy processing the setup configuration

when the OCS issues the tcsConfigured command (cfg2x). As a result, the command is

queued at the Instrument Adapter until the VBI completes its setup and is ready to begin

observing. The dashed yellow line in the VBI IA thread indicates the queuing. Note that the

ViSP has completed its setup actions by the time the tcsConfigured command arrived, and is

able to start its observing actions immediately.

9. The OCS sends the tcsConfigured command as soon as possible, regardless of whether the

instruments have completed their setup actions. This allows instruments to begin their observing

actions as soon as they are ready, without having to wait for any other instruments to complete their setups before proceeding, and helps to maximize efficiency.

3 If the TCS had finished first, the Instrument Adapters would queue the TG-Execute on a per-IA basis. As soon as the instrument was available, it would begin carrying out its portion of the Task Group. The assumption is that long instrument configurations are the exception rather than the rule.

4 If there is a desire to start the instruments at the 'same' time there are a few approaches for supporting this, but they rely on behavior outside of the control of the ICS. One option would be for an OP script to wait until the TG-Configure completes before submitting the TG-Execute; such an OP script should generally result in all IP Scripts beginning their execute phase to within a few 10s of milliseconds. Another option would be for instruments to internally rely on the Rendezvous Service to obtain a common reference time; this is dependent on support within any Instrument Programs, but if present may result in data collection being synchronous to the sub-millisecond level.


Figure 3: A sequence diagram showing the interaction between the OCS, TCS, OMS and instruments for a typical observing sequence. In this case there is no pre-configuration.

[Sequence diagram: lifelines for the OCS, TCS, ICS OMS (action threads OMS T1 and OMS T2), ViSP IA / ViSP T1 and VBI IA / VBI T1, exchanging the cfg1a/cfg1b, cfg1x, cfg2a/cfg2b and cfg2x configurations and their done callbacks.]


The second example, shown in Figure 4, examines the effect of using pre-configuration, in which the

OCS sends configurations to the OMS as soon as possible. This allows instruments to configure

themselves for the next Observing Task as soon as they have completed their current observing activities,

maximizing the efficiency of the system. The sequence diagram for this scenario is shown in Figure 4. This is similar to the first example except that the TG-Configure configuration for the next observation (cfg2a) is sent immediately after the first TG-Execute configuration (cfg1x). The OMS immediately sends this configuration to the IAs, where it is queued until the previous configurations have completed. The VBI begins to act on cfg2a as soon as it completes its first observation (cfg1x),

while the ViSP is still acquiring data. This allows the VBI to begin its configuration activities sooner

than in the previous example, where it had to wait idly for the ViSP to finish its observing activities

before it could act on cfg2a.

Figure 4: A sequence diagram showing the interaction between the OCS, TCS, ICS and instruments using pre-configuration. Notice the use of the third action thread and the queuing of configurations in the IAs.

No special software is required to implement pre-configuration. It is achieved by the OCS sending

configurations as soon as possible and by using only a single action thread in the IAs so that the incoming

configurations get queued when the IA is busy. Note that even though the IA will queue multiple

configurations, direct access to the instrument is still available through the Instrument Engineering GUI.

The Instrument GUI connects directly to the Instrument Controller, which is below the IA and is multi-threaded. The Instrument Controller is not shown in the sequence diagram.

[Sequence diagram: same lifelines as Figure 3, with the next TG-Configure (cfg2a) submitted immediately after cfg1x and queued in the IAs until the current configurations complete.]


A minimum of two action threads in the OMS is required to implement the desired behaviors illustrated

above. However, different scenarios may require more action threads: sending multiple setup

configurations or sending a pre-configuration before the current setup configuration is complete. In order

to support these scenarios, the OMS should use a growable thread pool for its tasks.
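A growable pool is available directly from the standard Java concurrency utilities; the sketch below shows one way to obtain and use one. Whether the OMS actually uses this mechanism or a CSF-provided pool is not specified here, so treat this only as an illustration of the "grow as needed" behavior; the action bodies are hypothetical stand-ins.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal illustration of a growable action-thread pool.
// A cached thread pool creates new threads on demand and reuses idle ones,
// so concurrent Task Group actions never block waiting for a fixed-size pool.
public class GrowableActionPool {
    public static void main(String[] args) {
        ExecutorService actionThreads = Executors.newCachedThreadPool();

        // Hypothetical action bodies standing in for Task Group processing.
        actionThreads.submit(() -> System.out.println("processing TG-Configure (cfg1a)"));
        actionThreads.submit(() -> System.out.println("processing TG-Execute (cfg1x)"));
        actionThreads.submit(() -> System.out.println("processing pre-configuration (cfg2a)"));

        actionThreads.shutdown();   // accept no new actions; running ones finish normally
    }
}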

4.2 OMS IMPLEMENTATION

The OMS is implemented in CSF as an extension of the ManagementController class, which is

defined in the Base Software extensions to CSF (TN-0088, Base Software for Control Systems). The

OMS manages just the lifecycle of the Rendezvous Component (as components have no actions to

manage) and both the lifecycle and actions of the Instrument Adapters. It reads its list of Instrument

Adapters from the Property Database during initialization. During operation, it parses incoming

configurations, extracting and forwarding the appropriate portions to each controller in its assigned

controller list. It monitors completion of the subordinate configurations and responds with a completion

callback to the configuration source.

The Management Controller defined in the CSF Base Software is extended to add the following

functionality to implement the OMS:

• The OMS keeps a list of all Task Groups that are currently running or queued to it, and what their statuses are.

• The doSubmit() method is overridden to verify that all of the required attributes are present and the configuration is valid. The doSubmit() method is responsible for rejecting nonsensical configurations (e.g. a configuration that has an Observing Program ID but is missing an Experiment ID), and warning about valid but potentially incorrect configurations (e.g. IP information present for an instrument that is not in the active instruments list). See the OCS/ICS ICD for details on valid vs. invalid Configurations. The amount of validity checking here is kept to a minimum to ensure that the ICS is capable of accepting or rejecting commands submitted on its public interface within 0.1 seconds.

• The localDoAction() method is overridden to add the newly started Task Group to the end of the Task Group list, and to verify that the active instruments are online before starting an observation.

• The makeWorkerConfig() method is overridden to forward all global attributes (experiment meta-data and Telescope information) and the IP information to all of the instrument adapters. It additionally forwards all instrument-specific attributes to the appropriate Instrument Adapter.

• The oneDone() method is overridden to update the Task Group's status (e.g. marking instruments that have completed as done/aborted) and to determine when to cancel the remaining active instruments (e.g. all pacing instruments have completed successfully). Instrument failures will cause the ICS to raise an alarm and treat the instrument as aborted for the sake of continuing/canceling other instruments. For the details of the ICS's cancel/abort logic see the detailed requirements in the ICS DRD. (A sketch of this pacing rule follows this list.)

• The handleWorkerReport() method is overridden to update the status of the associated Task Group.

• The OMS Controller makes use of a Posting TAB to post the event defined in the OCS to ICS ICD.

o The Status Posting TAB is responsible for posting the regular status event (at 1 Hz) containing the complete current status of all known, running Task Groups.


• The OMS Controller makes use of a custom CSF Action object. It overrides the getNewAction() and getAction() methods in order to return an OMS Action. The OMS Action maintains a pointer to the Task Group associated with the action. This allows oneDone() and handleWorkerReport() to gain a reference to the Task Group associated with the worker action without having to walk the entire Task Group list.

• The OMS/ICS relies on the standard CSF Controller's Action Manager to ensure that once a command has been accepted, it will begin execution within 0.1 seconds.
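The pacing rule applied by oneDone() can be summarized with the small, self-contained sketch below. The class, method and instrument names are illustrative assumptions only; the real OMS works in terms of CSF Actions, Task Group objects and the detailed requirements in the ICS DRD.

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the pacing-instrument completion rule:
// once every pacing instrument has finished, the remaining active instruments
// are cancelled; once everything has finished, the Task Group is reported complete.
public class PacingLogicSketch {
    private final Set<String> activeInsts;
    private final Set<String> pacingInsts;
    private final Set<String> finished = new HashSet<>();

    PacingLogicSketch(Set<String> activeInsts, Set<String> pacingInsts) {
        this.activeInsts = activeInsts;
        this.pacingInsts = pacingInsts;
    }

    /** Called when one instrument reports that it is done (or aborted). */
    void oneDone(String instrument) {
        finished.add(instrument);
        if (finished.containsAll(pacingInsts)) {
            for (String inst : activeInsts) {
                if (!finished.contains(inst)) {
                    System.out.println("cancel " + inst);   // stand-in for a cancel command
                }
            }
        }
        if (finished.containsAll(activeInsts)) {
            System.out.println("Task Group complete");      // stand-in for the done callback
        }
    }

    public static void main(String[] args) {
        PacingLogicSketch tg = new PacingLogicSketch(
                Set.of("visp", "vbiBlue", "vbiRed"), Set.of("visp"));
        tg.oneDone("vbiRed");   // non-pacing instrument done; nothing special happens
        tg.oneDone("visp");     // pacing set complete -> cancel vbiBlue
        tg.oneDone("vbiBlue");  // everything done -> Task Group complete
    }
}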

See Section 9 for a UML class diagram of the OMS.


5. IA DETAILED DESIGN

This section discusses the Instrument Adapter (IA) in more detail, with emphasis on implementation in

CSF.

5.1 INSTRUMENT ADAPTER INTERFACES

The IA has two interfaces: the interface to the OMS and an interface to its designated instrument. The

interface to the OMS is described in this section, while the interface to the designated instrument is

discussed in SPEC-0113, Facility Instrument Interface Specification and in TN-0152, The Standard

Instrument Framework.

5.1.1 OMS to Instrument Adapter Interface

Although there are multiple Instrument Adapters in the ICS, the OMS interface to each is the same.

Because this interface is internal to the ICS, no formal ICD is required and the description here shall serve

as the guiding technical documentation for the interface.

5.1.1.1 Interface naming

The OMS is the top-level controller in the ICS and is registered with the DKIST connection service as

atst.ics. The IAs are registered as atst.ics.<inst>_ia, where <inst> is the instrument name as

described earlier in Section 4.1.1. The following instrument adapters are used in conjunction with the

first generation instruments at DKIST:

atst.ics.visp_ia # Visible Spectropolarimeter (ViSP) instrument adapter

atst.ics.vbiRed_ia # Visible Broadband Imager Red (VBI red) instrument adapter

atst.ics.vbiBlue_ia # Visible Broadband Imager Blue (VBI blue) instrument adapter

atst.ics.vtf_ia # Visible Tunable Filter (VTF) instrument adapter

atst.ics.dlNirsp_ia # Diffraction-limited Near-IR Spectropolarimeter (DL-NIRSP) instrument adapter

atst.ics.cn_ia # Cryogenic Near-IR Spectropolarimeter (Cryo-NIRSP) instrument adapter

atst.ics.asi_ia # A Simple Instrument (ASI) adapter

5.1.1.2 Instrument Programs

The fundamental unit of work for an instrument is the Instrument Program (IP). An IP consists of an

Instrument Script and a collection of Parameters for that script. IPs are passed to the ICS/OMS as part of

a Task Group. The OCS passes IPs to the ICS by reference, not by value. That is, the OMS receives an IP ID that can be used to perform a database lookup and get the contents of the IP. Passing IPs by reference makes it easy to pass multiple IPs to an instrument within one Task Group. For the same reasons that passing IPs to the OMS by reference is preferable, passing IPs to the IA by reference is preferable. Because the interface between the ICS (IA) and Facility Instruments has IPs passed by value, the IA is responsible for fetching IPs from the database and un-boxing them before sending them to the

instrument. This means that the IA takes the IP identifier that it receives, and extracts that IP from the

database. That object is then deconstructed into a Script reference, and a set of Parameters that are boxed

in a CSF Configuration and submitted to the Instrument Controller.
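The sketch below illustrates this fetch-and-unbox step with plain Java containers. The InstrumentProgram record, the database stand-in and the attribute names are assumptions made only for illustration; the real IA works with CSF Configurations and the Instrument Program database.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the IA's fetch-and-unbox step: an IP arrives by reference
// (its ID), is fetched from a database stand-in, and is deconstructed into a script
// reference plus parameters that are boxed into a flat configuration map for the IC.
public class IpUnboxSketch {

    record InstrumentProgram(String scriptId, Map<String, String> parameters) { }

    // Stand-in for the Instrument Program database lookup (ignores the ID here).
    static InstrumentProgram fetchIp(String ipId) {
        return new InstrumentProgram("script-visp-map-001",
                Map.of("exposureTime", "0.05", "numMaps", "3"));
    }

    static Map<String, String> makeWorkerConfiguration(String ipId, Map<String, String> globals) {
        InstrumentProgram ip = fetchIp(ipId);
        Map<String, String> cfg = new LinkedHashMap<>(globals);   // ICS global parameters
        cfg.put("scriptId", ip.scriptId());                       // script reference
        cfg.putAll(ip.parameters());                              // IP parameters, now passed by value
        return cfg;
    }

    public static void main(String[] args) {
        Map<String, String> cfg = makeWorkerConfiguration("ip-100",
                Map.of("experimentId", "exp-001", "tcsConfigured", "true"));
        System.out.println(cfg);
    }
}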

5.2 IA IMPLEMENTATION

The Instrument Adapters are implemented as extensions of the CSF ManagementController (see TN-

0088, Base Software for Control Systems). Each Instrument Adapter is responsible for controlling only a

single sub-controller: the Instrument Controller for their respective instrument. While the Management


Controller is probably overkill in this instance, it satisfies all of the Instrument Adapters' needs and, by leveraging the existing code, development will be faster and maintenance will be easier through its

use. Typically Management Controllers are multi-threaded controllers that assume that queuing will be

handled by sub-controllers if they cannot support multiple simultaneous actions. Instrument Controllers

are only capable of supporting one observation at a time, but they do support multiple actions so as to

facilitate engineering intervention in the event of a problem. This results in a need to queue observation

related actions at a level above the IC. By making the Instrument Adapter a single-threaded Controller it

is possible to gain this queuing very easily.

This implementation actually gives IAs two separate levels of queueing. The standard CSF Controller’s

action queue gives IAs their first level. Because Task Groups are Actions and IAs are single threaded,

IAs will queue subsequently submitted TGs until their first has finished. The only limit that is placed on

the size of the queue is JVM/System memory. The second level of queueing happens at the Management

Controller level. Management Controllers support multiple phases of actions. The IA assigns one IP

from a TG to each action phase. This means that the canceling or aborting of an Action at the IA level

will result in the canceling of all IPs associated with that TG but without affecting any other IPs.

The only complexity in the Instrument Adapters is a result of the DKIST/CSF fully qualified attribute

name policy. The IA’s name is not a hierarchical parent of the instrument that it manages. This results in

a little added work in making worker configurations.

• The IA Controller overrides the makeWorkerConfiguration() method to query the IP database for the specified IP(s), log that IP metadata to the appropriate database, and construct a worker configuration containing the returned Script ID and Parameters along with the ICS global parameters.

• The IA Controller overrides the hasNextPhase() method to determine if there are multiple IPs packaged in a single Configuration/Task Group. The method will return true if tcsCfg==true and there are more IPs to execute. If tcsCfg is false, then the method will return false so that the Instrument remains properly pre-configured for the first IP.

• The IA Controller overrides the oneDone() method to notify the OMS (via action report) when a controller finishes its IP. In the case of multiple IPs in a single Task Group this is necessary so that the OMS can properly report the worker's progress in completing its various IPs.

• The handleWorkerReport() method is overridden to receive IP status updates.

• The IA Controller is also responsible for notifying the OMS of the managed Instrument Controller's online/offline status. The IA is responsible for subscribing to the IC's status event. While the event is being received at a regular interval, the IA will propagate the IC's status up to the OMS. When the IC is not posting a status event (e.g. the IC is not deployed/running) the IA will assume offline and notify the OMS of the assumed status. This monitoring/notification system consists of two parts (see the sketch after this list):

o The IA uses an Event Callback to subscribe to the IC's status event. Every time the event is received, its value will be stored in the cache. In addition to the IC's status, the timestamp of the last-received IC event will be stored.

o Any time there is a change in the status of the Instrument the change will be logged.

o The IA uses a Posting TAB to forward the current IC status to the OMS. Assuming that the timestamp of the last received event is no older than 1.5 * the posting period, the last received event value will be used. If the timestamp is older than 1.5 * the posting period, the IA assumes that the IC is offline and forwards this value to the OMS.
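A minimal sketch of that staleness rule follows. The class and method names, the fixed posting period and the status strings are illustrative assumptions; the real IA caches the value carried by the IC's CSF status event and forwards it with its own Posting TAB.

// Illustrative sketch of the IC online/offline decision used by the IA's Posting TAB.
// If the last IC status event is older than 1.5 times the posting period, the IC is
// assumed to be offline; otherwise the cached status value is forwarded to the OMS.
public class IcStatusMonitorSketch {

    private final long postingPeriodMs;
    private volatile String lastStatus = "offline";   // cached value from the status event
    private volatile long lastEventTimeMs = 0L;       // timestamp of the last received event

    IcStatusMonitorSketch(long postingPeriodMs) {
        this.postingPeriodMs = postingPeriodMs;
    }

    /** Called by the (hypothetical) event callback each time an IC status event arrives. */
    void handleStatusEvent(String status, long eventTimeMs) {
        lastStatus = status;
        lastEventTimeMs = eventTimeMs;
    }

    /** Called by the (hypothetical) posting TAB once per posting period. */
    String statusToForward(long nowMs) {
        boolean stale = (nowMs - lastEventTimeMs) > 1.5 * postingPeriodMs;
        return stale ? "offline" : lastStatus;
    }

    public static void main(String[] args) {
        IcStatusMonitorSketch monitor = new IcStatusMonitorSketch(1000);
        monitor.handleStatusEvent("running", 10_000);
        System.out.println(monitor.statusToForward(10_800));  // fresh event -> "running"
        System.out.println(monitor.statusToForward(12_000));  // stale event -> "offline"
    }
}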


6. RENDEZVOUS DETAILED DESIGN

The Rendezvous Service provides a mechanism for loose synchronization between disparate systems. The basic principle is that the systems wishing to rendezvous notify the service a) that they are ready to

rendezvous, and b) what they wish to rendezvous with. The service is responsible for tracking what is

ready, and when everything is ready, notifying everyone to go forward. The request to rendezvous and

the notification of rendezvous use CSF events as the backing mechanism. In addition to a received event

inherently indicating that everyone is ready, a timestamp included in the event can be used as a common

reference. The Service is composed of two primary parts. The first is a public facing API (Rendezvous

Service Helper) that application developers can call to request rendezvous. The second is a CSF

Component responsible for monitoring rendezvous requests and servicing the requests as they come. The

baseline is for the component to be deployed in the same container as the rest of the ICS components

(OMS and IAs). An NTP-backed reference time is expected to be sufficient for the Rendezvous Service.

However if NTP proves insufficient, the Rendezvous Component will need a more accurate clock. A

Precision Time Protocol (PTP)-aware system or a system with a Spectracom TSync-PCIe timing card5

should be more than sufficient to meet the timing need and will serve as the fallback implementation. In

the event the timing card is required, the Rendezvous Component at a minimum would need to run on a

machine supporting such a card, which may preclude the virtualized environment mentioned in section

3.6 - Hardware Requirements.

The basic sequence of a rendezvous is as follows:

1) Instrument A calls the Rendezvous Service Helper’s rendezvous method indicating its desire to

rendezvous with B, and C. The Service Helper will not return until rendezvous, timeout, or

action abort.

2) The Rendezvous Service Helper posts an event on the rendezvous request event stream, stating

A’s desire to rendezvous with B, and C

3) The ICS Rendezvous Component receives the event, and checks an internal map for a node

corresponding to the {A, B, C} rendezvous.

4) Assuming that the node is not found, the Rendezvous Component creates it.

5) The ICS Rendezvous Component sends an acknowledgement event back to Instrument A, on

Instrument A’s rendezvous event stream.

6) Later Instrument B requests rendezvous with A and C.

7) The ICS Rendezvous Component receives the event, and finds a corresponding node in its

internal node map.

8) The ICS Rendezvous Component updates the node to reflect that both A and B are ready.

9) The ICS Rendezvous Component sends an acknowledgement event back to Instrument B, on

Instrument B’s rendezvous event stream.

10) Later Instrument C requests rendezvous with A and B.

11) The ICS Rendezvous Component receives the event, and finds a corresponding node in its

internal node map.

12) The ICS Rendezvous Component updates the node to reflect that A, B and C are ready.



13) The ICS Rendezvous Component sends an acknowledgement event back to Instrument C, on Instrument C's rendezvous event stream.

14) Because all components are ready, the Rendezvous Component grabs a timestamp, and posts one

event on each of the three event streams with the acknowledgement of rendezvous.

15) After posting the event, the Rendezvous Component removes the node from the map.

16) When the access helper receives the event, it returns the included timestamp to the calling Instrument.

5 The TSync timing card is the standard card used by the DKIST Time Reference and Distribution System. It receives highly accurate PTP time that can be used to set the host OS time.

In addition to the general sequence that is laid out above, if one or more instruments fail to rendezvous in

a timely manner, the Rendezvous Component can notify others of a timeout. This notification is

dependent on Instruments wishing to rendezvous providing a non-zero value for timeout when making the

initial rendezvous request. Additionally, if an Instrument wishing to rendezvous has its CSF Action canceled or aborted, the Rendezvous Helper will return immediately.

There are a couple of points to note about the Rendezvous Service:

• While the Rendezvous Service is intended for use by instruments, there is nothing that limits its use to instruments. Throughout the above example, Instrument could be replaced with Generic Entity.

• The actual call to the Rendezvous Service (using the Helper API mentioned below) needs to happen within an Instrument Program's Script. Thus it is the responsibility of the Instrument Developers to make sure that their scripts provide the necessary flags and information to indicate whether or not rendezvous is required and with whom to rendezvous, and to make use of the service if required.

• Very complicated rendezvous scenarios are possible, but only if the instrument scripts support them.

6.1 RENDEZVOUS SERVICE INTERFACES

The Rendezvous Service has two interfaces. The first is the API that application developers use to

interact with the service (Rendezvous Helper). The second is the internal interface used between the

Rendezvous Helper and the Rendezvous Component.

6.1.1 Rendezvous Helper API

    IAtstDate rendezvous(IActionStatus action, long timeout, String self, String... components)

The public API for rendezvous is straightforward. Callers must pass in the CSF Action so that the Rendezvous Service can properly support cancel and abort. In addition, a timeout may be specified in order to give up after a prescribed amount of time. Finally, the caller's identity must be specified, as well as the components the caller wishes to rendezvous with.
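
As an illustration of how the helper might be called from an instrument script, the jython-style sketch below shows a blocking rendezvous request with a 30 second timeout. The import path, helper class name and instrument names are assumptions made for the example, not the as-built identifiers.

    # Hypothetical jython fragment from an Instrument Program Script.
    # Module path, class name and instrument names are assumed for illustration.
    from ics.rendezvous import RendezvousServiceHelper   # assumed import

    def wait_for_partners(action):
        # Blocks until this instrument, instB and instC have all requested the
        # same rendezvous, the 30 s timeout expires, or the action is canceled/aborted.
        reference_time = RendezvousServiceHelper.rendezvous(
            action,        # CSF Action, so cancel/abort can interrupt the wait
            30000,         # timeout in milliseconds; a non-zero value arms the timeout watchdog
            "instA",       # the caller's own identity
            "instB", "instC")
        return reference_time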

6.1.2 Helper to Rendezvous Component Interface

Internally the Rendezvous Service has an event-based interface that the Service Access Helper uses to

communicate with the Rendezvous Component. The interface relies on a single event that the

Rendezvous Component listens for (atst.ics.rendezvous), and one event per client

(rendezvous.<app>) that the service will post to.


6.2 RENDEZVOUS SERVICE IMPLEMENTATION

The Rendezvous Service is composed of four primary modules, all of which are found in the Base

repository. Two, the Rendezvous Component and the Rendezvous Manager, are on the server side. The

other two, Rendezvous Access Helper and Rendezvous Adapter, are on the client side. The following

four sections highlight the key features in each.

6.2.1 Rendezvous Component Implementation

The service backing Rendezvous consists of a single CSF Component responsible for monitoring rendezvous requests, creating and updating the 'nodes' corresponding to those requests, and eventually notifying the requestors when all of them are ready.

• It subscribes/unsubscribes the Callback (aka the Rendezvous Manager) on Component startup/shutdown.

• It provides an external interface so that debugging can be enabled as needed.

• It starts and stops the timeout watchdog thread, which is responsible for notifying Instruments when not everyone was ready to rendezvous in time.

• It maintains the internal map used to store the above-mentioned nodes (sketched below).
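
A minimal sketch of how the node map and the timeout watchdog could be organized is shown below (Python is used for brevity). The class and field names are illustrative assumptions; the as-built component is a Java CSF Component.

    # Illustrative only: the node map is keyed by the complete participant set,
    # and the watchdog periodically expires nodes whose deadline has passed.
    import threading
    import time

    class RendezvousNode(object):
        def __init__(self, participants, timeout_ms):
            self.participants = frozenset(participants)   # complete Instrument list
            self.ready = set()                             # Instruments that have checked in
            self.deadline = (time.time() + timeout_ms / 1000.0) if timeout_ms > 0 else None

    class NodeMap(object):
        def __init__(self):
            self.nodes = {}                 # {frozenset(participants): RendezvousNode}
            self.lock = threading.Lock()

        def watchdog_pass(self, notify_timeout):
            # One pass of the watchdog thread: drop expired nodes and notify
            # the Instruments that were already waiting on them.
            now = time.time()
            with self.lock:
                for key, node in list(self.nodes.items()):
                    if node.deadline is not None and now > node.deadline:
                        for name in node.ready:
                            notify_timeout(name, "rendezvous timed out")
                        del self.nodes[key]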

6.2.2 Rendezvous Callback Implementation

The Rendezvous Callback is a CSF SmartEventCallbackAdapter class. It is constructed using the SmartEventCallbackFactory with the Component's handleEvent method passed in as the event handler. It is responsible for handling rendezvous requests (propagated as CSF events).

• The Callback tracks rendezvous requests using Nodes:
o Container class
o Holds the complete Instrument list
o Holds the ready Instrument list

• The callback method is responsible for checking the active rendezvous 'Nodes':
o Creating a new one if necessary
o Sending the requestor an ACK. The ACK is used by the client to ensure that the manager is actually present and running.
o Updating the Node
o Transmitting the Rendezvous Event (one event per Instrument) if all Instruments are ready (see the sketch below)
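
The request-handling logic described above is sketched below. The event and attribute names follow Section 6.2.5; everything else (the function shape, the post_event callable) is an assumption made for illustration.

    # Illustrative sketch of handling one rendezvous request event.
    import time

    def handle_rendezvous_request(nodes, origin, cmpt_list, post_event):
        key = frozenset(cmpt_list)                     # the {A, B, C} rendezvous
        node = nodes.get(key)
        if node is None:                               # create a new Node if necessary
            node = {"participants": key, "ready": set()}
            nodes[key] = node
        post_event("rendezvous.%s" % origin,           # ACK: the manager is present and running
                   {"msgType": "ACK"})
        node["ready"].add(origin)                      # update the Node
        if node["ready"] == node["participants"]:      # everyone is ready
            now = time.time()                          # the as-built code grabs an IAtstDate timestamp
            for name in node["participants"]:
                post_event("rendezvous.%s" % name,
                           {"msgType": "READY", "rendezvousTime": now})
            del nodes[key]                             # remove the node after notification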

6.2.3 Rendezvous Access Helper Implementation

The Rendezvous Access Helper consists of a single method that takes care of setting up the rendezvous request and awaiting the rendezvous. It is responsible for:

• Subscribing a callback (the Rendezvous Adapter) to the ACK/Rendezvous event.

• Spawning threads that track the Instrument Action's cancel and abort. The threads are responsible for interrupting the rendezvous if the action is canceled or aborted.

• Transmitting a rendezvous request to the Rendezvous Service.

• Awaiting the Rendezvous ready event or timeout event (a sketch of this flow follows).
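
The helper's flow can be summarized by the sketch below. The subscribe/post callables, the adapter object and the abort-watcher helper are assumptions; only the event and attribute names come from this document.

    # Illustrative sketch of the Access Helper's single rendezvous method.
    def rendezvous(action, timeout_ms, self_name, partners,
                   subscribe, post_event, adapter, spawn_abort_watcher):
        # 1) Subscribe the Rendezvous Adapter to this client's private event stream.
        subscribe("rendezvous.%s" % self_name, adapter)
        # 2) Watch the CSF Action; a cancel/abort interrupts the blocked wait below.
        watcher = spawn_abort_watcher(action, adapter)
        # 3) Post the rendezvous request on the component's request stream.
        post_event("atst.ics.rendezvous",
                   {"origin": self_name,
                    "cmptList": [self_name] + list(partners),
                    "timeout": timeout_ms})
        # 4) Block until READY, ERROR (timeout) or interruption.
        try:
            return adapter.waitForReady(timeout_ms / 1000.0)
        finally:
            watcher.stop()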


6.2.4 Rendezvous Adapter Implementation

The Rendezvous Adapter is an extension of the CSF BaseEventCallbackAdapter class. In addition to subscribing to the CSF event, it manages the Java Lock/Condition used to signal that the event has been received.

• It is responsible for spawning a thread to ensure that the ACK is received. The thread is responsible for interrupting the rendezvous if an ACK is not received in a timely manner.

• It provides a waitForReady() method that blocks until the rendezvous event has been received, or the thread is interrupted (e.g. by the ACK watcher or an aborted action).

• It provides a callback() method that notifies the ACK watcher when the ACK event is received and signals waitForReady() when the ready event is received.
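
For illustration only, the wait/signal behavior described above can be expressed with Python's threading.Condition as an analogue of the Java Lock/Condition used in the as-built adapter; the class below is a sketch, not the actual implementation.

    # Analogous wait/signal pattern; names are assumed.
    import threading

    class RendezvousAdapterSketch(object):
        def __init__(self):
            self.cond = threading.Condition()
            self.ack_seen = False
            self.ready_time = None

        def callback(self, msg_type, rendezvous_time=None):
            # Invoked when an event arrives on rendezvous.<app>.
            with self.cond:
                if msg_type == "ACK":
                    self.ack_seen = True        # reassures the ACK watcher the manager is alive
                elif msg_type == "READY":
                    self.ready_time = rendezvous_time
                self.cond.notify_all()          # wakes waitForReady()

        def waitForReady(self, timeout_s):
            # Blocks until the READY event arrives, the timeout expires, or the
            # waiting thread is woken by the ACK watcher / an aborted action.
            with self.cond:
                self.cond.wait_for(lambda: self.ready_time is not None,
                                   timeout=timeout_s)
                return self.ready_time          # None on timeout or interruption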

6.2.5 Rendezvous Events

6.2.5.1 Access Helper Rendezvous Initialization Event

The event which the Access Helper posts requesting rendezvous has the following structure:

Event name: atst.ics.rendezvous

Name       | Data type | Meaning
.origin    | String    | The name of the entity attempting to rendezvous.
.cmptList  | String[]  | The complete list of entities participating in this rendezvous.
.timeout   | int       | The amount of time (in milliseconds) to wait for other entities to be ready to rendezvous.

6.2.5.2 Rendezvous Service Event

The events which the service-backing component posts have the following structure:

Event name: rendezvous.<appName>

Name            | Data type                 | Meaning
.msgType        | Choice: ACK, ERROR, READY | The type of message contained in this event. This attribute is always present.
.reason         | String                    | Reason for the error. This attribute is only present if the msgType is ERROR.
.rendezvousTime | Date                      | The reference time for the rendezvous. This attribute is only present if the msgType is READY.
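
As a purely illustrative example, a request for the {A, B, C} rendezvous of Section 6 and the corresponding READY reply might carry attributes along these lines (the names follow the tables above; the values are invented):

    # Example payloads only; values are invented.
    request = {                      # posted on atst.ics.rendezvous by the Access Helper
        "origin":   "instA",
        "cmptList": ["instA", "instB", "instC"],
        "timeout":  30000,           # milliseconds
    }
    reply = {                        # posted on rendezvous.instA by the Rendezvous Component
        "msgType":        "READY",
        "rendezvousTime": "2017-03-01 16:00:00.000",   # the shared reference time
    }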


7. UI DETAILED DESIGN

The ICS UI includes two custom parts. The Task Group Builder JES Widget allows Task Groups to be

constructed, viewed and submitted to the ICS. It functions differently depending on whether it is being

used within the ICS Engineering GUI or the OCS Experiment Executor. The Task Group Status Display

JES Widget displays the contents of currently running Task Groups and recently finished Task Groups.

7.1 TASK GROUP BUILDER

The Task Group Builder widget has three modes of operation. The first mode, ICS-EGUI, allows the complete creation of a task group, including the specification of all associated flags and IDs. The second mode, OCS-Manual, allows for the creation of a task group and for the specification of some of the Task Group flags (e.g. save data) but not others (e.g. manual). The third mode, OCS-Auto, allows only for the display of an existing Task Group. When in OCS-Auto mode there is no mechanism to submit the Task Group (that has already been done by the OCS). The startup mode is encoded into the widget's XML representation.

The widget is composed of a proxy layer, which satisfies the JES interface (TgContentDisplayWidget). The proxy layer contains a single JPanel (TgContentDisplay) which fills the actual widget's entire panel. The TgContentDisplay holds all of the sub-parts of the display and implements the ITgMasterDisplay interface. The TgContentDisplay object (via the Master interface) is passed to all of the sub-screens so that, when they need to kick off an update of the underlying panel, they can notify the master of the change. The Master handles changes by pushing the updated version of the Task Group to all sub-panels.

7.2 TASK GROUP STATUS

The Task Group Status Display JES Widget is responsible for displaying the status of running and

completed Task Groups, as well as supporting cancel and abort behavior as described in the ICS’s DRD

(SPEC-0163). The Status Widget subscribes to the ICS's cStatus event and parses the list of active and historical Task Groups found in that event into a collection of ITaskGroupStatus objects, which are then displayed. Based on the feedback received during the ICS Factory Acceptance Testing, this widget is expected to undergo some changes and is therefore not discussed more thoroughly at this time.


8. TESTING

Testing the ICS is carried out by executing python scripts using the scripting facilities available as part of

the CSF. All tests are run using the DKIST end-to-end simulator to provide a fully functional

environment. Where simulators are not yet available for some or all of the instruments, the ICS provides

simple interface-compatible simulators. For the non-GUI aspects of the ICS, the testing framework

developed for use with the TCS is adapted for use with the ICS. This discussion draws heavily from the

TCS description of its testing facility.

Using the Quality Assurance System (QAS) testing mechanism allows tests to make full use of the

available services present in the CSF while keeping the testing mechanism detached from the application

and in a scripting format. This makes writing tests and updating tests easy, without the need to recompile

any code. Using the QAS testing framework (also written in jython) ensures all tests are standardized.

The testing framework provides class definitions that the test scripts subclass. The testing classes ensure

test comments are logged into the log database using tags that can easily be retrieved, and also put results

using predefined names that allow specialized JES widgets to pick up and display the results. Coupled with another JES widget that allows selection and execution of the test scripts, this means all testing is carried out within a standard CSF environment, making use of standard CSF services and recording all test results to standard CSF logging facilities.

The testing framework script locations are added to the scripting widget (relative to $(ATST) so they are

portable between CSF installations) such that the test scripts themselves can simply import the required

python modules.

If any operator input is required, the test scripts can invoke dialog boxes through jython, as no

command line interfaces are available.

The required jython scripts and JES widgets are supplied as part of the ICS deliverables, along with test

python scripts to perform unit and acceptance tests.

The available classes and methods of the testing framework are presented in the table below.

Method      | Parameters                             | Return | Description

TestStep class – subclass and override the Run method.
Log         | message                                | void   | Logs the message using the CSF log service. Also posts the message and stores it into the cache for use (e.g. by JES widgets).
Fail        | message                                | void   | Fails the current test step. The step is logged as failing with the reason given. The step exits at this point.
Get         | component, table                       | table  | Calls get() on the specified component. The table is an attribute table filled with the names of attributes to get. The returned table contains the filled-in values.
Set         | component, table                       | void   | Calls set() on the specified component. The table is an attribute table filled with the names and values of attributes to set.
Submit      | controller, configuration              | int    | Calls submit() on the specified controller. The configuration is submitted to the controller and the submit return value is returned. This does not block.
BlockSubmit | controller, configuration              | int    | Calls submit() on the specified controller. The configuration is submitted to the controller and the final status of the command is returned. This blocks until the action is completed/aborted/cancelled.
Request     | message, table                         | table  | Requests user input. The names of the values requested are provided in the input table (attribute table format) and the returned attribute table contains the names and values that were entered. This method blocks the test until the user has closed the popup.
Message     | message                                | void   | Opens a popup to display a message to the user. This method blocks until the user has closed the popup.

Test class – create, add test steps and then Run the test.
Test        | ID, description, requirements, details | void   | Constructor for a test object. Create a test object, add the required steps to it and then run it to perform the test.
AddStep     | Step                                   | void   | Adds another step to this test.
Run         | table                                  | table  | Executes the test, running through each step in order. An attribute table is returned that contains the report for the test; each step is included along with its pass/fail state and any failure message.
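
To illustrate how the classes in the table are intended to be used, the jython sketch below defines one step and runs it as a test. The import location and the dictionary standing in for a CSF attribute table are assumptions; only the method names come from the table above.

    # Hypothetical jython test script; module path and attribute values are assumed.
    from qas.testing import Test, TestStep      # assumed import location

    class CheckIcsHealth(TestStep):
        def Run(self):
            self.Log("Querying the ICS health attribute")
            # The real framework passes CSF attribute tables; a dict stands in here.
            result = self.Get("atst.ics", {".health": None})
            if result.get(".health") != "good":
                self.Fail("ICS health is not good: %s" % result)
            self.Log("ICS health is good")

    test = Test("TEST.ICS.0001",
                "ICS health check",
                "requirement IDs would be listed here",
                "Verifies that the ICS reports good health")
    test.AddStep(CheckIcsHealth())
    report = test.Run({})    # the returned table holds each step's pass/fail state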

Whenever a test script is executed, the testing framework requests a unique ID from the CSF ID service. This ID is appended to the original test ID (which is defined as part of the test script) and all messages are logged with this unique ID. The ID is also presented to the operator, making the task of retrieving old test logs in the future much easier (using standard SQL search queries).

For example, if the test is called “TEST.ICS.0001” and the uniquely generated ID is “2272.3” then every

log message for this test starts with

[TEST.ICS.0001.2272.3]

Testing of ICS GUIs is accomplished by providing detailed instructions to the tester on how to perform

the test on the GUI. No attempt is made to automate GUI testing.


9. UML DIAGRAMS

UML Diagrams for the key assemblies of the ICS are maintained and kept up to date in the ATST

Software Development Tree. This is done in order to keep them in sync with the software development

effort. They may be found in the directory $ATST/doc/uml. For details on the ICS’s OMS Controller

and associated bits see ics/OMS.png. For details on the ICS’s Instrument Adapter Controller and

associated bits see ics/IA.png. For details on the Rendezvous Service see base/Rendezvous.png. For details on the ICS GUI's Task Group Builder see ics/TgBuilder.png, and for details on the ICS GUI's Task Group Status see ics/TG_Status_GUI.png.