
Page 1: HyperDAQ

HyperDAQ

Where Data Acquisition Meets the World Wide Web

Johannes Gutleber, CERN Dept. PH/CMD

Page 2: HyperDAQ

Outline

• What is HyperDAQ?

• Why is it done?

• What is it good for?

• How is it done?

• Where is it used?

• Summary

Page 3: HyperDAQ

The Web

is all about linking documents

Page 4: HyperDAQ

HyperDAQ

is all about linking applications

Page 5: HyperDAQ

Each application in a distributed system

• is directly browsable

• can discover others

• can link to the others

Page 6: HyperDAQ

What is HyperDAQ?

A way to provide access to distributed data acquisition systems through the World Wide Web + Peer-to-Peer

Page 7: HyperDAQ

Why is it done?

To realize the potential of distributed data acquisition systems, access to information must be simple.

See the system as a single entity

“Drill into the system” from anywhere

Page 8: HyperDAQ

What is it good for?

• Monitor distributed applications

• Control distributed applications

• Help distributed development

Page 9: HyperDAQ

How is it done?

HTTP engine: applications serve Web pages

Applications can be discovered

Contents are linked together

Address applications by URN

Page 10: HyperDAQ

URN

• Uniform Resource Name
• Identifies resources on a computer

  – Applications
  – Services
  – Data sources

urn:xdaq-application:service=monitor
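The naming scheme can be illustrated with a few lines of C++ (the toolkit itself is C++, per a later slide). The makeApplicationUrn helper below is purely hypothetical; only the urn:xdaq-application prefix and the service=monitor key/value pair come from the example above.

```cpp
#include <iostream>
#include <string>

// Hypothetical helper (not part of XDAQ): composes a URN of the form shown
// above from a key/value pair.
std::string makeApplicationUrn(const std::string& key, const std::string& value)
{
    return "urn:xdaq-application:" + key + "=" + value;
}

int main()
{
    // Prints: urn:xdaq-application:service=monitor
    std::cout << makeApplicationUrn("service", "monitor") << std::endl;
    return 0;
}
```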

Page 11: HyperDAQ

URL

• Uniform Resource Locator
• Gives context to a URN
• Calls operations on a URN

http://host:port/urn:xdaq-application:service=monitor/retrieve?flash=cpuUsage
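Because each operation is just an HTTP GET on such a URL, any HTTP client can drive the system. The sketch below uses libcurl from C++; the host name daqhost.example and port 40000 are placeholders, and only the URN path and the retrieve?flash=cpuUsage operation are taken from the example URL above.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

int main()
{
    // Host name and port are placeholders; the URN path and the
    // retrieve?flash=cpuUsage operation follow the example URL above.
    const std::string url = "http://daqhost.example:40000/"
                            "urn:xdaq-application:service=monitor/retrieve?flash=cpuUsage";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* handle = curl_easy_init();
    if (handle) {
        curl_easy_setopt(handle, CURLOPT_URL, url.c_str());
        // With no write callback installed, libcurl writes the response body
        // to stdout, which is enough for a quick look at the monitored data.
        CURLcode rc = curl_easy_perform(handle);
        if (rc != CURLE_OK)
            std::cerr << "request failed: " << curl_easy_strerror(rc) << std::endl;
        curl_easy_cleanup(handle);
    }
    curl_global_cleanup();
    return 0;
}
```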

Page 12: HyperDAQ

• Toolkit for distributed data acquisition
• C++, cross-platform
• Enablers for high-performance operation

  – Zero-copy message passing
  – Concurrent use of multiple transports
  – Memory pools

• Run-time extensions
  – Application components
  – Peer-transport modules
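The memory-pool enabler in the list above can be shown with a toy sketch: buffers are allocated once and then recycled, so the event data path never allocates per event. This BufferPool class is an illustration of the idea only, not the toolkit's actual pool implementation.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Toy buffer pool: all buffers are allocated once, handed out and recycled,
// so the event data path performs no per-event allocation.
class BufferPool {
public:
    BufferPool(std::size_t count, std::size_t size) : size_(size), storage_(count * size) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(&storage_[i * size]);
    }
    // Returns a recycled buffer, or nullptr if the pool is exhausted.
    char* allocate() {
        if (free_.empty()) return nullptr;
        char* buffer = free_.back();
        free_.pop_back();
        return buffer;
    }
    void release(char* buffer) { free_.push_back(buffer); }
    std::size_t bufferSize() const { return size_; }

private:
    std::size_t size_;
    std::vector<char> storage_;
    std::vector<char*> free_;
};

int main()
{
    BufferPool pool(4, 2048);          // four 2 kB buffers, as in a toy readout
    char* fragment = pool.allocate();  // "receive" an event fragment
    pool.release(fragment);            // recycle it instead of freeing it
    std::cout << "buffer size: " << pool.bufferSize() << " bytes" << std::endl;
    return 0;
}
```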

Page 13: HyperDAQ

HyperDAQ Programming

• Incoming HTTP requests call handlers
  – Input stream
  – Output stream

• Libraries for creating Web content
  – Cgicc (GNU)
  – Mimetic (GPL)
  – XGI (XDAQ)
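To make the handler model concrete, here is a minimal, self-contained sketch in which a callback receives the request as an input stream and writes the Web page to an output stream. The names (defaultPage, the simulated dispatch in main) are illustrative and do not reproduce the XGI or Cgicc APIs.

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Handler in the style described above: read the request from an input
// stream, write the Web page to an output stream.
void defaultPage(std::istream& request, std::ostream& response)
{
    std::string requestLine;
    std::getline(request, requestLine);   // e.g. "GET /urn:... HTTP/1.1"
    response << "<html><body><h1>HyperDAQ demo</h1>"
             << "<p>request: " << requestLine << "</p>"
             << "</body></html>";
}

int main()
{
    // Simulate one dispatched request; in a real application the embedded
    // HTTP engine would invoke the bound handler for each incoming request.
    std::istringstream request("GET /urn:xdaq-application:service=monitor/ HTTP/1.1");
    defaultPage(request, std::cout);
    std::cout << std::endl;
    return 0;
}
```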

Page 14: HyperDAQ

Service Discovery

• Abstract discovery interface

• Pluggable implementations
  – SLP: Service Location Protocol
  – UPnP: Universal Plug and Play
  – JXTA

Optional – but simplifies configuration
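A sketch of what such an abstract discovery interface could look like, with one pluggable back end. All class names here (DiscoveryService, InMemoryDiscovery) are hypothetical; a real implementation would speak SLP, UPnP or JXTA on the network instead of keeping an in-memory table.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Abstract discovery interface: applications advertise themselves and look
// up peers without caring which protocol does the work underneath.
class DiscoveryService {
public:
    virtual ~DiscoveryService() = default;
    virtual void advertise(const std::string& urn, const std::string& url) = 0;
    virtual std::vector<std::string> find(const std::string& urn) const = 0;
};

// One pluggable implementation; a real back end would be SLP, UPnP or JXTA
// based rather than a local table.
class InMemoryDiscovery : public DiscoveryService {
public:
    void advertise(const std::string& urn, const std::string& url) override {
        entries_.emplace_back(urn, url);
    }
    std::vector<std::string> find(const std::string& urn) const override {
        std::vector<std::string> hits;
        for (const auto& entry : entries_)
            if (entry.first == urn) hits.push_back(entry.second);
        return hits;
    }

private:
    std::vector<std::pair<std::string, std::string>> entries_;
};

int main()
{
    std::unique_ptr<DiscoveryService> discovery(new InMemoryDiscovery);
    discovery->advertise("urn:xdaq-application:service=monitor",
                         "http://daqhost.example:40000");
    for (const auto& url : discovery->find("urn:xdaq-application:service=monitor"))
        std::cout << "found at " << url << std::endl;
    return 0;
}
```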

Page 15: HyperDAQ

Modular Approach

Core libraries
Data serializers
Logging
HyperDAQ core

Monitoring tools
Control tools
Security modules
Discovery services
Peer transports

Event builder
DAQ monitoring
Hardware access

Page 16: HyperDAQ

Where is it used?

Page 17: HyperDAQ

CMS - Distributed DAQ

Event fragments: event data fragments are stored in separate physical memory systems

Full events: full event data are stored in a single physical memory system associated with a processing unit

Readout Units: buffer event fragments

Builder Units: assemble event fragments

Event Manager: interfaces between RU, BU and trigger

Requirements: L1 trigger at 100 kHz (~2 kB fragments, 1 MB event size); 200 MB/s in AND out per RU; 200 MB/s in AND 66 MB/s out per BU
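These numbers are mutually consistent: at a 100 kHz Level-1 rate, fragments of roughly 2 kB give 100,000 × 2 kB ≈ 200 MB/s, which matches the stated per-RU input and output bandwidth.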

Page 18: HyperDAQ

Link to the main application for each computer

XAct is a job controller. It is a HyperDAQ application.

Control

Operations

Page 19: HyperDAQ

Data collected from computers

Collected data rendered graphically

Monitor

Widget

Page 20: HyperDAQ

Direct Access

Page 21: HyperDAQ

Page 22: HyperDAQ

Simplify Integration

Loosely coupled development: collaborations write programs with HyperDAQ interfaces and link their programs together

Monitoring

Controller 1

Controller 2

Page 23: HyperDAQ

Summary

By linking applications we

• simplify access to distributed systems
• give direct control to any application
• decrease configuration complexity
• enable loosely coupled development

Page 24: HyperDAQ

Information

http://xdaqwiki.cern.ch

http://www.sourceforge.net/projects/xdaq

[email protected]

Page 25: HyperDAQ

Backup

Page 26: HyperDAQ

Security

• XAccess Module
  – Basic Web authentication
  – IP-based filtering
  – Combination of both

• SSL
  – For server and client authentication

• Custom policy
  – Any policy can be “plugged in”
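As an illustration of how a "combination of both" policy could be plugged in, the sketch below accepts a request only when the client IP passes a filter and Basic-auth style credentials match. The AccessPolicy interface and IpAndBasicAuthPolicy class are hypothetical and are not the XAccess module's API.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Request {
    std::string clientIp;
    std::string user;
    std::string password;
};

// Pluggable policy interface: the HTTP engine asks the installed policy
// whether a request may proceed.
class AccessPolicy {
public:
    virtual ~AccessPolicy() = default;
    virtual bool allow(const Request& request) const = 0;
};

// "Combination of both": the client IP must pass the filter AND the
// Basic-auth style credentials must match.
class IpAndBasicAuthPolicy : public AccessPolicy {
public:
    IpAndBasicAuthPolicy(std::vector<std::string> allowedIps,
                         std::string user, std::string password)
        : ips_(std::move(allowedIps)), user_(std::move(user)), password_(std::move(password)) {}

    bool allow(const Request& request) const override {
        bool ipOk = false;
        for (const auto& ip : ips_)
            if (ip == request.clientIp) ipOk = true;
        return ipOk && request.user == user_ && request.password == password_;
    }

private:
    std::vector<std::string> ips_;
    std::string user_;
    std::string password_;
};

int main()
{
    IpAndBasicAuthPolicy policy({"192.168.0.10"}, "shifter", "secret");
    Request request{"192.168.0.10", "shifter", "secret"};
    std::cout << (policy.allow(request) ? "allowed" : "denied") << std::endl;
    return 0;
}
```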

Page 27: HyperDAQ

Clients

• Web browser

• LabView

• MS Excel

Page 28: HyperDAQ

Installations

• Magnet test cluster in Cessy
  – 64 computers

• Adopted by subdetectors
  – Used for commissioning

• DAQ Monitoring
  – Event builder (8 slices, 64x64)
  – Front-End readout link (512)

Page 29: HyperDAQ

Monitorable Entities

• 512 FRL
• 512 RU (64 * 8)
• 512 BU (64 * 8)
• 50 FRL Controllers (32 partitions, at most 2 hierarchy levels)
• 4 FMM Controllers
• 8 EVM (1 per partition)
• 200 DCS
• 16 Myrinet Switches (8 FED builder, 8 RU builder)
• 2 Ethernet Switches
• 8 Run Control hosts (1 per partition)
• 64 Filter Subfarm Controllers
• 4096 Filter nodes (512 subfarms with 8 nodes each)
• 5984 total
• From ~6000 nodes: 1 MByte/s of monitored data at 1 Hz
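A rough consistency check on the last figure: 1 MByte/s shared among roughly 6000 monitored entities updating at 1 Hz corresponds to an average of about 170 bytes of monitoring data per entity per second.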