HyperDAQ
Where Data Acquisition Meets the World Wide Web
Johannes Gutleber, CERN Dept. PH/CMD
CERN [email protected]
Outline
• What is HyperDAQ?
• Why is it done?
• What is it good for?
• How is it done?
• Where is it used?
• Summary
Each application in a distributed system
• is directly browsable
• can discover others
• can link to the others
What is HyperDAQ?
A way to provide access to distributed data acquisition systems through World Wide Web + Peer to Peer
To realize the potential of distributed data acquisition systems,
access to information must be simple.
Why is it done?
See system as single entity
“drill into the system”
from anywhere
What is it good for?
• Monitor distributed applications
• Control distributed applications
• Help distributed development
How is it done?
• Applications embed an HTTP engine and serve Web pages
• Applications can be discovered
• Contents are linked together
• Applications are addressed by URN
URN
• Uniform Resource Name
• Identifies resources on a computer
– Applications
– Services
– Data sources

urn:xdaq-application:service=monitor
URL
• Uniform Resource Locator
• Gives context to a URN
• Calls operations on a URN
http://host:port/urn:xdaq-application:service=monitor/retrieve?flash=cpuUsage
XDAQ
• Toolkit for distributed data acquisition
• C++, cross-platform
• Enablers for high-performance operation
– Zero-copy message passing
– Concurrent use of multiple transports
– Memory pools
• Run-time extensions
– Application components
– Peer-transport modules
HyperDAQ Programming
• Incoming HTTP requests call handlers
– Input stream
– Output stream
• Libraries for creating Web content
– Cgicc (GNU)
– Mimetic (GPL)
– XGI (XDAQ)
Service Discovery
• Abstract discovery interface
• Pluggable implementations
– SLP: Service Location Protocol
– UPnP: Universal Plug and Play
– JXTA
Optional – but simplifies configuration
Modular Approach
• Core libraries, data serializers, logging, HyperDAQ core
• Monitoring tools, control tools, security modules, discovery services, peer transports
• Event builder, DAQ monitoring, hardware access
Where is it used?
CMS – Distributed DAQ
Event fragments: event data fragments are stored in separate physical memory systems.
Full events: full event data are stored in a single physical memory system associated with a processing unit.
Readout Units (RU): buffer event fragments
Builder Units (BU): assemble event fragments
Event Manager (EVM): interfaces between RU, BU and trigger
Requirements: L1 trigger rate 100 kHz (2 KB fragments), event size 1 MB; 200 MB/s in and out per RU; 200 MB/s in and 66 MB/s out per BU
Link to the main application for each computer
XAct is a job controller. It is a HyperDAQ application.
Control
Operations
Data collected from computers
Collected data rendered graphically
Monitor
Widget
Direct Access
Simplify Integration
Loosely coupled development collaborations write programs with HyperDAQ interfaces and link their programs together.
Monitoring
Controller 1
Controller 2
Summary
By linking applications we
• simplify access to distributed systems
• give direct control to any application
• decrease configuration complexity
• enable loosely coupled development
Backup
Security
• XAccess module
– Basic Web authentication
– IP-based filtering
– Combination of both
• SSL
– For server and client authentication
• Custom policy
– Any policy can be “plugged in”
Installations
• Magnet test cluster in Cessy
– 64 computers
• Adopted by subdetectors
– Used for commissioning
• DAQ monitoring
– Event builder (8 slices, 64x64)
– Front-end readout link (512)
Monitorable Entities
• 512 FRL
• 512 RU (64 × 8)
• 512 BU (64 × 8)
• 50 FRL controllers (32 partitions, at most 2 hierarchy levels)
• 4 FMM controllers
• 8 EVM (1 per partition)
• 200 DCS
• 16 Myrinet switches (8 FED builder, 8 RU builder)
• 2 Ethernet switches
• 8 Run Control hosts (1 per partition)
• 64 Filter Subfarm Controllers
• 4096 Filter nodes (512 subfarms with 8 nodes each)
• 5984 total
• From ~6000 nodes: 1 MB/s of monitored data at 1 Hz