S. Cittolin CERN/CMS, 22/03/07 DAQ architecture. TDR-2003 DAQ evolution and upgrades DAQ upgrades at SLHC DAQ upgrades at SLHC


Page 1: S. Cittolin CERN/CMS, 22/03/07 DAQ architecture. TDR-2003 DAQ evolution and upgrades

DAQ architecture. TDR-2003. DAQ evolution and upgrades

DAQ upgrades at SLHC

Page 2

S. Cittolin CMS/PH-CMD

Parameters:
• Collision rate: 40 MHz
• Level-1 maximum trigger rate: 100 kHz
• Average event size: ≈1 Mbyte
• Data production: ≈1 Tbyte/day

TDR design implementation:
• No. of In-Out units: 512
• Readout network bandwidth: ≈1 Terabit/s
• Event filter computing power: ≈10^6 SI95
• No. of PC motherboards: ≈ thousands
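As a sanity check, the slide's parameters are mutually consistent; a quick back-of-envelope in Python (the ~100 Hz HLT output rate is an assumption, not stated on this slide):

```python
# Back-of-envelope check of the TDR parameters above.
# HLT_OUT_HZ is an assumed HLT accept rate to mass storage.

LV1_RATE_HZ = 100e3        # Level-1 maximum trigger rate
EVENT_SIZE_B = 1e6         # average event size, ~1 Mbyte
HLT_OUT_HZ = 100.0         # assumed HLT output rate

readout_bw_bps = LV1_RATE_HZ * EVENT_SIZE_B * 8      # bits/s into the event builder
per_day_bytes = HLT_OUT_HZ * EVENT_SIZE_B * 86400    # bytes/day to mass storage

print(f"readout bandwidth ~ {readout_bw_bps / 1e12:.1f} Tbit/s")   # ~0.8 Tbit/s
print(f"data production ~ {per_day_bytes / 1e12:.1f} Tbyte/day")   # ~8.6 Tbyte/day
```

Both numbers land on the slide's quoted orders of magnitude: ≈1 Terabit/s readout bandwidth and ≈1 Tbyte/day to storage.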

DAQ baseline structure

(SLHC: x 10)

Page 3


Evolution of DAQ technologies and structures:

• PS, 1970-80: minicomputers; custom readout design; first standard: CAMAC. kByte/s
• LEP, 1980-90: microprocessors; HEP standards (Fastbus); embedded CPUs, industry standards (VME). MByte/s
• LHC, 200X: networks/Grids; IT commodities, PCs, clusters; Internet, Web, etc. GByte/s

Stages: detector readout → event building → on-line processing → off-line data store

Page 4

CMS/SC TriDAS Review AR/CR05

DAQ-TDR layout

Architecture: distributed DAQ. An industry-based readout network with Tbit/s aggregate bandwidth between the readout and event-filter DAQ nodes (FED & FU) is achieved by two stages of switches interconnected by layers of intermediate data concentrators (FRL, RU & BU). Myrinet optical links driven by custom hardware are used as the FED Builder and to transport the data to the surface (D2S, 200 m). Gigabit Ethernet switches driven by standard PCs are used in the event-builder layer (DAQ slices and Terabit/s switches). The Filter Farm and mass storage are scalable, expandable clusters of services.
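The two-stage builder described above can be sketched as a toy simulation; the 512-FED count follows the TDR, while the 8-fragment concentration factor and fragment sizes are illustrative assumptions:

```python
# Toy two-stage event builder: FEDs -> FED Builder (super-fragments per RU)
# -> RU Builder (full event at one BU). Sizes are illustrative.
import random

random.seed(0)
N_FED, FEDS_PER_RU = 512, 8

def fed_fragments(event_id):
    # stage 0: each FED emits one fragment per LV1 accept
    return {fed: (event_id, random.randrange(500, 4000)) for fed in range(N_FED)}

def fed_builder(frags):
    # stage 1 (FED Builder, Myrinet in the TDR): concentrate
    # FEDS_PER_RU fragments into one super-fragment per Readout Unit
    return {ru: [frags[fed] for fed in range(ru * FEDS_PER_RU, (ru + 1) * FEDS_PER_RU)]
            for ru in range(N_FED // FEDS_PER_RU)}

def ru_builder(super_frags):
    # stage 2 (RU -> BU over GbEthernet): assemble the full event at one Builder Unit
    event = [f for sf in super_frags.values() for f in sf]
    assert all(eid == event[0][0] for eid, _ in event)  # all fragments from the same LV1
    return event

event = ru_builder(fed_builder(fed_fragments(42)))
print(len(event))  # 512 fragments assembled into one event
```

The point of the two stages is that no single switch needs 512 ports at full rate: the FED Builder reduces the port count seen by the main event-builder network by the concentration factor.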


Equipment replacements (M+O): FU PCs every 3 years; mass storage every 4 years; DAQ PCs & networks every 5 years.


Page 5


LHC --> SLHC

• Same Level-1 trigger rate; 20 MHz crossing; higher occupancy.
• A factor 10 (or more) in readout bandwidth, processing, storage.

• The factor 10 will come with technology; the architecture has to exploit it.
• New digitizers are needed (and a new detector-DAQ interface).
• More flexibility is needed in handling the event data and operating the experiment.

Page 6

S.C. CMS-TriDAS 05-Mar-07

Computing and communication trends

[Chart: mass storage and bandwidth trend curves, with markers at 10 Mbit/in², 1 Gbit/in² (2015, SLHC) and 1000 Gbit/in² (2020)]

Page 7


Industry Tbit/s (2004)

• Above is a photograph of the configuration used to test the performance and resiliency of the Force10 TeraScale E-Series. During the tests, the TeraScale E-Series demonstrated a throughput of 1 billion packets per second, making it the world's first Terabit switch/router, with zero-packet-loss hitless fail-over at Terabit speeds. The configuration required 96 Gigabit Ethernet and eight 10 Gigabit Ethernet ports from Ixia, in addition to cabling for 672 Gigabit Ethernet and 56 ten-Gigabit Ethernet ports.

http://www.force10networks.com/products/reports.asp


Page 8


BU: Builder Unit
DCS: Detector Control System
EVB: Event Builder
EVM: Event Manager
FU: Filter Unit
FRL: Front-end Readout Link
FB: FED Builder
FES: Front-End System
FEC: Front-End Controller
FED: Front-End Driver
GTP: Global Trigger Processor
LV1: Regional Trigger Processors
TPG: Trigger Primitive Generator
TTC: Timing, Trigger and Control
TTS: Trigger Throttle System
RCMS: Run Control and Monitor
RTP: Regional Trigger Processor
RU: Readout Unit

LHC-DAQ: trigger loop & data flow


TTS (N to 1): sTTS from the FEDs to the GTP, collected by an FMM tree (~700 roots); aTTS from the DAQ nodes to the TTS via a service network (e.g. Ethernet, ~1000s of nodes). Transitions take effect in a few BXs.

TTC (1 to N): from the GTP to the FED/FES, distributed by an optical tree (~1000 leaves). LV1 at 20 MHz repetition rate; resets and B commands at 1 MHz repetition rate; control data at 1 Mbit/s.

FRL & DAQ networks (NxN): Data to Surface (D2S), FRL-FB, 1000 Myrinet links and switches, up to 2 Tb/s. Readout Builders (RB), RU-BU/FU, 3000-port GbEthernet switches, 1 Tb/s sustained.
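The sTTS collection path (many FED states merged by an FMM tree into one throttle decision at the GTP) can be sketched as a priority merge; the state names and their priority order here are assumptions:

```python
# Sketch of sTTS state merging: an FMM tree reduces many FED states to a
# single throttle decision at the GTP. Priority order is an assumption.

PRIORITY = ["ERROR", "OUT_OF_SYNC", "BUSY", "WARNING", "READY"]  # worst first

def fmm_merge(states):
    # the merged state is the worst state reported by any input
    return min(states, key=PRIORITY.index)

# ~700 roots report READY except one BUSY FED
leaf_states = ["READY"] * 698 + ["BUSY", "READY"]
print(fmm_merge(leaf_states))  # BUSY -> the GTP throttles LV1 accepts
```

In a real tree the merge would be applied hierarchically at each FMM node, but the result is the same as this flat reduction because the "worst state wins" operation is associative.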

Page 9


LHC-DAQ: event description

Event data and its description at each stage of the chain:

• FED-FRL-RU (TTCrx): FED data fragments carrying Event No. (local) and Orbit/BX (local).
• Readout Unit (from the EVM): event fragments plus the EVB token and BU destination.
• Filter Unit (from GT data): the full event data with Event No. (local), Time (absolute), Run Number, Event Type and HLT info.
• Storage Manager: built events; full event data with Event # (central), Time (absolute), Run Number, Event Type and HLT info.
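A minimal sketch of the descriptor promotion above: fragments carry only local counters through the FED/RU stages, and the global description is attached at the Filter Unit. Field and type names are hypothetical:

```python
# Hypothetical record types for the event-description flow sketched above.
from dataclasses import dataclass

@dataclass
class FedFragment:              # at FED/FRL/RU: local counters only
    local_event_no: int
    orbit_bx: int
    payload: bytes

@dataclass
class FullEvent:                # at FU/Storage Manager: global description
    event_no: int               # central event number (from GT data)
    time_abs: int
    run_no: int
    event_type: int
    hlt_info: int
    data: bytes

def build_full_event(frags, gt):
    # the FU checks local-counter consistency, then promotes to the global description
    assert len({(f.local_event_no, f.orbit_bx) for f in frags}) == 1
    return FullEvent(gt["event_no"], gt["time"], gt["run"], gt["type"], 0,
                     b"".join(f.payload for f in frags))

frags = [FedFragment(5, 1234, b"a"), FedFragment(5, 1234, b"b")]
full = build_full_event(frags, {"event_no": 99, "time": 1, "run": 2, "type": 3})
print(full.event_no, len(full.data))  # 99 2
```

The consistency check on the local counters is the cheap cross-check that all fragments really belong to the same LV1 accept before the global event number replaces them.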

Page 10


SLHC: event handling

• TTC: full event info in each event fragment, distributed from the GTP-EVM via TTC.

[Diagram: the EVM distributes the full event description (Event # (central), Time (absolute), Run Number, Event Type, HLT info) together with the full event data to distributed clusters and computing services]

• FED-FRL interface, e.g. PCI.
• FED-FRL system able to handle high-level data communication protocols (e.g. TCP/IP).
• FRL-DAQ commercial interface, e.g. Ethernet (10 GbE or multi-GbE), to be used for data readout and for FED configuration, control, local DAQ, etc.

A new FRL design can be based on an embedded PC (e.g. ARM-like) or an advanced FPGA, etc. The network interface can be used in place of the VME backplane to configure and control the new FED. Current FRL-RU systems can be used (with some limitations) for those detectors not upgrading their FED design.

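A minimal sketch of what an FRL speaking a commodity protocol could look like: length-prefixed event fragments pushed over a stream socket. The 12-byte framing format (32-bit length plus 64-bit event number) is an assumption for illustration:

```python
# Sketch of an FRL pushing length-prefixed fragments over a TCP-like stream,
# as suggested for the new FED-FRL interface. The framing is an assumption.
import socket
import struct

def send_fragment(sock, event_no, payload):
    hdr = struct.pack("!IQ", len(payload), event_no)   # 4-byte length + 64-bit event number
    sock.sendall(hdr + payload)

def recv_fragment(sock):
    hdr = sock.recv(12, socket.MSG_WAITALL)            # block until the full header arrives
    length, event_no = struct.unpack("!IQ", hdr)
    return event_no, sock.recv(length, socket.MSG_WAITALL)

# demo over a local socket pair (stands in for the FRL -> DAQ network link)
a, b = socket.socketpair()
send_fragment(a, 7, b"\xde\xad")
print(recv_fragment(b))  # (7, b'\xde\xad')
```

Once readout uses a protocol like this, the same link can carry configuration and control traffic, which is exactly the argument above for replacing the VME backplane with the network interface.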

Page 11


• In addition to the Level-1 accept and the other synchronous commands, the Trigger & Event Manager (TTC) transmits to the FEDs the complete event description: the event number, event type, orbit, BX, and the event destination address, i.e. the processing system (CPU, cluster, Tier...) where the event is to be built and analyzed.
• Event-fragment delivery, and therefore event building, will be guaranteed by the network protocols and the (commercial) network's internal resources (buffers, multi-path, network processors, etc.). FED-FU push or pull protocols can be employed.
• Real-time buffers of Pbytes of temporary storage disks will cover a real-time interval of days, allowing the event-selection tasks to better exploit the available distributed processing power.

SLHC DAQ upgrade (architecture evolution)

Another step toward uniform network architecture:

• All DAQ sub-systems (FED-FRL included) are interfaced to a single multi-Terabit/s network. The same network technology (e.g. Ethernet) is used to read out and control all DAQ nodes (FED, FRL, RU, BU, FU, eMS, DQM, ...).
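With the event destination distributed up front via TTC, the Event Manager reduces to a destination-assignment policy. A round-robin sketch (node names and the policy itself are illustrative; any load-aware scheme would do):

```python
# Sketch: the EVM assigns each LV1 accept to a builder/filter node up front,
# so the destination can travel with the trigger information.
from itertools import cycle

class EventManager:
    def __init__(self, nodes):
        self._next = cycle(nodes)   # simple round-robin over builder nodes

    def assign(self, event_no):
        # the returned address travels with the event description, so every
        # FED/FRL can push its fragment straight to the right node
        return event_no, next(self._next)

evm = EventManager(["bu-01", "bu-02", "bu-03"])
print([evm.assign(n) for n in range(4)])
# [(0, 'bu-01'), (1, 'bu-02'), (2, 'bu-03'), (3, 'bu-01')]
```

Because every source learns the destination at trigger time, no central token exchange is needed per fragment; the network itself does the routing.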

Page 12


SLHC: trigger info distribution

Trigger-synchronous data (20 MHz, ~100 bits/LV1): clock distribution and Level-1 accept; fast commands (resets, orbit 0, re-sync, switch context, etc.); fast parameters (e.g. trigger/readout type, mask, etc.).

Event-synchronous data packet (~1 Gb/s, a few thousand bits/LV1):
• Event Number, 64 bits (as before, local counters will be used to synchronize and check)
• Orbit/time and BX No., 64 bits
• Run Number, 64 bits
• Event type, 256 bits
• Event destination(s), 256 bits (e.g. a list of IP indexes: main EVB + DQMs, ...)
• Event configuration, 256 bits (e.g. information for the FU event builder)
Each event fragment will contain the full event description. Events can be built in push or pull mode. The first two items allow many TTC operations to be done independently per TTC partition (e.g. local re-sync, local reset, partial readout, etc.).
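The field sizes listed above add up to 960 bits of payload per LV1 accept; a sketch packing them with a hypothetical layout (field order and encoding are assumptions, and the slide's ~1 Gb/s estimate presumably includes framing and protocol overhead on top of this payload):

```python
# Hypothetical layout for the event-synchronous TTC packet sized as on this
# slide: 3 x 64-bit integers + 3 x 256-bit (32-byte) fields.
import struct

FMT = "!QQQ32s32s32s"            # network byte order, 120 bytes = 960 bits total

def pack_ttc_packet(event_no, orbit_bx, run_no, ev_type, dests, conf):
    return struct.pack(FMT, event_no, orbit_bx, run_no, ev_type, dests, conf)

pkt = pack_ttc_packet(1, 2, 3, b"\x00" * 32, b"\x00" * 32, b"\x00" * 32)
payload_bits = len(pkt) * 8
print(payload_bits)                              # 960 payload bits per LV1
print(payload_bits * 100e3 / 1e6, "Mb/s payload at 100 kHz LV1")
```

At the 100 kHz LV1 rate the raw payload is roughly 0.1 Gb/s, which leaves order-of-magnitude headroom within the ~1 Gb/s budget for overhead and additional fields.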

Asynchronous control information: configuration parameters (delays, registers, contexts, etc.); maintenance commands, auto-test.

Additional features? A readout control-signal collector (à la TTS)? An sTTS replacement: collect FED status (at LV1 rate, from >1000 sources?); collect local information (e.g. statistics) on demand; maintenance commands, auto-test, statistics, etc.; configuration parameters (delays, registers, contexts, etc.).

Page 13


Summary

DAQ design. Architecture: to be upgraded, enhancing scalability and flexibility and exploiting to the maximum (via M+O) commercial network and computing technologies. Software: configuring, controlling and monitoring a large set of heterogeneous, distributed computing systems will continue to be a major issue.

Hardware developments. Increased performance and additional functionality are required for event-data handling (use of standard protocols, selective actions, etc.):
- a new timing, trigger and event-info distribution system;
- a general front-end DAQ interface (for the new detector readout electronics) handling high-level network protocols via commercial network cards.

The best DAQ R&D programme for the next years is the implementation of the 2003 DAQ-TDR. Permanent contact between SLHC developers and LHC-DAQ implementers is also required, to understand and define the scope, requirements and specifications of the new systems.

Page 14


DAQ data flow and computing model

[Diagram: Level-1 and HLT output event rates feeding a multi-Terabit/s event builder network, with FRL direct network access, filter farms & analysis, and DQM & storage servers]