The Data Flow System of the ATLAS DAQ/EF "-1" Prototype Project
G. Ambrosini 3,9, E. Arik 2, H.P. Beck 1 , S. Cetin 2, T. Conka 2, A. Fernandes 3, D. Francis3,
Y. Hasegawa 4, M. Joos 3, G. Lehmann 1,3 , J. Lopez 3,10, A. Mailov 2, L. Mapelli 3,
G. Mornacchi 3, Y. Nagasaka7, M. Niculescu 3,5, K. Nurdan 2, J. Petersen 3,
D. Prigent 3, J. Rochez 3, L. Tremblet3, G. Unel3 , S. Veneziano 3,6, Y. Yasu8
1. Laboratory for High Energy Physics, University of Bern, Switzerland
2. Department of Physics, Bogazici University, Istanbul, Turkey
3. CERN, Geneva, Switzerland
4. ICEPP, University of Tokyo, Tokyo, Japan
5. Institute of Atomic Physics, Bucharest, Romania
6. I.N.F.N. Sezione di Roma, Roma, Italy
7. Nagasaki Institute for Applied Science, Nagasaki, Japan
8. High Energy Accelerator Research Organization (KEK), Japan
9. Now at Lightning Instrumentation S.A., Lausanne, Switzerland
10. Now at EDF, Grenoble, France
Padova, CHEP 2000
G. Lehmann, LHEP Bern / CERN
The DAQ/EF “-1” Project
Study of a vertical slice of the ATLAS DAQ system to:
– define requirements on the different sub-systems,
– design the elements of the DAQ with their boundaries and their interaction with other components,
– implement a prototype to evaluate technological solutions with the requested performance.
The project has been organized in 4 main activities:
– Detector Interface
– Data Flow
– Event Filter
– Back-End (see talk by I. Soloviev on Thursday)
View of the Data Flow System
[Diagram: view of the Data Flow System. Detector electronics feed Readout Buffers (ROBs) in read-out crates (Front-End DAQ, with local DAQ); the Event Builder (EBIF sources, switching network with switch supervision, SFI destinations, DFM + LDAQ) connects them to EF sub-farm crates (Farm DAQ, with SFO to mass storage). Trigger input and level-2 systems (L2A, L2R, RoI) interface to the readout. Input rate into the readout: 75 - 100 kHz, bandwidth ~100 MB/s; into the Event Builder: 1 - 2 kHz, 4 - 5 GB/s.]
See talk by S. Veneziano
Prototype Implementation of the Data Flow
Global Performance Measurements
[Plot: Event Builder rate (kHz) vs. ROB fragment size (bytes), 2x2 system]
[Plot: Event Building rate (kHz) vs. L2 reject factor, 89 - 100]
• ROB fragment size variable, with mean ~1.5 kBytes
• The EB dictates the performance for a L2 rejection ratio below 95%
• Measurements with no L2 rejection
• The performance is limited by the EB interface, which collects fragments over VME and sends them out on the ATM network.
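The quoted numbers can be cross-checked with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, that each EBIF contributes one fragment of the mean size to every built event in the 2x2 system; the rates and sizes are taken from the slides.

```python
# Back-of-the-envelope event-builder throughput for the 2x2 prototype.
# Illustrative assumption (not stated on the slides): each EBIF sends
# exactly one ROB fragment of the mean size per event.
n_ebif = 2            # sources in the 2x2 system
frag_size = 1.5e3     # mean ROB fragment size in bytes
rate = 2.3e3          # sustained LVL2 accept rate in Hz

event_size = n_ebif * frag_size          # bytes per built event
per_ebif_bw = rate * frag_size / 1e6     # MB/s leaving each EBIF
aggregate_bw = rate * event_size / 1e6   # MB/s through the event builder

print(f"event size      : {event_size / 1e3:.1f} kB")
print(f"per-EBIF output : {per_ebif_bw:.2f} MB/s")
print(f"aggregate       : {aggregate_bw:.2f} MB/s")
```

Under these assumptions each EBIF has to sustain only a few MB/s, consistent with the statement that the VME-to-ATM interface, not the network itself, is the limiting element.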
The Event Builder (EB)
• The EB is responsible for merging data fragments into complete, formatted events.
• The EB design is based on a two-layer approach which separates the technology-specific aspects from the functionality of the EB elements and their interaction protocol.
[Diagram: EB interaction protocol between the DFM, source (Src) and destination (Dst) elements, with messages Bsy/NotBsy, GetId, EoT, Transfer and EoE]
• The EB has been studied through prototyping and simulation.
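The two-layer idea can be sketched as follows: the EB elements (DFM, sources, destinations) are written against an abstract transport, so a technology-specific layer (ATM, Gigabit Ethernet, ...) can be swapped in without touching the EB logic. All class and method names below are illustrative, not taken from the project's actual code.

```python
# Sketch of the two-layer EB design: element logic on top, pluggable
# transport below. Names are hypothetical.

class Transport:
    """Technology-specific layer: only moves messages, no EB logic."""
    def send(self, dst, msg):
        raise NotImplementedError

class InMemoryTransport(Transport):
    """Trivial transport standing in for ATM / Gigabit Ethernet."""
    def __init__(self):
        self.queues = {}
    def send(self, dst, msg):
        self.queues.setdefault(dst, []).append(msg)
    def receive(self, node):
        return self.queues.pop(node, [])

class DFM:
    """Assigns each event id to a destination (round robin here)."""
    def __init__(self, destinations):
        self.destinations, self.next = destinations, 0
    def get_id_dst(self, event_id):
        dst = self.destinations[self.next]
        self.next = (self.next + 1) % len(self.destinations)
        return dst

class Source:
    """EBIF-like element: pushes its fragment for an event."""
    def __init__(self, name, transport):
        self.name, self.t = name, transport
    def transfer(self, event_id, dst, payload):
        self.t.send(dst, (event_id, self.name, payload))

class Destination:
    """SFI-like element: merges fragments into complete events."""
    def __init__(self, name, transport, n_sources):
        self.name, self.t, self.n = name, transport, n_sources
        self.partial = {}
    def collect(self):
        complete = []
        for event_id, src, payload in self.t.receive(self.name):
            frags = self.partial.setdefault(event_id, {})
            frags[src] = payload
            if len(frags) == self.n:      # all fragments in: end of event
                complete.append((event_id, self.partial.pop(event_id)))
        return complete

# Usage: the DFM picks a destination, the sources transfer, the SFI merges.
t = InMemoryTransport()
dfm = DFM(["SFI0", "SFI1"])
dst = dfm.get_id_dst(0)
for i in range(2):
    Source(f"EBIF{i}", t).transfer(0, dst, b"data")
events = Destination(dst, t, n_sources=2).collect()
print(len(events))   # one complete two-fragment event
```

Replacing `InMemoryTransport` with a real network binding is the only change needed to move between technologies, which is the point of the layering.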
EB Performance Measurements: Gigabit Ethernet
Processor: Intel Pentium PC @ 450 MHz
OS: Linux
Protocol: TCP/IP
EB Performance Measurements: ATM
Processor: RIO II 8062 SBC @ 200 MHz
OS: LynxOS 2.5.1
Protocol: AAL5
[Plot: measured data throughput compared with the available ATM bandwidth]
Modelling of the EB
• Main purpose: study of the scaling of the EB system performance to ATLAS sizes.
• Model design: 2-layer approach as in the prototype, in order to be capable of studying different technologies with the same model.
• Simulation program: implemented with the discrete-event simulation domain of the PTOLEMY (http://ptolemy.eecs.berkeley.edu) simulation tool.
Network Node Scheme
[Diagram: a network node. A CPU running the EB application moves data between a user buffer and the send/receive buffer queues of the NICs; flow control is implemented by checking the queues and requesting data.]
• Every EB element is a network node; only the application-specific part distinguishes between DFM, EBIF and SFI.
• The network is modelled as an ideal router introducing a constant delay between input and output.
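The node model above lends itself to a very small discrete-event sketch: sources process their fragments in parallel, an ideal router delivers each one after a constant delay, and the destination CPU serialises the receive-side processing. The timing constants are made up for illustration; the real model was built in PTOLEMY.

```python
# Minimal discrete-event sketch of the network-node model.
# PROC_TIME and NET_DELAY are assumed values, not measured ones.
import heapq

PROC_TIME = 10e-6   # CPU time per fragment (assumed)
NET_DELAY = 5e-6    # constant delay of the ideal router (assumed)

def build_time(n_sources):
    """Time until the destination has processed all n_sources fragments."""
    arrivals = []
    # Sources work in parallel: each fragment leaves its node after
    # PROC_TIME and is delivered NET_DELAY later by the ideal router.
    for i in range(n_sources):
        heapq.heappush(arrivals, (PROC_TIME + NET_DELAY, i))
    cpu_free = 0.0
    while arrivals:
        t, _ = heapq.heappop(arrivals)
        # The destination CPU handles one fragment at a time and can
        # only start once the fragment has arrived and the CPU is free.
        cpu_free = max(cpu_free, t) + PROC_TIME
    return cpu_free
```

Because the receive-side processing serialises the fragments, the build time in this idealised model grows linearly with the number of sources, which is the behaviour the scalability studies probe.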
EB Scalability Studies
The time to build an event increases roughly linearly with the number of EBIFs; nevertheless, the data cannot yet exclude a weak quadratic dependency.
[Plot: event-building time vs. number of EBIFs, ATM]
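Why a weak quadratic term can hide in the data is easy to see numerically. The sketch below uses entirely synthetic coefficients and a hypothetical measurement uncertainty, not the measured values: over the limited range of system sizes tested, a small quadratic contribution stays below the error bars.

```python
# Synthetic illustration (all numbers assumed, not measured): a weak
# quadratic term in the build time vs. number of EBIFs can stay within
# the measurement uncertainty over the tested range.
a, b, c = 20e-6, 10e-6, 0.01e-6   # assumed coefficients, c deliberately small
error = 5e-6                       # assumed measurement uncertainty (s)

def t_quadratic(n):
    return a + b * n + c * n * n

def t_linear(n):
    return a + b * n

# Largest deviation between the two models for up to 16 EBIFs:
max_dev = max(abs(t_quadratic(n) - t_linear(n)) for n in range(1, 17))
print(max_dev < error)   # True: the quadratic term is not yet resolvable
```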
Scaling of the EB
Results of the simulation, calibrated with the processing times and the ATM technology parameters measured in the prototype.
The performance is strongly dependent on the evolution of the processing time as a function of the number of nodes.
[Plot: simulated EB performance vs. system size, compared with the link speed; the ATLAS region is indicated]
Scaling: Possible Improvements
• Reduce the number of nodes in the system by introducing higher-bandwidth links.
• Increase the processing power of the nodes.
[Plot annotation: 400x400 ATM 155 system with 3 times faster CPUs]
Conclusions
• An Event Builder prototype has been implemented for Gigabit Ethernet and ATM; the latter has been used to calibrate a computer model of the EB.
• The model shows that the EB design is scalable and that the required performance is within reach.
• A small but complete Data Flow prototype has been designed and implemented; a LVL2 accept rate of 2.3 kHz could be sustained for ROB fragment sizes of ~1.5 kBytes.