Data sizes and running scenarios overview


Page 1: Data sizes and running scenarios overview

LHCb L1&DAQ review – April 29, 2003, CERN. Massimiliano Ferro-Luzzi

Data sizes and running scenarios overview

LHCb Trigger and hit multiplicities:

Distributions of VELO and TT clusters for L1

L1 Processing time distribution

Trigger sensitivity to hit multiplicity (cuts)

Motivate various running scenarios for farm design studies

Page 2: Data sizes and running scenarios overview

Current Baseline Scenario

L1/DAQ CPU farm designed for the following averages:

• L1 input rate: 1.1 MHz

• L1 output rate: 40 kHz

• L1 CPUs: ~500

• HLT CPUs: ~700

• L1 processing time: ~0.5 ms

• HLT processing time: ~17.5 ms

• L1 uses VELO + TT + L0 objects

• HLT uses the same as L1, plus Calo, Muon, Trackers

BUT: what if hit multiplicities are larger than expected? … larger data sizes, larger processing times. What can we do?
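As a rough consistency check (ours, not spelled out on the slide), the quoted CPU counts follow from the input rate times the mean processing time per event; a minimal Python sketch, with all names our own:

```python
# Illustrative sizing check: required CPUs ~ input rate [Hz] x mean time per event [s]

def cpus_needed(input_rate_hz, mean_time_s):
    """Average number of fully occupied CPUs for a given event rate."""
    return input_rate_hz * mean_time_s

# Baseline numbers quoted on the slide
l1_cpus = cpus_needed(1.1e6, 0.5e-3)    # ~550, close to the quoted ~500
hlt_cpus = cpus_needed(40e3, 17.5e-3)   # ~700, as quoted

print(f"L1 farm:  ~{l1_cpus:.0f} CPUs")
print(f"HLT farm: ~{hlt_cpus:.0f} CPUs")
```

If the per-event processing time grows (for instance because hit multiplicities are larger than expected), the required CPU count grows proportionally, which is what motivates the scenarios discussed below.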

Page 3: Data sizes and running scenarios overview

Trigger and Multiplicities

[Figure: rate (a.u.) versus SPD multiplicity]

• Because of pattern-recognition inefficiencies, the offline signal content drops at large multiplicities.

• Events at large multiplicities consume more CPU time and put more data through the network.

• The idea: reject those events as early as possible (i.e. at L0) and replace them with lower-multiplicity events.


Page 4: Data sizes and running scenarios overview

L1VeloTrackAlg time distributions (Brunel v14r2)

[Figure: mean and RMS of the L1VeloTrackAlg time in NSPD bins (t_av, t_no_cut)]

Page 5: Data sizes and running scenarios overview

L0 and SPD multiplicity cut
Brunel v14r2 (fall 2002 production), Bs → J/ψ(e+e−) signal

[Figure: effect of an SPD multiplicity cut at L0, with the L0 output rate fixed at 1 MHz]

L0 can cope with a multiplicity cut. What about L1?

Page 6: Data sizes and running scenarios overview

Sensitivity of L0*L1 to multiplicity: a first look

Apply a given cut on the SPD multiplicity at L0.

Each time, re-adjust the L0 thresholds so as to always get the same L0 minimum-bias retention rate.

Pass the events to L1 and re-adjust the threshold on the L1 global variable so as to always get the same L1 minimum-bias retention rate (a sketch of this re-tuning step follows below).
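The threshold re-tuning amounts to picking the quantile of the trigger variable that keeps the nominal minimum-bias fraction. A minimal Python sketch, with toy data and names of our own:

```python
import numpy as np

def retune_threshold(mb_values, retention):
    """Threshold for a 'keep if value > threshold' cut that keeps the
    requested fraction of minimum-bias events."""
    return np.quantile(mb_values, 1.0 - retention)

# Toy stand-in for the L1 global variable on a minimum-bias sample
rng = np.random.default_rng(0)
l1_global = rng.exponential(scale=1.0, size=100_000)

# Re-tune so the L1 output/input ratio stays at its nominal value (40 kHz / 1.1 MHz)
thr = retune_threshold(l1_global, retention=40e3 / 1.1e6)
print(f"threshold = {thr:.2f}, retention = {(l1_global > thr).mean():.2%}")
```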

Page 7: Data sizes and running scenarios overview

L1 versus VELO clusters

Only a small fraction of events has more than (say) 2500 clusters, both in L0-yes MBIA and in offline-selected L0-yes signal events.

But a large fraction of the L1 time is spent on those MBIA events! (a toy illustration follows below)

Brunel v14r2

[Figure: event fraction with more than Nvc VELO clusters, and time (a.u.) spent on events with Nvc VELO clusters]
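A toy numerical illustration (our own, not from the slide) of why the tail matters: because the per-event L1 time grows with the cluster count, the fraction of time spent above a cut is much larger than the fraction of events above it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters = rng.gamma(shape=4.0, scale=300.0, size=200_000)  # toy multiplicity spectrum
t_event = 1e-3 * n_clusters**1.5                              # assumed super-linear time vs clusters

cut = 2500
evt_frac = (n_clusters > cut).mean()
time_frac = t_event[n_clusters > cut].sum() / t_event.sum()

print(f"events above {cut} clusters: {evt_frac:.1%}")
print(f"L1 time spent on them:       {time_frac:.1%}")
```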

Page 8: Data sizes and running scenarios overview

Scenarios for farm studies

So far, we considered three scenarios (with a fixed number of CPUs):

Sc1: Baseline scenario: L1 processing time scaled such that the average is ~0.5 ms. HLT processing time scaled such that the average is ~17 ms. L1 output rate = 40 kHz, using only TT+VELO+L0. 500 L1 CPUs + 700 HLT CPUs.

Sc2: Upgrade scenario: L1 processing time scaled such that the average is ~1 ms. HLT processing time scaled such that the average is ~17 ms. L1 output rate = 12 kHz, using TT+VELO+L0+Trackers (+ Calo selection crate, muon). 1000 L1 CPUs + 200 HLT CPUs.

Sc3: Robustness check: suppose all processing times are a factor ~1.5 higher than in Sc1. Then apply multiplicity cuts at L0 (NSPD < 180 and NPU < 80) to recover the same average L1 time (0.5 ms), while preserving a reasonable signal efficiency. L0 and L1 thresholds are readjusted to keep the nominal L0 and L1 retention rates. This is similar to assuming that only 2/3 of the baseline CPUs are available.

Use “realistic” data (= full simulation, with noise, clusters per L1 board, etc.), including CPU processing time (4.4M minimum-bias events). The time re-scaling step is sketched below.
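The "processing time scaled such that the average is ..." step can be read as rescaling the simulated per-event time distribution to a target mean while keeping its shape (and therefore its tails). A minimal sketch under that assumption, with toy inputs and our own names:

```python
import numpy as np

def scale_to_mean(times, target_mean):
    """Rescale a processing-time distribution so that its mean equals
    target_mean, preserving the shape of the distribution."""
    times = np.asarray(times, dtype=float)
    return times * (target_mean / times.mean())

# Toy stand-in for simulated per-event L1 times (arbitrary units)
rng = np.random.default_rng(2)
t_sim = rng.lognormal(mean=0.0, sigma=0.8, size=50_000)

t_sc1 = scale_to_mean(t_sim, 0.5e-3)   # Sc1: average ~0.5 ms
t_sc2 = scale_to_mean(t_sim, 1.0e-3)   # Sc2: average ~1 ms
print(t_sc1.mean(), t_sc2.mean())
```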

Page 9: Data sizes and running scenarios overview

L1 time and number of clusters

[Figure: L1 processing time (a.u.) versus sum of all L1 TT+VELO clusters, for Sc1 (baseline) and Sc3 (multiplicity cut, before re-scaling the time)]

Considerable gain in average time and data size with small signal efficiency loss.

Page 10: Data sizes and running scenarios overview

Summary and Outlook

There seem to be no offline-selected signal events above a certain multiplicity

L0*L1 is quite sensitive to the multiplicity cut

Multiplicity cuts offer the possibility to reduce the average and the tails of

• the processing time in the L1&DAQ farm,

• the amount of data we put through the network,

with little loss of signal efficiency (to be confirmed by a multi-channel study)

SPD + Pile-Up Veto (at L0) and Velo (at L1) multiplicities can be used as an additional handle to counter “unexpected adversity” from mother nature

Various scenarios under study

Page 11: Data sizes and running scenarios overview

Look at the L1 time for various multiplicity cuts on VELO clusters

Emulate a cut at the “entrance” of L1 processing:

if #L1Clusters > value, then the event is L1-no and its L1 time is set to 0 (a toy sketch follows below).

Brunel v17r4 L0-yes MBIA
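A toy emulation of this entrance cut (our own sketch; array names are ours):

```python
import numpy as np

def emulate_entrance_cut(n_clusters, l1_time, cut):
    """Flag events failing the entrance cut as L1-no and zero their L1 time."""
    n_clusters = np.asarray(n_clusters)
    l1_time = np.asarray(l1_time, dtype=float).copy()
    rejected = n_clusters > cut
    l1_time[rejected] = 0.0        # no processing time spent on rejected events
    passes_entrance = ~rejected    # surviving events go on to the normal L1 decision
    return passes_entrance, l1_time
```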

Page 12: Data sizes and running scenarios overview

L1*L0 global performance vs multiplicity cuts

Still Brunel v16r4 (Xmas)

Apply both PU and SPD multiplicity cuts

Rescale the L0 thresholds and the L1 global variable threshold to get the proper MBIA retentions

Only 1 physics channel looked at…

…this is just a start.

Page 13: Data sizes and running scenarios overview

Same, but with an additional Velo clusters cut

Apply a cut on the number of Velo clusters at the entrance of L1: < 2000.

Page 14: Data sizes and running scenarios overview

L1 versus VELO clusters (from Frederic Teubert @ LHCC)

Bs0 → Ds− K+

What if the minimum-bias multiplicity is wrong?

Subdivide the minimum-bias sample into:

• tracks < 40

• 40 < tracks < 70

• 70 < tracks

If the multiplicity is ~70% higher, the signal efficiency is reduced by ~30%.

Page 15: Data sizes and running scenarios overview

L1 versus VELO clusters (thanks to Frederic Teubert)

[Figure: VELO cluster distributions for several signal channels (Bd and Bs decays, including Bs → Ds K and Bd → K*) at the selection stages L0>0, L0>0 && L1>0, and L0>0 && L1>0 && SEL>0]

Page 16: Data sizes and running scenarios overview

L1 vs VELO clusters (thanks to Frederic Teubert)

Status at LHCC: Velo < 2.2k clusters

Page 17: Data sizes and running scenarios overview

L1 time versus Velo cluster multiplicity

Page 18: Data sizes and running scenarios overview

Two scenarios: L1 time and number of clusters

[Figure: scaled L1 time (s) versus sum of all L1 TT+VELO clusters, for the baseline and robustness-check scenarios]

(Note: here real L1 Velo clusters; on previous slides, offline Velo clusters)

What impact on farm design?