F.Carminati ALICE Computing Model Workshop December 9-10, 2004 Introduction and Overview of the ALICE Computing Model


Page 1:

F.Carminati

ALICE Computing Model Workshop

December 9-10, 2004

Introduction and Overview of the ALICE Computing Model

Page 2: Objective of the meeting

• Present the current status of the ALICE computing model

• Receive feedback from the Collaboration

• Receive endorsement for the draft to be presented to the LHCC review committee

• Start the process that will lead to the Computing TDR

Page 3: Timeline

• December 15: the draft computing model and the projected needs are presented to the LHCC review committee

• January 17-19: LHCC review with sessions devoted to each of the experiments and a close-out session
  Monday, 17 January: ATLAS (a.m.), CMS (p.m.)
  Tuesday, 18 January: ALICE (a.m.), LHCb (p.m.)
  Wednesday, 19 January: Closed Session (a.m.)

Page 4: Computing TDRs

• LCG TDR
  Next meeting first half 2005
  First draft 11 April, good copy 9 May
  3 June: ready for approval by the PEB on 7 June
  15 June: final TDR to LHCC (LHCC mtg. 29-30 June)

• ALICE Computing TDR
  Early draft given to LHCC on December 15
  Draft presented and distributed to the Collaboration during the ALICE/offline week in February
  Final discussion and approval during the ALICE/offline week at the beginning of June

Page 5: Computing MoU

• Distributed to the Collaboration management to obtain feedback on October 1
• Coupled with the LHCC review in February
• Provide the C-RRB with documents that can be finalised and approved at its April 2005 meeting
• Subsequently distributed for signature

Page 6: Mandate of the February LHCC review

• In the context of the preparation of the Computing MoUs and TDRs, the LHC experiments have come forward with estimated computing capacity requirements in terms of disks, tapes, CPUs and networks for the Tier-0, Tier-1 and Tier-2 centres. The numbers vary in many cases (mostly upwards) from those submitted to the LHC Computing Review in 2001 […] it is felt to be desirable at this stage to seek an informed, independent view on the reasonableness of the present estimates.

• […] the task of this Review is thus to examine critically, in close discussion with the computing managements of the experiments, the current estimates and report on their validity in the light of the presently understood characteristics of the LHC experimental programme. The exercise will therefore not be a review of the underlying computing architecture.

Page 7: Membership

• Chairman: J. Engelen - CERN Chief Scientific Officer

• Representatives from the LHCC: F. Forti, P. McBride, T. Wyatt

• External: E. Blucher (Univ. Chicago), N.N.

• LHCC Chairman and Secretary: S. Bertolucci, E. Tsesmelis

• PH Department: J.-J. Blaising, D. Schlatter

• IT Department: J. Knobloch, L. Robertson, W. von Rueden

Page 8: Elements of the computing model

• From detector to Raw data (see P.Vande Vyvre’s talk)

• Framework & software management

• Simulation

• Reconstruction

• Condition infrastructure

• Analysis

• Grid Middleware & distributed computing environment

• Project management & planning

• From RAW data to physics analysis (see Y.Schutz’s talk)

Page 9: Framework

• AliRoot in development since 1998
  Entirely based on ROOT
  Already used for the detector TDRs

• Two packages to install (ROOT and AliRoot), plus the transport MCs

• Ported to several architectures (Linux IA32 and IA64, Mac OS X, Digital Tru64, SunOS…)

• Distributed development: over 50 developers and a single CVS repository

• Tight integration with DAQ (data recorder) and HLT (same codebase)

Page 10: AliRoot layout

(Diagram of the AliRoot layout:
• Foundation: ROOT, plus AliEn/gLite for Grid access
• STEER: AliSimulation, AliReconstruction, AliAnalysis, ESD
• Virtual MC on top of the G3, G4 and FLUKA transport codes
• Event generators: HIJING, MEVSIM, PYTHIA6, PDF, EVGEN, HBTP, HBTAN, ISAJET
• Detector modules: EMCAL, ZDC, ITS, PHOS, TRD, TOF, RICH, PMD, CRT, FMD, MUON, TPC, START, RALICE, STRUCT)

Page 11: Software management

• Regular release schedule
  Major release every six months, minor release (tag) every month
• Emphasis on delivering production code
  Corrections, protections, code cleaning, geometry
• Nightly-produced UML diagrams, code listings, coding-rule violations, build and tests
  Single repository with all the code
  No version management software (we have only two packages!)
• Advanced code tools under development with IRST/Italy
  Aspect-oriented programming, smell detection, automated testing

Page 12: Simulation

• Simulation performed with Geant3 until now
• The Virtual MonteCarlo interface separates the ALICE code from the MonteCarlo used
• New geometrical modeller scheduled to enter production at the beginning of 2005
• Interface with FLUKA finishing validation
• The Physics Data Challenge 2005 will be performed with FLUKA
• Interface with Geant4 ready to be implemented
  Second half of 2005 (?)
• Testbeam validation activity started

Page 13: The Virtual MC

(Diagram: User Code talks to the VMC and the Geometrical Modeller; the VMC drives G3, G4 or FLUKA transport; Reconstruction, Visualisation and the Generators plug into the same framework.)

Geant3.tar.gz includes an upgraded Geant3 with a C++ interface; geant4_mc.tar.gz includes the TVirtualMC <--> Geant4 interface classes.
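In practice the transport engine is chosen in a single configuration macro, while user code only ever sees the abstract interface. A minimal sketch of such a macro, assuming the VMC classes shipped in the tarballs above (exact constructor signatures vary between releases):

```cpp
// Config.C-style sketch: the concrete engine is instantiated once and
// registers itself as the global gMC; user code never depends on which.
void Config()
{
   new TGeant3("C++ Interface to Geant3");   // from geant3.tar.gz
   // new TGeant4(...);                      // from geant4_mc.tar.gz
   // new TFluka(...);                       // FLUKA interface

   // Identical TVirtualMC calls whatever the engine underneath:
   gMC->SetProcess("DCAY", 1);     // enable decays
   gMC->SetCut("CUTGAM", 1.e-3);   // gamma transport cut, GeV
}
```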

Page 14: HMPID: 5 GeV pions

(Figure: side-by-side detector response plots, Geant3 vs FLUKA.)

Page 15: TGeo modeller performance

(Figure: bar chart of microsec/point for the "Where am I" physics case (1 million points), on G3 geometries collected in 2002: Gexam1, Gexam3, Gexam4, ATLAS, CMS, BRAHMS, CDF, MINOS_NEAR, BTEV, TESLA; the ROOT TGeo modeller is compared with G3 on a 0-30 microsec scale.)

Page 16: Reconstruction strategy

• Main challenge: reconstruction in the high-flux environment (occupancy in the TPC detector up to 40%) requires a new approach to tracking
• Basic principle: maximum information; use everything you can and you will get the best result
• Algorithms and data structures optimized for fast access and usage of all relevant information
  Localize relevant information
  Keep this information until it is needed

Page 17: Tracking strategy – Primary tracks

• Iterative process
  Forward propagation towards the vertex: TPC → ITS
  Back propagation: ITS → TPC → TRD → TOF
  Refit inward: TOF → TRD → TPC → ITS
• Continuous seeding: track-segment finding in all detectors

(Diagram: a track traversing ITS, TPC, TRD and TOF.)
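For illustration only, the three passes can be read as a loop over detector sequences; Propagate() below is a hypothetical stand-in for the real Kalman-filter propagation and update, not the AliRoot code:

```cpp
#include <string>
#include <vector>

// A track candidate; the real object carries px, py, pz, y, z, covariance,
// chi2 and the attached clusters (cf. the "Sources of information" slide).
struct Track { /* ... */ };

// Hypothetical stand-in for Kalman propagation/update through one detector.
void Propagate(Track&, const std::string& /*detector*/) {}

void ReconstructPrimaries(std::vector<Track>& tracks)
{
   const std::vector<std::string> inward  = {"TPC", "ITS"};
   const std::vector<std::string> outward = {"ITS", "TPC", "TRD", "TOF"};
   const std::vector<std::string> refit   = {"TOF", "TRD", "TPC", "ITS"};
   for (Track& t : tracks) {
      for (const auto& d : inward)  Propagate(t, d);   // towards the vertex
      for (const auto& d : outward) Propagate(t, d);   // back propagation
      for (const auto& d : refit)   Propagate(t, d);   // final inward refit
   }
}
```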

Page 18: Sources of information

• Spatial characteristics of a track and sets of tracks
  px, py, pz, y, z parameters and covariance
  chi2
  number of points on the track
  number of shared clusters on the track
  overlaps between tracks
  DCA for V0s, kinks and cascades
  …
• dE/dx: mean, sigma, number of points, number of shared points…, reliability
• TOF of a track and sets of tracks
• Derived variables
  Mass
  Causality: probability that the particle "really exists" in some space interval (used for causality cuts), based on cluster occurrence and chi2 before/after the vertex
  Invariant mass
  Pointing angle of a neutral mother particle
  …

Page 19: ITS tracking

• Follow the TPC seeds into a tree of track hypotheses, connecting reconstructed clusters
  track in dead zone
  missing clusters (dead or noisy channels, clusters below threshold)
  probability for secondary tracks not to cross an ITS layer, as a function of the impact parameter in z and r-φ
  probability of a cluster to be shared, as a function of the cluster shape
  restricted number of tracks kept for the further parallel tracking procedure
  for secondary tracks also short best tracks are kept, for further V0 studies
• The best track is registered to all the clusters which belong to that track
• The overlap between the best track and all other tracks is calculated, and if above threshold the χ2 of the pair of tracks is calculated

Page 20: ITS - Parallel tracking (2)

• Double loop over all possible pairs of branches
• Weighted χ2 of the two tracks calculated
  The effective probability of cluster sharing, and for secondary particles the probability not to cross a given layer, are taken into account

(Diagram: "Best track 1" and "Best track 2" sharing clusters: conflict!)

Page 21: Results – Tracking efficiency (TPC)

• PIV 3 GHz, dN/dy ~ 6000
  TPC tracking ~ 40 s
  TPC kink finder ~ 10 s
  ITS tracking ~ 40 s
  TRD tracking ~ 200 s

Page 22: Kink finder efficiency

• Efficiency for kaons as a function of decay radius
• Left side: low multiplicity (dN/dy ~ 2000), 2000 kaons
• Right side: the same events merged with a central event (dN/dy ~ 8000)

Page 23: PID combined over several detectors

Probability to be a particle of i-type (i = e, μ, π, K, p, …), if we observe a vector S = {s_ITS, s_TPC, s_TOF, …} of PID signals:

  W(i|S) = C_i R(S|i) / Σ_{k=e,μ,π,…} C_k R(S|k)

where

  R(S|i) ≈ ∏_{d=ITS,TPC,…} r_d(s_d|i)

is the combined response function. The C_i are the same as in the single-detector case (or even something reasonably arbitrary like C_e ~ 0.1, C_μ ~ 0.1, C_π ~ 7, C_K ~ 1, …).

The functions R(S|i) are not necessarily "formulas" (they can be "procedures"). Some other effects (like mis-measurements) can be accounted for.
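A small self-contained sketch of this combination; the toy Gaussian responses and the species/detector counts are illustrative stand-ins, since the slide stresses that the real r_d(s|i) can be arbitrary procedures:

```cpp
#include <array>
#include <cmath>

constexpr int kNSpecies = 3;   // e.g. pion, kaon, proton for this sketch
constexpr int kNDet     = 3;   // ITS, TPC, TOF

// Toy single-detector response r_d(s|i): a Gaussian around the expected
// signal for species i in detector d.
double Response(double s, double expected, double sigma)
{
   const double x = (s - expected) / sigma;
   return std::exp(-0.5 * x * x);
}

// W(i|S) = C_i prod_d r_d(s_d|i) / sum_k C_k prod_d r_d(s_d|k)
std::array<double, kNSpecies> CombinedPID(
      const std::array<double, kNDet>& s,          // measured PID signals
      const double expected[kNSpecies][kNDet],
      const double sigma[kNSpecies][kNDet],
      const std::array<double, kNSpecies>& C)      // the priors C_i
{
   std::array<double, kNSpecies> w{};
   double norm = 0.0;
   for (int i = 0; i < kNSpecies; ++i) {
      double r = C[i];
      for (int d = 0; d < kNDet; ++d)
         r *= Response(s[d], expected[i][d], sigma[i][d]);
      w[i] = r;
      norm += r;
   }
   for (double& wi : w) wi /= norm;   // normalise: probabilities sum to 1
   return w;
}
```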

Page 24: PID combined over ITS, TPC and TOF (Kaons)

(Figure: efficiency and contamination for the ITS, TPC and TOF detectors stand-alone and for the combined selection ITS & TPC & TOF, central PbPb HIJING events.)

The efficiency of the combined PID is higher (or equal) and the contamination is lower (or equal) than the ones given by any of the detectors stand-alone.

Page 25: HLT Monitoring

(Diagram of the monitoring chain: an AliRoot simulation produces digits and raw data; the data flow through the LDCs to the GDC, where the event builder and alimdc write ROOT files to CASTOR and register them in AliEn; online monitoring produces ESDs and histograms.)

Page 26: Condition DataBases

• Information sources stored in heterogeneous databases
• A program periodically polls all sources and creates ROOT files with the condition information
• These files are published on the Grid
• Distribution of the files is done by the Grid DMS
• Files are identified via DMS metadata
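A minimal sketch of the polling step, assuming hypothetical FetchFromSource() and PublishOnGrid() helpers; only TFile, TMap and Form() are real ROOT facilities:

```cpp
#include <string>
#include <vector>
#include "TFile.h"
#include "TMap.h"
#include "TString.h"

// Hypothetical helpers: read one heterogeneous source (DCS, DAQ, ...) and
// hand the finished file to the Grid DMS with identifying metadata.
TMap* FetchFromSource(const std::string& source);
void  PublishOnGrid(const char* fileName, int run);

void PollConditions(const std::vector<std::string>& sources, int run)
{
   TFile f(Form("conditions_run%d.root", run), "RECREATE");
   for (const std::string& src : sources) {
      TMap* values = FetchFromSource(src);   // snapshot of one source
      values->Write(src.c_str());            // one keyed object per source
   }
   f.Close();
   PublishOnGrid(f.GetName(), run);          // DMS distributes; metadata
}                                            // identifies the file later
```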

Page 27: External relations and DB connectivity

(Diagram: DAQ, Trigger, DCS, ECS, HLT, the physics data and the DCDB each connect through an API to the AliEn/gLite metadata and file store; calibration procedures produce calibration files consumed by the AliRoot calibration classes. Relations between the DBs are not final, and not all are shown.)

From the URs: source, volume, granularity, update frequency, access pattern, runtime environment and dependencies. Call for URs to come!!

API = Application Program Interface

Page 28: Development of Analysis

• Analysis Object Data designed to be analysis oriented
  Contains only the data needed for analysis
  Designed for efficiency of the analysis
• Analysis à la PAW: ROOT + at most a small library
• Work on the infrastructure done by the ARDA project
• Batch analysis infrastructure: prototype end 2004
• Interactive analysis infrastructure: demonstration end 2004
• Physics working groups just starting here
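In that spirit, a minimal "analysis à la PAW" session is just ROOT on the summary-data tree; the file, tree and leaf names below are illustrative, not a fixed ALICE convention:

```cpp
// Open an event-summary-data file and histogram one quantity, PAW-style.
TFile* f = TFile::Open("AliESDs.root");   // illustrative file name
TTree* esd = (TTree*)f->Get("esdTree");   // illustrative tree name
esd->Draw("fTracks.fP[4]");               // hypothetical leaf
```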

Page 29: Grid-enabled PROOF setup

(Diagram: the PROOF client authenticates via Grid/ROOT authentication against a Grid access control service, retrieves the list of logical files (LFN + MSN) from the Grid file/metadata catalogue, and sends a booking request with logical file names through the TGrid UI/Queue UI to a slave registration/booking DB. PROOFSteer performs the master setup; rootd/proofd start the PROOF master and the PROOF slave servers at sites A, B and <X> on LCG, with slave ports mirrored on the master host and optional site gateways behind forward proxies. New elements with respect to a "standard" PROOF session: the Grid service interfaces and a Grid-middleware-independent PROOF setup with only outgoing connectivity.)
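From the user's side such a session looks like ordinary PROOF; the sketch below uses the TProof API of later ROOT releases, with illustrative host, dataset and selector names:

```cpp
#include "TProof.h"
#include "TDSet.h"

void RunAnalysis()
{
   TProof* p = TProof::Open("proof-master.example.org");  // master setup
   TDSet set("TTree", "esdTree");                         // dataset of trees
   set.Add("root://site-a.example.org//data/AliESDs-001.root");
   p->Process(&set, "MySelector.C+");   // TSelector-based analysis on slaves
}
```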

Page 30: (figure only)

Page 31: The ALICE Grid (AliEn)

Timeline 2001-2005: Start → Functionality + Simulation (first production: distributed simulation) → Interoperability + Reconstruction (Physics Performance Report: mixing & reconstruction) → Performance, Scalability, Standards + Analysis (10% Data Challenge: analysis) → gLite.

There are millions of lines of code in open-source software dealing with Grid issues. Why not use them to build the minimal Grid that does the job?
  Fast development of a prototype, possibility to restart from scratch, etc.
  Hundreds of users and developers
  Immediate adoption of emerging standards
AliEn by ALICE (5% of code developed, 95% imported)

Page 32: Why Physics Data Challenges?

• We need simulated events to exercise physics reconstruction and analysis

• We need to exercise the code and the computing infrastructure to define the parameters of the computing model

• We need a serious evaluation of the Grid infrastructure

• We need to exercise the collaboration's readiness to take and analyse data

Page 33: PDC04 schema

(Diagram: CERN with Tier-1 and Tier-2 centres. Steps: production of RAW; shipment of RAW to CERN; reconstruction of RAW in all T1's; analysis. AliEn provides job control and data transfer throughout.)

DO IT ALL ON THE GRID!!!!

Page 34: Merging

(Figure: a signal-free underlying event and a signal are merged into a mixed-signal event.)

Page 35: Phase II (started 1/07) – statistics

• In addition to phase I
  Distributed production of signal events and merging with the phase I events
  Stress of the network and of the file-transfer tools
  Storage at remote SEs and stability (crucial for phase III)
• Conditions, jobs, …
  110 conditions in total
  1 million jobs
  10 TB of produced data
  200 TB transferred from CERN
  500 MSI2k hours of CPU
• End by 30 September

Columns: Signal | Signals per underlying event | Underlying events | MB per signal event | kSI2Ks per signal event | TB | MSI2K×h

Jets cent1 (cycles: 2)
  Jets PT 20-24 GeV/c     5   1666   5.2    940   0.09   4.35
  Jets PT 24-29 GeV/c     5   1666   5.2    946   0.09   4.38
  Jets PT 29-35 GeV/c     5   1666   5.3    952   0.09   4.41
  Jets PT 35-42 GeV/c     5   1666   5.3    958   0.09   4.43
  Jets PT 42-50 GeV/c     5   1666   5.4    964   0.09   4.46
  Jets PT 50-60 GeV/c     5   1666   5.4    970   0.09   4.49
  Jets PT 60-72 GeV/c     5   1666   5.5    976   0.09   4.52
  Jets PT 72-86 GeV/c     5   1666   5.5    982   0.09   4.54
  Jets PT 86-104 GeV/c    5   1666   5.6    988   0.09   4.57
  Jets PT 104-125 GeV/c   5   1666   5.6    994   0.09   4.6
  Jets PT 125-150 GeV/c   5   1666   5.7   1000   0.09   4.63
  Jets PT 150-180 GeV/c   5   1666   5.7   1006   0.09   4.66
  Total signal: 199920 events, 1.08 TB, 54.04 MSI2K×h
Jets with quenching cent1 (cycles: 2)
  Total signal: 199920 events, 1.08 TB, 54.04 MSI2K×h
Jets per1 (cycles: 2)
  Jets PT 20-24 GeV/c     5   1666   2.6    940   0.04   2.18
  Jets PT 24-29 GeV/c     5   1666   2.6    946   0.04   2.19
  Jets PT 29-35 GeV/c     5   1666   2.65   952   0.04   2.2
  Jets PT 35-42 GeV/c     5   1666   2.65   958   0.04   2.22
  Jets PT 42-50 GeV/c     5   1666   2.7    964   0.04   2.23
  Jets PT 50-60 GeV/c     5   1666   2.7    970   0.04   2.24
  Jets PT 60-72 GeV/c     5   1666   2.75   976   0.05   2.26
  Jets PT 72-86 GeV/c     5   1666   2.75   982   0.05   2.27
  Jets PT 86-104 GeV/c    5   1666   2.8    988   0.05   2.29
  Jets PT 104-125 GeV/c   5   1666   2.8    994   0.05   2.3
  Jets PT 125-150 GeV/c   5   1666   2.85  1000   0.05   2.31
  Jets PT 150-180 GeV/c   5   1666   2.85  1006   0.05   2.33
  Total signal: 199920 events, 0.54 TB, 27.02 MSI2K×h
Jets with quenching per1 (cycles: 2)
  Total signal: 199920 events, 0.54 TB, 27.02 MSI2K×h
PHOS cent1 (cycles: 1)
  Jet-Jet PHOS       1   20000   8.6   3130   0.17   17.39
  Gamma-jet PHOS     1   20000   8.6   3130   0.17   17.39
  Total signal: 40000 events, 0.34 TB, 34.78 MSI2K×h
D0 cent1 (cycles: 1)
  D0                 5   20000   2.3    820   0.23   22.77
  Total signal: 100000 events, 0.23 TB, 22.77 MSI2K×h
Charm & Beauty cent1 (cycles: 1)
  Charm (semi-e) + J/psi    5   20000   2.3   820   0.23   22.78
  Beauty (semi-e) + Y       5   20000   2.3   820   0.23   22.78
  Total signal: 200000 events, 0.46 TB, 45.56 MSI2K×h
MUON cent1 (cycles: 1)
  Muon cocktail cent1     100   20000   0.04   67   0.08   37.22
  Muon cocktail HighPT    100   20000   0.04   67   0.08   37.22
  Muon cocktail single    100   20000   0.04   67   0.08   37.22
  Total signal: 6000000 events, 0.24 TB, 111.66 MSI2K×h
MUON per1 (cycles: 1)
  Muon cocktail per1      100   20000   0.04   67   0.08   37.22
  Muon cocktail HighPT    100   20000   0.04   67   0.08   37.22
  Muon cocktail single    100   20000   0.04   67   0.08   37.22
  Total signal: 6000000 events, 0.24 TB, 111.66 MSI2K×h
All signals: 4.75 TB, 488.55 MSI2K×h
MUON per4 (cycles: 1): Muon cocktail per4 (5 per event, 20000 underlying), Muon cocktail single (100 per event, 20000 underlying)
proton-proton, no merging (cycles: 1): 100000 events
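As a cross-check of the first row: 5 signals per underlying event × 1666 underlying events × 2 cycles = 16,660 signal events per pT bin, and 12 bins give the 199,920 total; at 5.2 MB per signal event one bin amounts to about 0.087 TB, consistent with the 0.09 TB quoted.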

Page 36: Structure of event production in phase II

(Diagram: central servers host the master job submission, the Job Optimizer (splitting into N sub-jobs), the RB, the file catalogue, process monitoring and control, and the SE. Sub-jobs are processed on CEs both natively and, through the AliEn-LCG interface and the LCG RB, on LCG. Underlying-event input files come from CERN CASTOR; the output files of each job are zipped into an archive, stored with the primary copy on a local SE and a backup copy in CERN CASTOR, and registered in the AliEn FC; on LCG the registration uses edg(lcg) copy&register with LCG SE: LCG LFN = AliEn PFN.)
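From the user side the whole structure is driven by a master job; a sketch of such a submission through ROOT's Grid abstraction follows, with a deliberately stripped-down, illustrative JDL (real PDC04 JDLs carried many more fields):

```cpp
#include "TGrid.h"

void SubmitProduction()
{
   TGrid::Connect("alien://");   // attach to the AliEn services
   const char* jdl =
      "Executable    = \"aliroot\";\n"
      "Split         = \"production:1-100\";\n"          // Optimizer -> N sub-jobs
      "OutputArchive = {\"root_archive.zip:*.root\"};\n"; // zipped outputs
   gGrid->Submit(jdl);           // brokered to CEs, natively or via the LCG RB
}
```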

Page 37: (screenshots, end to end: UI, application, middleware, shell)

Page 38: Structure of analysis in phase 3

(Diagram: the user query goes to the metadata catalogue, which returns the matching LFNs (lfn 1 … lfn 8); the job splitter groups them into sub-jobs. The central servers (master job submission, Job Optimizer, RB, file catalogue, process monitoring and control, SE) dispatch the sub-jobs to CEs, natively and through the AliEn-LCG interface with the LCG RB. Each sub-job queries the file catalogue for its input files, with PFN = (LCG SE:) LCG LFN or PFN = AliEn PFN, and reads the primary copies from the local SEs.)
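Seen from a user, the first step of that chain is a catalogue query returning LFNs; a sketch with ROOT's TGrid abstraction (catalogue path and pattern are illustrative):

```cpp
#include <cstdio>
#include "TGrid.h"
#include "TGridResult.h"

void QueryCatalogue()
{
   TGrid::Connect("alien://");   // attach to the file/metadata catalogue
   TGridResult* r = gGrid->Query("/alice/sim/2004", "*.root");  // -> LFNs
   for (Int_t i = 0; i < r->GetEntries(); ++i)   // resolve each to a PFN/TURL
      printf("lfn %s -> turl %s\n", r->GetKey(i, "lfn"), r->GetKey(i, "turl"));
}
```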

Page 39: Phase III - Execution Strategy

• Very labour intensive
  The status of the LCG DMS is not brilliant
• Does not "leverage" the (excellent!) work done in ARDA
  So… why not do it with gLite?
• Advantages
  Uniform configuration: gLite on EGEE/LCG-managed sites & on ALICE-managed sites
  If we have to go that way, the sooner the better
  AliEn is anyway "frozen", as all the developers are working on gLite/ARDA
• Disadvantages
  It may introduce a delay with respect to the use of the present, available, AliEn/LCG configuration
  But we believe it will pay off in the medium term
• The PEB accepted to provide us with limited support for this exercise
  Provided it does not hinder the EGEE release plans

Page 40: New phase III - Layout

(Diagram: the user query goes to a server hosting the catalogue of LFNs (lfn 1 … lfn 8); the jobs run on gLite CE/SEs at ALICE-managed (gLite/A), LCG-managed (gLite/L) and EGEE-managed (gLite/E) sites.)

Page 41: Offline organisation

Core Computing and Software
• Production Environment Coord.
  Production environment (simulation, reconstruction & analysis)
  Distributed computing environment
  Database organisation
• Framework & Infrastructure Coord.
  Framework development (simulation, reconstruction & analysis)
  Persistency technology
  Computing data challenges
  Industrial joint projects
  Tech. tracking
  Documentation
• Simulation Coord.
  Detector simulation, physics simulation, physics validation, GEANT 4 integration, FLUKA integration, radiation studies, geometrical modeller
• Reconstruction & Physics Soft Coord.
  Tracking, detector reconstruction, global reconstruction, analysis tools, analysis algorithms, physics data challenges, calibration & alignment algorithms
• Offline Coord. (Deputy PL)
  Resource planning, relation with funding agencies, relations with C-RRB

(Chart context: the Offline Board (chair: Comp Coord) links the coordinations to the Detector Projects and Software Projects, the Management Board, the International Computing Board, DAQ, HLT, the Regional Tiers, LCG (SC2, PEB, GDB, POB), and the EU and US Grid coordinations.)

Page 42: Core Computing Staffing

(Bar chart: "Core Computing FTEs -- renewal CERN+M&O", 2004-2010, scale 0-25 FTEs. Requirement categories: Req External, Req Fellows/students, Req Proj Ass, Req STAF-LD, Req STAF. Supply/funding categories: M&O, External, Funding Agencies, Fellows/students, Proj Ass, STAF-LD, STAF.)

Page 43: Computing Project

(Diagram: the Computing Project comprises Core Computing (Core Software, Infrastructure & Services, Offline Coordination, Central Support) together with Subdetector Software and Physics Analysis Software. Funding: M&O A, the computing project, the detector projects and the physics WGs.)

Page 44: Offline activities in the other projects

(Diagram with approximate headcounts: CERN Core Offline 10~7, Extended Core Offline 20~15, offline activities in the other projects 100~500?.)

Page 45: Offline Board

Chair: F.Carminati
• Cosmic Ray Telescope (CRT): A.Fernández
• Electromagnetic Calorimeter (EMCAL): G.Odiniek, M.Horner
• Forward Multiplicity Detector (FMD): A.Maevskaya
• Inner Tracking System (ITS): R.Barbera, M.Masera
• Muon Spectrometer (MUON): A.DeFalco, G.Martinez
• Photon Spectrometer (PHOS): Y.Schutz
• Photon Multiplicity Detector (PMD): B.Nandi
• High Momentum Particle ID (HMPID): D.DiBari
• T0 Detector (START): A.Maevskaya
• Time of Flight (TOF): A.DeCaro, G.Valenti
• Time Projection Chamber (TPC): M.Kowalski, M.Ivanov
• Transition Radiation Detector (TRD): C.Blume, A.Sandoval
• V0 detector (VZERO): B.Cheynis
• Zero Degree Calorimeter (ZDC): E.Scomparin
• Detector Construction DB: W.Peryt
• ROOT: R.Brun, F.Rademakers
• Core Offline: P.Buncic, A.Morsch, F.Rademakers, K.Safarik
• Web & VMC: CEADEN
• EU Grid coordination: P.Cerello
• US Grid coordination: L.Pinsky

Page 46: What do we have to do by end 2005

• Alignment & calibration
• Change of MC
• Integration with HLT
• Control of AliRoot evolution
• Development of analysis environment
• Development of visualisation
• Revision of detector geometry and simulation
• Migration to new Grid software
• Physics and computing challenge 2005
• Project structure & staffing
• Organisation of computing resources
• Writing of the computing TDR

Page 47: ALICE Physics Data Challenges

Period (milestone) | Fraction of the final capacity (%) | Physics objective
06/01-12/01 | 1% | pp studies; reconstruction of TPC and ITS
06/02-12/02 | 5% | First test of the complete chain from simulation to reconstruction for the PPR; simple analysis tools; digits in ROOT format
01/04-06/04 | 10% | Complete chain used for trigger studies; prototype of the analysis tools; comparison with parameterised MonteCarlo; simulated raw data
05/05-07/05 (NEW) | TBD | Test of condition infrastructure and FLUKA; test of gLite and CASTOR; speed test of distributing data from CERN
01/06-06/06 | 20% | Test of the final system for reconstruction and analysis

Page 48: ALICE Offline Timeline

(Timeline 2004-2006: CDC 04 and ALICE PDC04, followed by analysis of PDC04 and design of new components; development of new components and CDC 05 through 2005; PDC06 preparation, pre-challenge '06 and final development of AliRoot leading to PDC06 and first data-taking preparation in 2006. Milestones: Computing TDR and "PDC06 AliRoot ready". "nous sommes ici" [we are here]: end of 2004.)