
V.Gavrilov1, I.Golutvin2, V.Ilyin3, O.Kodolova3, V.Korenkov2, E.Tikhonenko2, S.Shmatov2, V.Zhiltsov2

1 - Institute of Theoretical and Experimental Physics, Moscow, Russia
2 - Joint Institute for Nuclear Research, Dubna, Russia
3 - Skobeltsyn Institute of Nuclear Physics, Moscow, Russia

NEC’2009 Varna, Bulgaria, September 07-14, 2009

RDMS CMS computing activities to satisfy LHC data processing and analysis scenario

Composition of the RDMS CMS Collaboration

RDMS (Russia and Dubna Member States) CMS Collaboration, founded in Dubna in September 1994.

Russia:
- Institute for High Energy Physics, Protvino
- Institute for Theoretical and Experimental Physics, Moscow
- Institute for Nuclear Research, RAS, Moscow
- Moscow State University, Institute for Nuclear Physics, Moscow
- Petersburg Nuclear Physics Institute, RAS, St. Petersburg
- P.N. Lebedev Physical Institute, Moscow

Associated members:
- High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
- Russian Federal Nuclear Centre - Scientific Research Institute for Technical Physics, Snezhinsk
- Myasishchev Design Bureau, Zhukovsky
- Electron, National Research Institute, St. Petersburg

Georgia:
- High Energy Physics Institute, Tbilisi State University, Tbilisi
- Institute of Physics, Academy of Science, Tbilisi

Ukraine:
- Institute of Single Crystals of National Academy of Science, Kharkov
- National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov
- Kharkov State University, Kharkov

Uzbekistan:
- Institute for Nuclear Physics, UAS, Tashkent

Dubna Member States:

Armenia:
- Yerevan Physics Institute, Yerevan

Belarus:
- Byelorussian State University, Minsk
- Research Institute for Nuclear Problems, Minsk
- National Centre for Particle and High Energy Physics, Minsk
- Research Institute for Applied Physical Problems, Minsk

Bulgaria:
- Institute for Nuclear Research and Nuclear Energy, BAS, Sofia
- University of Sofia, Sofia

JINR:
- Joint Institute for Nuclear Research, Dubna

RDMS Participation in CMS Construction

[Figure: CMS detector schematic marking RDMS full-responsibility subsystems (ME1/1, HE) and RDMS-participation subsystems (SE, ME, EE, FS, HF)]

Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:

- Endcap Hadron Calorimeter, HE
- 1st Forward Muon Station, ME1/1

Participation in:

- Forward Hadron Calorimeter, HF
- Endcap ECAL, EE
- Endcap Preshower, SE
- Endcap Muon System, ME
- Forward Shielding, FS


RDMS Participation in CMS Project

Design, production and installation

Calibration and alignment

Reconstruction algorithms

Data processing and analysis

Monte Carlo simulation

RDMS activities in CMS

[Figure: simulated H (150 GeV) → Z0Z0 → 4-lepton event display]

LHC Computing Model

[Figure: worldwide tier structure around CERN: Tier-1 centres (RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, TRIUMF), Tier-2 sites (PNPI, NIKHEF, Minsk, Kharkov, Rome, IHEP, CSCS, Legnaro, ITEP, JINR, IC, MSU, Prague, Budapest, Cambridge, Santiago, Weizmann), and small centres, desktops and portables below them]

Tier-0 (CERN):
- Filter raw data
- Reconstruct summary data (ESD)
- Record raw data and ESD
- Distribute raw data and ESD to Tier-1s

Tier-1:
- Permanent storage and management of raw data, ESD, calibration data, metadata, analysis data and databases
- Grid-enabled data service
- Data-heavy analysis
- Re-processing: raw → ESD
- ESD → AOD selection
- National and regional support

Tier-2:
- Simulation, digitization and calibration of simulated data
- End-user analysis

Tier 0 – Tier 1 – Tier 2


Tier-0 (CERN):
• Data recording
• Initial data reconstruction
• Data distribution

Tier-1 (11 centres):
• Permanent storage
• Re-processing
• Analysis

Tier-2 (>200 centres):
• Simulation
• End-user analysis
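As a rough illustration of this division of labour, here is a minimal Python sketch of the tiered flow; the functions and data shapes are invented for the example and do not correspond to any real CMS software:

def tier0(raw):
    """Tier-0 (CERN): record RAW, do initial reconstruction (ESD),
    distribute RAW + ESD to the Tier-1s."""
    esd = [("ESD", payload) for (_, payload) in raw]
    return raw + esd  # shipped to a Tier-1 for custodial storage

def tier1(raw_and_esd):
    """Tier-1: permanent storage, re-processing, ESD -> AOD selection."""
    return [("AOD", payload) for (kind, payload) in raw_and_esd if kind == "ESD"]

def tier2(aod, n_mc=3):
    """Tier-2: Monte Carlo simulation and end-user analysis on AOD."""
    mc = [("MC", i) for i in range(n_mc)]
    return {"analysis_input": aod, "simulated": mc}

raw = [("RAW", i) for i in range(3)]
print(tier2(tier1(tier0(raw))))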


RDMS CMS computing structure

[Figure: map of RDIG sites]


RDMS CMS T2 association

Analysis groups:
- Exotica: T2_RU_JINR (now); T2_RU_INR (future interest)
- HI: T2_RU_SINP
- QCD: T2_RU_PNPI
- Top: T2_RU_SINP
- FWD: T2_RU_IHEP

Object/performance groups:
- Muon: T2_RU_JINR
- e-gamma-ECAL: T2_RU_INR
- JetMET-HCAL: T2_RU_ITEP
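In code, this association is just a mapping from physics group to hosting sites; a minimal Python sketch of the list above (the dictionary is only an illustrative data structure, not a CMS tool):

# Hosting of CMS physics and object/performance groups by RDMS T2 sites.
T2_HOSTING = {
    # analysis groups
    "Exotica":      ["T2_RU_JINR", "T2_RU_INR"],
    "HI":           ["T2_RU_SINP"],
    "QCD":          ["T2_RU_PNPI"],
    "Top":          ["T2_RU_SINP"],
    "FWD":          ["T2_RU_IHEP"],
    # object/performance groups
    "Muon":         ["T2_RU_JINR"],
    "e-gamma-ECAL": ["T2_RU_INR"],
    "JetMET-HCAL":  ["T2_RU_ITEP"],
}

# Invert the mapping to see which groups each site hosts.
by_site = {}
for group, sites in T2_HOSTING.items():
    for site in sites:
        by_site.setdefault(site, []).append(group)
print(by_site["T2_RU_SINP"])  # ['HI', 'Top']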


CMS T2 requirements

Basic requirements for a CMS VO T2 site hosting a physics group:
a) information on contact persons responsible for site operation
b) site visibility (BDII)
c) availability of the current CMSSW version
d) regular file transfer test "OK"
e) certified links with CMS T1s: 2 up and 4 down
f) CMS Job Robot test "OK"
g) disk space of ~150-200 TB:
   - central space (~30 TB)
   - analysis space (~60-90 TB)
   - MC space (~20 TB)
   - local space (~30-60 TB)
   - local CMS user space (~1 TB per user)
h) CPU resources of ~3 kSI2K per 1 TB of disk space; 2 GB of memory per job (the arithmetic is sketched below)
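Requirements (g) and (h) fix the arithmetic a site has to satisfy; a quick Python check of the numbers quoted above (the helper function is hypothetical):

# Requirement (h): ~3 kSI2K of CPU per 1 TB of disk.
def required_cpu_ksi2k(disk_tb):
    return 3.0 * disk_tb

# Requirement (g): nominal disk breakdown, excluding the per-user space.
breakdown_tb = {
    "central":  (30, 30),
    "analysis": (60, 90),
    "MC":       (20, 20),
    "local":    (30, 60),
}
low = sum(lo for lo, _ in breakdown_tb.values())
high = sum(hi for _, hi in breakdown_tb.values())
print(low, high)                   # 140 200 -> the quoted ~150-200 TB range
print(required_cpu_ksi2k(150.0))   # 450.0 kSI2K for a 150 TB site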


T2 readiness requirements

• Site visibility and CMS VO support

• Availability of disk and CPU resources

• Daily SAM availability > 80%

• Daily JR-MM efficiency > 80%

• Commissioned links TO Tier-1 sites ≥ 2

• Commissioned links FROM Tier-1 sites ≥ 4
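Taken together, these criteria amount to a simple per-site, per-day predicate; a minimal Python sketch (the function and the example numbers are hypothetical, not a real CMS monitoring API):

def t2_ready(sam_availability, jr_mm_efficiency, links_to_t1, links_from_t1):
    """True if a Tier-2 meets the daily readiness criteria above."""
    return (sam_availability > 0.80
            and jr_mm_efficiency > 0.80
            and links_to_t1 >= 2
            and links_from_t1 >= 4)

print(t2_ready(0.95, 0.88, links_to_t1=2, links_from_t1=5))  # True
print(t2_ready(0.95, 0.88, links_to_t1=1, links_from_t1=3))  # False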


CMS T1 - RU T2 link status

RU T2     Up links   Down links
IHEP          2           3
INR           1           3
ITEP          2           5
JINR          2           5
PNPI          0           0
RRC KI        2           5
SINP          2           8
KIPT          2           7
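Applying just the link-commissioning criterion from the readiness list (≥ 2 up, ≥ 4 down) to this table gives a quick picture of who passes (a Python check over the numbers above):

# (up links, down links) per RU T2 site, copied from the table above.
links = {
    "IHEP": (2, 3), "INR": (1, 3), "ITEP": (2, 5), "JINR": (2, 5),
    "PNPI": (0, 0), "RRC KI": (2, 5), "SINP": (2, 8), "KIPT": (2, 7),
}
passing = [site for site, (up, down) in links.items() if up >= 2 and down >= 4]
print(passing)  # ['ITEP', 'JINR', 'RRC KI', 'SINP', 'KIPT']

Note that RRC KI passes on links alone; per the summary slide it is held back by operational stability rather than link certification.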


Available resources

RU T2     Disk (TB)   Used (TB)   Job slots
IHEP          8           7          36
INR          46           8          75
ITEP         80          26          99
JINR        197          40         270
PNPI         10           4          40
RRC KI      161          76         174
SINP        124          59         103
KIPT         50          10          68


RDMS CMS T2 readiness

- T2_RU_ITEP: Ready
- T2_RU_SINP: Ready
- T2_RU_JINR: Ready
- T2_UA_KIPT: Ready


CMS computing in 2009

• Computing scale test (together with ATLAS): May – June 2009
• Cosmic run data processing and analysis: July – September 2009
• Large MC sample production: starting in July 2009
• LHC data processing and analysis: starting in October 2009


STEP 09 results

Test of data transfer from the CMS T1s to the T2s: RU_SINP, RU_JINR and RU_ITEP participated; high transfer rates and good transfer quality were achieved.

[Plot: transfer rate from CMS T1-CH-CERN to the RU T2s; SINP peaked at 101 MB/s]


Request for RDMS CMS T2 upgrade

CMS request to upgrade by January 2010:
- Total disk space: up to 1300 TB
- Total CPU: up to 4500 kSI2K (~1800 job slots)

First-priority tasks (the per-site gaps are sketched below):
- Complete T1<->T2 link certification for INR, IHEP and PNPI
- Improve stability of operation ("availability" & "readiness")
- Full test of MC production and analysis jobs running in parallel
- Increase disk space at each T2 up to 150 TB
- Increase the number of CMS job slots at each T2 up to 200
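Against the per-site targets of 150 TB of disk and 200 job slots, the gaps follow directly from the "Available resources" table; a minimal Python sketch (numbers copied from that table):

# (disk TB, job slots) per site, from the "Available resources" slide.
resources = {
    "IHEP": (8, 36), "INR": (46, 75), "ITEP": (80, 99), "JINR": (197, 270),
    "PNPI": (10, 40), "RRC KI": (161, 174), "SINP": (124, 103), "KIPT": (50, 68),
}
TARGET_DISK_TB, TARGET_SLOTS = 150, 200
for site, (disk, slots) in resources.items():
    need_disk = max(0, TARGET_DISK_TB - disk)
    need_slots = max(0, TARGET_SLOTS - slots)
    if need_disk or need_slots:
        print(f"{site}: +{need_disk} TB disk, +{need_slots} job slots")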


Summary

- ITEP, JINR, SINP and UA_KIPT are in a stable state.
- RRC_KI: all required software is installed and the links are certified, but the site is not yet in a stable state.
- INR: not all required links are certified yet; to be accomplished within a month or sooner.
- PNPI: a 1 Gb/s external channel is now installed; link certification is in progress.
- IHEP: a 1 Gb/s external channel is now installed; link certification is in progress.
- ITEP, JINR and SINP host group space for Muon, JetMET/HCAL, HI and Exotica, so the main certification effort was applied to links to/from these institutes.