The Worldwide LHC Computing Grid: Processing the Data from the World’s Largest Scientific Machine --- Jamie Shiers, CERN, Geneva, Switzerland


Page 1: The Worldwide LHC Computing Grid

The Worldwide LHC Computing Grid

Processing the Data from the World’s Largest Scientific Machine

Jamie Shiers, CERN, Geneva, Switzerland

Page 2: The Worldwide LHC Computing Grid


Abstract

• The world's largest scientific machine will enter production about one year from the time of this conference

• In order to exploit its scientific potential, computational resources far beyond those needed for previous accelerators are required

• To meet these requirements, a distributed solution based on Grid technologies has been developed

• This talk describes the overall requirements that come from the Computing Models of the experiments, the state of deployment of the production services, the on-going validation of these services together with the offline infrastructure of the experiments, and finally the steps that must still be taken in the remaining months before the deluge of data arrives.

Page 3: The Worldwide LHC Computing Grid


Overview

• Brief Introduction to CERN & LHC

• Data Processing requirements

• The Worldwide LHC Computing Grid

• Status and Outlook

Page 4: The Worldwide LHC Computing Grid

LHC Overview

The Large Hadron Collider: a proton-proton collider using an existing tunnel 27 km in circumference, ~100 m underground.

It lies beneath the French/Swiss border near Geneva.

Page 5: The Worldwide LHC Computing Grid


CERN – European Organization for Nuclear Research

Page 6: The Worldwide LHC Computing Grid


The LHC Machine

Page 7: The Worldwide LHC Computing Grid


Page 8: The Worldwide LHC Computing Grid


CMS

Data Rates:
• 1 PB/s from detector
• 100 MB/s – 1.5 GB/s to ‘disk’
• 5-10 PB growth / year
• ~3 GB/s per PB of data

Data Processing:
• ~100,000 of today’s fastest PCs

Trigger / data acquisition chain:
• Detector output: 40 MHz (1000 TB/sec)
• After Level 1: 75 KHz (75 GB/sec)
• After Level 2: 5 KHz (5 GB/sec)
• After Level 3: 100 Hz (100 MB/sec) -> Data Recording & Offline Analysis
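To make the reduction chain concrete, here is a minimal sketch, not from the talk, deriving the rejection factor at each trigger level and the resulting annual RAW volume; the ~1 MB event size and ~10^7 seconds of data taking per year are assumptions (common rules of thumb), not figures quoted on this slide.

```python
# Minimal sketch, not from the talk: rejection factor at each trigger level and the
# resulting annual RAW volume. Event size and live seconds per year are assumptions.

TRIGGER_CHAIN = [
    ("detector output", 40e6),   # 40 MHz
    ("after Level 1",   75e3),   # 75 kHz
    ("after Level 2",   5e3),    # 5 kHz
    ("after Level 3",   100.0),  # 100 Hz written to storage
]

EVENT_SIZE_MB = 1.0        # assumed ~1 MB/event, consistent with 100 Hz -> ~100 MB/s
SECONDS_PER_YEAR = 1e7     # assumed ~10^7 s of data taking per year

for (prev_name, prev_rate), (name, rate) in zip(TRIGGER_CHAIN, TRIGGER_CHAIN[1:]):
    print(f"{prev_name} -> {name}: rejection factor ~{prev_rate / rate:.0f}")

rate_to_storage_mb_s = TRIGGER_CHAIN[-1][1] * EVENT_SIZE_MB
raw_pb_per_year = rate_to_storage_mb_s * SECONDS_PER_YEAR / 1e9
print(f"rate to storage: ~{rate_to_storage_mb_s:.0f} MB/s")
print(f"RAW data per year: ~{raw_pb_per_year:.0f} PB")
```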

Page 9: The Worldwide LHC Computing Grid


Page 10: The Worldwide LHC Computing Grid

[Figure: Data Handling and Computation for Physics Analysis ([email protected]). The detector produces raw data (RAW), which passes through the event filter (selection & reconstruction) and reconstruction to produce event summary data (ESD); event simulation and event reprocessing feed back into this chain. Batch physics analysis extracts analysis objects by physics topic (AOD), which are then used for interactive physics analysis.]

Page 11: The Worldwide LHC Computing Grid


Data formats and approximate annual volumes per experiment:
• RAW: 1 PB/yr (1 PB/s prior to reduction!)
• ESD: 100 TB/yr
• AOD: 10 TB/yr
• TAG: 1 TB/yr

[Figure: the larger formats (RAW, ESD) are accessed sequentially by few users, mainly at Tier0/Tier1; the smaller formats (AOD, TAG) are accessed more randomly and by many more users.]
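The point of this hierarchy is that most analysis touches only the smaller formats. A minimal sketch, assuming the annual volumes above and an illustrative aggregate read rate of 1 GB/s (the rate is an assumption, not a figure from the talk), of how long a single full pass over each format would take:

```python
# Minimal sketch: time for one full sequential pass over each data format, assuming
# the per-experiment annual volumes quoted above and an assumed 1 GB/s read rate.

ANNUAL_VOLUME_TB = {"RAW": 1000.0, "ESD": 100.0, "AOD": 10.0, "TAG": 1.0}
SCAN_RATE_GB_S = 1.0   # assumed aggregate read rate for a single analysis pass

for fmt, volume_tb in ANNUAL_VOLUME_TB.items():
    seconds = volume_tb * 1000.0 / SCAN_RATE_GB_S   # TB -> GB, then divide by GB/s
    print(f"{fmt:>4}: {volume_tb:7.1f} TB/yr -> one full pass takes ~{seconds / 3600.0:6.1f} h")
```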

Page 12: The Worldwide LHC Computing Grid

Physics @ LHC

Concluding talk (≠ summary), Kraków, July 2006

John Ellis, TH Division, PH Department, CERN

• Principal Goals:

• Explore a new energy/distance scale

• Look for ‘the’ Higgs boson

• Look for supersymmetry/extra dimensions, …

• Find something the theorists did not expect

Page 13: The Worldwide LHC Computing Grid

All charged tracks with pt > 2 GeV

Reconstructed tracks with pt > 25 GeV

(+30 minimum bias events)

selectivity: 1 in 10^13

- 1 person in a thousand world populations
- A needle in 20 million haystacks

LHC: Higgs Decay into 4 muons

Page 14: The Worldwide LHC Computing Grid


ATLAS Getting Ready for LHC

Physics example: H → ZZ(*) → 4ℓ (ℓ = e, μ), the “gold-plated” channel for Higgs discovery at LHC.

[Figure: simulation of a H → eeμμ event in ATLAS, and the signal expected in ATLAS after ‘early’ LHC operation.]

Page 15: The Worldwide LHC Computing Grid

(W)LCG Overview

The LHC Computing Grid: a worldwide Grid built on existing Grid infrastructures, including the Open Science Grid (OSG), EGEE and NorduGrid.

Page 16: The Worldwide LHC Computing Grid

Grid Computing

Today there are many definitions of Grid computing:

The definitive definition of a Grid is provided by Ian Foster [1] in his article "What is the Grid? A Three Point Checklist" [2].

The three points of this checklist are:

1. Computing resources are not administered centrally;

2. Open standards are used;

3. Non-trivial quality of service is achieved.

… Some sort of Distributed System at least… that crosses Management / Enterprise domains

Page 17: The Worldwide LHC Computing Grid


LCG depends on 2 major science grid infrastructures …

The LCG service runs & relies on the grid infrastructures provided by:

• EGEE - Enabling Grids for E-SciencE
• OSG - US Open Science Grid

Page 18: The Worldwide LHC Computing Grid


EGEE – Close-up

• Many EGEE regions are Grids in their own right

• In some cases these too are built out of smaller, regional Grids

• These typically have other, local users, in addition to those of the ‘higher-level’ Grid(s)

• Similarly, OSG also supports communities other than those of the LCG…

Page 19: The Worldwide LHC Computing Grid


WLCG

• WLCG:
– A federation of fractal Grids…
– A (small) step towards “the” Grid (rather than “a” Grid)

[Diagram: WLCG spanning EGEE and OSG, each of which is itself composed of smaller Grids.]

Page 20: The Worldwide LHC Computing Grid


Why a Grid Solution?

• The LCG Technical Design Report lists:

1. Significant costs of [ providing ] maintaining and upgrading the necessary resources … more easily handled in a distributed environment, where individual institutes and … organisations can fund local resources … whilst contributing to the global goal

2. … no single points of failure. Multiple copies of the data, automatic reassigning of tasks to resources… facilitates access to data for all scientists independent of location. … round the clock monitoring and support.

Page 21: The Worldwide LHC Computing Grid


WLCG Collaboration

The Collaboration:
• ~130 computing centres
• 12 large centres (Tier-0, Tier-1)
• 40-50 federations of smaller “Tier-2” centres
• 29 countries

Memorandum of Understanding:
• Agreed in October 2005, now being signed

Purpose:
• Focuses on the needs of the 4 LHC experiments
• Commits resources: each October for the coming year, with a 5-year forward look
• Agrees on standards and procedures

Page 22: The Worldwide LHC Computing Grid

LCG Service Model

Tier0 – the accelerator centre (CERN):
• Data acquisition & initial processing
• Long-term data curation
• Distribution of data to Tier1s

Tier1 centres:
• Canada – TRIUMF (Vancouver)
• France – IN2P3 (Lyon)
• Germany – Forschungszentrum Karlsruhe
• Italy – CNAF (Bologna)
• Netherlands – NIKHEF (Amsterdam)
• Nordic countries – distributed Tier-1
• Spain – PIC (Barcelona)
• Taiwan – Academia Sinica (Taipei)
• UK – CLRC (Didcot)
• US – FermiLab (Illinois) and Brookhaven (NY)

Tier1 – “online” to the data acquisition process; high availability:
• Managed Mass Storage – grid-enabled data service
• Data-intensive analysis
• National, regional support
• Continual reprocessing activity

Tier2 – ~100 centres in ~40 countries:
• Simulation
• End-user analysis – batch and interactive

Les Robertson

Page 23: The Worldwide LHC Computing Grid

Summary of Computing Resource Requirements (all experiments, 2008; from LCG TDR, June 2005)

                        CERN   All Tier-1s   All Tier-2s   Total
CPU (MSPECint2000s)       25            56            61     142
Disk (PetaBytes)           7            31            19      57
Tape (PetaBytes)          18            35             -      53

Shares of the 2008 totals:
• CPU:  CERN 18%, all Tier-1s 39%, all Tier-2s 43%
• Disk: CERN 12%, all Tier-1s 55%, all Tier-2s 33%
• Tape: CERN 34%, all Tier-1s 66%
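As a quick cross-check, a minimal sketch (not part of the TDR) verifying that the per-tier figures above add up to the stated totals:

```python
# Minimal sketch: cross-check that the per-tier 2008 requirements quoted from the
# LCG TDR add up to the stated totals.

REQUIREMENTS_2008 = {
    # resource: (CERN, all Tier-1s, all Tier-2s, stated total)
    "CPU (MSPECint2000s)": (25, 56, 61, 142),
    "Disk (PB)":           (7, 31, 19, 57),
    "Tape (PB)":           (18, 35, 0, 53),   # no Tier-2 tape entry in the table
}

for resource, (cern, tier1, tier2, stated_total) in REQUIREMENTS_2008.items():
    computed = cern + tier1 + tier2
    status = "OK" if computed == stated_total else "MISMATCH"
    print(f"{resource:20s}: computed {computed:4d} vs stated {stated_total:4d}  [{status}]")
```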

Networking Requirements:
• GB/s out of CERN (1.6 GB/s nominal + factor 6 safety)
• 100s of MB/s into Tier1s
• 10s of MB/s into / out of Tier2s

Provisioned (backbone at CERN):
• 10 Gbps link to each Tier1 site
• 1 Gbps minimum to Tier2s
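A minimal sketch of how these numbers relate: the nominal 1.6 GB/s export with the quoted factor-6 headroom, expressed in Gbit/s and set against the provisioned 10 Gbit/s per Tier1 link. The count of 11 Tier1 links is taken from the Tier1 list elsewhere in this talk and is an assumption here.

```python
# Minimal sketch, not from the talk: nominal CERN export rate with the quoted safety
# factor, compared with the aggregate capacity of the provisioned Tier1 links.

NOMINAL_EXPORT_GB_S = 1.6       # aggregate Tier0 -> Tier1 rate (gigabytes/s)
SAFETY_FACTOR = 6               # headroom factor quoted on the slide
LINK_CAPACITY_GBIT_S = 10.0     # one dedicated link per Tier1 site
N_TIER1_LINKS = 11              # assumed: Tier1 centres listed later in this talk

required_gbit_s = NOMINAL_EXPORT_GB_S * 8 * SAFETY_FACTOR
provisioned_gbit_s = LINK_CAPACITY_GBIT_S * N_TIER1_LINKS
print(f"required with safety factor: ~{required_gbit_s:.0f} Gbit/s")
print(f"provisioned out of CERN:     ~{provisioned_gbit_s:.0f} Gbit/s aggregate")
```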

Page 24: The Worldwide LHC Computing Grid

Summary of Tier0/1/2 Roles

Tier0: safe keeping of RAW data (first copy); first pass reconstruction, distribution of RAW data and reconstruction output to Tier1; reprocessing of data during LHC down-times;

Tier1: safe keeping of a proportional share of RAW and reconstructed data; large scale reprocessing and safe keeping of corresponding output; distribution of data products to Tier2s and safe keeping of a share of simulated data produced at these Tier2s;

Tier2: Handling analysis requirements and proportional share of simulated event production and reconstruction.

N.B. there are differences in roles by experiment. Essential to test using the complete production chain of each!

Page 25: The Worldwide LHC Computing Grid
Page 26: The Worldwide LHC Computing Grid

Dario Barberis: ATLAS SC4 Plans, WLCG SC4 Workshop - Mumbai, 12 February 2006

ATLAS Computing Model

Tier-0: Copy RAW data to Castor tape for archival

Copy RAW data to Tier-1s for storage and reprocessing

Run first-pass calibration/alignment (within 24 hrs)

Run first-pass reconstruction (within 48 hrs)

Distribute reconstruction output (ESDs, AODs & TAGS) to Tier-1s

Tier-1s: Store and take care of a fraction of RAW data

Run “slow” calibration/alignment procedures

Rerun reconstruction with better calib/align and/or algorithms

Distribute reconstruction output to Tier-2s

Keep current versions of ESDs and AODs on disk for analysis

Tier-2s: Run simulation

Keep current versions of AODs on disk for analysis

Page 27: The Worldwide LHC Computing Grid


ATLAS Tier-0 Data Flow

[Figure: data flow between the Event Filter (EF), the Castor buffer, the Tier-0 CPU farm, tape, and the Tier-1s.]

Per-stream figures:
  RAW    1.6 GB/file   0.2 Hz     17K files/day    320 MB/s   27 TB/day
  ESD    0.5 GB/file   0.2 Hz     17K files/day    100 MB/s    8 TB/day
  AOD     10 MB/file   2 Hz       170K files/day    20 MB/s   1.6 TB/day
  AODm   500 MB/file   0.04 Hz    3.4K files/day    20 MB/s   1.6 TB/day

Aggregate flows between these components, as shown in the diagram:
  0.44 Hz    37K files/day    440 MB/s   (= RAW + ESD + AODm)
  1 Hz       85K files/day    720 MB/s   (= RAW + 2x ESD + 10x AODm)
  0.4 Hz     190K files/day   340 MB/s
  2.24 Hz    170K files/day (temp), 20K files/day (perm)   140 MB/s   (= ESD + AOD + AODm)
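The throughput and daily-volume columns follow directly from file size times file rate. A minimal sketch (figures taken from the table above) re-deriving them:

```python
# Minimal sketch: derive per-stream throughput, files/day and volume/day from the
# file sizes and file rates quoted in the table above.

SECONDS_PER_DAY = 86_400

STREAMS = {
    # format: (file size in GB, file rate in Hz)
    "RAW":  (1.6,  0.2),
    "ESD":  (0.5,  0.2),
    "AOD":  (0.01, 2.0),
    "AODm": (0.5,  0.04),
}

for fmt, (size_gb, rate_hz) in STREAMS.items():
    mb_per_s = size_gb * 1000.0 * rate_hz
    files_per_day = rate_hz * SECONDS_PER_DAY
    tb_per_day = mb_per_s * SECONDS_PER_DAY / 1e6
    print(f"{fmt:5s}: {mb_per_s:6.1f} MB/s, {files_per_day / 1000:6.1f}K files/day, {tb_per_day:5.1f} TB/day")
```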

Page 28: The Worldwide LHC Computing Grid


ATLAS “average” Tier-1 Data Flow (2008)

[Figure: real data storage, reprocessing and distribution at an “average” ATLAS Tier-1: streams exchanged between Tier-0, the local CPU farm, disk buffer, disk storage, tape, the other Tier-1s and the Tier-2s. Figures repeated in the original diagram correspond to the different arrows.]

From Tier-0:
  RAW     1.6 GB/file   0.02 Hz    1.7K files/day    32 MB/s   2.7 TB/day
  ESD2    0.5 GB/file   0.02 Hz    1.7K files/day    10 MB/s   0.8 TB/day
  AODm2   500 MB/file   0.004 Hz   0.34K files/day    2 MB/s   0.16 TB/day
  Total                 0.044 Hz   3.74K files/day   44 MB/s   3.66 TB/day

Other streams in the diagram:
  AOD2    10 MB/file    0.2 Hz     17K files/day      2 MB/s   0.16 TB/day
  AODm2   500 MB/file   0.036 Hz   3.1K files/day    18 MB/s   1.44 TB/day
  ESD1    0.5 GB/file   0.02 Hz    1.7K files/day    10 MB/s   0.8 TB/day
  AODm1   500 MB/file   0.04 Hz    3.4K files/day    20 MB/s   1.6 TB/day
  AODm2   500 MB/file   0.04 Hz    3.4K files/day    20 MB/s   1.6 TB/day

Plus simulation & analysis data flow.
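A minimal sketch (figures from the table above) checking that the streams arriving from Tier-0 reproduce the quoted totals, and relating the RAW share of one average Tier-1 to the 320 MB/s RAW stream on the previous slide:

```python
# Minimal sketch: sum the streams arriving from Tier-0 at an "average" ATLAS Tier-1
# and compare the RAW share with the Tier-0 RAW export rate from the previous slide.

INBOUND_FROM_TIER0 = {     # stream: (MB/s, TB/day) as quoted above
    "RAW":   (32.0, 2.7),
    "ESD2":  (10.0, 0.8),
    "AODm2": (2.0,  0.16),
}

total_mb_s = sum(mb for mb, _ in INBOUND_FROM_TIER0.values())
total_tb_day = sum(tb for _, tb in INBOUND_FROM_TIER0.values())
print(f"inbound from Tier-0: {total_mb_s:.0f} MB/s, {total_tb_day:.2f} TB/day")

TIER0_RAW_EXPORT_MB_S = 320.0   # RAW rate out of the ATLAS Event Filter (previous slide)
raw_share = INBOUND_FROM_TIER0["RAW"][0] / TIER0_RAW_EXPORT_MB_S
print(f"RAW share of one average Tier-1: {raw_share:.0%} of the Tier-0 RAW stream")
```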

Page 29: The Worldwide LHC Computing Grid

Nominal Tier0 – Tier1 Data Rates (pp)

Tier1 Centre                 ALICE   ATLAS    CMS     LHCb    Target (MB/s)
IN2P3, Lyon                    9%     13%     10%      27%      200
GridKA, Germany               20%     10%      8%      10%      200
CNAF, Italy                    7%      7%     13%      11%      200
FNAL, USA                       -       -     28%       -       200
BNL, USA                        -     22%      -        -       200
RAL, UK                         -      7%      3%      15%      150
NIKHEF, NL                    (3%)    13%      -       23%      150
ASGC, Taipei                    -      8%     10%       -       100
PIC, Spain                      -    4% (5)  6% (5)   6.5%      100
Nordic Data Grid Facility       -      6%      -        -        50
TRIUMF, Canada                  -      4%      -        -        50
TOTAL                                                          1.6 GB/s
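A minimal sketch (not from the talk) summing the per-site targets to confirm the quoted aggregate of ~1.6 GB/s out of CERN:

```python
# Minimal sketch: sum the per-site nominal Tier0 -> Tier1 targets (MB/s) and compare
# with the quoted aggregate of ~1.6 GB/s.

TARGET_MB_S = {
    "IN2P3": 200, "GridKA": 200, "CNAF": 200, "FNAL": 200, "BNL": 200,
    "RAL": 150, "NIKHEF": 150, "ASGC": 100, "PIC": 100, "NDGF": 50, "TRIUMF": 50,
}

total = sum(TARGET_MB_S.values())
print(f"sum of site targets: {total} MB/s (~{total / 1000:.1f} GB/s)")
```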

Page 30: The Worldwide LHC Computing Grid

Global Inter-Site Rates (MB/s)

Traffic characteristics: T0->T1 is predictable (data taking); T1->T2 is bursty (user needs); T2->T1 is predictable (simulation); T1->T1 is scheduled reprocessing.

Centre                      T0->T1   T1->T2   T2->T1   T1->T1
IN2P3, Lyon                   200     286.6     85.5
GridKA, Germany               200     353.0     84.1
CNAF, Italy                   200     278.0     58.4
FNAL, USA                     200     403.1     52.6
BNL, USA                      200      64.5     24.8
RAL, UK                       150      76.0     36.0
NIKHEF, NL                    150      36.0      6.1
ASGC, Taipei                  100     114.6     19.3
PIC, Spain                    100     106.6     23.3
Nordic Data Grid Facility      50        -        -
TRIUMF, Canada                 50        -        -

Page 31: The Worldwide LHC Computing Grid


The Scoville Scale

• The Scoville scale is a measure of the hotness of a chilli pepper. These fruits of the Capsicum genus contain capsaicin, a chemical compound which stimulates thermoreceptor nerve endings in the tongue, and the number of Scoville heat units (SHU) indicates the amount of capsaicin present. Many hot sauces use their Scoville rating in advertising as a selling point.

• It is named after Wilbur Scoville, who developed the Scoville Organoleptic Test in 1912[1]. As originally devised, a solution of the pepper extract is diluted in sugar water until the 'heat' is no longer detectable to a panel of (usually five) tasters; the degree of dilution gives its measure on the Scoville scale. Thus a sweet pepper, containing no capsaicin at all, has a Scoville rating of zero, meaning no heat detectable even undiluted. Conversely, the hottest chiles, such as habaneros, have a rating of 300,000 or more, indicating that their extract has to be diluted 300,000-fold before the capsaicin present is undetectable. The greatest weakness of the Scoville Organoleptic Test is its imprecision, because it relies on human subjectivity.

Page 32: The Worldwide LHC Computing Grid


Scoville Scale – cont.

Scoville rating              Type of pepper
No heat                      Bell Pepper
600 – 800                    Green Tabasco Sauce
30,000 – 50,000              Cayenne Pepper
100,000 – 325,000            Scotch Bonnet
15,000,000 – 16,000,000      Pure capsaicin

Page 33: The Worldwide LHC Computing Grid

LCG Status

The LHC Computing Grid: Status of Deployment of Worldwide Production Grid Services

Page 34: The Worldwide LHC Computing Grid


The LCG Service

• The LCG Service has been validated over the past 2 years via a series of dedicated “Service Challenges”, designed to test the readiness of the service infrastructure

• These are complementary to tests by the experiments of the offline Computing Models – the Service Challenges have progressively ramped up the level of service in preparation for ever more detailed tests by the experiments

The target: full production services by end September 2006!
• Some additional functionality is still to be added; resource levels will continue to ramp up in 2007 and beyond
• Resource requirements are strongly coupled to the total volume of data acquired to date

Page 35: The Worldwide LHC Computing Grid


The Service Challenge Programme

• Significant focus on Data Management, including data export from Tier0-Tier1

• Services required by VO / site were agreed in mid-2005, with small but continuous evolution expected

Goal is delivery of stable production services
• Status: after several iterations, the requirements and plans of the experiments are understood and the required services by site established
• Still some operational and functional problems, being pursued on a regular basis

Page 36: The Worldwide LHC Computing Grid

CERN (Tier0) MoU Commitments

                                           Maximum delay in responding to operational problems    Average availability on an annual basis
Service                                    DOWN       Degradation > 50%   Degradation > 20%       BEAM ON   BEAM OFF
Raw data recording                         4 hours    6 hours             6 hours                 99%       n/a
Event reconstruction / data
  distribution (beam ON)                   6 hours    6 hours             12 hours                99%       n/a
Networking service to Tier-1
  Centres (beam ON)                        6 hours    6 hours             12 hours                99%       n/a
All other Tier-0 services                  12 hours   24 hours            48 hours                98%       98%
All other services – prime
  service hours                            1 hour     1 hour              4 hours                 98%       98%
All other services – outside
  prime service hours                      12 hours   24 hours            48 hours                97%       97%
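For orientation, a minimal sketch (not part of the MoU text) converting the annual availability targets in the table into the maximum downtime they allow per year:

```python
# Minimal sketch: maximum downtime per year allowed by each availability target
# appearing in the MoU table above.

HOURS_PER_YEAR = 365 * 24

for availability in (0.99, 0.98, 0.97):
    max_downtime_h = HOURS_PER_YEAR * (1.0 - availability)
    print(f"{availability:.0%} availability -> at most ~{max_downtime_h:.0f} h "
          f"(~{max_downtime_h / 24:.1f} days) of downtime per year")
```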

Page 37: The Worldwide LHC Computing Grid

R. Bailey, Chamonix XV, January 2006

Breakdown of a normal year

~ 140-160 days for physics per year (not forgetting ion and TOTEM operation)
Leaves ~ 100-120 days for proton luminosity running? Efficiency for physics 50%?
~ 50 days ~ 1200 h ~ 4 x 10^6 s of proton luminosity running / year

- From Chamonix XIV - Service upgrade slots?
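A minimal sketch (not from the talk) of the arithmetic behind the last figures, starting from the ~50 effective days quoted above:

```python
# Minimal sketch: arithmetic behind "~50 days ~1200 h ~4 x 10^6 s of proton
# luminosity running / year" quoted on this slide.

effective_days = 50            # ~100-120 proton days x ~50% physics efficiency
hours = effective_days * 24    # -> ~1200 h
seconds = hours * 3600         # -> ~4.3e6 s, i.e. ~4 x 10^6 s
print(f"~{effective_days} days = ~{hours} h = ~{seconds:.1e} s of luminosity running per year")
```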

Page 38: The Worldwide LHC Computing Grid

July-August 2006 Disk-Tape Rates (MB/s)

Centre    ATLAS (4/4)   ATLAS tape   CMS (1/4)   LHCb   ALICE (HI)   Combined tape rate   Nominal pp (all to tape)
ASGC          60.0          24           10        -         -              35                   100
CNAF          59.0          24           25       ~4        60             113                   200
PIC           48.6          20           30       ~4         -              54                   100
IN2P3         90.2          36           15       ~4        60             115                   200
GridKA        74.6          30           15       ~4        60             109                   200
RAL           59.0          24           10       ~4        30              68                   150
BNL          196.8          80            -        -         -              80                   200
TRIUMF        47.6          20            -        -         -              20                    50
SARA          87.6          36            -       ~4        30              70                   150
NDGF          48.6          20            -        -         -              20                    50
FNAL             -           -           50        -         -              50                   200
Totals                                                                    ~800                  1600

Testing of experiment-driven data export at 50% of nominal rate > 1 year prior to first collisions.
(Easter w/e: target 10 day period)

Page 39: The Worldwide LHC Computing Grid


Experiment Production

• Experiments currently testing full production chain

• Elements include:
– Data export
– Job submission
– Full integration of Tier0/Tier1/Tier2 sites

Page 40: The Worldwide LHC Computing Grid


Plans Prior to First Collisions

• Between now and first collisions these activities will continue, progressively ramping up in scope and scale

• Still significant work to involve ~100 Tier2s in a distributed, reliable service

• Still much work to do to attain data rates for prolonged periods (weeks) including recovery from site failure – power, cooling, service issues

Page 41: The Worldwide LHC Computing Grid


And Beyond…

• First LHC collisions expected November 2007
– These will be at ‘low’ energy – 450 GeV per beam
– Main target will be understanding detectors, trigger and offline software
– ‘Re-discover’ existing physics – excellent for calibration!
– Data rates will be full nominal values! (Machine efficiency?)

• First full-energy run in 2008: 7 + 7 TeV
– Physics discovery run!
– Heavy Ions in 2009? Data export schedule?

• Typically takes ~years to fully understand detector and software chain
– Much of the initial ‘analysis’ will be done starting from RAW/ESD datasets
– Big impact on network load – larger datasets, transferred more frequently
– Potential mismatch with ‘steady-state’ planning? Much larger initial bandwidth requirement (but do you really believe it will go down?)
– Those sites that have it will be more ‘competitive’ (and vice-versa…)

• Rate calculations have overhead for recovering backlogs due to down-time
– But not for recovery from human and / or software error! e.g. a bug in alignment / calibration / selection / classification code -> junk data!

Page 42: The Worldwide LHC Computing Grid


Summary & Conclusions

• Deploying a Worldwide Production Grid is not without its challenges

• Much has been accomplished; much still outstanding
• My two top issues?

– Collaboration & communication at such a scale requires significant and constant effort

• We are not yet at the level that this is just basic infrastructure

– “Design for failure” – i.e. assume that things don’t work, rather than hope that they always do!

• A lesson from our “founding fathers” – the creators of the Internet?

Page 43: The Worldwide LHC Computing Grid
