CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it Frédéric Hemmer IT Department Head - CERN 23rd August 2010 Status of LHC Computing from a Data Center Perspective CERN School of Computing 2010 Uxbridge, U.K.


Page 1:


Frédéric Hemmer
IT Department Head - CERN
23rd August 2010

Status of LHC Computing from a Data Center Perspective

CERN School of Computing 2010, Uxbridge, U.K.

Page 2:


[Figure: LHC data rates — 1.25 GB/sec (ions)]

Page 3:


Tier-0 Computing Tasks

[Figure: Tier-0 data flows — average rates of 1430 MB/s, 1120 MB/s, 700 MB/s, 700 MB/s and 420 MB/s, against provisioned capacities of (1600 MB/s) and (2000 MB/s)]

These are averages! Need to be able to support 2x for recovery!
Scheduled work only!
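The 2x headroom rule quoted on this slide lends itself to a quick sanity check. A minimal sketch, using nothing beyond the MB/s figures above (the function name and the recovery-factor default are ours):

```python
def required_capacity(average_mb_s: float, recovery_factor: float = 2.0) -> float:
    """Capacity needed so that a backlog accumulated during an outage can be
    drained while normal traffic continues: the slide's rule of thumb is to
    provision 2x the average rate."""
    return average_mb_s * recovery_factor

# Average rates read off the slide, in MB/s.
averages_mb_s = [1430, 1120, 700, 700, 420]
needed_mb_s = [required_capacity(a) for a in averages_mb_s]
```

Comparing `needed_mb_s` against the provisioned capacities in parentheses shows why the slide stresses that the quoted figures are averages for scheduled work only.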

Page 4:



The CERN Tier-0 in Numbers

• Data Centre Operations (Tier 0)
  – 24x7 operator support and System Administration services to support 24x7 operation of all IT services
  – Hardware installation & retirement
    • ~7,000 hardware movements/year; ~1,000 disk failures/year
  – Management and Automation framework for large-scale Linux clusters

[Chart: CPU mix — Xeon 5150 2%, Xeon 5160 10%, Xeon E5335 7%, Xeon E5345 14%, Xeon E5405 6%, Xeon E5410 16%, Xeon L5420 8%, Xeon L5520 33%, Xeon 3 GHz 4%]

[Chart: disk vendor mix — Western Digital 59%, Hitachi 23%, Seagate 15%, Fujitsu 3%, Maxtor 0%, HP 0%, Other 0%]

High Speed Routers (640 Mbps → 2.4 Tbps)   24
Ethernet Switches                          350
10 Gbps ports                              2,000
Switching Capacity                         4.8 Tbps
Servers                                    8,076
Processors                                 13,802
Cores                                      50,855
HEPSpec06                                  359,431
Disks                                      53,728
Raw disk capacity (TB)                     45,331
Memory modules                             48,794
RAID controllers                           3,518
Tape Drives                                160
Tape Cartridges                            45,000
Tape slots                                 56,000
Tape Capacity (TB)                         34,000
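The inventory supports some back-of-the-envelope ratios. A sketch using only the numbers in the table and the ~1,000 disk failures/year bullet (the derived quantities are ours, not the slide's):

```python
# Figures copied from the Tier-0 table above.
servers, processors, cores = 8_076, 13_802, 50_855
disks, raw_disk_tb = 53_728, 45_331
disk_failures_per_year = 1_000  # from the operations bullet

cores_per_server = cores / servers                              # roughly 6.3
tb_per_disk = raw_disk_tb / disks                               # roughly 0.84 TB per drive
annual_disk_failure_pct = 100 * disk_failures_per_year / disks  # roughly 1.9%
```

A failure rate near 2% per year across ~54k drives is why disk replacement appears as a routine, daily operations task rather than an incident.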


Page 5:


The WLCG Collaboration


Tier-0 (CERN):
• Data recording
• Initial data reconstruction
• Data distribution

Tier-1 (11 centres):
• Permanent storage
• Re-processing
• Analysis

Tier-2 (~130 centres):
• Simulation
• End-user analysis

Page 6:

Today's WLCG status:

• WLCG is running increasingly high workloads:
  – ~1 million jobs/day
    • Real data processing and re-processing
    • Physics analysis
    • Simulations
  – ~100k CPU-days/day
• Unprecedented data rates
• Castor traffic over the last 2 months: >4 GB/s input, >13 GB/s served
• Data export during data taking: according to expectations on average; traffic on the OPN up to 70 Gb/s (ATLAS reprocessing campaigns)
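The two headline workload numbers imply an average job length; a one-line check, using only the figures above:

```python
jobs_per_day = 1_000_000
cpu_days_per_day = 100_000

# Average CPU time per job: 100k CPU-days spread over ~1 million jobs,
# i.e. about 2.4 CPU-hours each.
avg_job_cpu_hours = cpu_days_per_day / jobs_per_day * 24
```

Short average jobs at this volume mean scheduling overhead and failure handling matter as much as raw CPU capacity.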

Page 7:

Readiness of the Computing

• Data reaches Tier-2s within hours
• Increasing numbers of analysis users
  – e.g. ~500 grid users in each of ATLAS and CMS; ~200 in ALICE
• ALICE: >200 users; ~500 jobs on average over 3 months

[Figures: e.g. CMS raw data to Tier-1s; e.g. ATLAS data distribution; CMS users]

Page 8:


So, is everything fine?

Not quite ...

Page 9:


Worldwide data distribution and analysis

GRID-based analysis in June-July 2010: >1,000 different users, ~11 million analysis jobs processed.

[Figure: total throughput of ATLAS data through the Grid (MB/s per day), from 1st January until yesterday — ~2 GB/s (design), 6 GB/s sustained, peaks of 10 GB/s achieved; annotated: MC reprocessing, start of 7 TeV data-taking, 2009 data reprocessing, data and MC reprocessing, start of 10^11 p/bunch operation. From F. Gianotti, ICHEP 2010]
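The figure's rates translate into daily volumes worth spelling out; a sketch using only the GB/s values quoted above:

```python
SECONDS_PER_DAY = 86_400

# Rates from the ATLAS throughput plot, in GB/s.
design_gb_s, sustained_gb_s, peak_gb_s = 2, 6, 10

# Sustained 6 GB/s is roughly half a petabyte moved per day,
# three times the design rate.
daily_tb_sustained = sustained_gb_s * SECONDS_PER_DAY / 1_000
sustained_over_design = sustained_gb_s / design_gb_s
```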

Page 10:

ALICE data loss

• A “transparent” configuration change in the Castor software resulted in data being directed across all available tape pools instead of to the dedicated raw-data pools.
  – For ALICE, ATLAS and CMS this included a pool where tapes were recycled after a certain time.
  – As a result, a number of files were lost on tapes that were recycled.
  – For ATLAS and CMS the tapes had not been overwritten and could be fully recovered (the fallback would have been to re-copy files back from Tier-1s).
  – For ALICE, 10k files were on tapes that were recycled, including 1,973 files of 900 GeV raw data.
• Actions taken:
  – Underlying problem addressed; all recycle pools removed.
    • Software change procedures are being reviewed now.
  – Tapes sent to IBM and SUN for recovery: ~97% of the critical (900 GeV sample) files and ~50% of all ALICE files have been recovered. We have been very lucky; 1,717 tapes were recovered by IBM & STK/Sun/Oracle.
  – Work with ALICE to ensure that two copies of the data are always available.
    • In heavy-ion running there is a risk for several weeks until all data is copied to Tier-1s; several options to mitigate this risk are under discussion.
  – As this was essentially a procedural problem, we will organise a review of Castor operations procedures (software development, deployment, operation, etc.) together with the experiments and outside experts.
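The "always two copies" safeguard discussed on this slide amounts to a simple invariant. A hypothetical sketch (Castor's real interfaces are not modelled here; the function and data shapes are invented for illustration):

```python
def safe_to_recycle(tape_files, replica_count):
    """Return True only if every file on the tape has at least two known
    copies across all pools, so recycling cannot destroy the last replica.
    `replica_count` maps file name -> number of copies in the catalogue;
    a file missing from the catalogue counts as zero copies."""
    return all(replica_count.get(f, 0) >= 2 for f in tape_files)
```

A check of this shape, run before any pool is marked recyclable, turns a mis-directed configuration change into a refused operation rather than data loss.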

Page 11:



A Word on Computer Security

August 2010

• It is “your” responsibility to write secure code, and to fix any flaws promptly
  – Not only to ensure proper operation of your application
  – But also because failure to do so might put your own and other organizations in a very embarrassing position
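As a concrete illustration of the kind of flaw the slide has in mind (the table and queries are invented for this example; only the parameter-binding idiom is the point):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # BAD: string interpolation lets crafted input rewrite the query.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # GOOD: the driver binds the value; quotes inside `name` stay data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

With binding, a crafted name such as `' OR '1'='1` remains a literal string instead of changing the query's structure.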


Page 12:



Summary

• The readiness of LHC Computing has now clearly been demonstrated
  – There are, however, large load fluctuations
  – The experiments' Computing Models will certainly evolve
    • And the WLCG Tiers will have to adapt to those changes
  – Change Management is critical for ensuring stable data taking
• Some challenges ahead
  – Large number of objects with a permanent flux of change
    • Automation is critical
  – (Distributed) software complexity
  – Computer Security remains a permanent concern
