ASGC Site Report
Felix Lee, HEPiX 2011 Fall, Vancouver, 24 Oct 2011


Page 1: ASGC Site Report

ASGC Site Report

Felix Lee

HEPiX 2011 Fall, Vancouver

24 Oct 2011

Page 2: ASGC Site Report

ASGC Data Centre

•Total capacity: 2 MW, 400 tons of AHUs, 99 racks, ~800 m²
•Resources: 15,000 CPU cores, 6 PB disk, 5 PB tape
•Rack space usage (racks): AS e-Science 51.8 (55.6%), ASCC 13.2 (14.2%), IPAS 6.5 (7.0%), free 28.85 (23.2%)
•The power consumption and temperature of every piece of equipment are monitored every 10 seconds.
•Cooling power : CPU power ratio is 1 : 1.4 in summer and 1 : 2 in winter.
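The per-device monitoring described above amounts to a fixed-cadence polling loop. A minimal sketch, in which `read_power` and `read_temperature` are hypothetical stand-ins for the real sensor interfaces (e.g. IPMI or PDU SNMP queries, which the slides do not name):

```python
import time

POLL_INTERVAL = 10  # seconds, the cadence stated on the slide

def read_power(device):
    """Hypothetical stand-in for the real power sensor query."""
    return 0.0

def read_temperature(device):
    """Hypothetical stand-in for the real temperature sensor query."""
    return 0.0

def poll_once(devices):
    """Collect one (timestamp, device, power, temperature) sample per device."""
    now = time.time()
    return [(now, d, read_power(d), read_temperature(d)) for d in devices]

def poll_loop(devices, sink, rounds):
    """Sample every device each POLL_INTERVAL seconds, `rounds` times."""
    for _ in range(rounds):
        sink.extend(poll_once(devices))
        time.sleep(POLL_INTERVAL)
```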

Page 3: ASGC Site Report

Resource update

•Purchasing a 10GbE core switch, a Cisco Nexus 7010: 7 line cards with 48 ports each, 336 ports in total. We hope it can be delivered by the end of November.
•32 HP BL460 G7 blades were delivered in early October.
•288 TB of storage was delivered at the end of August.
•A new 4-way system, Dell 6140 + C410 GPU expansion, has been purchased and is awaiting delivery: 96 cores in 2U, which is attractive for us. The NVIDIA 2070M GPU is under evaluation.

Page 4: ASGC Site Report

System Re-Configuration

•Disk system re-configuration is still ongoing. As reported at the last HEPiX meeting, we completed the DPM migration; we are now migrating the Castor disk servers to 10GbE. It is taking a long time.
•DPM improvements: performance evaluation and improvement; pNFS and WebDAV are under testing.
•Constructing the 10 Gb backbone: 10GbE fibre patch panel, core switch, etc.
•Bandwidth upgrade of legacy worker nodes: upgrading the IBM blade switch modules to get LACP working more efficiently.
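For context, LACP link aggregation needs matching configuration on both the server and switch sides; the slides concern the IBM blade switch module side, whose CLI is not shown. A generic Linux server-side sketch (not ASGC's actual configuration) using the kernel bonding driver:

```sh
# /etc/modprobe.d/bonding.conf -- generic Linux example, not ASGC's actual setup
# mode=802.3ad enables LACP; the connected switch ports must run LACP as well.
options bonding mode=802.3ad miimon=100 lacp_rate=fast
```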

Page 5: ASGC Site Report

Smart Center

•Power efficiency: increase power efficiency by eliminating the use of UPS. A UPS reduces power efficiency by 30 per cent, of which 10 per cent is lost as heat that has to be carried away.
•Thermal efficiency: apply space technology to the heat conduction of the data center to increase thermal efficiency.
•Intelligent monitoring & control: analysing long-term data allows us to build models that assist us in operating the center intelligently.
•Cooling power : CPU power = 1 : 3 (PUE = 1.3)
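The slide's PUE figure follows from treating cooling as the only facility overhead (formally, PUE is total facility power divided by IT power, so this is an approximation). The arithmetic:

```python
# Slide ratio: cooling power : CPU (IT) power = 1 : 3, cooling counted as the
# only overhead. PUE = total facility power / IT power.
it_power = 3.0       # relative IT load
cooling_power = 1.0  # relative cooling load
pue = (it_power + cooling_power) / it_power
print(round(pue, 2))  # 1.33, which the slide rounds to 1.3
```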

Page 6: ASGC Site Report

e-Science Networking in Asia Pacific Region

[Network map: ASGC connectivity to ATLAS, CMS, ALICE, and EUAsiaGrid sites across the Asia-Pacific region (e.g. JP-KEK1/2, JP-Tokyo-LCG2, JP-HIROSHIMA-WLCG2, KR-KISTI-GCRT, KR-KNU, CN-BEIJING-LCG2, CN-SDU-LCG2, IN-TIFR, IN-VECC1/2, PK-NCP, PK-PAKGRID, MY-MIMOS, MY-UPM-BIRUNI-01, MY-UM-CRYSTAL, TH-HAII, TH-NECTEC, VN-IOIT-HN, VN-IOIT-KEYLAB, VN-IFI-PPS, PH-ASTI-LIKNAYAN, ITB-ID, HK-HKU, AU-ATLAS, TW-FTT, TW-NCUHEP, TW-NTCU, TW-NIU, NTU, NYMU), via TWAREN/TANET and IP transit, peering with networks including SINET, WIDE, APAN-JP, KREONET, CSTNET, CERNET, AARNET, and I2/GN2, with link capacities ranging from 622 Mb/s to 10 Gb/s.]

Page 7: ASGC Site Report

E-Science Application Support in Asia

•Not only porting computing models to EUAsia, but also establishing research-oriented production services and long-term scientific collaboration among partners.
•Valuable data challenges achieved or launched.
•EUAsia VO: using the catch-all VO as a way to engage newcomers; deployment and certification of 16 sites used by 250 people.
•Application repository: based on EELA-2 and INFN experience; an online database gathering information about application availability (affiliation to a specific domain, middleware information, abstract and material references, status overview, key research contacts).

Page 8: ASGC Site Report

Sample e-Science Applications in Taiwan: Drug Discovery by AutoDock

Page 9: ASGC Site Report

ASGC Cloud - Objectives

•Enhancing DCI for e-Science: let scientists focus on science.
•Service-oriented architecture: infrastructure, platform and services; service re-use and re-combination according to scientific workflow.
•Flexible and fast resource provisioning.
•Reduction of operation cost and energy consumption.
•Capability for big data.
•Facilitating collaboration on e-Science: life science, earth science, environmental change, social sciences, HEP, etc.
•Technology R&D: Grid+Cloud, cloud federation, versatile & persistent storage, etc.

Page 10: ASGC Site Report

Strategy and Plan

•Approach:
 •VIM: OpenNebula + vNode + OpenStack.
 •VMIC: working with CERN.
 •CERNVM: virtual appliance (with contextualisation).
 •CVMFS for ATLAS deployed and operational; extending to BLAST, R and more e-Science applications.
•EMI Cloud and Virtualization Task Force: developing a repository of VMs and VAs.
•Interoperability: policy repository and information system (along with auditing); monitoring.
•Use cases: cloud trust (different requirements for data and computing); data provenance, access control, federation.
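On the worker-node side, the CVMFS deployment mentioned above reduces to a small client configuration. A minimal sketch, assuming a local Squid proxy whose hostname is hypothetical:

```sh
# /etc/cvmfs/default.local -- minimal client sketch; proxy host is hypothetical
CVMFS_REPOSITORIES=atlas.cern.ch
CVMFS_HTTP_PROXY="http://squid.example.org:3128"   # site Squid cache
CVMFS_QUOTA_LIMIT=20000                            # local cache limit in MB
```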

Page 11: ASGC Site Report

ASGC Cloud System Architecture

Page 12: ASGC Site Report

Thank You!