TRANSCRIPT
NSCC Singapore Update
Yves Poppe, Alvin Chiam
APRP Working Session
APAN47, Daejeon Korea
March 2019
National Petascale Facility
The National Supercomputing Centre (NSCC) Singapore was established to support
the high-performance science and engineering computing needs of the academic,
research and industry communities in Singapore.
The facility is linked by high-bandwidth, multi-gigabit networks to provide
high-speed access to users everywhere, locally and globally.
Our Stakeholders
Introduction: Vision & Objectives
Vision and objectives of NSCC:
1. Support Singapore's R&D initiatives
2. Attract industrial research collaborations
3. Enhance Singapore's research capabilities
Democratising Access to Supercomputing
HPC Hardware
EDR InfiniBand Interconnect
• EDR (100Gbps) Fat Tree within cluster
• InfiniBand connection to remote login nodes at stakeholder campuses
13PB Storage
• I/O bandwidth up to 500GB/s
• GPFS and Lustre file systems
1 PFLOP System
• 1,288 nodes (dual socket, 12 cores/CPU, E5-2690 v3)
• 31,320 cores
• 128 GB DDR4 RAM per node
• 9 large-memory nodes (1 x 6TB, 4 x 2TB, 4 x 1TB)
Accelerator nodes
• 128 nodes with GPUs
• 1 x Tesla K40 per node
Visualization nodes
• 2 nodes, R940 graphics workstations
• Each with 2 x NVIDIA Quadro K4200
• NVIDIA Quadro Sync support
*Capable of supporting 274,363,200 core hours per year.
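The footnoted annual capacity is simply the full core count running around the clock, which can be checked directly:

```python
# Annual core-hour capacity: every core busy 24 hours/day, 365 days/year.
cores = 31_320
hours_per_year = 24 * 365          # 8,760 hours in a (non-leap) year
core_hours = cores * hours_per_year
print(core_hours)  # 274363200, matching the figure quoted above
```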
Envisaged High-Speed InfiniBand Fabric
[Map of Singapore showing the envisaged 10/40/100Gbps InfiniBand/IP fabric
(links of up to 100Gbps GE and up to 500Gbps GE, spanning roughly 30–42 km)
connecting stakeholder sites across the island: NUS, NTU, SUTD, A*STAR
(Biopolis and Fusionopolis), Genome Institute of Singapore (Biopolis),
Duke-NUS, SingHealth Academy, Ministry of Health, Integrated Health
Information System and National Cancer Centre Singapore (Outram), Lee Kong
Chian School of Medicine and Tan Tock Seng Hospital (Novena), Rolls Royce
(Seletar), Republic Polytechnic (Woodlands), KOMtech (Jurong), SingAREN and
Global Switch.]
Regional Network Connectivity (via SingAREN)
NSCC STAR-N co-funded international links:
• To connect Singapore to established HPC centres worldwide
• To position Singapore as a strategic hub in the region
[Map of international links from Singapore, each co-funded with a partner
network: 100Gbps to Los Angeles, USA; 10Gbps to London, UK and Europe;
100Gbps to Japan; and 100Gbps connectivity via Hong Kong and Perth.]
SupercomputingAsia 2019 “Move That Data!” Challenge
“Move That Data” Challenge
• Inaugural Data Mover Challenge (DMC) organised by NSCC
• Held in conjunction with NSCC's flagship conference, SupercomputingAsia
2019 (SCA19)
• Brings together experts from industry and academia to test their software
across servers in various countries connected by 100G international networks
DMC Partners and Topology
DMC Partners
• AARNet
• Internet2
• KREONET
• KISTI
• NCI
• NICT
• PACIFIC WAVE
• SingAREN
• StarLight
“Move That Data” Challenge
• Running from Jan – Mar 2019
• Each team given 5 days:
– Monday – Wednesday: software setup
– Thursday – Friday: running and demonstrating the software to the judges
• Winning team's leader invited to SCA19 for the award presentation and
solution showcase
Partner DTN Setup
• Basic OS and software
– CentOS 7.5, iperf3, Singularity
• OS and 100G NIC tuning according to the ESnet FasterData guide
• /DMC directory created on each DTN with subdirectories:
– /test: a READ/WRITE directory, the destination of the data set transfer
– /data: a READONLY directory, the source of the data set
• Bandwidth utilisation and statistics captured at NSCC and SingAREN
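The per-DTN directory layout above can be sketched as follows. This is an illustrative sketch only, not the official setup script: it uses a temporary directory as a stand-in for the real /DMC root, and the exact permission bits on the real DTNs are an assumption.

```python
import os
import stat
import tempfile

# Stand-in for the /DMC root created on each DTN (temp dir for illustration).
root = os.path.join(tempfile.mkdtemp(), "DMC")
os.makedirs(os.path.join(root, "test"))
os.makedirs(os.path.join(root, "data"))

# /DMC/test: READ/WRITE destination for incoming data set transfers.
os.chmod(os.path.join(root, "test"), 0o775)
# /DMC/data: READONLY source holding the competition data set.
os.chmod(os.path.join(root, "data"), 0o555)

mode = stat.S_IMODE(os.stat(os.path.join(root, "data")).st_mode)
print(oct(mode))  # 0o555
```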
Competition Scope
• 2TB data set consisting of genome and satellite data of mixed types and sizes
• Two-way data transfer scenarios:
nscc01 <-> starlight01
nscc01 <-> nict01
nscc01 <-> nci01
nscc01 <-> kisti01
Judging Criteria
1. Error-free data transfer performance on the data set
– Memory-to-memory test in both directions
– Disk-to-disk test in both directions
2. Technological innovation, based on the solution submission and an interview with the judges
3. End-user experience, including user interface, ease of installation, licensing model, and integration with services such as access federation
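One way to confirm an "error free" disk-to-disk transfer is to checksum the source data set and verify it at the destination. The sketch below uses hypothetical paths and a small sample file as stand-ins for the 2TB data set and the 100G transfer (here just a local copy); it is not the judges' actual verification procedure.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for /DMC/data (read-only source) and /DMC/test (destination).
src = Path(tempfile.mkdtemp()) / "data"
dst = Path(tempfile.mkdtemp()) / "test"
src.mkdir()
dst.mkdir()
(src / "sample.bin").write_bytes(b"genome+satellite sample payload" * 1000)

# The "transfer" (a local copy here), then end-to-end verification.
shutil.copy2(src / "sample.bin", dst / "sample.bin")
ok = sha256(src / "sample.bin") == sha256(dst / "sample.bin")
print("error free" if ok else "corrupted")  # error free
```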
Rules and Conditions
• Participants log in with non-root user accounts
• Data transfer tools deployed only in Singularity containers running on the DTNs
• Limited BIOS and administrator/root-level access, on a by-request basis only
• Changes implemented by the local partner's systems administrator only
Participating Teams
Organisation | Solution Title
SEAIP/NCHC + Starlight | SEAIP DTN-as-a-Service
Fermilab | BigData Express
Zettar Inc. | Zettar zx hyperscale data distribution software platform
The University of Tokyo | Secure Data Reservoir
Argonne National Laboratory | Using GridFTP and Globus Online for Large Data Transfers
iCAIR/Northwestern University | STARLIGHT DTN-as-a-Service for Intensive Science
JAXA & Fujitsu | Smart Communication Optimizer
Challenges
• Tight timeline:
– Forming the DMC committee (Oct 2018)
– Arranging partner support, preparing hardware and infrastructure (Nov – Dec 2018)
– Shortlisting participants and scheduling the competition (Jan – Mar 2019)
• Coordinating and working with differing partner DTN server specifications
• Geographical and time-zone differences among partners and participants
• Participants' software compatibility when running in Singularity containers
SupercomputingAsia 2019