SERC, IISc Cray Petaflops System

May 11, 2015 SERC Petascale System


TRANSCRIPT

  1. May 11, 2015 SERC Petascale System
  2. SERC has been providing computing facilities to Institute users since 1970. It is a state-of-the-art facility for parallel computing with high-performance storage, supporting a range of software application domains. It balances capacity and capability computing to meet the diverse requirements of users.
  3. SERC Petascale System: Cray XC40, an MPP-class machine with petascale compute capability. CPU clusters with 33,024 Intel Haswell processor cores, achieving more than 900 TFLOPS; a 44-node GPU cluster with Intel Ivy Bridge processors and Tesla K40 GPUs, delivering 52 TFLOPS; a 48-node Intel Xeon Phi (5120D) cluster, giving 28 TFLOPS.
  4. SYSTEM BUILDING BLOCKS. Compute blade: each has 4 nodes, or 96 cores. Chassis: 16 blades, 64 nodes, no cables. Group: 6 chassis, 384 nodes, connected by electrical network cables. System: 4 groups, connected by active optical cables.
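The blade-to-system arithmetic on this slide can be checked with a short sketch. The 24 cores per node is inferred from "4 nodes or 96 cores"; note the nominal full-system core count (36,864) exceeds the 33,024 Haswell cores quoted earlier, presumably because some slots hold GPU, Xeon Phi, and service nodes rather than Haswell compute nodes (an assumption, not stated on the slide).

```python
# XC40 building-block hierarchy as stated on the slide.
NODES_PER_BLADE = 4
CORES_PER_NODE = 96 // NODES_PER_BLADE   # 24 cores/node, inferred from 96 cores/blade
BLADES_PER_CHASSIS = 16
CHASSIS_PER_GROUP = 6
GROUPS_PER_SYSTEM = 4

nodes_per_chassis = NODES_PER_BLADE * BLADES_PER_CHASSIS   # 64 nodes, as stated
nodes_per_group = nodes_per_chassis * CHASSIS_PER_GROUP    # 384 nodes, as stated
nodes_per_system = nodes_per_group * GROUPS_PER_SYSTEM     # 1,536 node slots
cores_per_system = nodes_per_system * CORES_PER_NODE       # 36,864 nominal cores

print(nodes_per_chassis, nodes_per_group, nodes_per_system, cores_per_system)
```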
  5. SYSTEM INTERCONNECT. Aries NoC: connects nodes on a blade, 128 Gbps. Rank-1 network: backplane connection through the blade, connecting nodes within a chassis, 157.6 Gbps. Rank-2 network: passive electrical cables connecting multiple chassis, all-to-all chassis connection, 157.6 Gbps. Rank-3 network: optical cables connecting multiple cabinets, all-to-all cabinet connection in a Dragonfly topology, 43 Gbps.
  6. XC40 SOFTWARE ENVIRONMENT
  7. Parallel File System: 2 petabytes of direct-attached parallel storage with RAID6 from DDN, running Cray's parallel Lustre file system, with 28 GB/s read and 32 GB/s write performance.
  8. Installation & Commissioning of the Petascale System
  9. Petascale System Status: operational since Jan. 15, 2015; system acceptance completed by Feb. 19, 2015; 60+ users; a few showcase capability applications.
  10. Unsteady Aerodynamic Simulation. Dynamic ground effect: unsteady simulation of the entire landing sequence of a high-lift wing. Problem size: 36 million volume grid cells, simulating 11 seconds of the landing sequence. System: 8 days on the XC40 using 10,000 Xeon cores (with a time step of 0.001 sec).
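A back-of-envelope check of this run, using only the figures on the slide (the per-step wall time is derived, not given):

```python
# Landing-sequence simulation cost, from the slide's figures.
sim_seconds = 11.0
dt = 0.001                                # time step in seconds
steps = round(sim_seconds / dt)           # 11,000 time steps

wall_days = 8
cores = 10_000
core_hours = wall_days * 24 * cores       # total core-hours consumed
wall_sec_per_step = wall_days * 24 * 3600 / steps   # average wall time per step

print(steps, core_hours, round(wall_sec_per_step, 1))
```

So the run covers about 11,000 time steps and roughly 1.9 million core-hours, averaging on the order of a minute of wall time per step.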
  11. Simulating Overlapping Supernovae. Observation: a critical number of supernovae is required to maintain an over-pressured bubble for tens of Myr, for a given density and spatial separation between supernovae. These superbubbles should break out of galaxies and suppress global star formation. The PLUTO hydrodynamics code is used to simulate supernovae at 512³ and 1024³ resolutions. The application scales up to 12,000 cores.
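The grid sizes quoted for the PLUTO runs imply the following cell counts; the per-core load at the 12,000-core scaling limit is a rough derived estimate, not a figure from the slide:

```python
# Cell counts for the 512^3 and 1024^3 PLUTO runs.
cells_512 = 512 ** 3        # ~134 million cells
cells_1024 = 1024 ** 3      # ~1.07 billion cells

# Rough average load if the largest run is spread over 12,000 cores (assumption).
cells_per_core = cells_1024 / 12_000

print(cells_512, cells_1024, round(cells_per_core))
```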
  12. Thank You!