National Institute for Computational Sciences

DESCRIPTION

NICS is a collaboration between the University of Tennessee and ORNL. Awarded the NSF Track 2B system, Kraken (1 PF). Remote data analysis and visualization: Nautilus (Sean Ahern). Experimental GPGPU system: Keeneland (Jeff Vetter).

TRANSCRIPT
· NICS is a collaboration between the University of Tennessee and ORNL
· Awarded the NSF Track 2B system, Kraken (1 PF)
· Remote Data Analysis and Visualization – Nautilus (Sean Ahern)
· Experimental GPGPU system – Keeneland (Jeff Vetter)

National Institute for Computational Sciences
Cray XT5 system – October 2009
· 8,256 two-socket nodes
· 16,512 six-core AMD Istanbul processors
· 99,072 cores (2.6 GHz)
· 129 TB memory
· 1,030 teraflops
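The Kraken figures are internally consistent, which a quick calculation confirms. The flops-per-cycle value below is an assumption (4 double-precision flops per core per cycle, typical for AMD Istanbul's 128-bit SSE units); it is not stated on the slide.

```python
# Sanity-check Kraken's quoted core count and peak performance.
# flops_per_cycle = 4 is assumed (one SSE add + one SSE multiply
# per cycle on 2-wide double-precision vectors); the slide gives
# only the node count, clock rate, and the resulting totals.

nodes = 8_256
sockets_per_node = 2
cores_per_socket = 6
clock_ghz = 2.6
flops_per_cycle = 4  # assumed, not from the slide

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1_000

print(f"cores: {cores:,}")              # cores: 99,072 (matches the slide)
print(f"peak: {peak_tflops:,.0f} TF")   # peak: 1,030 TF (matches the slide)
```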
NICS User Accounts and Projects
[Chart: monthly counts of NICS user accounts and projects, Aug 2008 – May 2010. Users grew from 235 to 2,255; projects grew from 35 to 462.]
HPSS Usage
[Chart: monthly HPSS usage, Aug 2008 – Mar 2010. Users grew from 24 to 202; files stored grew from 361 thousand to 2,518 thousand; total data stored grew from 72 TB to 3,098 TB.]
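The March 2010 endpoints of the HPSS data also imply an average stored-file size, a figure worth knowing since archive systems like HPSS generally favor fewer, larger files. A rough calculation (the slide does not say whether its TB are binary or decimal; binary, 1 TB = 1,024 GB, is assumed here):

```python
# Average HPSS file size at the end of the charted period (Mar 2010).
total_tb = 3_098         # total TB stored (binary TB assumed)
files_thousands = 2_518  # number of files, in thousands

avg_gb_per_file = total_tb * 1_024 / (files_thousands * 1_000)
print(f"{avg_gb_per_file:.2f} GB/file")  # 1.26 GB/file
```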
Kraken Job Mix (March, 2010)
[Chart: distribution of Kraken jobs by size for March 2010, with node-count buckets from 1 up to the full 8,256 nodes, plotted against wallclock hours and CPU-hours (0 to 6,000,000).]
Kraken Utilization (weekly)
[Chart: Kraken XT5 weekly utilization (%), Oct 5 – May 17. Weekly values ranged from 34% to 96%, trending upward over the period.]
Nautilus Versions: all SGI UltraViolet, running the SLES 11 OS
· P0 (half-rack): 128 cores, 256 GB RAM, 1 GPU
· P1 (1 rack): 256 cores, 1 TB RAM, 4 GPUs
· Final system (4 racks): 1,024 cores, 4 TB RAM, 16 GPUs
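One way to compare the three Nautilus configurations is memory per core, which matters on a large shared-memory machine like the UltraViolet. Derived directly from the slide's numbers:

```python
# Memory per core for each Nautilus configuration (cores, GB RAM
# taken from the slide; 1 TB counted as 1,024 GB).
configs = {
    "P0 (half-rack)":  (128, 256),
    "P1 (1 rack)":     (256, 1024),
    "Final (4 racks)": (1024, 4096),
}

for name, (cores, ram_gb) in configs.items():
    print(f"{name}: {ram_gb / cores:.0f} GB/core")
# P0 offers 2 GB/core; P1 and the final system offer 4 GB/core.
```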
Nautilus Delivery Schedule
Remote Data Analysis & Visualization Events
· RDAV resources are currently in the allocations system, and several requests have been made.
· Joint visualization class with TACC at the Petascale Programming Environments and Tools classes in early July.
· A tutorial on Nautilus usage for visualization, data analysis, and workflow management will be taught at TeraGrid '10.
Keeneland – An NSF-Funded Partnership to Enable Large-Scale Computational Science on Heterogeneous Architectures
· NSF Track 2D System of Innovative Design – Georgia Tech; University of Tennessee, Knoxville; UT National Institute for Computational Sciences; ORNL
· Exploit graphics processors to provide extreme performance and energy efficiency
· Deploy two GPU clusters – Initial Delivery 2010, Final Delivery 2012 – NVIDIA, HP, Intel, QLogic
· Software tools, application development
· Operations, user support
· Education, outreach, and training for scientists, students, and industry
· Fermi – capable of over 1 TFLOPS single precision and over 500 GFLOPS double precision
– Includes error correction (ECC) in memory
– Includes a new level of cache
NVIDIA’s new Fermi GPU
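The quoted Fermi numbers can be reproduced from the chip's published parameters. The specific part is an assumption here (the slide names only "Fermi"); the figures below use the Tesla C2050 variant, with 448 CUDA cores at 1.15 GHz, each retiring one single-precision fused multiply-add (2 flops) per cycle, and double precision at half the single-precision rate.

```python
# Peak flops for a Fermi-class Tesla C2050 (assumed variant; the
# slide names only "Fermi"). Each CUDA core does one SP FMA
# (2 flops) per cycle; Tesla-class Fermi runs DP at half SP rate.
cuda_cores = 448
clock_ghz = 1.15
flops_per_core_per_cycle = 2  # one fused multiply-add

sp_gflops = cuda_cores * clock_ghz * flops_per_core_per_cycle
dp_gflops = sp_gflops / 2

print(f"SP: {sp_gflops:.0f} GFLOPS, DP: {dp_gflops:.0f} GFLOPS")
# SP: 1030 GFLOPS, DP: 515 GFLOPS -- i.e., "over 1 TFLOPS single
# precision and over 500 GFLOPS double precision", as the slide says.
```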
Jeffrey Vetter, Jack Dongarra, Richard Fujimoto, Thomas Schulthess, Karsten Schwan, Phil Andrews, Troy Baer, Kathlyn Boudwin, Mark Fahey, Jim Ferguson, Ursula Henderson, Doug Hudson, Ron Hutchins, Patricia Kovatch, Bruce Loftis, Nathaniel Mendoza, Jeremy Meredith, Terry Moore, Tracy Rafferty, Don Reed, Jim Rogers, Philip Roth, Arlene Washington, Sudha Yalamanchili
Keeneland will enable transformational science for applications currently limited by node-level parallelism and memory bandwidth
· Node-level extreme fine-grained parallelism and memory bandwidth from GPUs can transform applications that cannot benefit directly from scaling up
· Recent application successes on GPUs:
– Molecular modeling (NAMD, VMD, OpenMM, GROMACS, AMBER)
– Materials modeling (DCA++, QMCPACK, LAMMPS)
– Combustion (S3D)
· GPUs are setting a new trajectory for HPC architectures by providing very high energy efficiency and density
[Images: visualizations from the NAMD, DCA++, and S3D applications.]