Dane Skow ([email protected]) April 2007
TeraGrid 2006 By the Numbers
Allocations                                    FY05             FY06             % Change
LRAC proposals awarded                         62 (13 new)      88 (22 new)      +42 (+69)
MRAC proposals awarded                         70 (50 new)      160 (92 new)     +129 (+84)
TeraGrid DAC proposals awarded                 123 (115 new)    229 (209 new)    +86 (+82)
Active TeraGrid PIs                            361              1,019            +182

Usage                                          FY05             FY06             % Change
NUs Requested (LRAC/MRAC/DAC)                  1.3 B            2.96 B           +130
NUs Awarded                                    844 M            1.92 B           +128
NUs Available (max)                            881 M            2.23 B           +153
NUs Delivered (% util)                         565 M (64%)      1.28 B (57%)     +129 (-11)
NUs Used by TG Staff                           10.4 M           10.1 M
Jobs Run                                       594,756          1,686,686        +185

Users (Total)                                  FY05             FY06             % Change
Users with active accounts during the year     1,712            4,190            +145
Users charging jobs during the year            876              1,731            +98
Users with active accounts on December 31      1,468            3,126            +113
User home institutions (users charging jobs)   151              265              +76
US states, incl. DC/PR (users charging jobs)   37               47               +27

Users by Allocation Size                       FY05             FY06             % Change
LRAC users (# charging jobs)                   509 (238)        1,152 (496)      +126 (+108)
MRAC users (# charging jobs)                   542 (248)        1,087 (423)      +101 (+71)
DAC users (# charging jobs)                    661 (365)        1,948 (783)      +195 (+116)
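The "% Change" column follows directly from the FY05 and FY06 counts. As an illustrative check (the helper function below is not part of the original slides, just a sketch of the arithmetic):

```python
# Recomputing sample "% Change" figures from the FY05 -> FY06 allocations table.

def pct_change(fy05, fy06):
    """Percent change from FY05 to FY06, rounded to the nearest whole percent."""
    return round(100 * (fy06 - fy05) / fy05)

# LRAC proposals awarded: 62 -> 88, of which new awards went 13 -> 22
print(pct_change(62, 88))     # 42   (table: +42)
print(pct_change(13, 22))     # 69   (table: +69)

# MRAC proposals awarded: 70 -> 160
print(pct_change(70, 160))    # 129  (table: +129)

# Active TeraGrid PIs: 361 -> 1,019
print(pct_change(361, 1019))  # 182  (table: +182)
```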
Compute Resource Change
• Additions:
  – NCSA (Cobalt, Copper, Xeon Linux Supercluster, Condor Cluster)
  – SDSC (DataStar p655, DataStar p690, BlueGene)
  – Purdue (Condor+)
  – TACC (LoneStar+)
  – IU (BigRed)
  – PSC (BigBen+)
• Retirements:
  – PSC (TCS1)
  – IU (IA-32 & IA-64)
• Upcoming:
  – TACC (Ranger), NCAR (Frost)
  – + HPCOPS (NCSA (Abe) +) + Track 2 (?) + ???
Dane Skow ([email protected]) February 2007
Networking

[Network diagram: the TeraGrid backbone linking SDSC, UC/ANL, PSC, TACC, ORNL, NCSA, NCAR, Purdue (PU), IU, IPGrid, and Cornell through hub sites in Chicago (CHI), Los Angeles (LA), and Denver (DEN), with peering to Abilene. Link capacities range from 1x10G per site to 2x10G and 3x10G each on backbone segments.]
TeraGrid Usage Growth

[Chart: monthly usage in Normalized Units (millions), split between Specific Allocations and Roaming Allocations.]

TeraGrid currently delivers to users an average of 400,000 cpu-hours per day -> ~20,000 CPUs DC
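The CPU figure is a duty-cycle conversion: dividing daily cpu-hours by 24 gives the equivalent number of continuously busy processors, which the slide rounds to ~20,000. A quick sketch of the arithmetic:

```python
# Converting the slide's delivery rate into an equivalent count of
# continuously busy CPUs (the slide's "DC" figure).
cpu_hours_per_day = 400_000
hours_per_day = 24

equivalent_cpus = cpu_hours_per_day / hours_per_day
print(round(equivalent_cpus))  # 16667, which the slide rounds to ~20,000
```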
TeraGrid User Community Growth

[Chart: log-scale monthly counts (1 to 10,000) of New Accounts, Active Users, and All Users Ever, October 2003 through December 2006. Annotations: Begin TeraGrid Production Services (October 2004); Incorporate NCSA and SDSC Core (PACI) Systems and Users (April 2006).]

Decommissioning of systems typically causes slight reductions in active users; e.g., the dip in December 2006 is due to the decommissioning of Lemieux (PSC).

                            FY05     FY06
New User Accounts           948      2,692
Avg. New Users per Quarter  315      365*
Active Users                1,350    3,228
All Users Ever              1,799    4,491

(*FY06 new users/qtr excludes Mar/Apr 2006)
Charlie Catlett ([email protected]) January 2007
TeraGrid Projects by Institution
[Map legend: Blue, 10 or more PIs; Red, 5-9 PIs; Yellow, 2-4 PIs; Green, 1 PI]
1000 projects, 3200 users
TeraGrid allocations are available to researchers at any US educational institution by peer review. Exploratory allocations can be obtained through a biweekly review process. See www.teragrid.org.
FY06 Quarterly Usage by Discipline

[Chart: percent of usage by discipline for each quarter of FY06.]
TeraGrid User Community in 2006

Use Modality                                    Community Size (est. number of projects)
Batch Computing on Individual Resources         850
Exploratory and Application Porting             650
Workflow, Ensemble, and Parameter Sweep         160
Science Gateway Access                          100
Remote Interactive Steering and Visualization   35
Tightly-Coupled Distributed Computation         10

[Annotation: the lower rows are labeled "Grid-y Users".]
Monthly Usage Growth Markers

[Chart: Monthly Use of Selected Grid Capabilities, January 2005 through December 2006, on a log scale (1 to 1,000,000). Series: MyCluster CPUs, MyCluster Jobs, Purdue Condor jobs, Globus GRAM Jobs, Globus GRAM Users, and synchronous cross-site jobs.]
DAC Roaming Behavior 2006

[Chart: TG DAC Roaming Behavior, plotting the number of projects (left axis, 0-160) and total TGSUs (right axis, 0-2,000,000) against the number of resources used.]

Number of Resources Used   TG DACs   Total TGSUs
1                          143       1,745,314
2                          60        919,461
3                          46        664,231
4                          16        351,340
5                          8         183,271
6                          5         153,083
7                          1         64,270
8                          1         3,878
9                          1         6,979
10                         2         25,121
12                         1         97,774
Grand Total                284       4,214,722
Analysis and Chart courtesy Dave Hart, SDSC
321 DACs have ever used resources (only 37 before 2006).
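The Grand Total row can be cross-checked directly against the per-row figures; a minimal sketch using the table's own numbers:

```python
# Cross-checking the Grand Total row of the DAC roaming table.
dacs  = [143, 60, 46, 16, 8, 5, 1, 1, 1, 2, 1]          # TG DACs per resource count
tgsus = [1_745_314, 919_461, 664_231, 351_340, 183_271,  # Total TGSUs per row
         153_083, 64_270, 3_878, 6_979, 25_121, 97_774]

print(sum(dacs))   # 284, matching the Grand Total DAC count
print(sum(tgsus))  # 4214722, matching the Grand Total TGSUs
```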
Real-Time Usage Mashup
Alpha version Mashup tool - Maytal Dahan, Texas Advanced Computing Center ([email protected])
521 Jobs running across 12,090 processors at 21:29:31 11/12/2006
December 4, 2006: 500 jobs, 9,400 processors
Popular Resources for DAC Awards

[Chart: Resources Used by TG DACs, showing the number of TG DACs (left axis, 0-140) and total TG SUs (right axis, 0-1,400,000) for each resource: dtf.ncsa.teragrid, dtf.sdsc.teragrid, cobalt.ncsa.teragrid, lonestar.tacc.teragrid, tungsten.ncsa.teragrid, rachel.psc.teragrid, dtf.anl.teragrid, lemieux.psc.teragrid, copper.ncsa.teragrid, datastar.sdsc.teragrid, datastar-p655.sdsc.teragrid, bluegene.sdsc.teragrid, tiger.iu.teragrid, lear.purdue.teragrid, condor.purdue.teragrid, radon.purdue.teragrid, maverick.tacc.teragrid, nstg.ornl.teragrid, cloud.purdue.teragrid.]
Grid Service Usage (PreWS GRAM)
Daily INCA Reporter (http://tinyurl.com/23ugbm) courtesy Kate Ericson, SDSC
Daily GT4 WS Invocation Reports

[Chart: GT4 WS Invocation Counts per day (0 to 800), covering late January through early April.]
Graph courtesy Tony Rimovsky, NCSA