Hungarian GRID Projects and Cluster Grid Initiative
P. Kacsuk
MTA SZTAKI
www.lpds.sztaki.hu
Hungarian Grid projects

• VISSZKI
  – Globus, Condor service
• SuperGrid
  – Resource scheduling
  – Accounting
  – P-GRADE, MPICH-G
  – Supercomputers
• DemoGrid
  – Security, Grid monitoring
  – Data storage subsystem
  – Applications
Objectives of the VISSZKI project

– Testing various tools and methods for creating a virtual supercomputer (metacomputer)
– Testing and evaluating Globus and Condor
– Elaborating a national Grid infrastructure service, based on Globus and Condor, by connecting various clusters
Results of the VISSZKI project

A generic GRID architecture was elaborated, with the following layers:
• Low-level parallel development: PVM, MW, MPI
• Grid level job management: Condor-G
• Grid middleware: Globus
• Grid fabric (local job management): Condor-managed clusters
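Grid-level job management with Condor-G on top of Globus is normally driven by a submit description file. A hypothetical sketch of the Condor-G style of that era (the gatekeeper host name and file names are assumptions for illustration, not from the slides):

```
# Hypothetical Condor-G submit description file (sketch only).
# "grid.example.hu" stands in for a real Globus gatekeeper.
universe        = globus
globusscheduler = grid.example.hu/jobmanager-condor
executable      = my_job
output          = my_job.out
error           = my_job.err
log             = my_job.log
queue
```

Condor-G then handles submission to the remote Globus resource and tracks the job in the local queue as if it were a local Condor job.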
Structure of the DemoGrid project

• GRID subsystems study and development
  – Data storage subsystem: relational database, OO database, geometric database, distributed file system
  – Monitoring subsystem
  – Security subsystem
• Demo applications
  – astrophysics
  – human brain simulation
  – particle physics
  – car engine design

[Figure: layered view — the applications run on top of the data storage (RDB, OODB, GDB, DFS), monitoring, and security subsystems, over the hardware layer (storage, CPU, network); data decomposition is shown as tightly or loosely coupled.]
Structure of the Hungarian Supercomputing Grid

Resources connected by 2.5 Gb/s Internet:
• NIIFI 2*64-processor Sun E10000
• BME 16-processor Compaq AlphaServer
• ELTE 16-processor Compaq AlphaServer
• SZTAKI 58-processor cluster
• University (ELTE, BME) clusters

Software layers:
• GRID portal: web-based GRID access for GRID applications
• High-level parallel development layer: P-GRADE
• Low-level parallel development: PVM, MW, MPI
• Grid level job management: Condor-G
• Grid middleware: Globus
• Grid fabric: SUN HPC (Condor, LSF, Sun Grid Engine), Compaq AlphaServers (Condor, PBS, LSF), clusters (Condor)
The Hungarian Supercomputing GRID project
Distributed supercomputing: P-GRADE

• P-GRADE (Parallel GRid Application Development Environment)
• The first highly integrated parallel Grid application development system in the world
• Provides:
  – Parallel, supercomputing programming for the Grid
  – Fast and efficient development of Grid programs
  – Observation and visualization of Grid programs
  – Fault and performance analysis of Grid programs
Condor/P-GRADE on the whole range of parallel and distributed systems

[Figure: P-GRADE runs with Condor on supercomputers, mainframes, and clusters (GFlops scale), and spans the Grid via Condor flocking.]
Berlin CCGrid Grid Demo workshop: flocking of P-GRADE programs by Condor

[Figure: a P-GRADE program submitted from Budapest runs, via Condor flocking, on the Budapest, Madison, and Westminster clusters.]
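Condor flocking of this kind is enabled in the pool configuration: a pool lists the remote pools it may send jobs to, and the remote pools list who may send jobs to them. A hypothetical condor_config fragment (the host names are made up for illustration):

```
# Sketch of a condor_config flocking setup; hosts are assumptions.
# FLOCK_TO: central managers of remote pools this pool may flock jobs to
# when local resources are busy.
FLOCK_TO   = condor.madison.example.edu, condor.westminster.example.ac.uk
# FLOCK_FROM: pools allowed to flock jobs into this pool.
FLOCK_FROM = condor.budapest.example.hu
```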
Next step: check-pointing and migration of P-GRADE programs

[Figure: P-GRADE GUI in Budapest; clusters in Wisconsin, Budapest, and London.]
1. P-GRADE program downloaded to London as a Condor job
2. P-GRADE program runs at the London cluster; the London cluster becomes overloaded => check-pointing
3. P-GRADE program migrates to Budapest as a Condor job
4. P-GRADE program runs at the Budapest cluster
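Condor's own checkpoint/migration mechanism, which this scenario builds on, is classically enabled by relinking the program against Condor's checkpoint library and submitting it in the standard universe. A minimal sketch (the program and file names are assumptions):

```
# First relink the executable with Condor's checkpoint support:
#   condor_compile gcc -o my_prog my_prog.o
#
# Hypothetical submit description file; in the standard universe Condor
# can checkpoint the running job and restart it on another machine.
universe   = standard
executable = my_prog
log        = my_prog.log
queue
```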
Further development: TotalGrid

• TotalGrid is a total Grid solution that integrates the different software layers of a Grid (see next slide) and provides for companies and universities:
  – exploitation of the free cycles of desktop machines in a Grid environment after working hours
  – supercomputer capacity from the institution's existing desktops, without further investment
  – development and testing of Grid programs
Layers of TotalGrid

• Internet / Ethernet
• PVM or MPI
• Condor or SGE
• PERL-GRID
• P-GRADE
Hungarian Cluster Grid Initiative

• Goal: to connect the new clusters of the Hungarian higher education institutions into a Grid
• By autumn, 42 new clusters will be established at various universities of Hungary
• Each cluster contains 20 PCs and a network server PC
  – Day-time: the components of the clusters are used for education
  – At night: all the clusters are connected to the Hungarian Grid by the Hungarian academic network (2.5 Gbit/s)
  – Total Grid capacity in 2002: 882 PCs
• In 2003 a further 57 similar clusters will join the Hungarian Grid
  – Total Grid capacity in 2003: 2079 PCs
• Open Grid: other clusters can join at any time
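The capacity figures follow directly from the cluster counts, at 21 PCs per cluster (20 workstations plus the network server); a quick check:

```python
PCS_PER_CLUSTER = 20 + 1   # 20 PCs plus one network server PC

clusters_2002 = 42
clusters_2003 = 42 + 57    # the 2002 clusters plus 57 new ones

print(clusters_2002 * PCS_PER_CLUSTER)  # 882 PCs in 2002
print(clusters_2003 * PCS_PER_CLUSTER)  # 2079 PCs in 2003
```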
Structure of the Hungarian Cluster Grid

[Figure: Linux clusters running TotalGrid, connected by 2.5 Gb/s Internet.]
• 2002: 42 * 21-PC Linux clusters, 882 PCs in total
• 2003: 99 * 21-PC Linux clusters, 2079 PCs in total
Live demonstration of TotalGrid

• MEANDER Nowcast Program Package:
  – Goal: ultra-short-range forecasting (30 minutes) of dangerous weather situations (storms, fog, etc.)
  – Method: analysis of all the available meteorological information to produce parameters on a regular mesh (10 km -> 1 km)
• Collaborating partners:
  – OMSZ (Hungarian Meteorological Service)
  – MTA SZTAKI
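The 10 km -> 1 km step can be pictured as interpolating a coarse analysis field onto a finer regular mesh. A toy nearest-neighbour sketch in pure Python (the grid sizes and values are made up; MEANDER's real analysis is far more sophisticated):

```python
# Toy nearest-neighbour downscaling of a 2-D field onto a finer mesh.
# Data and sizes are illustrative only.

def regrid_nearest(coarse, factor):
    """Expand a 2-D grid by `factor` in each direction, copying the
    nearest coarse cell into every fine cell it covers."""
    fine = []
    for row in coarse:
        fine_row = [v for v in row for _ in range(factor)]
        fine.extend(list(fine_row) for _ in range(factor))
    return fine

coarse = [[10.0, 12.0],
          [11.0, 13.0]]            # e.g. temperature on a 10 km mesh
fine = regrid_nearest(coarse, 10)  # 10 km -> 1 km

print(len(fine), len(fine[0]))     # 20 20
```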
Structure of MEANDER

[Figure: data flow of MEANDER. Inputs: first-guess data (ALADIN), SYNOP data, satellite and radar images, and lightning data; after decoding, "radar to grid" and "satellite to grid" steps map them onto the GRID at the current time. The CANARI delta analysis produces the basic fields (pressure, temperature, humidity, wind), from which the derived fields (type of clouds, overcast, visibility, rainfall state, etc.) are computed. Visualization: HAWK for meteorologists, GIF for end users.]
P-GRADE version of MEANDER

Implementation of the delta method in P-GRADE

[Figure: the delta method implemented as a P-GRADE job, passed through PERL-GRID to Condor-PVM over a shared 34 Mbit link and a dedicated 11/5 Mbit link; the netCDF output is returned and visualized with HAWK.]
Live demo of MEANDER based on TotalGrid

[Figure: netCDF input fetched from ftp.met.hu over a shared 512 kbit link; the job travels from P-GRADE via PERL-GRID (dedicated 11/5 Mbit link) to Condor-PVM (shared 34 Mbit link) for parallel execution; a GRM trace and the netCDF output are returned.]
On-line performance visualization in TotalGrid

[Figure: the same TotalGrid setup as the live demo (netCDF input from ftp.met.hu over a shared 512 kbit link), with GRM collecting a trace during parallel execution.]

PROVE visualization of the delta method

[Figure: PROVE performance visualization of the delta method; this monitoring work is also the task of SZTAKI in the DataGrid project.]
Conclusions

• Already several important results that can be used both by
  – the academic world (Cluster Grid, SuperGrid)
  – commercial companies (TotalGrid)
• Further efforts and projects are needed
  – to make these Grids more robust and more user-friendly
  – to add new functionalities
Thanks for your attention
?
Further information: www.lpds.sztaki.hu