
Page 1: Andrey tchernykh

HPC in México: Looking to the Future based on solid foundations

Andrei Tchernykh
Centro de Investigación Científica y de Educación Superior de Ensenada, Ensenada, Baja California, México
[email protected]
http://www.cicese.mx/~chernykh

RISC, México, 2011

Page 2: Andrey tchernykh

HPC History

CICESE Parallel Computing Laboratory

Mainframe, Workstation, PC, Cluster

(images by Christophe Jacquet)

Page 3: Andrey tchernykh

HPC History

GRID

Page 4: Andrey tchernykh

HPC History

Cloud Computing:
• SaaS: Software as a Service
• PaaS: Platform as a Service
• IaaS: Infrastructure as a Service
• HPCaaS: High Performance Computing as a Service

Page 5: Andrey tchernykh

2008: 50 years since the installation of the first computer in Mexico, the IBM-650 (1958).

Page 6: Andrey tchernykh

[Chart: Number of computers in the Top 500, Mexico, by list edition, 1993–2011]

Page 7: Andrey tchernykh

06/1993
Rank | Site | Computer | Cores | Year | Rmax | Rpeak
246 | Universidad Nacional Autonoma de Mexico | Y-MP4/432, Cray Inc. | 4 | 1991 | 1.159 | 1.333

06/2009
Rank | Site | Computer | Cores | Year | Rmax | Rpeak
226 | Universidad Autonoma Metropolitana, Mexico | Lufac Cluster, Intel Xeon E54xx 3.0 GHz, Infiniband | 2120 | 2009 | 18.48 | 25.44
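The Rmax/Rpeak pair in these TOP500 entries is the achieved versus theoretical peak LINPACK performance; their ratio is the commonly quoted HPL efficiency. A minimal sketch of that arithmetic, using the two systems listed above (the helper name is illustrative, and the units are as listed on the slide):

```python
# HPL efficiency: fraction of the theoretical peak (Rpeak)
# actually achieved on the LINPACK benchmark (Rmax).
def hpl_efficiency(rmax, rpeak):
    return rmax / rpeak

# 06/1993 entry: UNAM's Cray Y-MP4/432 (values as listed above)
cray_eff = hpl_efficiency(1.159, 1.333)

# 06/2009 entry: UAM's Lufac cluster (values as listed above)
lufac_eff = hpl_efficiency(18.48, 25.44)

print(f"Cray Y-MP4/432: {cray_eff:.1%}")   # ~86.9%
print(f"Lufac cluster:  {lufac_eff:.1%}")  # ~72.6%
```

The vector Cray achieves a noticeably higher fraction of its peak than the commodity cluster, a pattern that holds broadly across TOP500 history.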

Page 8: Andrey tchernykh

Universidad Nacional Autonoma de Mexico

List | Systems | Highest Ranking | Sum Rmax (GFlops)
06/2007 | 1 | 309 | 5090.00
11/2006 | 1 | 126 | 5090.00
11/1997 | 1 | 459 | 10.42
06/1997 | 1 | 324 | 10.42
11/1994 | 1 | 492 | 1.16
06/1994 | 1 | 378 | 1.16
11/1993 | 1 | 292 | 1.16
06/1993 | 1 | 246 | 1.16

System at rank 309: Cluster Platform 4000 DL145, Opteron Dual Core 2.6 GHz, Infiniband, Hewlett-Packard, 1360 cores

Page 9: Andrey tchernykh

Universidad Autonoma Metropolitana

List | Systems | Highest Ranking | Sum Rmax (GFlops)
06/2009 | 1 | 441 | 18.48
11/2008 | 1 | 226 | 18.48
06/1996 | 1 | 385 | 4.14
11/1995 | 1 | 268 | 4.14
06/1995 | 1 | 197 | 4.14
11/1994 | 2 | 157 | 5.62

System at rank 226: Lufac Cluster, Intel Xeon E54xx 3.0 GHz, Infiniband, 2120 cores

Page 10: Andrey tchernykh

Geoscience

List | Systems | Highest Ranking | Sum Rmax (GFlops)
11/2005 | 3 | 351 | 6078.00
06/2005 | 3 | 201 | 6078.00
11/2004 | 3 | 118 | 6078.00
06/2004 | 3 | 89 | 6078.00
11/2003 | 3 | 83 | 3889.05

System at rank 351: xSeries Xeon 2.8 GHz, Gig-Ethernet, IBM, 706 cores

Grupo Electra

List | Systems | Highest Ranking | Sum Rmax (GFlops)
11/2002 | 1 | 422 | 197.30

Page 11: Andrey tchernykh

Banco Azteca

List | Systems | Highest Ranking | Sum Rmax (GFlops)
06/2007 | 1 | 391 | 4704.00
06/2005 | 1 | 399 | 1330.60
11/2004 | 1 | 261 | 1210.00

System at rank 391: Integrity Superdome, Itanium2 DC 1.6 GHz, HyperPlex, Hewlett-Packard, 980 cores

Pemex Gas

List | Systems | Highest Ranking | Sum Rmax (GFlops)
06/2003 | 1 | 412 | 289.00
11/2002 | 1 | 218 | 289.00
06/2000 | 1 | 361 | 48.93
06/1996 | 1 | 323 | 4.62

System at rank 412: Netfinity Cluster, PIII 1 GHz, Ethernet, IBM, 1024 cores

Page 12: Andrey tchernykh

Instituto Latinoamericano

List | Systems | Highest Ranking | Sum Rmax (GFlops)
06/2005 | 1 | 462 | 1210.00
11/2004 | 1 | 280 | 1210.00

System at rank 462: Integrity Superdome, 1.5 GHz, HPlex, Hewlett-Packard, 256 cores

ITESM

List | Systems | Highest Ranking | Sum Rmax (GFlops)
11/1993 | 1 | 409 | 0.66

System at rank 409: SP1, IBM, 8 cores

Page 13: Andrey tchernykh

Telcel

List | Systems | Highest Ranking | Sum Rmax (GFlops)
11/2001 | 1 | 335 | 118.10
06/2001 | 1 | 228 | 118.10
11/2000 | 1 | 179 | 118.10
06/2000 | 2 | 159 | 123.93
06/1999 | 1 | 436 | 26.38
11/1998 | 1 | 273 | 26.38

System at rank 335: HPC 10000, 400 MHz Cluster, Sun Microsystems, 192 cores

SAT/ISOSA (Servicio de Administración Tributaria, México)

List | Systems | Highest Ranking | Sum Rmax (GFlops)
11/2003 | 1 | 328 | 517.00
06/2003 | 1 | 182 | 517.00

System at rank 328: SuperDome, 875 MHz, HyperPlex, Hewlett-Packard, 256 cores

Page 14: Andrey tchernykh

Top 500 Ranking

[Chart: Top 500 ranking over time, Mexico vs. Spain]

Page 15: Andrey tchernykh

Application Areas

Page 16: Andrey tchernykh

Application Areas (2011)

Aerospace, Automotive, Biology, Consulting, Database, Defense, Electronics, Energy, Environment, Finance, Geophysics, Hardware, Information Service, Life Science, Medicine, Telecomm, Transportation, Weather and Climate Research, WWW

Page 17: Andrey tchernykh

Old and New Application Areas

2000: Chemistry, Manufacturing, Mechanics, Pharmaceutics

2011: Retail, Logistic Services, Research Service, Software, Weather Forecasting, Semiconductor, Digital Media

Page 18: Andrey tchernykh

CICESE Parallel Computing Laboratory

My first job

1975: The Elbrus (Russian: Эльбрус) is a line of Soviet and Russian computer systems developed by the Lebedev Institute of Precision Mechanics and Computer Engineering, Moscow.

Topics: Models of Parallel Computation, Processing of Incomplete Information, Data Flow

Page 19: Andrey tchernykh

My Research Areas

HPC, Grid Computing, Cloud Computing, Real Time Systems

Scheduling and resource optimization:
• online and offline scheduling
• Knowledge-Free Scheduling
• Scheduling with Uncertainty
• List Scheduling, Work Stealing
• Scheduling with Service Level Agreements
• Approximation Algorithms
• Multiobjective Optimization
• Computational Intelligence
• Workflow Orchestration
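List scheduling, one of the areas above, can be illustrated by Graham's classic greedy rule: take jobs in list order and always assign the next job to the currently least-loaded machine, which guarantees a makespan at most (2 - 1/m) times the optimum on m identical machines. A minimal sketch (the function name and job values are illustrative, not from the slides):

```python
# Graham's list scheduling: assign each job, in list order, to the
# machine with the smallest current load.
def list_schedule(jobs, m):
    loads = [0] * m
    assignment = []
    for job in jobs:
        i = loads.index(min(loads))  # least-loaded machine (ties: lowest index)
        loads[i] += job
        assignment.append(i)
    return assignment, max(loads)  # per-job machine indices, and the makespan

jobs = [3, 3, 2, 2, 2]
assignment, makespan = list_schedule(jobs, m=2)
print(assignment, makespan)  # [0, 1, 0, 1, 0] 7  (optimum is 6: {3, 3} vs {2, 2, 2})
```

The gap between 7 and the optimal 6 on this tiny instance is exactly the kind of worst-case behavior that approximation analysis, another area listed above, quantifies.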

Page 20: Andrey tchernykh

Collaboration

CICESE, Mexico

Mexico:
• Universidad Autónoma de Baja California
• Universidad Autónoma de Nuevo León
• Tecnológico de Monterrey
• Instituto Tecnológico de Morelia
• Centro de Estudios Superiores del Estado de Sonora

Germany:
• Dortmund University, Prof. Uwe Schwiegelshohn
• University of Göttingen, Prof. Ramin Yahyapour

Russia:
• Institute for System Programming, Russian Academy of Sciences, Dr. Nikolay Kuzurin

France:
• Institute of Informatics and Applied Mathematics of Grenoble, Prof. Denis Trystram

USA:
• Ohio University, Prof. Klaus Ecker
• University of California, Irvine, Prof. Isaac Scherson, Prof. Jean-Luc Gaudiot

Page 21: Andrey tchernykh

Green Computing

Page 22: Andrey tchernykh

Important issues – fossil fuels

An average desktop computer with monitor requires:
• 10 times its weight in chemicals and fossil fuels to produce
• 266 kg of fossil fuel to produce an LCD monitor
• 4 litres of oil for a laser toner cartridge

Page 23: Andrey tchernykh

Important issues – electronic-waste

• Over 130,000 PCs are dumped by US homes and businesses each day
• Less than 10% of electronics are recycled
• An estimated 50 million tons of e-waste is generated globally each year

Page 24: Andrey tchernykh

Electronic Waste


Page 25: Andrey tchernykh

Important issues – toxic waste

Electronic waste is an increasing problem:
• up to 70% of all hazardous waste
• high in many toxic materials (heavy metals, plastics)
• can easily leach into ground water and bio-accumulate
• CRT: graphite/zinc leachate (monitors are hazardous waste)
• Lead: can attack proteins and DNA, as well as interfere with nervous system function
• LCD: 4–12 mg of mercury per unit

Page 26: Andrey tchernykh

Important issues – wasting electricity

The average desktop PC wastes:
• nearly half the power it consumes
• one-third of its power as heat

• Energy consumed by data centres worldwide doubled from 2000 to 2005
• The more powerful the machine, the more cool air is needed to keep it from overheating
• By 2005, the energy required to power and cool servers accounted for about 1.2% of total U.S. electricity consumption

[Image: cooling towers]

Page 27: Andrey tchernykh

The way out

• Algorithmic efficiency: affects the amount of computer resources required for any given computing function
• Resource allocation: cut energy usage through smarter routing of traffic and placement of workloads
• Virtualization: use only what you need (cloud computing)
• Sophisticated power management
• Operating system support
• Power supply, storage, video card, display
• Materials recycling
• Telecommuting: reduces greenhouse gas emissions related to travel
• Education and certification
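The virtualization and resource-allocation measures above are often realized as server consolidation: pack virtual machines onto as few physical hosts as possible so that idle hosts can be powered down. A minimal sketch using first-fit decreasing bin packing (the function name, loads, and capacity are illustrative assumptions, not from the slides):

```python
# Server consolidation via first-fit decreasing (FFD) bin packing:
# sort VM loads in decreasing order, place each on the first host
# with enough spare capacity, opening a new host only when needed.
def consolidate(vm_loads, host_capacity):
    hosts = []  # used capacity per powered-on host
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # power on a new host
    return hosts

# Five VM loads against a host capacity of 10 units
hosts = consolidate([6, 5, 4, 3, 2], host_capacity=10)
print(len(hosts), hosts)  # 2 [10, 10]
```

Here five VMs fit on two fully utilized hosts instead of five, and every host that stays off saves both its direct power draw and the cooling overhead discussed on the previous slide.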

Page 28: Andrey tchernykh


Page 29: Andrey tchernykh

HPC in México: Looking to the Future based on solid foundations

Andrei Tchernykh
Centro de Investigación Científica y de Educación Superior de Ensenada, Ensenada, Baja California, México
[email protected]
http://www.cicese.mx/~chernykh

RISC, México, 2011