
DOCUMENT RESUME

ED 368 330 IR 016 567

TITLE High Performance Computing and Communications: Toward a National Information Infrastructure.

INSTITUTION Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

PUB DATE 94

NOTE 190p.; Illustrations and photographs may not copy adequately.

PUB TYPE Reports Evaluative/Feasibility (142)

EDRS PRICE MF01/PC08 Plus Postage.

DESCRIPTORS Computer Centers; Computer Networks; *Educational Technology; Government Role; Information Networks; Program Evaluation; Research and Development; *Technological Advancement; *Telecommunications; Training

IDENTIFIERS *High Performance Computing; High Performance Computing Act 1991; *National Information Infrastructure; National Research and Education Network

ABSTRACT
This report describes the High Performance Computing and Communications (HPCC) initiative of the Federal Coordinating Council for Science, Engineering, and Technology. This program is supportive of and coordinated with the National Information Infrastructure Initiative. Now halfway through its 5-year effort, the HPCC program counts among its achievements more than a dozen high-performance computing centers in operation nationwide. Traffic on federally funded networks and the number of local and regional networks connected to these centers continues to double each year. Teams of researchers have made substantial progress in adapting software for use on high-performance computer systems, and the base of researchers, educators, and students trained in HPCC technologies has grown substantially. The five HPCC program components in operation at present are: (1) scalable computing systems ranging from affordable workstations to large-scale high-performance systems; (2) the National Research and Education Network; (3) the Advanced Software Technology and Algorithms program; (4) the Information Infrastructure Technology and Applications program; and (5) the Basic Research and Human Resources program. Components of each are reviewed. Twenty-five tables present program information, with 72 illustrations. (SLD)

***********************************************************************
* Reproductions supplied by EDRS are the best that can be made *
* from the original document. *
***********************************************************************


U.S. DEPARTMENT OF EDUCATION
Office of Educational Research and Improvement

EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)

❏ This document has been reproduced as received from the person or organization originating it.

❏ Minor changes have been made to improve reproduction quality.

Points of view or opinions stated in this document do not necessarily represent official OERI position or policy.


A. Nico Habermann (1932-1993)

This "Blue Book" is dedicated to the memory of our friend and colleague, A. Nico Habermann, who passed away on August 8, 1993, as this publication was going to press. Nico served as Assistant Director of NSF for Computer and Information Science and Engineering (CISE) for the past two years and led NSF's commitment to developing the HPCC Program and enabling a National Information Infrastructure. He came to NSF from Carnegie Mellon University, where he was the Alan J. Perlis Professor of Computer Science and founding dean of the School of Computer Science. He was a visionary leader who helped promote collaboration between computer science and other disciplines and was devoted to the development and maturation of the interagency collaboration in HPCC. We know Nico was proud to be part of these efforts.


HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS

TOWARD A NATIONAL INFORMATION INFRASTRUCTURE

A Report by the Committee on Physical, Mathematical, and Engineering Sciences

Federal Coordinating Council for Science, Engineering, and Technology

Office of Science and Technology Policy


EXECUTIVE OFFICE OF THE PRESIDENT
OFFICE OF SCIENCE AND TECHNOLOGY POLICY

WASHINGTON, D.C. 20506

MEMBERS OF CONGRESS:

I am pleased to forward with this letter "High Performance Computing and Communications: Toward a National Information Infrastructure," prepared by the Committee on Physical, Mathematical, and Engineering Sciences (CPMES) of the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), to supplement the President's Fiscal Year 1994 Budget. This report describes the FCCSET Initiative in High Performance Computing and Communications.

The interagency HPCC Initiative is developing computing, communications, and software technologies for the 21st century. It is making progress toward providing the high performance computing and communications capabilities and advanced software needed in critical research and development programs. The HPCC Program is fully supportive of and coordinated with the emerging National Information Infrastructure (NII) Initiative, which is part of the President's and Vice President's Technology Initiative released February 22, 1993.

To enable the NII Initiative to build on the HPCC Program, the FCCSET CPMES High Performance Computing, Communications and Information Technology (HPCCIT) Subcommittee has included a new program component, Information Infrastructure Technology and Applications (IITA). It will provide for research and development needed to address National Challenges, and it will also address problems where the application of HPCC technology can provide huge benefits to America. Working with industry, the participating agencies will develop and apply high performance computing and communications technologies to improve information systems for National Challenges in areas such as health care, design and manufacturing, the environment, public access to government information, education, digital libraries, energy management, public safety, and national security. IITA will support the development of the NII and the development of the computer, network, and software technologies needed to provide appropriate privacy and security protection for users.

The coordination and integration of the interagency research and development strategy for this Initiative and its coordination with other interagency FCCSET Initiatives has been led very ably by the CPMES and its HPCCIT Subcommittee. Donald A. B. Lindberg, Chair of the HPCCIT Subcommittee, his interagency colleagues, their associates, and staff are to be commended for their efforts in the Initiative itself and in this report.

John H. Gibbons
Director


Office of Science and Technology Policy

Federal Coordinating Council for Science, Engineering, and Technology

Committee on Physical, Mathematical, and Engineering Sciences

Department of Agriculture
Department of Commerce
Department of Defense
Department of Education
Department of Energy
Department of Health and Human Services
Department of Interior
Environmental Protection Agency
National Aeronautics and Space Administration
National Science Foundation
Office of Management and Budget
Office of Science and Technology Policy

FCCSET Directorate

Charles H. Dickens, Executive Secretary
Elizabeth Rodriguez, Senior Policy Analyst

High Performance Computing, Communications, and Information Technology Subcommittee

Name                   Agency or Department                             HPCCIT Subcommittee   HPCCIT Executive Committee (see p. iii)

Donald A.B. Lindberg   National Coordination Office                     Chair                 Chair
James Burrows          National Institute of Standards and Technology   Representative        Representative
John S. Cavallini      Department of Energy                             Alternate             Alternate
Melvyn Ciment          National Science Foundation                      Alternate             Alternate
George Cotter          National Security Agency                         Representative
Robin L. Dennis        Environmental Protection Agency                  Alternate


Name                   Agency or Department                             HPCCIT Subcommittee   HPCCIT Executive Committee (see p. iii)

Norman Glick           National Security Agency                         Alternate
A. Nico Habermann      National Science Foundation                      Representative        Representative
  (Deceased)
Lee Holcomb            National Aeronautics and Space Administration    Representative        Representative
Paul E. Hunter         National Aeronautics and Space Administration    Alternate             Alternate
Steven Isakowitz       Office of Management and Budget                  Representative
Charles R. Kalina      National Coordination Office                     Executive Secretary
Norman H. Kreisman     Department of Energy                             Alternate             Alternate
R.J. (Jerry) Linn      National Institute of Standards and Technology   Alternate             Alternate
Daniel R. Masys        National Institutes of Health                    Representative
Bruce McConnell        Office of Management and Budget                  Alternate
William L. McCoy       Federal Aviation Administration                  Observer
James A. Mitchell      Department of Education                          Representative
David Nelson           Department of Energy                             Representative        Representative
Michael R. Nelson      Office of Science and Technology Policy          Representative
Joan H. Novak          Environmental Protection Agency                  Representative


Name                    Agency or Department                              HPCCIT Subcommittee   HPCCIT Executive Committee (see p. iii)

Merrell Patrick         National Science Foundation                       Alternate             Alternate
Alex Poliakoff          Department of Education                           Alternate
Thomas N. Pyke          National Oceanic and Atmospheric Administration   Representative        Representative
John Silva              Advanced Research Projects Agency                 Alternate             Alternate
Paul H. Smith           National Aeronautics and Space Administration     Alternate             Alternate
Stephen L. Squires      Advanced Research Projects Agency                 Alternate             Alternate
John C. Toole           Advanced Research Projects Agency                 Representative        Representative
Judith L. Vaitukaitis   National Institutes of Health                     Alternate

Note: The Advanced Research Projects Agency, the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration hold permanent positions on the HPCCIT Executive Committee; the other two positions rotate among the other agencies.

Editorial Group for High Performance Computing and Communications 1994

Editor
Sally E. Howe, Assistant Director for Technical Programs, National Coordination Office

Program Component and Section Editors

Melvyn Ciment, NSF, Past Co-Editor   HPCC Program and Program Components
Stephen Griffin, NSF                 Case Studies
Daniel R. Masys                      Video
Merrell Patrick, NSF                 BRHR


Calvin T. Ramos, NASA                ASTA
Stephen L. Squires, ARPA             HPCC Program, HPCS, IITA
Roger Taylor, NSF                    NREN

Agency Editors

Norman Glick           National Security Agency
Stephen Griffin        National Science Foundation
Frederick C. Johnson   National Institute of Standards and Technology
Thomas Kitchens        Department of Energy
Daniel R. Masys        National Institutes of Health
James A. Mitchell      Department of Education
Joan H. Novak          Environmental Protection Agency
Thomas N. Pyke         National Oceanic and Atmospheric Administration
Calvin T. Ramos        National Aeronautics and Space Administration
Stephen L. Squires     Advanced Research Projects Agency

Copy Editor

Patricia N. Williams   National Coordination Office

Acknowledgments

In addition to the FCCSET, CPMES, HPCCIT Subcommittee, and Editorial Group, many other people contributed to this book, and we thank them for their efforts. We explicitly thank the following contributors to the Component and Agency Sections:

Robert Aiken, Lawrence Livermore National Laboratory, DOE
Robin Dennis, EPA
Darleen Fisher, NSF
Michael St. Johns, USAF, ARPA
Anthony Villasenor, NASA

Contributors to the Case Studies are acknowledged at the end of the Case Studies section.

We thank Joe Fitzgerald and Troy Hill of the Audiovisual Program Development Branch at the National Library of Medicine for their artistic contributions and for the preparation of the book in its final form, and we thank Patricia Carson and Shannon Uncangco of the National Coordination Office for their contributions throughout the preparation of this book.




Table of Contents

I. Executive Summary

II. The HPCC Program

1. Program Overview 7

2. Program Management 15

3. Interdependencies Among Components 20

4. Coordination Among Agencies 21

5. Membership on the HPCCIT Subcommittee

6. Technology Collaboration with Industry and Academia

7. Agency Budgets by HPCC Components

8. Agency Responsibilities by HPCC Components 26

III. HPCC Program Components

1. High Performance Computing Systems 29

2. National Research and Education Network 32

3. Advanced Software Technology and Algorithms 41

4. Information Infrastructure Technology and Applications 47

5. Basic Research and Human Resources 55

IV. Individual Agency Programs

1. Advanced Research Projects Agency 59

2. National Science Foundation 64

3. Department of Energy 72

4. National Aeronautics and Space Administration 81

5. National Institutes of Health 87

6. National Security Agency 96

7. National Institute of Standards and Technology 101

8. National Oceanic and Atmospheric Administration 106

9. Environmental Protection Agency 111

10. Department of Education 117

V. Case Studies

Introduction 119

1. Climate Modeling 120

2. Sharing Remote Instruments 123

3. Design and Simulation of Aerospace Vehicles 126

4. High Performance Life Science: From Molecules to MRI 129


Table of Contents (cont.)

5. Non-Renewable Energy Resource Recovery 132

6. Groundwater Remediation 135

7. Improving Environmental Decision Making 137

8. Galaxy Formation 139

9. Chaos Research and Applications 141

10. Virtual Reality Technology 143

11. HPCC and Education 145

12. GAMS: An Example of HPCC Community Resources 148

13. Process Simulation and Modeling 150

14. Semiconductor Manufacturing for the 21st Century 152

15. Advances Based on Field Programmable Gate Arrays 154

16. High Performance Fortran and its Environment 156

Contributing Writers 158

VI. Glossary 159

VII. Contacts 166

VIII. Index 173


List of Tables

HPCCIT (High Performance Computing, Communications, and Information Technology) Subcommittee

Some Grand Challenge Problems 5

Some National Challenge Application Areas 5

Overview of the Five HPCC Components 11

HPCC Program Goals 11

HPCC Agencies

HPCC Program Strategies 13

HPCC Research Areas 14

Major HPCC Conferences and Workshops During FY 1993 18

HPCC-Related Conferences 18-19

Evaluation Criteria for the HPCC Program 12-13

Agency Budgets by HPCC Program Components

Agency Responsibilities by HPCC Program Components 26-27

Major Federal Components of the Interagency Internet 33

InterNIC Service Awards 36

Gigabit Testbeds 39

HPCC Agency Grand Challenge Research Teams 43

Contrasts Between Grand Challenges and National Challenges 49

NSF centers directly supported within the ASTA component 66

On-Going NSF Grand Challenge Applications Groups 68

NSF Grand Challenge Applications Groups Initiated in FY 1993 69

High performance systems under evaluation for biomedical applications 90

NOAA Grand Challenges Requiring HPCC Resources 106

NOAA Laboratories Involved in HPCC 107

NOAA National Data Centers 108



List of Illustrations

Schematic of interconnected NSF, NASA, and DOE "backbone" networks NREN 35

High performance interconnect and fine-grained parallel systems board ARPA 60

High performance networking map ARPA 61

Modern operating systems schematic ARPA 62

National Challenges and NII program layers schematic and examples ARPA 63

NSF metacenter hardware configuration NSF 64

Industrial affiliates and partners at NSF Supercomputer Centers (pie chart) NSF 65

Mosaic screens NSF 67

Hydrogen/air turbulent reacting flame streams DOE 72

ESnet map DOE 73

Biochemically activated environmental chemical benzo-a-pyrene DOE 74

Parallel Virtual Machine (PVM) DOE 75

National Storage Laboratory (NSL) DOE 76

High School Science Students Honors Program "Superkids" DOE 77

Volume renderings of density of bubble hit by shock wave DOE 78

Pacific Northwest Mesoscale Meteorological (MM) model results DOE 79

High velocity impact model results DOE 80

Simulation of the Earth's ozone layer NASA 81

Aeronautics Network (AERONet) map NASA 82

NASA Science Internet (NSINet) maps NASA 82

Simulated temperature profile for hypersonic re-entry body NASA 83

Simulation of surface pressure on High Speed Civil Transport (HSCT) NASA 84

Freestream airflow and engine exhaust of Harrier Vertical Takeoff and Landing (VTOL) aircraft NASA 85

Herpes simplex virus capsid cross-section reconstruction NIH 87

Myoglobin protein NIH 89

Binding of small molecules to DNA NIH 91

Three-dimensional reconstruction from computed tomography and magnetic resonance images NIH 92

Four megabyte digital radiology image NIH 93

X-ray diffraction spectroscopy for determining protein structure NIH 94

Terasys workstation NSA 96

Graph of time improvements using multiple vs. single processor NSA 97

Memory traffic flow NSA 98

High Speed Network (HNET) Testbed NSA 99

Board employing Field Programmable Gate Arrays NSA 100

MultiKron chip NIST 101

Computer-controlled coordinate measuring machine NIST 102

Machine tool in the Shop of the 90s NIST 103

Cost-effective ways to help protect computerized data NIST 104

Statistical analysis of video microscope images NIST 105

Model of Colorado Front Range blizzard NOAA 109

Contours of wind flow in middle atmosphere NOAA 110

EPA networking maps EPA 111

Electric field vectors for benzo-a-pyrene EPA 112



List of Illustrations (cont.)

Regional Acid Deposition Model predictions of nitric acid over eastern U.S. EPA 113

Rendition of salinity in Chesapeake Bay EPA 114

Ozone concentrations through AVS interface EPA 115

Getting feedback on prototype user interface EPA 116

Fairfax County Public School teacher and students using PATHWAYS database and STAR SCHOOLS distance learning materials ED 117

Simulation of speed of currents at ocean surface and global ocean model 118

Sea surface temperature simulation maps 120

Chick cerebellum Purkinje cell 123

Controlling high-voltage electron microscope over the Internet 124

Simulation of airflow over and past high performance aircraft wing 126

Airspeed contours around aircraft traveling at Mach 0.77 127

Simulation of airflow through jet engine components 128

Modeling of Purkinje neuron 129

Computer-aided biopsy from magnetic resonance observation 130

Molecular dynamics of leukotriene molecules portrayed using "ghosting" 131

Gulf Coast Basin as described by geological database 132

Comparison of computed injectivity of carbon dioxide and field data from Texas oil reservoir 133

Groundwater remediation at Oak Ridge Waste Storage Facility 135

Sediment transport in Lake Erie 137

Astrophysical simulation of 8.8 million gravitational bodies 139

Pattern found by "chaos" research 141

Virtual reality environment for controlling scanning tunneling microscope 143

Detail of virtual reality environment 144

High school students using HPCC technologies 145

DLA fractal object research for 1993 Westinghouse Science Talent Search 146

Guide to Available Mathematical Software (GAMS) screen 148

Simulation of liquid molding process for manufacturing polymer composite automotive body panels 150

Simulation of current flow in bipolar transistor 152

Splash 2 board containing Field Programmable Gate Arrays 154

High Performance Fortran (HPF) code segments 157



High Performance Computing and Communications: Toward a National Information Infrastructure

The FY 1994 U.S. Research and Development Program

Executive Summary

The goal of the Federal High Performance Computing and Communications (HPCC) Program is to accelerate the development of future generations of high performance computers and networks and the use of these resources in the Federal government and throughout the American economy. Scalable high performance computers, advanced high speed computer communications networks, and advanced software are critical components of a new National Information Infrastructure (NII). This infrastructure is essential to our national competitiveness and will enable us to strengthen and improve the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information.

The HPCC Program evolved out of the recognition in the early 1980s by American scientists and engineers and leaders in government and industry that advanced computer and telecommunications technologies could provide huge benefits throughout the research community and the entire U.S. economy. The Program is the result of several years of effort by senior government, industry, and academic scientists and managers to initiate and implement a program to extend U.S. leadership in high performance computing and networking technologies and to apply those technologies to areas of profound impact on and interest to the American people.

The Program is planned, funded, and executed through the close cooperation of Federal agencies and laboratories, private industry, and academia. These efforts are directed toward ensuring that to the greatest extent possible the Program meets the needs of all communities involved and that the results of the Program are brought into the research and educational communities and into the commercial marketplace as rapidly as possible.

Now halfway through its five-year effort, the Program's considerable achievements include:

❏ More than a dozen high performance computing centers are in operation nationwide. New scalable high performance systems are in operation at these centers, more advanced systems are in the pipeline, and new systems software is making these systems increasingly easy to use. Benchmark results improve markedly with each new generation of hardware and software and bring the Program closer to its goal of achieving sustained teraflop (trillions of floating point operations per second) performance.




❏ Traffic on federally-funded networks and the number of new local and regional networks connected to these networks continue to double every year.

More than 6,000 regional, state, and local IP (Internet Protocol) networks in the U.S., and more than 12,000 worldwide, are connected; more than 800 of the approximately 3,200 two-year and four-year colleges and universities in the Nation are interconnected; and an estimated 1,000 high schools also are connected to the Internet. Traffic on the NSFNET backbone has doubled over the past year and has increased a hundred-fold since 1988.

❏ Already, HPCC research in the next generation of networking technologies indicates that the Program goal of sustained gigabit (billions of bits) per second transmission speeds will be achieved no later than 1996.

❏ Teams of researchers have made substantial progress in adapting software applications for use on scalable high performance systems and are taking advantage of the increased computational throughput to solve problems of increasing resolution and complexity.

Many of these problems are "Grand Challenges": fundamental problems in science and engineering with broad economic and scientific impact whose solution can be advanced by applying high performance computing techniques and resources. These science and engineering Grand Challenge problems have motivated both the creation and the evolution of the HPCC Program. Solution of these problems is critical to the missions of several agencies participating in the Program.

❏ The base of researchers, educators, and students trained in using HPCC technologies has grown substantially as agencies have provided training in these technologies and in application areas that rely on them.
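The network growth figures quoted above can be cross-checked with a back-of-the-envelope calculation (an illustrative sketch, not part of the original report, assuming the five-year span 1988-1993): a hundred-fold traffic increase implies an average growth factor of roughly 2.5 per year, which is consistent with the report's claim of at least annual doubling, since doubling alone would yield only a 32-fold increase over the same period.

```python
# Illustrative check of the growth figures above (assumed span: 1988-1993).
years = 5
annual_factor = 100 ** (1 / years)  # constant yearly factor giving 100x total
doubling_total = 2 ** years         # total growth if traffic merely doubled yearly

print(round(annual_factor, 2))  # about 2.51 per year
print(doubling_total)           # 32, i.e. doubling alone falls short of 100x
```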

The HPCC Program fully supports and is closely coordinated with the Administration's efforts to accelerate the development and deployment of the NII. The Program and its participating agencies will help provide the basic research and technological development to support NII implementation. To this end, several strategic and programmatic modifications have been made to the HPCC Program. The most significant of these is the addition of a new program component, Information Infrastructure Technology and Applications (IITA).

IITA is a research and development effort that will enable the integration of critical information systems and their application to "National Challenge" problems. National Challenges are major societal needs that computing and communications technology can help address in key areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. The IITA component will develop and demonstrate prototype solutions to National Challenge problems.


IITA technologies will support advanced applications such as:

❏ Routine transmission of an individual's medical record (including X-ray and CAT scan images) to a consulting physician located a thousand miles away.

❏ The study of books, films, music, photographs, and works of art in the Library of Congress and in the Nation's great libraries, galleries, and museums on a regular basis by teachers and students anywhere in the country.

❏ The flexible incorporation of improved design and manufacturing to produce safer and more energy-efficient cars, airplanes, and homes.

❏ Universal access by industry and the public to government data and information products.

The five HPCC Program components and their key aspects are:

High Performance Computing Systems (HPCS)

❏ Scalable computing systems, with associated software, including networks of heterogeneous systems ranging from affordable workstations to large scale high performance systems

❏ Portable wireless interfaces

National Research and Education Network (NREN)

❏ Widened access by the research and education communities to high performance computing and research resources

❏ Accelerated development and deployment of networking technologies

Advanced Software Technology and Algorithms (ASTA)

❏ Prototype solutions to Grand Challenge problems through the development of advanced algorithms and software and the use of HPCC resources

Information Infrastructure Technology and Applications (IITA)

❏ Prototype solutions to National Challenge problems using HPCC enabling technologies


Basic Research and Human Resources (BRHR)

Support for research, training, and education in computer science, computer engineering, and computational science, and infrastructure enhancement through the addition of HPCC resources

HPCC Program agencies work closely with industry and academia in developing, supporting, and using HPCC technology. In addition, industrial, academic, and professional societies provide critical analyses of the HPCC Program through conferences, workshops, and reports. Through these efforts, Program goals and accomplishments are better understood and Program planning and management are strengthened.

The National Coordination Office (NCO) for High Performance Computing and Communications was established in September 1992 to provide a central focus for Program implementation. The Office coordinates the activities of participating agencies and organizations, and acts as a liaison to Congress, industry, academia, and the public. National Library of Medicine Director Donald A. B. Lindberg concurrently serves as Director of the NCO, in which capacity he reports directly to John H. Gibbons, the Assistant to the President for Science and Technology and the Director of the Office of Science and Technology Policy.

In the past year, the National Security Agency, in the Department of Defense, and the Department of Education have joined the HPCC Program, bringing to 10 the number of participating agencies. The total FY 1993 HPCC budget for these 10 agencies is $805 million. For FY 1994, the proposed HPCC Program budget for the 10 agencies is $1.096 billion, representing a 36 percent increase over the appropriated FY 1993 level.
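The percentage in the paragraph above can be checked with a line of arithmetic. The sketch below uses the two budget totals as stated in the report; the rounding is mine:

```python
# Check the stated FY 1993 -> FY 1994 HPCC budget increase.
fy1993_millions = 805    # appropriated FY 1993 budget (from the report)
fy1994_millions = 1096   # proposed FY 1994 budget, i.e. $1.096 billion

increase = (fy1994_millions - fy1993_millions) / fy1993_millions * 100
print(f"Increase over FY 1993: {increase:.0f} percent")  # prints "Increase over FY 1993: 36 percent"
```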

The HPCC Program is one of six multiagency programs under the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET). The other five programs are Advanced Manufacturing; Advanced Materials and Processing; Biotechnology Research; Global Change Research; and Science, Mathematics, Engineering, and Technology Education. Each of these depends on the capabilities provided by HPCC.

The FY 1994 Program and this document are the products of the High Performance Computing, Communications, and Information Technology Subcommittee (HPCCIT) under the direction of the FCCSET Committee on Physical, Mathematical, and Engineering Sciences (CPMES).

4


Some Grand Challenge Problems

Build more energy-efficient cars and airplanes

Design better drugs

Forecast weather and predict global climate change

Improve environmental modeling

Improve military systems

Understand how galaxies are formed

Understand the nature of new materials

Understand the structure of biological molecules

Some National Challenge Application Areas

The civil infrastructure

Digital libraries

Education and lifelong learning

Energy management

The environment

Health care

Manufacturing processes and products

National security

Public access to government information

5


HPCC Program Overview

High performance computing has become a critical tool for scientific and engineering research. In many fields, computational science and engineering have become as important as the traditional methods of theory and experiment. This trend has been powered by computing hardware and software, computational methodologies and algorithms, availability of and access to high performance computing systems, and the growth of a trained pool of scientists and engineers.

The High Performance Computing and Communications (HPCC) Program has accelerated this progress through its investment in advanced research in computer and network communications hardware and software, national networks, and agency high performance computing centers. The 10 Federal agencies that participate in the HPCC Program (listed on page 12), along with their partners in industry and academia, have made significant contributions to addressing critical areas of national interest to both the Federal government and the general public.

High performance computing is knowledge and technology intensive. Its development and application span all scientific and engineering disciplines. Over the last 10 years, a new approach to computing has emerged that can support a broad range of needs ranging from workstations for individuals to the largest scale highest performance systems that are used as shared resources. The workstations may also be small scale parallel systems and connect by high performance networks into clusters. Through the combination of advanced computing and computer communication networks with associated software, these systems may be scaled over a wide performance range, may be heterogeneous, and may be shared over large geographic distances by interdisciplinary research communities. The largest scale parallel systems are referred to as massively parallel when hundreds, thousands, or more processors are involved. Networks of workstations provide access to shared computing resources consisting of other workstations and larger scale higher performance systems.

High performance computing refers to the full range of supercomputing activities including existing supercomputer systems, special purpose and experimental systems, and the new generation of large scale parallel architectures.

A Research and Development Strategy for High Performance Computing

Executive Office of the President
Office of Science and Technology Policy

November 20, 1987



The uses of and demand for advanced computer networking funded in part by the HPCC Program continue to expand. Progress and productivity in many fields of modern scientific and technical research rely on the close interaction of people located at distant sites, sharing and accessing computational resources across high performance networks. Their use of networks has provided researchers with unexpected and unique capabilities and collaborations. As a result, the scientific community is demanding even higher performance from networks. This increased demand includes increasing numbers of users; increasing usage by individual users; the need to transmit more information at faster rates; more sophisticated applications; and the need for increased security, privacy, and the protection of intellectual property.

The solution of "Grand Challenge" problems is a key part of the missions of many agencies in the HPCC Program. Grand Challenges are fundamental problems in science and engineering with broad economic and scientific impact whose solution can be advanced by applying high performance computing techniques and resources. These problems have taxed and will continue to tax any available computational and networking capabilities because of their demands for increased spatial and temporal resolution and increased model complexity. The fundamental physical sciences, engineering, and mathematical underpinnings are similar for many of these problems. To this end, a number of multiagency collaborations are underway. (Examples of these problems are identified in Some Grand Challenge Problems on page 5 and in HPCC Research Areas on page 14.)

Although the U.S. remains the world leader in most of the critical areas of computing and computer communications technology, this lead is being threatened by countries that recognize the strategic nature of these technology developments. The HPCC Program leads the Federal investment in the frontiers of computing and computer communications technologies, formulated to satisfy national needs in science and technology, the economy, human resources, and technology transfer.

The HPCC Program will help provide the technological foundation for the National Information Infrastructure (NII). The NII will consist of computers and information appliances (including telephones and video displays), all containing computerized information, linked by high speed telecommunication lines capable of transmitting billions of bits of information in a second (an entire encyclopedia in a few seconds). A Nation of users will be trained to use this technology.
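The parenthetical claim about transmitting an encyclopedia can be sanity-checked with rough arithmetic. The encyclopedia size below is an illustrative assumption of mine, not a figure from the report:

```python
# Rough check: an encyclopedia over a "billions of bits per second" link.
encyclopedia_bytes = 1e9           # assume ~1 gigabyte of text and images (illustrative)
link_bits_per_second = 1e9         # a one-gigabit-per-second line

transfer_seconds = encyclopedia_bytes * 8 / link_bits_per_second
print(f"Transfer time: about {transfer_seconds:.0f} seconds")  # about 8 seconds
```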

The computing and networking technology that will make the NII possible is improving at an unprecedented rate, expanding its effectiveness and even further stimulating our imaginations about how it can be used. Using these technologies, a doctor who seeks a second opinion could transmit a patient's entire medical record, X-rays and ultrasound scans included, to a colleague thousands of miles away, in less time than it takes to send a fax today. A school child in a small town could come home and through a personal computer reach into an electronic Library of Congress or a great art gallery or museum to view thousands of books, photographs, records, videos, and works of art, all stored electronically. At home, viewers could use equivalent commercial services to choose at any time to view one of thousands of films or segments of television programming.

The Administration is committed to accelerating the development and deployment of the NII, which the U.S. will need to compete in the 21st century. This infrastructure of "information superhighways" will revolutionize the way we work, learn, shop, and live, and will provide Americans the information they need, when they need it, and where they need it, whether in the form of text, images, sound, or video. It promises to have an even greater impact than the interstate highways or the telephone system. The NII will be as ubiquitous as the telephone system, but will be able to carry information at least 1,000 times faster. It will be able to transmit not only voice and fax, but will also provide hundreds of channels of interactive high-definition TV programming, teleconferencing, and access to huge volumes of information.
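For scale, the "at least 1,000 times faster" comparison can be sketched numerically. The 64 kbps telephone baseline is my assumption (a standard digitized voice channel), not a figure from the report:

```python
# Scale of the "1,000 times faster than the telephone system" claim.
voice_channel_bps = 64_000                  # assumed baseline: one digitized voice channel
nii_minimum_bps = voice_channel_bps * 1000  # "at least 1,000 times faster"

print(f"Implied NII minimum rate: {nii_minimum_bps / 1e6:.0f} Mbit/s")  # 64 Mbit/s
```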

8



Thanks in part to the HPCC Program, this technology is already in use in many of our research laboratories, where it is transforming the way research is done. Scientists and engineers can access information from computer databases scattered throughout the country and use high performance computers and research equipment thousands of miles away. Perhaps most importantly, researchers can collaborate and share information and tools with colleagues across the country and around the world as easily as if they were in the same room.

This same telecommunications and computing technology could soon be available to all Americans, provided there is adequate public and private investment and forward-looking government policies that promote its deployment and use.

The Administration believes that the Federal government has several important roles to play in the development of this infrastructure, which will be built and operated primarily by the private sector. The HPCC Program is a key part of the Administration's strategy for the NII. On February 22, 1993, the President and the Vice President unveiled a Technology Initiative that outlined the five parts of the Administration's strategy for building the National Information Infrastructure:

1. Implement the HPCC Program.

This Program is helping develop the basic technology needed for the NII.

2. Develop NII technologies.

Through the new Information Infrastructure Technology and Applications (IITA) component of the HPCC Program, industry, universities, and Federal laboratories will collaborate to develop technologies needed to improve effective use of the NII.

3. Fund networking pilot projects.

The Federal government will provide funding for networking pilot projects through the National Telecommunications and Information Administration (NTIA) of the Department of Commerce, which currently plays a key role in developing Federal communications policy. NTIA will provide matching grants to states, school districts, libraries, and other non-profit entities to purchase the computers and network connections needed for distance learning and for linking into computer networks such as the Internet. These pilot projects will demonstrate the benefits of networking in the educational and library communities. In addition, to the extent that other agencies undertake networking pilot projects, NTIA will coordinate such projects, as appropriate.

4. Promote dissemination of Federal information.

Every year, the Federal government spends billions of dollars collecting and processing information (e.g., economic data, environmental data, and technical information). Unfortunately, while much of this information is very valuable, many potential users either do not know that it exists or do not know how to access it. The Administration is committed to using new computer and networking technology to make this information more available to the taxpayers who paid for it. This will require consistent Federal information policies designed to ensure that Federal information is made available at a fair price to as many users as possible while encouraging the growth of the information industry.

5. Reform telecommunications policies.

Government telecommunications policy has not kept pace with new developments in telecommunications and computer technology. As a result, government regulations have tended to inhibit competition and delay deployment of new technology and services. Without a consistent, stable regulatory environment, the private sector will hesitate to make the investments necessary to build the high speed national telecommunications network that this country needs to compete successfully in the 21st century. To address this and other problems, the Administration has created a White House-level interagency Information Infrastructure Task Force that will work with Congress, the private sector, and state and local governments to reach consensus on and implement policy changes needed to accelerate deployment of the NII.

Although the HPCC Program began as a research and development program, its impact is already being felt far beyond the research and education communities. The high performance computing technology developed under this Program has allowed users to improve understanding of global warming, discover more effective and safer drugs, design safer and more fuel-efficient cars and aircraft, and access huge "digital libraries" of information. The high speed networking technology developed and demonstrated by the HPCC Program has accelerated the growth of the Internet computer network and enabled millions of users not just to exchange electronic mail, but to access computers, digital libraries, and research equipment around the world. This technology, which allows Internet users to hold a video conference from their desks, is enabling researchers across the country to collaborate as effectively as if they were in the same room. The new IITA component of the HPCC Program will accelerate the deployment of HPCC technology into the marketplace and ensure that all Americans can enjoy its benefits.

Federal investment in new technologies is one of the best investments the government can make, one that will provide huge, long-term benefits in terms of new jobs, better health care, better education, and a higher standard of living. This is particularly true in the case of the National Information Infrastructure, which will provide benefits to all sectors of our economy. Few initiatives offer as many potential benefits to all Americans.

Several strategic and programmatic modifications have been made to the HPCC Program in order to enable the NII Initiative to build on the Program's original four components. The most significant of these is the addition of the new IITA program component. IITA consists of research and development to enable the integration of critical information systems and the application of these systems to "National Challenges," problems where the application of HPCC technology can provide huge benefits to all Americans.

These efforts will develop and apply high performance computing and communications technologies to improve information systems for National Challenges such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. Working with industry, IITA will support the development of the NII and the development of the computer, network, and database technology needed to provide appropriate privacy and security protection for users.



Overview of the Five HPCC Components

Five equally important, integrated components represent the key areas of high performance computing and communications:

HPCS High Performance Computing Systems

Extend U.S. technological leadership in high performance computing through the development of scalable computing systems, with associated software, capable of sustaining at least one trillion operations per second (teraops) performance. Scalable parallel and distributed computing systems will be able to support the full range of usage from workstations through the largest scale highest performance systems. Workstations will extend into portable wireless interfaces as technology advances.

NREN National Research and Education Network

Extend U.S. technological leadership in computer communications by a program of research and development that advances the leading edge of networking technology and services. NREN will widen the research and education community's access to high performance computing and research centers and to electronic information resources and libraries. This will accelerate the development and deployment of networking technologies by the telecommunications industry. It includes nationwide prototypes for terrestrial, satellite, wireless, and wireline communications systems, including fiber optics, with common protocol support and applications interfaces.

ASTA Advanced Software Technology and Algorithms

Demonstrate prototype solutions to Grand Challenge problems through the development of advanced algorithms and software and the use of HPCC resources. Grand Challenge problems are computationally intensive problems such as forecasting weather, predicting climate, improving environmental monitoring, building more energy-efficient cars and airplanes, designing better drugs, and conducting basic scientific research.

IITA Information Infrastructure Technology and Applications

Demonstrate prototype solutions to National Challenge problems using HPCC enabling technologies. IITA will support integrated systems technology demonstration projects for critical National Challenge applications through development of intelligent systems interfaces. These will include systems development environments with support for virtual reality, image understanding, language and speech understanding, and data and object bases for electronic libraries and commerce.

BRHR Basic Research and Human Resources

Support research, training, and education in computer science, computer engineering, and computational science, and enhance the infrastructure through the addition of HPCC resources. Initiation of pilot projects for K-12 and lifelong learning will support expansion of the NII.

11



HPCC Program Goals

Extend U.S. technological leadership in high performance computing and computer communications.

Provide wide dissemination and application of the technologies to speed the pace of innovation and to improve the national economic competitiveness, national security, education, health care, and the global environment.

Provide key parts of the foundation for the National Information Infrastructure (NII) and demonstrate selected NII applications.

HPCC Agencies

ARPA Advanced Research Projects Agency, Department of Defense

DOE Department of Energy

ED Department of Education

EPA Environmental Protection Agency

NASA National Aeronautics and Space Administration

NIH National Institutes of Health, Department of Health and Human Services

NIST National Institute of Standards and Technology, Departmentof Commerce

NOAA National Oceanic and Atmospheric Administration, Department of Commerce

NSA National Security Agency, Department of Defense

NSF National Science Foundation

12


HPCC Program Strategies

Develop, through industrial collaboration, high performance computing systems using scalable parallel designs and technologies capable of sustaining at least one trillion operations per second (teraops) performance on large scientific and engineering problems such as Grand Challenges.

Support all HPCC components by helping to expand and upgrade the Internet.

Develop the networking technology required for deployment of nationwide gigabit speed networks through collaboration with industry.

Demonstrate the productiveness of wide area gigabit networking to support and enhance Grand Challenge applications collaborations.

Demonstrate prototype solutions of Grand Challenge problems that achieve and exploit teraops performance.

Provide and encourage innovation in the use of high performance computing systems and network access technologies for solving Grand Challenge and other applications by establishing collaborations to provide and improve emerging software and algorithms.

Create an infrastructure, including high performance computing research centers, networks, and collaborations, that encourages the diffusion and use of high performance computing and communications technologies in U.S. research and industrial applications.

Work with industry to develop information infrastructure technology to support the National Information Infrastructure.

Leverage the HPCC investment by working with industry to implement National Challenge applications.

Enhance computational science as a widely recognized discipline for basic research by establishing nationally recognized and accepted educational programs in computational science at the pre-college, undergraduate, and postgraduate levels.

Increase the number of graduate and postdoctoral fellowships in computer science, computer engineering, computational science and engineering, and informatics, and initiate undergraduate computational sciences scholarships and fellowships.

13



HPCC Research Areas

Aerospace

- Aircraft

- Spacecraft

Basic Science and Technology

- Astronomy

- Computers and network communications

- Earth sciences

- Molecular, atomic, and nuclear structure

- The nature of new materials

Education

Energy

- Combustion systems (in automobile engines, for example)

- Energy-efficient buildings

Environment

- Pollution

- Weather, climate, and global change prediction and modeling

Health

- Biological molecules

- Improved drugs

Library and Information Science

Manufacturing

Military Systems and National Security Systems

14



HPCC Program Management

The HPCC Program is planned, funded, and executed with the close cooperation of Federal agencies and laboratories, private industry, and academia. These efforts are directed toward ensuring that to the greatest extent possible the Program meets the needs of all communities involved and that the results of the Program are brought into the research and educational communities and into the commercial marketplace as rapidly as possible.

The National Coordination Office (NCO) for High Performance Computing and Communications was established in September 1992. Donald A. B. Lindberg was selected to direct the NCO, while continuing to serve as Director of the National Library of Medicine. The NCO coordinates the activities of participating agencies and organizations, and serves as a liaison to Congress, industry, academia, and the public.

As Director of the NCO, Lindberg reports to John H. Gibbons, the Assistant to the President for Science and Technology and the Director of the Office of Science and Technology Policy. The Director of the NCO also chairs the CPMES High Performance Computing, Communications, and Information Technology (HPCCIT) Subcommittee. The Subcommittee meets regularly to coordinate agency HPCC programs through information exchanges, the common development of interagency programs, and the review of individual agency plans and budgets. It is also informed by presentations by other Federal working groups and by public bodies.

Several HPCCIT working groups coordinate activities in specific areas. Individual agencies are responsible for coordinating these efforts:

The Communications group, led by NSF, coordinates network integration activities and works closely with the Federal Networking Council (FNC). The FNC consists of representatives from interested Federal agencies, coordinates the efforts of government HPCC participants and other NREN constituents, and provides liaison to others interested in the Federal Program.

The Applications group, led by NASA, coordinates activities related to Grand Challenge applications, software tools needed for applications development, and software development at high performance computing centers.

The Research group, led by ARPA, focuses on basic research, technology trends, and alternative approaches to address the technological limits of information technology. Its activities are integrated into the overall research program through meetings with the various technical communities.

The Education group, led by NIH, coordinates HPCC education and training activities and provides liaison with other education-related efforts under FCCSET.

A Federal Networking Advisory Committee (FNCAC) supports the FNC by providing input and recommendations from broad communities and constituencies of the NREN effort. Pursuant to P.L. 102-194, the High Performance Computing Act of 1991, a High Performance Computing Advisory Committee will be established to support the overall HPCC Program. The Committee will improve communications and collaborations with U.S. industry, universities, state and local governments, and the public.

15


Each participating agency has focal points for addressing matters related to the HPCC Program. Organizational and management structures facilitating participation in the HPCC Program are described in the sections presenting individual agency programs. Many participating agencies have published documents about their HPCC programs and solicitations for research in HPCC areas; requests should be directed to the respective HPCC contacts listed at the end of this document.

U.S. industry, academia, and other developers and users of HPCC technology are involved in agency program planning and execution through advisory committees, commissioned reviews, self-generated commentary, and through direct participation in HPCC research and development efforts and as suppliers of technology.

The HPCC Program has benefited from the interest, advice, and specific recommendations of a variety of governmental, industrial, academic, professional, trade, and other organizations. These include:

Federal organizations

- Commerce, Energy, NASA, NLM, Defense Information (CENDI) Group

- Congressional Research Service (CRS)

- Department of Commerce HPCC Coordinating Group

- Department of Health and Human Services Agency Heads

- Federal Aviation Administration (FAA)

- Federal Information Resources Management Policy Council (FIRMPOC)

- Federal Library and Information Center Committee (FLICC) Forum on Information Policies

- Food and Drug Administration (FDA) Center for Drug Evaluation and Research

- House Armed Services Committee Staff

- House Subcommittee on Science; Committee on Science, Space, and Technology

- House Subcommittee on Technology, Environment, and Aviation; Committee on Science, Space, and Technology

- House Subcommittee on Telecommunications and Finance; Committee on Energy and Commerce

- NASA Advisory Council

- National Telecommunications and Information Administration (NTIA)

- NIH Information Resources Management Council

- Securities and Exchange Commission (SEC)

- Senate Science Subcommittee; Committee on Commerce, Science, and Transportation

Federally-chartered institutions

- National Academy of Sciences (NAS)/Computer Science and Technology Board (CSTB)

- NAS/CSTB/NRENaissance Study Group

- NAS/Institute of Medicine

- NAS/National Research Council (NRC) Executive Board

State organizations

- Texas Education Network

- Wisconsin Governor's Council for Science and Technology

16



University organizations

- Computing Research Association (CRA)

- EDUCOM

Professional societies

- American Association of Engineering Societies (AAES)

- American College of Cardiologists

- American Institute of Medical and Biological Engineering (AIMBE)

- Association of American Medical Colleges

- Coalition of Academic Supercomputer Centers (CASC)

- Computer Professionals for Social Responsibility (CPSR)

- International Medical Informatics Association (IMIA)

Industrial organizations

- American Electronics Association (AEA)

- Computer Systems Policy Project (CSPP)

- Information Industry Association (IIA)

Local organizations

- Montgomery County High Technology Council

- The Suburban Maryland Technology Council

Other organizations with interest in HPCC

- Coalition for Networked Information (CNI)

- Foundation for Educational Innovation

- Microelectronics and Computer Technology Corporation (MCC)

- Science, Technology, and Public Policy Program, John F. Kennedy School of Government, Harvard University

- Supercomputing Center Directors

- Representatives from individual corporations, publishers, research laboratories, and supercomputing centers

Foreign governments and organizations, including the British and Canadian governments

17


The HPCC Program has sponsored several major conferences and workshops. HPCC representatives have given presentations at a number of related conferences as well. These are itemized below.

Major HPCC Conferences and Workshops During FY 1993

Event - Sponsor

High Performance Computing Industry Presentations - NASA

Improving Medical Care: Vision of HPCCIT - National Library of Medicine

HPCC Applications in Chemistry - NIH

HPCC Comprehensive Review - NASA

Blue Ribbon Panel on HPCC - NSF

Workshop and Conference on Grand Challenge Applications and Software Technology - Nine HPCC Agencies

HPCC-Related Conferences

American Institute of Medical and Biological Engineering 1993 Annual Meeting

Coordinating Federal Health Care: Progress and Promise Workshop

Council on Competitiveness Forum on Information Infrastructure

Electronic Industries Association Fifth Annual Federal Information Systems Conference

Georgetown University Medical School Computer Health Care Conference


Institute of Electrical and Electronics Engineers (IEEE) Information Exchange Conference

International Council for Scientific and Technical Information (ICSTI) Annual Meeting

NII Round Table sponsored by 3Com Corporation

Research Consortium Inc. (RCI) North American Annual Member Executive Conference

Scientific Computing and Automation Conference

Supercomputing '92

3Com Interop Conference

Wide Area Information Services (WAIS) Conference

The dynamism and flexibility of the HPCC Program are illustrated by the incorporation of many of these recommendations into current Program plans. For example, the CSPP conducted its second intensive study of the HPCC Program structure and in January 1993 published "Perspectives on the National Information Infrastructure: CSPP's Vision and Recommendations for Action." Many of their recommendations formed the basis for plans in the HPCC Program's new IITA component.


Interdependencies Among Components

A complex web of interdependencies exists among the five components of the HPCC Program: success in each component is critical to the success of the others. Because of these interdependencies, maintaining balance among the components is crucial for Program success. The current balance is designed to foster that success as rapidly as possible. Some examples of the large number of interdependencies are given below.

Examples of two-component interdependencies:

HPCS and NREN - The development of routers for the NREN component and other advanced component technologies depends on HPCS research in scalable computing and component technologies.

HPCS and ASTA - The advanced computing systems with associated systems software are used by ASTA for Grand Challenge research.

HPCS and IITA - HPCS systems are used by IITA for National Challenge research.

NREN and ASTA - ASTA's Grand Challenge research helps to determine requirements for high performance networks for which NREN must provide new capabilities and technologies. Examples are distributed heterogeneous computing, scientific visualization, and the NSF metacenter research efforts.

NREN and IITA - NREN provides the networking technology base for IITA. An example is interactive video.

One example of a three-component interdependency:

HPCS, NREN, ASTA - Networks developed under the NREN component are used to access testbeds developed under the HPCS component in ASTA Grand Challenge research.

An example of a four-component interdependency:

HPCS, NREN, ASTA, IITA - Some computationally intensive IITA applications will use systems developed under the HPCS component, connected by NREN-developed networks, and Grand Challenge software developed under ASTA. An example is managing emergencies such as hurricanes.

Two five-component interdependencies:

BRHR dependencies - BRHR depends on each of the other components, both individually and in combination, for research subjects.

BRHR provides training and education in the four other components and their interrelationships.


Coordination Among Agencies

The participating agencies cooperate extensively in their efforts toward accomplishing HPCC Program goals, in part through the management vehicles described earlier and through the use of HPCC products from those other agencies wherever feasible. There are many other collaborations, including:

- Evaluation of early systems - these systems are procured primarily by NSF, DOE, and NASA; together with NIH, NSA, NOAA, and EPA, they are evaluated for mission-specific computational and information processing applications.

- DOE and NASA are coordinating testbed development to ensure that a diverse set of computing systems is evaluated.

- NIST is developing guidelines for measuring system performance, performance measurement tools, and software needed to monitor and improve the performance of advanced computing systems at HPCC-sponsored high performance computing centers.

- The gigabit testbeds (described on pages 37-39 in the NREN section).

- High speed networking experiments - ARPA, NASA, and NSA collaborate.

- Network security - ARPA, NSA, NIST, and other agencies collaborate.

- The NSF Supercomputer Centers - other agencies jointly support and use these environments for their own missions and constituencies. One example is the NIH Biomedical Research Technology program in biomedical computing applications.

- The Concurrent Supercomputing Consortium (described on page 44 in the ASTA section).

- The National Consortium for High Performance Computing established by ARPA in cooperation with NSF (described on pages 44-45 in the ASTA section).

- The High Performance Software Sharing Exchange uses ARPA's wide area file system, NASA's distributed access to electronic data, and software repositories from DOE and NIST. These repositories are accessed by the other agencies.

- Joint agency workshops (for example, the recent "Workshop and Conference on Grand Challenge Applications and Software Technology").

- Representation on research proposal review panels (for example, DOE uses other agency experts in its Grand Challenge Review Committee).


Membership on the HPCCIT Subcommittee

The National Coordination Office and the HPCCIT Subcommittee actively encourage other Federal agencies (Departments, agencies within Departments, or independent agencies) to consider joining HPCCIT either as Official Members or as Observers.

Official Membership

If an agency proposes a program and the HPCCIT Subcommittee determines that the program meets the Evaluation Criteria (see below) and approves it, then the Subcommittee will recommend to the Committee on Physical, Mathematical, and Engineering Sciences (CPMES) that the agency be added to the HPCC Program and participate in the budget crosscut.

Observer Status

Upon request from the agency and approval by the HPCCIT Subcommittee, an agency may participate in the technical program and attend HPCCIT Subcommittee meetings.

Requests for membership or observer status should be directed to the National Coordination Office (listed on page 166 in the Contacts section).

Evaluation Criteria for the HPCC Program

Relevance/Contribution. The research must significantly contribute to the overall goals and strategy of the Federal High Performance Computing and Communications (HPCC) Program, including computing, software, networking, information infrastructure, and basic research, to enable solution of the Grand Challenges and the National Challenges.

Technical/Scientific Merit. The proposed agency program must be technically/scientifically sound and of high quality, and must be the product of a documented technical/scientific planning and review process.

Readiness. A clear agency planning process must be evident, and the organization must have demonstrated capability to carry out the program.

Timeliness. The proposed work must be technically/scientifically timely for one or more of the HPCC Program components.


Linkages. The responsible organization must have established policies, programs, and activities promoting effective technical and scientific connections among government, industry, and academic sectors.

Costs. The identified resources must be adequate, represent an appropriate share of the total available HPCC resources (e.g., a balance among program components), promote prospects for joint funding, and address long-term resource implications.

Agency Approval. The proposed program or activity must have policy-level approval by the submitting agency.


Technology Collaboration with Industry and Academia

HPCC agencies work in partnership with each other and with industry and academia to develop cost-effective high performance computing and communications technologies. The HPCC Program fosters the development and use of advanced off-the-shelf technology so that research results and products are simultaneously available to support both Federal agency missions and the computational needs of the academic and private sectors.

HPCC research is carried out in close collaboration with industry, particularly manufacturers of computer and communications hardware, software developers, and representatives from key applications areas. A number of mechanisms are used to facilitate these interactions: consortia, contracts, cooperative agreements such as Cooperative Research and Development Agreements (CRADAs), grants, and other transactions. Some examples of these collaborations are:

- The gigabit testbeds (described on pages 37-39 in the NREN section).

- The Concurrent Supercomputing Consortium (described on page 44 in the ASTA section).

- The National Consortium for High Performance Computing established by ARPA in cooperation with NSF (described on pages 44-45 in the ASTA section).

- NSF's National Supercomputer Centers and Science and Technology Centers and their metacenter efforts (described on page 66 in the NSF section).

- The Computational Aerosciences Consortium (described on page 85 in the NASA section).

- The Consortium on Advanced Modeling of Regional Air Quality (CAMRAQ) (described on pages 45-46 in the ASTA section).

The National Coordination Office for High Performance Computing and Communications meets with representatives from industry and academia. Major HPCC conferences and workshops held by HPCC agencies or addressing HPCC issues have included a number of representatives from industry and academia as well. Further details about these activities are presented in the Program Management section.


Agency Budgets by HPCC Program Components

FY 1993 Budget (Dollars in Millions)

Agency    HPCS    NREN    ASTA    BRHR    TOTAL
ARPA     119.5    43.6    49.7    62.2    275.0
NSF       25.9    40.5   108.0    50.8    225.2
DOE       10.9    10.0    65.3    14.8    101.0
NASA      11.1     9.0    59.1     2.9     82.1
NIH        3.0     4.1    31.4     8.0     46.5
NSA       34.8     3.2     5.4     0.2     43.6
NOAA               0.4     9.4              9.8
EPA                0.4     6.0     1.5      7.9
NIST       0.3     1.2     0.6              2.1
ED                 2.0                      2.0
TOTAL    205.5   114.4   334.9   140.4    795.2

FY 1994 Budget (Dollars in Millions)

Agency    HPCS    NREN    ASTA    IITA    BRHR     TOTAL
ARPA     151.8    60.8    58.7            71.7     343.0
NSF       34.2    57.6   140.0    36.0    73.2     341.0
DOE       10.9    16.8    75.1            21.0     123.8
NASA      20.1    13.2    74.2    12.0     3.5     123.0
NIH        6.5     6.1    26.2    24.0     8.3      71.1
NSA       22.7    11.2     7.6             0.2      41.7
NIST       0.3     1.2     0.6    24.0             26.1
NOAA               1.6    10.5             0.3      12.4
EPA                0.7     9.6             1.6      11.9
ED                 2.0                               2.0
TOTAL    246.5   171.2   402.5    96.0   179.8   1,096.0
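The budget rows can be checked arithmetically: each agency's component funding should sum to its listed total. The sketch below is illustrative only, using a few rows transcribed from the FY 1994 table; it is not part of the report.

```python
# Cross-check rows of the FY 1994 budget table: each agency's
# component funding (HPCS, NREN, ASTA, IITA, BRHR, in $ millions)
# should sum to its reported total.
fy94 = {
    "ARPA": ([151.8, 60.8, 58.7, 0.0, 71.7], 343.0),
    "NSF":  ([34.2, 57.6, 140.0, 36.0, 73.2], 341.0),
    "DOE":  ([10.9, 16.8, 75.1, 0.0, 21.0], 123.8),
    "NASA": ([20.1, 13.2, 74.2, 12.0, 3.5], 123.0),
    "NIST": ([0.3, 1.2, 0.6, 24.0, 0.0], 26.1),
}

for agency, (parts, total) in fy94.items():
    # allow a small tolerance for floating point rounding
    assert abs(sum(parts) - total) < 0.05, agency

print("listed FY 1994 rows reconcile with their totals")
```

The same check applied column-wise reconciles the component totals against the grand total.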


Agency Responsibilities by HPCC Program Components

DOD/ARPA

HPCS
- Scalable computing systems including networks of workstations, large scale parallel systems, & heterogeneous configurations
- Portable scalable systems software
- Advanced components, packaging, design tools, & rapid prototyping for VLSI systems

NREN
- Scalable Internet technologies with integration of different bitways
- Gigabit technologies & testbeds

ASTA
- Scalable algorithms & libraries
- Software & system development environments for parallel & distributed heterogeneous systems

IITA
- Scalable Internet technologies for universal & ubiquitous access
- Mobile & wireless systems
- Hypermedia systems with intelligent user interfaces

BRHR
- Support university research in computer & computational science
- Research on systems architecture
- Early systems prototyping & evaluation
- Design & prototyping tools
- Distributed design & intelligent manufacturing
- Optical communications systems & devices

NSF

NREN
- Coordinate NREN component
- Manage & upgrade NSFNET backbone
- Provide enhanced network security
- Increase Internet connectivity
- Provide Internet information services
- Gigabit network testbeds

ASTA
- R&D in algorithms & software technologies, distributed & heterogeneous environments
- Grand Challenges research, including interdisciplinary collaboration
- Scientific databases
- Deploy parallel systems in NSF Supercomputer Centers, testbeds, etc.
- Implement metacenter

IITA
- Support development of tools for human communication & collaboration
- Develop multimedia software & educational materials
- Support digital libraries research
- Pilot projects for information-intensive applications
- Support virtual reality applications

BRHR
- Support university research in computer & computational science & engineering
- Improve HPCC infrastructure at colleges & universities
- Support education & training

DOE

HPCS
- Procure & evaluate early systems
- Research in parallel systems
- Hierarchical data storage development

NREN
- Manage & upgrade ESnet: provide access to energy research facilities
- Procure fast packet services
- Research in gigabit networks, packetized video & voice

ASTA
- Energy Grand Challenges research
- Research in database technology, computational techniques
- High Performance Computing Research Centers

IITA
- Develop prototype energy-related applications

BRHR
- Support educational programs
- Develop computational science texts

NASA

HPCS
- Procure & evaluate early systems

NREN
- Manage & upgrade AERONet & NSInet
- Procure fast packet services
- Research in gigabit networks, satellite-based applications

ASTA
- Grand Challenges research in aeronautics & the Earth's environment
- HPCC Software Exchange

IITA
- Develop prototype applications for accessing NASA data, & for digital libraries, education, & manufacturing

BRHR
- Support research in aeronautics & Earth sciences & high performance computing
- Support educational programs


HHS/NIH

HPCS
- Procure & evaluate early systems for biomedical applications

NREN
- Extend gigabit speed communications
- Medical Connections - increase Internet connectivity
- Access to X-ray images

ASTA
- Research in Unified Medical Language Systems & digital anatomy
- Grand Challenges research in molecular biology & biomedical imaging
- Scalable software libraries
- Scientific databases

IITA
- "Visible Human" project
- HIV research
- Develop testbeds linking medical facilities & sharing medical information
- Develop anatomy visualization software
- Develop virtual reality technology for simulating operations, etc.
- Develop medical database technology

BRHR
- Medical informatics fellowships
- Biomedical science education
- Collaboration among biomedical organizations

DOD/NSA

HPCS
- New and upgraded high performance computing resource centers
- Deploy experimental scalable systems, emphasizing interoperability & open systems
- Develop mass archival storage
- Promote interoperability

NREN
- Provide Internet connectivity
- Develop network security technology
- Gigabit network testbed

ASTA
- Develop systems software for distributed parallel systems
- Research in algorithms, programming languages, visualization, systems modeling

IITA
- Investigate developmental technologies supportive of information infrastructure applications
- Research in security, mass storage, database management, transaction processing

BRHR
- Support basic research in high performance computing & networking

DOC/NIST

HPCS
- Develop instrumentation & methodology for performance measurement of systems & networks

NREN
- Coordinate development of standards for interoperability, user interfaces, security

ASTA
- Algorithms & generic software for manufacturing & other applications
- Guide to Available Mathematical Software

IITA
- Develop manufacturing applications
- Train users of scalable systems
- Support university collaborations

DOC/NOAA

NREN
- Increase Internet access to systems & data
- Provide information & assistance services

ASTA
- Environmental Grand Challenges research
- Acquire scalable system
- Develop environmental monitoring, prediction, & assessment applications
- Expand access to environmental data

BRHR
- Develop & evaluate materials for training users of environmental assessment tools
- Support university research & educational programs

EPA

NREN
- Increase & upgrade Internet connectivity

ASTA
- Environmental assessment Grand Challenges research
- Acquire high performance system
- Expand access to environmental data

IITA
- Develop intelligent user interfaces to environmental applications

ED

NREN
- Develop PATHWAYS for Internet access by teachers & parents
- Increase Internet connectivity

Note: Italicized ARPA IITA entries are being funded as part of their FY 1994 HPCC budget; italicized DOE, DOD/NSA, DOC/NOAA, and EPA IITA entries are candidate activities and are not funded in the FY 1994 HPCC budget.


High Performance Computing Systems (HPCS)

The HPCS component produces scalable parallel computing systems in collaboration with industry and academia. Unlike the dedicated, single-processor architectures of the past, scalable parallel systems have the property that increases in size result in proportional improvements in performance. This is achieved by connecting multiple processors and memory units through a scalable interconnection structure. Scalable systems can be configured over a wide range that can deliver high performance computing to users at both small and very large scales.
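The scaling property described above can be sketched with a small illustration. The per-node figure below is hypothetical, chosen only for the example, and is not a value from this report.

```python
# Illustration of ideal linear scaling: aggregate peak performance
# grows in proportion to the number of processor/memory nodes, so
# one node design can serve both workstation and center scales.
def aggregate_gigaops(nodes: int, gigaops_per_node: float) -> float:
    """Peak performance of an ideally scalable system, in gigaops."""
    return nodes * gigaops_per_node

# The same hypothetical 0.1-gigaops node at two configuration sizes:
small = aggregate_gigaops(nodes=8, gigaops_per_node=0.1)      # deskside scale
large = aggregate_gigaops(nodes=1024, gigaops_per_node=0.1)   # center scale
print(f"8 nodes: {small:.1f} gigaops; 1024 nodes: {large:.1f} gigaops")
```

In practice, interconnect and software overheads keep real systems below this ideal, which is why the Program pairs hardware scaling with systems software and evaluation efforts.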

Because the computing system designs are scalable, they can be used in smaller scale workstations. Such workstations may also have high performance graphics capabilities to enable visualization of a computational result and provide interactive interfaces to the user. These workstations may be linked to local networks connected to the Internet, a network of networks that includes high performance subnets linking higher performance and larger scale computing systems throughout the country.

HPCS focuses on the fundamental scientific and technological challenges of accelerating the advance of affordable scalable parallel high performance computing systems. Critical underlying technologies are developed in prototype form along with associated design tools. This allows evaluation of alternatives as the prototype systems mature. Evaluation continues throughout the research and development process, with experimental results used to refine successive generations of systems.

Scalable computing technologies used in combination with scalable networking technologies provide the technology base needed to address the Grand Challenges and the National Challenges. The necessary software technologies are developed by the ASTA and IITA components.

HPCS is composed of four elements to produce progressively more advanced and mature systems:

I. Research for Future Generations of Computing Systems

This element develops the underlying architecture, components, packaging (integration of electronics, photonics, power, cooling, and other components), systems software, and scaling concepts to achieve affordable high performance computing systems. These efforts ensure that the required advanced technologies will be available for the new systems and provide a foundation for the more powerful systems to follow. This element also produces the basic approaches for systems software, programming languages, and environments for heterogeneous configurations of workstations and high performance servers.

II. System Design Tools

This element develops computer aided design tools and the technology to allow multiple design tools to work together in order to enable the design, analysis, simulation, and testing of system components and modules. These tools make rapid prototyping of new system concepts possible. New design tools will be produced to enable the design of more advanced prototype systems using new technologies as they emerge.


III. Advanced Prototype Systems

Systems capable of scaling to 100 gigaops (billions of operations per second) performance have begun to emerge. Teraops (trillions of operations per second) performance designs will be demonstrated by the mid 1990s. Research in high performance systems focuses on reducing the cost and size of these systems so they can be used for a broader range of applications.

IV. Evaluation of Early Systems

Experimental systems will be placed at sites where researchers can provide feedback to systems and software designers. Performance evaluation criteria for systems and results of evaluations will be made widely available. Scalability enables small to medium size systems to be used for early performance evaluation and software development in preparation for larger scale applications. Larger scale systems are included in the ASTA component for applications such as the Grand Challenges.

HPCS Accomplishments

- Small, medium, and large scale systems developed under the HPCS component have been deployed and are being used in the ASTA component. This includes large systems deployed in various high performance computing centers and some systems installed in heterogeneous configurations.

- The small and medium scale systems are being used to develop algorithms and software, including fundamental building blocks for Grand Challenge problems and a wide variety of new scientific computation models. These prototypes are characterized by very fast routing and component technology, capable of scaling up to 100 gigaops system configurations.

- Scalable systems continue to be evaluated and refined, providing early feedback on hardware, operating systems, compilers, software development tools, input/output systems, and mass storage systems. This process has resulted in rapid upgrades in a commercial system to a scalable operating system based on very small and efficient software called microkernel technology. Extensions such as real time service and distributed and replicated file systems are under development.

- New technologies are providing a scalable, modular approach to the mass storage performance and archiving needed in the new large scale parallel computing systems:

  - Prototype scalable mass storage systems that use parallel arrays of inexpensive disk drives to achieve both high aggregate data transmission rates and large storage capacity have been demonstrated. These systems demonstrate an approach that is the basis for a new generation of high performance file servers and mass storage systems that are internal to scalable parallel computing systems.

  - Petabyte mass storage systems, which can hold images from about 50 university libraries, are now available using commodity storage modules with automated robotic transfer to multiple read/write units. These systems help meet the dramatically increasing requirements for mass storage, from storing library information to storing remotely sensed satellite imagery.
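The appeal of the parallel disk array approach is simple arithmetic: striping data across many inexpensive drives multiplies bandwidth and capacity together. The drive figures below are invented for illustration and are not specifications from the report.

```python
# Striping data across a parallel array of inexpensive drives
# multiplies both aggregate transfer rate and total capacity.
# Drive figures are hypothetical, roughly early-1990s scale.
def disk_array(n_drives: int, mb_per_s_each: float, gb_each: float):
    """Return (aggregate MB/s, total GB) for an ideal striped array."""
    return n_drives * mb_per_s_each, n_drives * gb_each

bandwidth, capacity = disk_array(n_drives=64, mb_per_s_each=2.0, gb_each=1.0)
print(f"{bandwidth:.0f} MB/s aggregate, {capacity:.0f} GB total")
# A single such drive would take 64 times as long to deliver the same data.
```

Real arrays fall short of this ideal because of controller overheads and redundancy (parity) traffic, which is one reason these systems were demonstrated as prototypes first.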

- Evolving advanced component technology is being employed in early experimental computer systems. This technology will form the basis for a new generation of higher performance, physically smaller, and more affordable computing systems. Examples include the following:

  - Single chip nodes that integrate processing, storage, and communications, new systems software, and new development environments have been demonstrated. These have the potential of providing very cost-effective scalable computing using these single chip or fine grained nodes.

  - Multichip modules are being studied in experiments to determine the optimal design for future scalable units.

- Supporting technologies that enable the rapid design, prototyping, and manufacturing of HPCS systems have made an important contribution to HPCS progress. Examples of rapid prototyping facilities used by researchers include:

  - A laser direct write multichip module tool and associated design capability has been developed to reduce the prototyping time of new modules from months to two weeks. This enables designs to be developed more rapidly, and allows for the exploration of more effective and cost-effective alternatives.

  - New algorithms have been incorporated into design systems that extend synthesis to be applied to new technologies such as field programmable gate arrays and various integrated circuit technologies.

  - A model "factory of the future," linking advanced design technologies from workstations to large scalable computing, was completed and coupled to a prototype factory (described on pages 152-153 in the Case Studies section). These technologies form the basis of a new generation of computational prototyping, exploiting networked and distributed design processes for rapidly prototyping future generations.

- New competitive contractual mechanisms have been developed to enable the timely purchase of experimental systems. These joint government-industry research projects allow experimental use and early evaluation by a variety of user communities, which in turn provides early feedback to the hardware vendors and to developers of associated software technology. Such projects accelerate the maturation of these complex technologies in preparation for their larger scale use by the ASTA component.


National Research and Education Network Program (NREN)

The NREN* component will establish a gigabit communications infrastructure to enhance the ability of U.S. researchers and educators to perform collaborative research and education activities, regardless of their physical location or local computational and information resources. This infrastructure will be an extension of the Internet, and will serve as a catalyst for the development of the high speed communications and information systems needed for the National Information Infrastructure (NII).

The emerging NII will require: advances in the underlying foundations of networking technology and in generic networking services; the development and deployment of major new networking technologies; broader access to state-of-the-art high performance computing facilities; and early testing of new commercial products and services so that these can be effectively integrated into NREN associated networks.

The principal objectives of the NREN component are to:

- Establish and encourage wide use of gigabit networks by the research and education communities to access high performance computing systems, research facilities, electronic information resources, and libraries.

- Develop advanced high performance networking technologies and accelerate their deployment and evaluation in research and education environments.

- Stimulate the wide availability at reasonable cost of advanced network products and services from the private sector for the general research and education communities.

- Catalyze the rapid deployment of a high speed general purpose digital communications infrastructure for the Nation.

The NREN component's Interagency Internet and Gigabit Research and Development elements contribute to reaching these goals:

I. Interagency Internet

Near-term enhanced network services will be developed on the Nation's evolving networking and telecommunications infrastructure for use by mission agencies and the research and education communities. Interagency Internet activities include expansion of the connectivity and enhancement of the capabilities of the federally funded portion of today's research and education networks, and deployment of advanced high performance technologies and services as they mature. Coordinated among Federal agencies in cooperation with the private sector, this effort succeeds the Interim Interagency NREN element identified in previous reports about the HPCC Program.

* NSF has applied for registration of the service mark "NREN" with the U.S. Patent and Trademark Office.




The Interagency Internet is a network of networks, ranging from high speed cross-country networks, to regional and mid-level networks, to state and campus network systems. Its major Federal components are the national research agency networks listed below. When these agencies' "backbone networks" are upgraded, together they will form a national gigabit network to support research and education. This network may in turn serve as a prototype for broader national gigabit networks.

Major Federal Components of the Interagency Internet

- NSFNET -- NSF-funded national backbone network service

- ESnet -- DOE's Energy Sciences Network

- NSI -- NASA's Science Internet

- ARPA's exploratory networks

The Interagency Internet and the other, non-federally-supported portions of the Internet connect the Nation's communities of researchers and educators to each other; to facilities and resources such as computation centers, databases, libraries, laboratories, and scientific instruments; and to supporting organizations such as publishers and hardware and software vendors. The Interagency Internet also provides international connections that serve the national interest. These services will be continually enhanced as the Interagency Internet evolves.

The Interagency Internet also provides a testbed to stimulate the market for advanced network technologies such as Synchronous Optical Network (SONET) transmission infrastructure, Asynchronous Transfer Mode (ATM) cell switches, high speed routers, computer interfaces, and other communications hardware and software. These technologies are being developed by the telecommunications industry, routing vendors, and computer manufacturers, in collaboration with government and academia, as part of the NREN component of the HPCC Program. Through these efforts, the HPCC agencies will provide expertise in the systems integration of key technologies to form an integrated and interoperable high performance network system that will continue to meet the needs of the Nation's research and education communities. Once the initial development risks are reduced through this collaboration among government, industry, and academia, the U.S. communications community can build on these experiences and develop new products and services to serve the broader marketplace of NII applications.
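The ATM technology named above carries all traffic in fixed 53-byte cells: a 5-byte header plus a 48-byte payload. A small illustrative calculation (ours, not from the report) of what that framing costs in usable bandwidth:

```python
# ATM moves data in fixed 53-byte cells: 5 bytes of header plus
# 48 bytes of payload. The header is pure overhead, so the usable
# throughput is the payload fraction (48/53) of the line rate.
# (This sketch ignores SONET framing and adaptation-layer overhead.)

CELL_SIZE = 53                  # bytes per ATM cell
HEADER = 5                      # header bytes per cell
PAYLOAD = CELL_SIZE - HEADER    # 48 bytes of user data per cell

def effective_throughput(line_rate_mbps):
    """Payload throughput (Mb/s) on an ATM link at the given line rate."""
    return line_rate_mbps * PAYLOAD / CELL_SIZE

# On a 155 Mb/s link, the ATM "cell tax" leaves about 140 Mb/s for data.
print(round(effective_throughput(155.0), 1))
```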

II. Gigabit Research and Development

The Gigabit Research and Development element is a comprehensive program to develop the fundamental technologies needed for a national network with advanced capabilities and with a minimum gigabit per second (Gb/s) transmission speed. Gigabit research and development takes place in two ways: through a basic research program that provides the building blocks to move data at increasingly faster rates with novel techniques such as all-optical networking; and through the deployment of testbed networks that use and prove the viability of these techniques. The testbeds provide an environment for the development of advanced applications targeted toward the solution of HPCC Grand Challenges.

As these technologies for networking hardware and software are developed and shown to be viable and cost-effective, they will be incorporated into the Interagency Internet. They will also provide a foundation for supporting Grand and National Challenges and their further extension to the National Information Infrastructure. Building on this foundation, the government and industrial partners will develop prototypes for a future high capability commercial communications infrastructure for the Nation.

NREN Component Management

Each agency implements its own NREN activities through normal agency structures and coordination with OMB and OSTP. All 10 agencies participate in the NREN component as users. Multiagency coordination is achieved through FCCSET, CPMES, HPCCIT, and the HPCCIT High Performance Communications working group.

Operation of the Interagency Internet is coordinated by the Federal Networking Council (FNC), which consists of agency representatives. The FNC and its Executive Committee establish direction, provide further coordination, and address technical, operational, and management issues through working groups and ad hoc task forces. The FNC has established the Federal Networking Council Advisory Committee, which consists of representatives from several sectors including library sciences, education, computers, telecommunications, information services, and routing vendors, to assure that program goals and objectives reflect the interests of these broad sectors.

Accomplishments

Increased Connectivity and Use of the Interagency Internet

The Interagency Internet has experienced tremendous growth in the number of connections (and hence the number of researchers) it supports, and in the amount of traffic that it carries. Significant leveraging of the Interagency Internet activities has resulted in the following:

- More than 6,000 regional, state, and local IP (Internet Protocol) networks in the U.S. are connected as of March 1993. More than 12,000 such networks are connected worldwide.

- More than 800 of the approximately 3,200 two-year and four-year colleges and universities in the Nation are interconnected, including all of the schools in the top two categories ("Research" and "Doctorate") of the Carnegie classification.

- An estimated 1,000 U.S. high schools also are connected. The exact number is difficult to determine since regional networks have widely leveraged NSF and other agency funds to connect such institutions without direct agency involvement. For example, state initiatives in Texas and North Carolina proceed with little or no Federal funding or involvement.

Traffic on the NSFNET backbone has doubled over the past year, and has increased a hundred-fold since 1988. Improvements and upgrades to the network made by NSF have kept pace with the increased traffic and have advanced the state of network technology and operations.
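The growth figures above can be sanity-checked with simple arithmetic (an illustration, not from the report): a hundred-fold increase over the five years from 1988 to 1993 implies traffic more than doubled in a typical year.

```python
# A hundred-fold traffic increase between 1988 and 1993 implies an
# average annual growth factor of 100**(1/5), roughly 2.5x per year.
years = 1993 - 1988
annual_factor = 100 ** (1 / years)
print(round(annual_factor, 2))

# Simple yearly doubling over the same span would give only 2**5 = 32x,
# so "hundred-fold since 1988" is faster growth than doubling each year.
print(2 ** years)
```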

ARPA, DOE, NASA, and NSF provide international connectivity to the Pacific Rim, Europe, Japan, the United Kingdom, South America, China, and the former Soviet Union, for mission-specific scientific collaborations and general research and education infrastructure requirements. These links are of varying speeds, with many of the larger "fat pipes" cost-shared and co-managed by agencies requiring high speed connectivity.

[Figure] Schematic of the interconnected "backbone" networks of NSF, NASA, and DOE, together with selected client regional and other networks. The backbone topology is shown on a plane above the outline of the U.S. Line segments connect backbone nodes with geographic locations where client networks attach.

Procurement of Fast Packet Services

The need for advanced networking services has been driven by the requirements of distributed scientific visualization and remote experiment control, and more recently by the phenomenal growth of multimedia applications. In order to satisfy these needs, DOE and NASA are in the process of jointly acquiring fast packet services based on new telecommunications industry-provided services (for example, ATM/SONET). The initial deployment will provide a 45 megabits per second (Mb/s) backbone service; upgrades to higher speeds are planned as soon as technology and budgets permit.
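To make these backbone speeds concrete, a small illustration of transfer times at the rates discussed in this section. The 1 GB dataset size is a hypothetical example, and protocol overhead is ignored.

```python
# Time to move a dataset over links at the speeds this section
# discusses: the initial 45 Mb/s fast packet backbone, a 155 Mb/s
# vBNS-class link, and a 622 Mb/s ACTS-class link. Overhead,
# latency, and congestion are ignored in this sketch.

def transfer_seconds(size_bytes, rate_mbps):
    """Seconds to move size_bytes over a rate_mbps (megabit/s) link."""
    return size_bytes * 8 / (rate_mbps * 1e6)

DATASET = 1e9  # hypothetical 1 gigabyte scientific dataset

for rate in (45, 155, 622):
    print(f"{rate:>4} Mb/s: {transfer_seconds(DATASET, rate):7.1f} s")
```

At 45 Mb/s the hypothetical dataset takes about three minutes; at 622 Mb/s, under fifteen seconds.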

A key feature of this procurement is the use of as yet untariffed telecommunications provider services in an alpha test mode. In order to meet this procurement's deployment schedule, the telecommunications industry has accelerated its prototyping and deployment plans. The IITA component of the HPCC Program will later use these technologies via commercial services from the telecommunications industry and router vendor market.

NASA and ARPA will use NASA's geostationary Advanced Communications Technology Satellite (ACTS), launched in 1993, to provide even higher speed (for example, 622 Mb/s) ATM/SONET transmission to remote sites such as Alaska and Hawaii. The deployment will allow the HPCC agencies to gain experience in interfacing both terrestrial and satellite high speed communications systems. ARPA manages the development and deployment of the ACTS High Data Rate Terminals.

Internet Network Information Center (InterNIC)

In 1992, NSF issued a competitive solicitation for an Internet Network Information Center (InterNIC) to provide a variety of services to the worldwide Internet community. Awards were made to the three organizations listed below to collaborate in providing these services. Information about how to connect to the Internet, pointers to network tools and resources, and seminars on various topics held across the country are available from the InterNIC Information Services (listed on page 167 in the Contacts section).

InterNIC Service Awards

  Service         Description                                   Award Made To
  -------         -----------                                   -------------
  Information     General information about the Internet        General Atomics/CERFnet
                  and how to use it
  Directory and   Coordinated directory of the growing number   AT&T
  Database        of resources available on the Internet
  Registration    Registry of the growing number of networks    Network Solutions Inc.
                  connected to the Internet



Prompted by the recent development of network-based tools to seek out information by querying remote databases, NSF has established a Clearinghouse for Networked Information Discovery and Retrieval tools for assembling, disseminating, and enhancing such publicly available network tools. The clearinghouse complements the InterNIC.

Solicitation of the Next Generation NSFNET

Now that basic network services are readily and economically available commercially, NSFNET will, beginning in 1994, evolve into a very high speed national backbone for research applications requiring high bandwidth. In a new solicitation, NSF is requesting proposals to:

- Establish an unspecified number of Network Access Points (NAPs) where regional and other service providers will be able to exchange traffic and routing information.

- Establish a Routing Arbiter to ensure coherent and consistent routing of traffic among NAP participants.

- Establish a very high speed Backbone Network Service (vBNS) linking the NSF-supported Supercomputing Centers.

- Allow existing or realigned regional networks to connect to NAPs or Network Service Providers, who would connect to NAPs, for interregional connectivity.

The NAPs will provide connectivity to mid-level or regional networks serving both commercial and research and education customers and will also provide access to the vBNS.
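The Routing Arbiter's coordination role can be illustrated with a toy model. This is a conceptual sketch only, with hypothetical provider names; the real Routing Arbiter worked through Internet routing protocols, not this logic.

```python
# Conceptual sketch of the arbiter role: each service provider at a
# NAP announces the network prefixes it can reach, and a central
# arbiter merges the announcements into one consistent table so all
# participants forward traffic the same way. Provider names and
# prefixes below are hypothetical.

def build_routing_table(announcements):
    """Map each announced prefix to a single provider. When two
    providers announce the same prefix, pick one deterministically
    (here: first provider in alphabetical order) so every NAP
    participant ends up with an identical, consistent table."""
    table = {}
    for provider in sorted(announcements):
        for prefix in announcements[provider]:
            table.setdefault(prefix, provider)  # first (alphabetical) wins
    return table

routes = build_routing_table({
    "MidnetRegional": ["192.168.10.0/24", "192.168.20.0/24"],
    "WestnetRegional": ["192.168.20.0/24", "192.168.30.0/24"],
})
# The contested prefix is assigned consistently, not ambiguously:
print(routes["192.168.20.0/24"])
```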

With respect to regional networks, this solicitation addresses only interregional connectivity. Ongoing complementary intraregional support will continue and will be funded at constant or rising levels. These efforts include the Connections Program, which provides grants either to individual institutions or to more effective or more economical aggregates. A separate announcement to address intraregional connection of high-bandwidth users to the vBNS is planned for FY 1994.

Interconnecting the NSF Supercomputer Centers, the vBNS will be part of the Interagency Internet. It is expected that the vBNS will run at a minimum speed of 155 Mb/s and that low speed connections to NAPs will be routed elsewhere.

Gigabit Research Projects

By 1996, gigabit research will lead to an experimental nationwide network able to deliver speeds up to 2.4 billion bits per second to individual end user applications.

Ongoing research and development addresses communications protocols, resource allocation algorithms, network security systems, exploration of alternative network architectures, hardware and software, and the validation of that research by the deployment of several wide-area testbed networks. Several high data rate local area network testbeds will allow Federal agencies, industry, and academic researchers to explore innovative approaches to advanced applications such as global change research, computer imagery, and chip design.




In 1990, ARPA and NSF jointly began sponsoring five gigabit network research testbeds; all are expected to be operational by the end of 1993. The research at the five testbeds and at testbeds initiated subsequently (e.g., MAGIC, sponsored by ARPA) focuses on network technology and network applications, with alternative network architectures, implementations, and applications of special interest.

Each testbed explores at least one aspect of high performance distributed computing and networking; together they seek to create and investigate a balanced high performance computing and communications environment.

Testbed teams consist of several government agencies (ARPA, DOE, Department of the Interior, NASA, NSF, state centers, and supercomputer centers), a number of universities, computer companies, and various local and long distance telephone companies that participate both as service providers and experimenters.

Other Projects

ARPA-sponsored consortia and individual projects are implementing novel networks that minimize or eliminate electronic content and replace it with optical technology. These efforts use alternative optical schemes for data rates in excess of 10 Gb/s. Industry partnerships guarantee a rapid transition of the most promising technologies into the commercial sector.

ARPA's Washington Area Bitway is a multiple-technology testbed in the Washington-Baltimore area that enables early experience with advanced network technologies. The first phase, called the Advanced Technology Demonstration Network (ATDnet), uses the best commercial prototypes of SONET/ATM technology to provide 100 Mb/s to 1 Gb/s services to several DOD agencies and NASA. Applications include ACTS ground connections, imaging, and gigabit encryption. Later phases will demonstrate advanced optical technologies over the same optical fiber paths.

Another ARPA demonstration project will show the utility of asymmetric rate/asymmetric path (cable TV and dialup) network access. Planned for the San Francisco Bay Area and for the Washington, D.C. area, the project is designed to explore a relatively inexpensive alternative for satisfying the "last mile," that is, connections to homes and businesses, in high speed networking implementations.

NSF also supports Project ACORN, a collaborative research effort with an NSF Engineering Research Center and its industrial consortia, that is investigating lightwave networks of the 21st century. The project's TeraNet, a laboratory implementation and feasibility demonstration, will lead to a campus-wide field experiment involving leading-edge users. NSF has also begun to support research in all-optical networks.



"MI.1141


Gigabit Testbeds

Aurora*
  Description: Explore alternative network technologies, new network management and distributed system paradigms, and quality of service techniques for gigabit network multimedia applications.
  Sites: Bellcore -- Morristown, NJ; IBM -- Hawthorne, NY; MIT -- Cambridge, MA; U. Pennsylvania -- Philadelphia
  Principal research participants and collaborating telecommunications carriers: Bell Atlantic; Bellcore; IBM; MCI; MIT; NYNEX; U. Arizona; U. Pennsylvania

Blanca*
  Description: Investigate network control, real-time protocols, and distributed interactive applications including remote thunderstorm visualization, radio astronomy imaging, and multimedia digital libraries.
  Sites: Lawrence Berkeley Laboratory (LBL) -- Berkeley, CA; NCSA -- Champaign-Urbana, IL; U. California at Berkeley; U. Illinois -- Champaign-Urbana; U. Wisconsin at Madison
  Principal research participants and collaborating telecommunications carriers: Ameritech; Astronautics; AT&T; Bell Atlantic; LBL; NCSA; Pacific Bell; U. California at Berkeley; U. Illinois at Champaign-Urbana; U. Wisconsin at Madison

Casa*
  Description: Investigate distributed large-scale supercomputing over wide-area gigabit networks using chemical reaction dynamics, geology, and climate modeling applications.
  Sites: Caltech -- Pasadena, CA; Jet Propulsion Laboratory (JPL) -- Pasadena, CA; Los Alamos National Laboratory (LANL); San Diego Supercomputer Center (SDSC); UCLA
  Principal research participants and collaborating telecommunications carriers: Caltech; JPL; LANL; MCI; Pacific Bell; SDSC; UCLA; USWest

Nectar*
  Description: Investigate software and interfacing environments for gigabit-based heterogeneous computing and explore chemical processing and combinatorial optimization applications.
  Sites: Carnegie-Mellon U. (CMU) -- Pittsburgh, PA; Pittsburgh Supercomputer Center (PSC)
  Principal research participants and collaborating telecommunications carriers: Bell Atlantic/Bell of Pennsylvania; Bellcore; CMU; PSC

VISTAnet*
  Description: Evaluate the application of gigabit networks and distributed computing techniques to interactive radiation therapy medical treatment planning.
  Sites: BellSouth -- Chapel Hill, NC; GTE -- Durham, NC; MCNC (formerly Microelectronics Center of North Carolina) -- Research Triangle Park, NC; U. North Carolina at Chapel Hill (UNC-CH)
  Principal research participants and collaborating telecommunications carriers: BellSouth; GTE; MCNC; North Carolina State U.; UNC-CH

MAGIC
  Description: Early demonstration of high speed terrain visualization, with longer range plans to incorporate real-time sensor data into a real-time virtual world model and display.
  Sites: Minnesota Supercomputer Center -- Minneapolis; U. of Kansas -- Lawrence; U.S. Army's Future Battle Laboratory -- Fort Leavenworth, KS; U.S. Geological Survey (USGS) -- Sioux Falls, SD
  Principal research participants and collaborating telecommunications carriers: Digital Equipment Corp.; DOE; Earth Resources Observation Systems Data Center; LBL; MITRE; Minnesota Supercomputer Center; Northern Telecom; Split Rock Telecom; Sprint; Southwestern Bell; SRI International; U. Kansas; U.S. Army High Performance Computing Research Center; U.S. Army's Future Battle Laboratory; USGS

* Information summarized from "A Brief Description of the CNRI Gigabit Testbed Initiative," Corporation for National Research Initiatives, 1895 Preston White Drive, Suite 100, Reston, Virginia 22091-5434, (703) 620-8990.




NREN FY 1994 Milestones

Bring MAGIC testbed into operation. Conduct initial terrain visualization demonstrations.

Demonstrate prototypes of gigabit ATM/SONET technology operating over fiber and satellite (using ACTS) media. Install initial gigabit network interconnection.

Bring all-optical testbed networks into operation.

Put medical, terrain visualization, and modeling applications on 100 megabit and gigabit class networks.

Complete ESnet and NSI fast packet upgrades.

Beta test high speed LAN interconnects with ESnet's fast packet WAN services.

Make awards to establish a series of NAPs, a Routing Arbiter, and a vBNS that links NSF supercomputer sites and is accessible from the NAPs.

Formulate programs and solicit proposals to support high bandwidth applications on the vBNS.

Continue improvements in U.S.-to-international connectivity.



Advanced Software Technology and Algorithms (ASTA)

The purpose of the ASTA component is to develop the scalable parallel algorithms and software needed to realize the potential of the new high performance computing systems in solving Grand Challenge problems in science and engineering. The early experimental use of this software on the new systems accelerates their maturation and identifies and resolves scaling issues on the most challenging problems.

The principal objectives of the ASTA component are to:

- Demonstrate large-scale, multidisciplinary computational results on heterogeneous, distributed systems, using the Internet to access distributed file systems and national software libraries.

- Establish portable, scalable libraries that enable transition to the new scalable computing base across different systems and their continued advance through successive generations.

- Develop a suite of software tools that enhance productivity (e.g., debuggers, monitoring and parallelization tools, run-time optimizers, load balancing tools, and data management and visualization tools).

- Promote broad industry involvement.
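The "portable, scalable library" objective can be sketched in miniature: a library routine gives identical results whether it runs on one worker or many, so application code need not change as the system scales. This is an illustrative Python sketch only; the HPCC libraries of the period targeted message-passing supercomputers, not thread pools.

```python
# A toy "scalable library" routine: the same parallel_sum() call
# produces identical results for any worker count, so applications
# written against it are portable across system sizes.

from concurrent.futures import ThreadPoolExecutor

def chunked(data, n):
    """Split data into up to n roughly equal contiguous chunks."""
    k = max(1, -(-len(data) // n))  # ceiling division
    return [data[i:i + k] for i in range(0, len(data), k)]

def parallel_sum(data, workers=4):
    """Sum data with a pool of `workers`: each worker reduces one
    chunk, then the partial results are combined."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunked(data, workers)))
    return sum(partials)

data = list(range(1000))
# Same answer on 1 worker or 4 -- the scalability property.
print(parallel_sum(data, workers=1), parallel_sum(data, workers=4))
```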

The ASTA component is composed of four elements:

I. Support for Grand Challenges

Prototype applications software will be developed to address computationally intensive problems such as the Grand Challenges. Solution of these problems is not only critical to the missions of agencies in the HPCC Program, but has broad applicability to the national technology base. Continuing increases in computational power enable researchers in government, industry, and academia to address problems of greater magnitude and complexity. Increased computational power enables:

- More realistic models as a result of higher resolution computational models. An example is weather models that show features on a local or regional scale, not just on a continental or global scale.

- Reduced execution times. Models that took days of execution time now take hours, enabling the user to modify the input more frequently, perhaps interactively and graphically, thus gaining insight faster. Reduced execution times also enable modeling over longer time scales (for example, 100 year climate models can now be executed in the same time it used to take for 10 year models).

- More sophisticated models, including models formerly too time consuming. The radiative properties of clouds can be included in climate models, for example.

- Lower cost solutions to specific problems, resulting in availability to a larger user community.
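The bullets above rest on simple cost arithmetic; a sketch of the scaling involved (our illustration, not from the report):

```python
# Why higher resolution is expensive: refining each spatial dimension
# of a 3-D model by a factor r multiplies the grid work by r**3, and
# stability usually requires proportionally more time steps, for a
# total cost of roughly r**4. Halving the grid spacing (r = 2) thus
# costs about 16x -- which is why each jump in computing power buys
# only a modest jump in model resolution.

def cost_factor(resolution_factor, dims=3, more_time_steps=True):
    """Relative compute cost when each of `dims` spatial dimensions is
    refined by resolution_factor, optionally with proportionally more
    time steps."""
    return resolution_factor ** (dims + (1 if more_time_steps else 0))

print(cost_factor(2))                              # 3-D + time: 16x
print(cost_factor(2, dims=2, more_time_steps=False))  # 2-D snapshot: 4x

# Similarly, running a 100-year climate simulation in the time a
# 10-year simulation used to take requires roughly a 10x faster machine.
speedup_needed = 100 / 10
print(speedup_needed)
```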




Multidisciplinary teams of computational scientists and disciplinary specialists from one or more Federal agencies and from industry and academia are working together to address these problems. Many of these Grand Challenge projects are cofunded and cosponsored by industry.

Significant progress has already been made toward solving many of these problems, and is expected to continue as the Program progresses. These new computational approaches are already being incorporated by industry into new products, services, and industrial processes such as testing and manufacturing.

II. Software Components and Tools

A complete collection of software components and tools is essential for a mature scalable parallel computing environment that supports portable software. These software components and tools must include standard higher level languages; advanced compilers; tools for performance measurement, optimization and parallelization, debugging and analysis; visualization capabilities; and interoperability and data management protocols.

As scalable parallel computing extends to a distributed computing environment, greater demands will be made upon the advanced network technologies developed and deployed through the NREN component. Operating systems and database management software for heterogeneous configurations of workstations and high performance servers will be developed along with remote procedure calls and interprocess communication protocols to support gigabit per second interconnections.
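The remote procedure calls mentioned above let a program invoke a function on another machine as if it were local. A minimal sketch using Python's standard-library XML-RPC, a much later and lighter descendant of the RPC systems of the era (illustrative only, not the HPCC protocols themselves):

```python
# A toy RPC round trip on one machine: a "compute server" exposes a
# procedure, and a client calls it over the network exactly as if it
# were a local function -- the remote procedure call abstraction.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register one procedure and serve on an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda x, y: x + y, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote "add" looks like a local call.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5

server.shutdown()
```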

Broad industry involvement will be promoted through the identification and development of system structures using open interfaces. Industry involvement will also increase as computational approaches become available for more problem domains and more individuals are trained in high performance computing.

III. Computational Techniques

Portable scalable software libraries are being developed to enable software to move across different computational platforms and from one generation to the next. The performance and generality of the new computing technologies will be evaluated using a variety of experimental applications. Standard systems-level tools will be developed to support visualization of data and processes.

IV. High Performance Computing Research Centers (HPCRC)

The HPCRCs will deploy prototype large scale parallel computing facilities accessible over the Internet through the integration of advanced and innovative computing architectures (both hardware and software). Computational scientists working on Grand Challenge applications, software components and tools, and computational techniques will be able to access the largest advanced systems available in order to conduct a wide spectrum of experiments and scalability studies. Through the HPCRCs, the HPCC Program will evaluate prototype system and subsystem interfaces, protocols, advanced prototypes of hierarchical storage subsystems, and high performance visualization facilities. This work is done in cooperation with the Evaluation of Early Systems element of the HPCS component.




Major FY 1993 activities and accomplishments and FY 1994 plans

Grand Challenges Research

The majority of ASTA Grand Challenge research has focused on key computational models using first-generation HPCC computers. This research will be extended to include applications software for multidisciplinary applications, hybrid computational models, and heterogeneous and distributed problems on second generation computer testbeds.

HPCC Agency Grand Challenge Research Teams

NSF Computational science and engineering in academic disciplines

DOE Energy and materials

NASA Aerosciences and Earth and space sciences

NIH Biomedical applications

NOAA Atmospheric and oceanic computational modeling

EPA Environmental modeling

Collaboration with industry and academia, primarily through CRADAs and consortia, is a high priority. Collaborative efforts are currently underway in the areas of environmental and Earth sciences; computational physics; computational biology; computational chemistry; material sciences; and computational fluid and plasma dynamics.

A four day "Workshop and Conference on Grand Challenge Applications and Software Technology" was held during May 1993 in Pittsburgh, PA. Funded by nine HPCC agencies, it brought together some 30 Grand Challenge teams. Discussions centered on multidisciplinary computational science research issues and approaches, the identification of major technology challenges facing users and providers, and refinements in software technology needed for Grand Challenges research applications. A special session was devoted to industrial applications, including the aeronautics, automotive, chemical, energy, financial, health care, and textile industries.



Software Sharing

The large collection of software needed to address the Grand Challenges and other computationally intensive problems is certain to grow at a rapid rate. Effective and efficient mechanisms to manage and reuse this software are essential.

Toward this end, NASA coordinates the collection of and access to a high performance computing software repository. The High Performance Computing Software Exchange uses ARPA's wide area file systems and NASA's distributed access to electronic data to connect software repositories in several Federal agencies. These repositories include netlib from NSF and DOE, and NIST's Guide to Available Mathematical Software (GAMS) (described on pages 148-149 in the Case Studies section). They will be expanded to include databases and bibliographic archives.

Mosaic, a hypermedia interface to repositories throughout the Internet, including gopher and WAIS, has been developed by the National Center for Supercomputer Applications (NCSA). In addition to providing a means to browse the Internet, it enables research results to be electronically published with text, images, image sequences, software libraries, and other resources. (Mosaic screens are shown on page 67 in the NSF section.)

An HPCC Software Exchange Prototype System is being built to establish foundations and guidelines for software submission standards, directories, indices, and Unix client/server interfaces. The critical processes and procedures to be used are derived from a 1992 Software Sharing Experiment, which identified and reviewed current software and set priorities and mechanisms for creating needed software.

Supercomputing Centers and Consortia

Two examples illustrate the collaborative efforts initiated through the HPCC Program:

- The Concurrent Supercomputing Consortium (CSCC) is an alliance of universities, research laboratories, government agencies, and industry that pool their resources to gain access to unique computational facilities and to exchange technical information, share expertise, and collaborate on high performance computing, communications, and data storage issues. CSCC members are:

  ARPA
  Argonne National Laboratory
  California Institute of Technology
  The Center for Research on Parallel Computation (an NSF Science and Technology Center)
  Intel's Supercomputer Systems Division
  Jet Propulsion Laboratory
  Los Alamos National Laboratory
  NASA
  Pacific Northwest Laboratory
  Purdue University
  Sandia National Laboratory

- The National Consortium for High Performance Computing was initiated by ARPA to accelerate advances in high performance computing technology. It focuses on 1) software and applications development, and 2) the fostering of interdisciplinary collaboration among DOD and other Federal agencies and laboratories, industry, academic partners, and other research and development organizations to solve important problems in defense and national security. Initiated in coordination and consultation with NSF using non-HPCC funds, this Consortium is an example of how HPCC technologies are being deployed on a national scale.

Scalable parallel systems are also located at the NSF sponsored Supercomputer Centers: the Cornell Theory Center (CTC), the National Center for Supercomputer Applications (NCSA) at Champaign-Urbana, the Pittsburgh Supercomputer Center, and the San Diego Supercomputer Center (SDSC). Plans are underway to establish a "metacenter" in which these Centers will be interconnected via high performance networks, allowing their supercomputing resources to be used as if they were an integrated high performance computing system.

NSF's Supercomputer Centers offer an interdisciplinary and collaborative environment for industrial and academic researchers. More than 100 corporations have formal affiliations with the Centers, resulting in transition of enabling technologies and expertise. The Centers also work directly with vendors to identify and predict the needs of the computational science research community and to develop and test hardware and software systems. Special programs at the Centers are funded by other agencies, including the NIH Biomedical Research Technology program in biomedical computing applications at CTC and NCSA. Other federally-supported high performance computing activities not coordinated or budgeted through the HPCC Program, such as the National Center for Atmospheric Research (NCAR) in Boulder, CO, maintain important connections with HPCC.

DOE and NASA have also established HPCRCs. These include the DOE facilities at Los Alamos National Laboratory and Oak Ridge National Laboratory and NASA facilities at Ames Research Center and the Goddard Space Flight Center.

The Consortium on Advanced Modeling of Regional Air Quality (CAMRAQ) is working to develop pollution modeling systems that address the regional environmental impact of pollutants including ozone, sulfate, nitrates, and particulates. Participants include:

Federal organizations
    Defense Nuclear Agency
    EPA
    NASA
    NOAA
        Aeronomy Laboratory
        Atmospheric Research Laboratory
        National Meteorological Center
    National Park Service
    U.S. Army Atmospheric Sciences Laboratory

Federally-chartered institutions
    National Academy of Sciences/National Research Council

State and local organizations
    California Air Resources Board
    Northeast States for Coordinated Air Use Management
    South Coast Air Quality Management District


Industrial organizations

    American Petroleum Institute
    Chevron Research Corporation
    Electric Power Research Institute
    PG&E
    Southern California Edison Company

International organizations
    Environment Canada, Atmospheric Environment Service
    EUROTRAC/EUMAC
    Ontario Ministry of the Environment

DOE and NASA are responsible for coordinating Grand Challenges applications software development, and are coordinating testbed development to ensure that a diverse set of computing systems is evaluated. Applications software and high performance computing benchmarks will be used by participating agencies to evaluate high performance computing system options. A key component of this effort will be to provide feedback to developers of teraops systems.

ASTA Milestones

FY 1993 - 1994

- Demonstrate initial multidisciplinary Grand Challenge applications.

- Deploy 100-gigaops systems to major high performance computing centers.

FY 1994

- Complete initial software components and tools for large scale systems.

- Deploy HPCC prototype libraries to the NII.

Beginning in FY 1994

- Deploy 300-gigaops systems to major high performance computing centers and enable their use.


Information Infrastructure Technology and Applications (IITA)

The IITA component of the HPCC Program is a research and development program intended to:

- Develop the technology base underlying a universally accessible National Information Infrastructure (NII).

- Work with industry in using this technology to develop and demonstrate prototype "National Challenge" applications.

National Challenges are major societal needs that high performance computing and communications technology can address in areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. The list of National Challenges is dynamic and will expand as the technologies and other applications mature.

Solution of these National Challenges requires a three-part technology base consisting of the following services, tools, and interfaces:

- Services that are common to and necessary for the efficient operation of the NII. For example, conventions and standards are needed to handle different formats of data such as text, image, audio, and video.

- Tools for developing and supporting common services, user interfaces, and NII applications.

- Intelligent user interfaces to NII applications.

The IITA component depends critically on the technologies already developed by the HPCC Program, and places its own set of demands on the Program. IITA efforts will strengthen the underlying HPCC technology base, broaden the market for these technologies, and accelerate industry development of the NII.

The Federal agencies that participate in the HPCC Program will work with industry and academia to develop these technologies. The NII, however, will be built and operated primarily by the private sector, which can form new markets for products and services enabled by the emerging NII. This joint effort by government, industry, academia, and the public to develop the NII and address the National Challenges will require:

- Deployment of substantially more high performance computing systems of increasingly higher performance.

- A nationwide communications network with vastly greater capacity for connections and throughput than today's Internet.


- Further development and wider deployment of applications software for computationally intensive problems such as the Grand Challenges.

- Education of large numbers of developers of NII technologies and training of a Nation of users.

Users of the early NII will be able to take advantage of small to moderate capacity computers and slow to medium speed communications, provided they have high quality user interfaces and access to the applications. As user interfaces improve, more computing and communications performance may be required. This can be achieved through the continual advances in the underlying technology developed under the HPCC Program.

The HPCC Program's original focus on research and development will continue to play a pivotal role in enhancing the Nation's computing and communications capabilities. For example, the Grand Challenges will continue to provide the scientific focus for critical computing technologies because of their profound and direct impact on fundamental knowledge. The IITA component will enable the extension of these technologies and the development of National Challenge applications that have immediate and direct impact on critical information systems affecting every individual in the Nation. Distinctions between National Challenges and Grand Challenges are shown in the following table.

The following two examples illustrate potential applications of the NII.

A Medical Emergency

Having taken ill, a traveler is hospitalized and undergoes tests, including X-rays, CAT scans, and MRI. At the same time, the attending medical professionals quickly retrieve test results from the traveler's last physical examination. The images are compared, diagnoses made, and treatments prescribed. This scenario is difficult if not impossible to implement today, in part because diagnostic images are commonly not in computer-readable form and network speeds are generally too slow to transmit large three-dimensional image data sets.

Truly remote medical care will depend on services, standards, tools, and user interfaces to store, find, transmit, manipulate, display (and superimpose), compare, and analyze three-dimensional image data from several sources. Diagnostic test results and large image data sets from the physical examination must be available on computers that can be accessed from the hospital's computers over a communications network; they must be retrieved quickly; the scientific data used in guiding the diagnosis and treatment must also be available from electronic libraries and must be quickly retrieved; and the privacy of these patient records must be protected. All of this supposes completion of rather extensive and complex inter-professional medical arrangements. In addition, it must be done using a user interface customized for the practice of "distance medicine," including collaborations among different sources of expertise.


Contrasts Between Grand Challenges and National Challenges

Focus
    Grand Challenges: Computation intensive
    National Challenges: Information intensive

Users
    Grand Challenges: Computer scientists and computational scientists, extending to scientists and engineers (numbering in the millions)
    National Challenges: Information and application users in major national sectors, extending to all sectors (numbering in the hundreds of millions)

Distribution of HPCS Resources
    Grand Challenges: Focus on largest systems at "centers" and smaller systems for software development
    National Challenges: Focus on distributed systems of moderate scale with many users, scaling with increasing number of users

Main HPCS Use
    Grand Challenges: Workstation client/server systems for computing and systems development, scaling with scientific needs
    National Challenges: Workstation client/server systems for access to and processing of information with extensions to mobile and wireless systems

Main NREN Use
    Grand Challenges: Share computing resources, data, and software; collaboration
    National Challenges: Support distributed systems

Main ASTA Use
    Grand Challenges: Scientific software for computationally-intensive applications
    National Challenges: Computationally-based services

Privacy and Security
    Grand Challenges: Desirable but not critical to deployment; essential in long term
    National Challenges: Essential for deployment

Copyright
    Grand Challenges: Desirable but not critical to deployment
    National Challenges: Essential for deployment

Network Performance
    Grand Challenges: High among largest scale computer systems and users needing visualization
    National Challenges: Nominal to all involved, moderate to most when needed, high to those with greatest need


A Weather Emergency

Using a real-time weather forecast for the area 20 miles directly ahead, a trucker diverts to an alternate route and reduces by hours the potential delay in delivering critically needed parts to a company that uses a just-in-time inventory system. Relayed from a weather forecast center to the truck's onboard navigation system, this highly accurate forecast that pinpoints developing adverse weather conditions is made possible by the use of new computer weather prediction models that exploit the capabilities of high performance, massively parallel computing systems.

Already, advances in computing and communications technologies have led to significant improvements in weather forecasting. As illustrated in the recent case of Hurricane Andrew in Florida, this improved forecasting can save lives as well as millions of dollars in evacuation costs through better targeting of evacuation efforts.

Services, standards, tools, and user interfaces are required to build and support systems for acquiring large amounts of three-dimensional environmental data from different sources (e.g., in situ and remote sensing observations). These systems must also support high resolution modeling using these data, incorporating improved representations of the physical environment, and a real-time information dissemination capability to provide detailed forecast information for hundreds of different locations to thousands of users.
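The dissemination capability described above is, at its core, a routing pattern: location-keyed forecast updates are fanned out to the users registered for each location. The sketch below is a hypothetical, minimal illustration of that pattern only; the location names, message text, and class name are invented for the example and are not drawn from the report.

```python
# Minimal publish/subscribe sketch of location-keyed forecast
# dissemination. Hypothetical names throughout; a real system would
# sit on a network transport rather than in-process callbacks.
from collections import defaultdict
from typing import Callable

class ForecastDissemination:
    """Route forecast updates to the subscribers for each location."""

    def __init__(self) -> None:
        # location -> list of delivery callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, location: str, callback: Callable[[str], None]) -> None:
        """Register a callback to receive forecasts for one location."""
        self._subscribers[location].append(callback)

    def publish(self, location: str, forecast: str) -> int:
        """Deliver the forecast to every subscriber; return the count."""
        callbacks = self._subscribers.get(location, [])
        for deliver in callbacks:
            deliver(forecast)
        return len(callbacks)

hub = ForecastDissemination()
received = []
hub.subscribe("I-80 mile 212", received.append)
delivered = hub.publish("I-80 mile 212", "Severe icing expected 14:00-16:00")
print(delivered, received)  # 1 ['Severe icing expected 14:00-16:00']
```

Scaling this in-process sketch to "hundreds of locations and thousands of users" is precisely where the networking and distributed-systems technologies discussed in this report come in.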

Unlike the first example, the user community for environmental information is larger and more diverse. Weather forecasts are needed by the general public, and for aviation, ship navigation, and agriculture, for example, while HPCC-funded researchers use much of the same observational data to model global change. Starting with user interfaces tailored to different kinds of users, individual users customize them for their own needs. The delivery and use of environmental information for this broad range of applications is performed by a partnership of government and value-added private sector information companies, all part of the NII.

The IITA component will enable the development of an integrated infrastructure so that these two apparently unrelated applications can work together efficiently. This infrastructure includes:

- A networked computing base that provides appropriate performance.

- Methods to provide security, privacy, and copyright protection, and other services such as "digital signatures" to authenticate transactions.

- Technical conventions and standards (especially for databases).

- Tools to build and support the user interfaces.

- Tools to build and support the applications themselves.

The IITA component will demonstrate these technologies through testbed and pilot projects. These projects will evaluate new technologies, provide training in their use, and demonstrate specific National Challenge applications. Successful projects will serve as models to be further refined and engineered for larger scale deployment.


In order to facilitate the deployment of the NII by the private sector, the government will work closely with industry, academia, and users on all aspects of the IITA component. The private sector is expected to deploy many of its own applications, in areas such as commerce and entertainment. The government's goal is that from the user's point of view these applications will be integrated as seamlessly as possible into a single NII, with appropriate use restrictions and protections incorporated as needed.

The IITA component is organized into four interrelated elements. Each builds on the foundation of the HPCC Program and, in large measure, builds on its predecessor.

I. Information Infrastructure Services

These are the basic services and interfaces, and the underlying technical conventions and standards, that provide the common foundation for a broad range of information technology-based applications. Building upon the Interagency Internet, these modular units will in turn be the building blocks of the NII applications. These include:

- Data formats and object structures (including single- and multimedia formats such as image, audio, and video).

- Methods for managing distributed databases.

- Services that provide access to electronic libraries and computer-based databases.

- Methods for exchanging data (e.g., data compression) and integrating data (e.g., merging and overlaying images) from one or more electronic libraries and computer-based databases.

- Services to search for and retrieve data and objects.

- Protocols and processes such as "digital signatures" needed to obtain appropriately secure and legal access to information (including protection of copyrighted material).

- Usage metering mechanisms to enable implementation of payment policies.

- High integrity, fault-tolerant, trusted, scalable computing systems.

- Protocols and processes needed to obtain the appropriate communications speed and bandwidth.
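As one illustration of the "digital signature" protocols listed above, the sketch below hashes a record and signs the hash with a private key, so that any holder of the public key can check that the record has not been altered. It is a toy only: the tiny RSA parameters and the record text are hypothetical, invented for the example, and a real system would use a vetted cryptographic library rather than hand-rolled arithmetic.

```python
# Toy RSA signature sketch (requires Python 3.8+ for pow(e, -1, m)).
# The key sizes here are far too small for any real security.
import hashlib

# Hypothetical toy key pair built from two small known primes.
p, q = 104729, 1299709
n = p * q                             # public modulus
e = 65537                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def digest(message: bytes) -> int:
    """Reduce a SHA-256 hash of the message into the toy modulus range."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Sign the message digest with the private key."""
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (e, n) can check the signature."""
    return pow(signature, e, n) == digest(message)

record = b"Patient 123: MRI series 7, 1994-01-15"   # invented example data
sig = sign(record)
print(verify(record, sig))              # True
print(verify(b"tampered record", sig))  # False
```

The same sign-then-verify shape underlies the authentication, payment, and copyright-protection services the element describes; only the key management and algorithms differ at production scale.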

II. Systems Development and Support Environment

This element includes a comprehensive suite of software tools and applications methods such as software toolkits and software generators for use by computer programmers, tools and methods for integrating elements of virtual reality systems, collaboration software systems, and applications-specific templates and frameworks. They will be used to:

- Interface to existing services (for example, existing search services).

- Develop new distributed services for the Information Infrastructure Services element and for the


- Develop generic user interfaces, including templates and frameworks, to facilitate the use of the services provided by the Information Infrastructure Services element in the development of advanced and customized user interfaces by the Intelligent Interfaces element described below.

- Develop generic applications, including architectures and frameworks, for use in the Intelligent Interfaces element and for use by applications developers in implementing applications such as the National Challenges and other applications in the NII.

This element also includes the systems simulators and modeling methods to be used in designing the technology underlying the NII.

III. Intelligent Interfaces

In the future, high level user interfaces will bridge the gap between users and the NII. A large collection of advanced human/machine interfaces must be developed in order to satisfy the vast range of preferences, abilities, and disabilities that affect how users interact with the NII.

Intelligent interfaces will include elements of computer vision and image understanding; understanding of language, speech, handwriting, and printed text; knowledge-based processing; and multimedia computing and visualization. In order to enhance their functionality and ease of use, interfaces will access models of both the underlying infrastructure and the users. Just as people now do their own "desktop publishing," they will have their own "desktop work environments," environments that will extend to mobile and wireless networking modes. Users will be able to customize these environments, thereby reducing reliance on intermediate interface developers.

IV. National Challenges

National Challenges are fundamental applications that have broad and direct impact on the Nation's competitiveness and well-being. They will enable people to handle the increasing amounts of information and the increasing dynamics of the 21st century.

Using selected HPCC enabling technologies and the technologies developed by the other IITA elements, this element will use pilot projects to develop "customized applications" in areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. Detailed goals of four of these applications areas are as follows:

Digital Libraries

Develop systems and technology to:

- Enable electronic publishing and multimedia authoring.

- Provide technology for storing petabytes of data for nearly instantaneous access by users numbering in the millions or more.

- Quickly search, filter, and summarize large volumes of information.


- Quickly and accurately convert printed text and "pictures" of all forms into electronic form.

- Categorize and organize electronic information in a variety of formats.

- Use visualization to quickly browse large volumes of imagery.

- Provide electronic data standards.

- Simplify the use of networked databases in the U.S. and worldwide.

Prototypic scientific databases, including remote-sensing images, will be developed. Librarians and other users will be trained in the development and use of this technology.
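The search-and-filter goal listed above is classically met with an inverted index: a structure that maps each term to the set of documents containing it, so a multi-term query is answered by intersecting small sets rather than scanning every document. The sketch below is a minimal illustration of that data structure; the sample records are hypothetical, invented for the example.

```python
# Minimal inverted-index sketch for the digital-library search goal.
# The documents dictionary is invented sample data.
from collections import defaultdict

documents = {
    "doc1": "landsat remote sensing images of coastal wetlands",
    "doc2": "parallel algorithms for image compression",
    "doc3": "remote sensing archive with compressed satellite images",
}

# Build the inverted index: term -> set of ids of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query: str) -> set:
    """Return ids of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(sorted(search("remote sensing images")))  # ['doc1', 'doc3']
print(sorted(search("compression")))            # ['doc2']
```

For petabyte-scale collections the same idea is distributed and compressed, but query cost still tracks the size of the posting sets touched rather than the size of the whole collection.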

Education and Lifelong Learning

Conduct pilot projects that connect elementary and secondary schools to networks through which students and teachers can:

- Communicate with their peers and with students and faculty at colleges and universities across the country.

- Access information databases and other computing resources.

- Use authoring tools to embody the experiences of the best teachers in systems that others can use.

- Have greater access to the NII technologies, enabling them to develop and use them more effectively.

- Enable future generations to be literate in information technology so that they will be prepared for the 21st century and beyond.

Health Care

Develop and provide:

- Access to networks that link medical facilities and enable health care providers and researchers to share medical information.

- Technology to visualize and analyze human anatomy and to simulate medical procedures such as operations.

- The ability to treat patients in remote locations in real-time by having "distance collaborations" with experts at other medical facilities.

- Technology by which health care providers can readily access databases of medical information and literature.


- Technology to store, access, and transmit patients' medical records and protect the accuracy and privacy of those records when doing so.

Manufacturing

Research and development to:

- Prototype advanced computer-integrated manufacturing systems and computer networks linking these systems.

- Work with industry in implementing standards for these advanced manufacturing operations, and process and inventory control.

- Transition the manufacturing process to the new scalable computing and networking technology base.

- Train management and employees in advanced manufacturing.


Basic Research and Human Resources (BRHR)

The BRHR component is designed to increase the flow of innovative ideas by encouraging investigator-initiated, long-term research in scalable high performance computing; to increase the pool of skilled and trained personnel by enhancing education and training in high performance computing and communications; and to provide the infrastructure needed to support these research and education activities.

The BRHR component is organized into four elements:

I. Basic Research

This element supports increased participation by individual investigators in conducting disciplinary and multidisciplinary research in computer science, computer engineering, and computational science and engineering related to high performance computing and communications. Research topics include:

- Foundations of future high performance computing systems.

- High performance hardware components and systems, high density packaging technologies, and system design tools.

- Mathematical models, numeric and symbolic algorithms, and library development for scalable and massively parallel computers.

- High level languages, performance prediction models and tools, and fault tolerant strategies for parallel and distributed systems.

- Large scale database processing; knowledge based processing; image processing; digital libraries; visualization; and multimedia computing.

- Resource management strategies and software collaboratory environments for scalable parallel and heterogeneous distributed systems.

II. Research Participation and Training

This element addresses the human resources pipeline in the computer and computational sciences at the undergraduate, graduate, and postdoctoral (training and retraining) levels. Activities include:

- Workshops, short courses, and seminars.

- Fellowships in computational science and engineering and experimental computer science.

- Career training in medical informatics through grants to young investigators.


- Institutional training and postdoctoral programs; knowledge transfer exchange programs at national laboratories, centers, universities, and industry.

- Software dissemination through national databases and libraries.

III. Infrastructure

This element seeks to improve university and government facilities for computer science, computer engineering, and computational science and engineering research related to high performance computing. Activities include:

- Improvement of equipment in computer science, computer engineering, and computational science and engineering academic departments, centers, and institutions; development of scientific databases and repositories.

- Distribution of integrated system building kits and software tools.

IV. Education, Training, and Curriculum

This element seeks to expand existing activities and initiate new efforts to improve K-12, undergraduate, and graduate level education and training opportunities in high performance computing and communications technologies and computational science and engineering for both students and educators. The introduction of associated curriculum and training materials at all levels is an integral part of this effort. Activities include:

- Bringing people, especially teachers, to national centers and laboratories for summer institutes and other training, technology transfer, and educational experiences.

- Utilizing professional scientists and engineers to provide curriculum development materials and instruction for high school students in the context of high school supercomputer programs, supercomputer user workshops, summer institutes, and career development informatics for health sciences.

BRHR Component Implementation

Each agency that participates in the BRHR component sponsors research participation and education/training programs designed to meet specific mission needs. Some of these activities are as follows.

- ARPA supports basic research in such areas as high performance components, high density packaging, scalable concepts, system design tools, and foundations of petaops systems.

- NSF's basic research programs promote innovative research on the foundation sciences and technologies of HPCC as well as specific disciplinary activities in HPCC. NSF coordinates its basic research and infrastructure activities to foster balance in the multiagency HPCC Program.


- Through its "research experiences for undergraduates," SuperQuest, postdoctoral, graduate fellowship, and educational and minority infrastructure programs, NSF addresses long term national needs in HPCC.

- DOE supports basic research to advance the knowledge of mathematical, computational, and computer sciences needed to model complex physical, chemical, and biological phenomena involved in energy production and storage systems. DOE also is actively involved in education and training activities at all levels.

- NASA conducts basic research through NASA research institutes and university block grants, including support at the graduate and postdoctoral level.

- NIH supports basic research and training in the use of advanced computing and network communications. Predoctoral and postdoctoral grants for career training in medical informatics are being expanded.

- NOAA conducts basic research in computational fluid dynamics applications in atmospheric and oceanic processes.

- EPA sponsors targeted fellowships and basic research activities, and develops and evaluates training methods and materials to support transfer of advanced environmental assessment tools to Federal, state, and industrial sectors.

Partnerships with industry, universities, and government help accomplish BRHR objectives.

FY 1993 Accomplishments

In FY 1993, more than 1,000 research awards funded the following activities:

- Basic research in high performance computational approaches to materials processing, molecular structures, fluid dynamics, and structural mechanics.

- Basic research on scalable parallel systems in fundamental areas of mathematical models, algorithms, performance evaluation techniques, databases, visualization and multimedia computing, digital libraries, and collaboratory technologies.

- An increased number of computer and computational science and engineering postdoctoral awards and graduate fellowships.

- High school honors programs, teacher training programs, and "research experiences for undergraduates" in HPCC.

- The introduction of the computational science textbook, which involved 24 authors in 10 different disciplines, into classrooms as part of a pilot project for Computer Science for High School Teachers.


- Institutional infrastructure awards to support experimental and novel high performance computing research at universities and national laboratories.

- Educational and minority infrastructure awards in undergraduate institutions.

FY 1994 Plans

- Develop a program to apply the principles of artificial intelligence to advanced intelligent manufacturing.

- Develop a program to integrate virtual reality technology into high performance computing and communications systems.

- Develop an initiative in digital libraries.

- Increase research in real time, three-dimensional imaging and multimedia computing.

- Increase support of information intensive applications of HPCC technologies in health care, information libraries development, education, and manufacturing.

- Increase support for scalable parallel computers.

- Increase education and training activities in HPCC through establishment of network-based educational testbeds.

- Increase the number of postdoctoral and graduate fellowship awards.

Advanced Research Projects Agency (ARPA)

As the HPCC Program reaches the middle of its initial five-year phase, the ARPA program is shifting focus from stimulating the development of the new scalable computing technology base and early experimental use toward developing the technologies needed to enable a broad base of applications and users, including their extension to a National Information Infrastructure. In addition, the foundations for future generations of computing systems involving even more advanced technologies are being developed.

The current scalable computing technology base is characterized by the first 100 gigaop class computing systems that are being experimentally used on a wide variety of problems in the scientific and engineering research communities. Scalable operating systems are enabling software development on high performance workstations connected to higher performance servers through networks.

The experience gained in the early experimental use of these new computing technologies is used to refine the next generation and guide the development of more advanced software and system development technologies. The combination of scalable computing and scalable networking technologies provides the foundation for solving both Grand Challenges with large scale parallel systems and National Challenges with large scale distributed systems. This enables HPCC to progress toward an NII.

ARPA is the lead DOD agency for advanced technology research and has the leadership responsibility for High Performance Computing (HPC) within DOD. The ARPA HPC Program develops dual use technologies with broad applicability to enable the defense and intelligence communities to build on commercial technologies with rapid development of more specific characteristics when needed. ARPA has no laboratories or centers of its own and executes its programs in close cooperation with the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, Service Laboratories, the National Security Agency, and other DOD organizations and Federal agencies. ARPA participates in joint projects with other agencies in the Federal HPCC Program, a variety of Defense agencies, the Intelligence community, and other Federal institutions.


A multi-year Caltech project developed high performance interconnect and fine-grained parallel systems, including boards consisting of 64 single nodes, scalable to thousands of nodes. Several commercial systems have adopted architectures based on this research, and are beginning to demonstrate how low cost modules can be configured to meet a broad range of applications.

Joint projects with other agencies are established to accelerate technology development and transition. ARPA joint projects with NSF include foundations for scalable systems, visualization, Grand Challenges, gigabit networks, and accelerating the maturation of systems software at NSF Supercomputer Centers. Joint projects with NASA include an Internet software exchange, system software maturation, and ground stations for the ACTS gigabit satellite system. ARPA, NSF, and NASA also have a joint program in digital libraries. Joint projects with DOE include scalable software libraries and networking applications. ARPA is working with NIST to develop performance measurement technologies and techniques, privacy and trusted systems technologies, and the computer emergency response team system for the Internet. A joint project with NSA is developing gigabit network security technology and other secure and trusted systems technologies. In addition, a variety of early evaluation and experimental use projects involve different kinds of scalable parallel computing systems.

The ARPA program focuses on the advanced technology aspects of all five components of the HPCC Program as follows:

HPCS

ARPA projects stimulate the development of scalable computing technologies that are capable of being configured as networks of workstations and large scale parallel computing systems capable of sustaining trillions of operations per second. Systems can be configured over a wide performance range. The systems will be balanced to provide the processor-to-memory, scalable interconnection, and input/output bandwidth needed to sustain high internal and external system performance. The modular design of the system units of replication will enable them to cover the full range from workstations to the largest scale distributed and parallel systems. Scalable systems with vector accelerators may be configured as parallel vector systems. Other kinds of accelerators such as field programmable logic arrays may be added for specialized applications. The largest scale parallel systems, with hundreds to thousands of processors or more, are sometimes referred to as massively parallel systems. The input/output interfaces of these systems may be used to configure heterogeneous systems with high performance networks.


High Performance Networking

Experimental gigabit networks are overlaying and enhancing the Internet. ARPA's Networking Systems program develops and evaluates these technologies as foundations for a global scale, ubiquitous information infrastructure supporting Grand Challenge, National Challenge, and Defense needs.

Scalable microkernel operating systems with a full complement of servers will enable software and system developers to work with a uniform set of application interfaces over the scalable computing base. Through the use of multiple servers, different application interfaces can be supported to enable the transition from legacy systems such as those available today. The system software may be configured as needed for particular applications including trusted and real-time systems.
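The multiple-server idea above can be sketched in miniature. The following Python toy is not ARPA's actual system software, and all class and port names are hypothetical; it shows a kernel that provides only one primitive, message delivery between ports, while separate user-level servers supply different application interfaces over that primitive:

```python
# Illustrative sketch of a microkernel with multiple "personality" servers.
# The kernel routes messages; the servers define the application interfaces.

class Microkernel:
    """Kernel primitive: deliver a message to the server owning a port."""
    def __init__(self):
        self.ports = {}

    def register(self, port, server):
        self.ports[port] = server

    def send(self, port, message):
        # The kernel never interprets the message; it only routes it.
        return self.ports[port].handle(message)

class PosixServer:
    """User-level server offering one application interface (hypothetical)."""
    def handle(self, message):
        op, path = message
        return f"posix:{op}({path})"

class LegacyServer:
    """A second server supports legacy applications over the same kernel."""
    def handle(self, message):
        op, path = message
        return f"legacy:{op}({path})"

kernel = Microkernel()
kernel.register("posix", PosixServer())
kernel.register("legacy", LegacyServer())

print(kernel.send("posix", ("open", "/data")))   # posix:open(/data)
print(kernel.send("legacy", ("open", "/data")))  # legacy:open(/data)
```

Because the kernel never interprets messages, a new personality server for another interface can be added without kernel changes, which is the transition path from legacy systems that the paragraph describes.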

Advanced components and packaging technologies including the associated design, prototyping, and support tools will enable higher performance and more compact systems to be developed. These technologies also enable the development of embedded systems so that computing can be put in specialized physical and environmental settings (such as airplanes, spacecraft, land vehicles, or ships).

Early evaluation and experimental use of new computing systems is an integral part of the overall development process. Policies and mechanisms have been developed that enable the timely purchase of new small to medium scale computing systems for the purpose of early evaluation and experimental use. As these technologies mature, larger scale systems are purchased and deployed by other parts of the HPCC Program in consultation with their user communities.

NREN

ARPA projects develop scalable networking technologies to support the full range of applications from local networks, to regional networks, to national and global scale networks including their wireless and mobile extensions. Different kinds of communication channels, or "bitways", will be integrated to enable network connectivity to be achieved between users and their applications.

Internet technologies will be developed to enable continued scaling of the networks to every individual and system needing access. Scalable high performance networking technologies will be developed to enable gigabit speeds to be delivered to the end users. A variety of networking testbeds developed in cooperation with other agencies are used to develop, demonstrate, and experimentally deploy new networking technologies. As these technologies mature, larger scale systems are deployed by other parts of the Program in consultation with their user communities.



Modern Operating Systems

Message passing, transparent access to user services, advanced memory management, and real time response are key to a new generation of portable microkernel-based operating systems. Coupled with scalable computing systems, these operating systems enable the introduction of multilevel caches, scalable device interfaces, and scalability to thousands of processors.

ASTA

ARPA projects develop the advanced algorithms and software technologies needed for computationally intensive applications. Scalable software libraries will enable optimal performance for common problem domains. Extensions to existing programming languages and the development of new ones will make it easier to program scalable systems. Optimizing compilers for individual processors and their scalable configurations, including support for memory and process allocation, will enable maximal performance for a given algorithm to be achieved. Design, analysis, visualization, and debugging tools will enable the development and support of scalable parallel and distributed computing systems through the use of common integrated frameworks. This will provide the foundation for the software and system development and support environments needed for computationally intensive problems such as the Grand Challenges. The application of HPCC technologies to specific Defense problems is performed by other ARPA or Defense programs.
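As an illustration of what a scalable library hides from its caller, here is a hedged Python sketch (not an actual ASTA library; the function names are invented): a dot product whose data is block-partitioned across a configurable number of workers, with the partial results combined in a final reduction step:

```python
# Toy model of a scalable library routine: the caller asks for a dot
# product; the library partitions the work and combines partial sums.

def block_ranges(n, p):
    """Split indices 0..n-1 into p nearly equal contiguous blocks."""
    base, extra = divmod(n, p)
    start = 0
    for i in range(p):
        size = base + (1 if i < extra else 0)
        yield range(start, start + size)
        start += size

def parallel_dot(x, y, p=4):
    """Library entry point: the answer is the same for any worker count p."""
    partials = [sum(x[i] * y[i] for i in block)
                for block in block_ranges(len(x), p)]
    return sum(partials)  # the combine step (a reduction tree in practice)

x = list(range(10))
print(parallel_dot(x, x, p=1), parallel_dot(x, x, p=4))  # 285 285
```

The point of the sketch is the invariant a scalable library must maintain: the decomposition (here, block partitioning) is an implementation detail, so the same call scales from one worker to many without the caller changing.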

IITA

ARPA projects will develop the information infrastructure technologies needed for information and functionally challenging applications that emphasize information processing. Advanced Internet-based services will be developed to enable the effective deployment of distributed Internet-based systems including multiple media with access to high performance computing services and individual access points. Mobile and wireless technologies will enable users and their networks to access the information infrastructure with the appropriate authentication, privacy, and security.

Distributed data and object bases with transparent replication will enable the development and deployment of data, information, and knowledge repositories and services. A variety of access tools and interfaces will be developed to enable interactive access to the infrastructure. These will include intelligent functions such as knowledge based searching and alerting, planning and learning systems, and natural language understanding systems for speech, text, and images. This will provide the foundation for solving the large scale distributed, information and functionally intensive problems such as the National Challenges.
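A minimal sketch of transparent replication, under stated assumptions (a toy write-all/read-any scheme, not a production replication protocol), may help make the idea concrete:

```python
# Illustrative replicated store: the client sees one logical store;
# underneath, writes go to every surviving replica and reads can be
# served by any of them, so losing a replica is invisible to callers.

import random

class ReplicatedStore:
    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]

    def put(self, key, value):
        # Write-all: every surviving replica gets the update.
        for replica in self.replicas:
            if replica is not None:
                replica[key] = value

    def get(self, key):
        # Read-any: any surviving replica can answer.
        live = [r for r in self.replicas if r is not None]
        return random.choice(live)[key]

    def fail(self, i):
        # Simulate losing a replica; callers never notice.
        self.replicas[i] = None

store = ReplicatedStore()
store.put("dataset", "v1")
store.fail(0)
print(store.get("dataset"))  # v1 -- served by a surviving replica
```

The caller's `put` and `get` interface never mentions replicas, so a repository service built on such a store can survive the loss of a copy without any client-side changes, which is the sense in which the replication is "transparent."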


The Program Layers

[Figure: the three program layers of the NII, with National Challenge (NC) applications at the top, information and computation services in the middle, and communication "bitways" at the bottom, with examples at each layer such as education and training, environmental monitoring, distributed computing services, common protocol support, and terrestrial fiber optic and wireless bitways]

The three interconnected layers of the NII are a variety of "bitways" for communication, services for information-based and computation-based resources, and National Challenge applications. Each layer focuses on protocols for common delivery mechanisms in order to insulate users from the details of the underlying technologies. In so doing, each layer sustains a diverse base of technologies and supports a broad base of suppliers. Communications protocols, for example, ensure delivery in a uniform manner; protocols for multimedia multicast and other services provide similar uniformity to applications developers; and applications developers use scalable open protocols to make rapid advances without major services reengineering.
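The layering described above can be illustrated with a small Python sketch (all class names are hypothetical, invented for this example): each layer talks only to the interface of the layer below, so a bitway can be replaced without the service or application layers changing:

```python
# Illustrative three-layer stack: application -> service -> bitway.
# Each layer depends only on the interface of the layer below it.

class FiberBitway:
    def transmit(self, payload):
        return f"fiber[{payload}]"

class WirelessBitway:
    def transmit(self, payload):
        return f"wireless[{payload}]"

class InformationService:
    """Middle layer: a uniform delivery protocol over any bitway."""
    def __init__(self, bitway):
        self.bitway = bitway
    def deliver(self, document):
        return self.bitway.transmit(f"doc:{document}")

class NationalChallengeApp:
    """Top layer: e.g. a record-fetch application (hypothetical)."""
    def __init__(self, service):
        self.service = service
    def fetch_record(self, record_id):
        return self.service.deliver(f"record/{record_id}")

app = NationalChallengeApp(InformationService(FiberBitway()))
print(app.fetch_record("123"))  # fiber[doc:record/123]

# Swapping the bitway requires no change in the layers above it:
app = NationalChallengeApp(InformationService(WirelessBitway()))
print(app.fetch_record("123"))  # wireless[doc:record/123]
```

This is the insulation the caption describes: the protocol at each boundary is the contract, so suppliers at any one layer can be replaced without reengineering the others.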

BRHR

The BRHR component is generally structured as an integral part of the other components. The ARPA program, along with NSF, provides the majority of Federal support to universities in computer and computational science. Fundamental limitations of HPCC technologies are identified and alternatives developed that will provide the basis for more advanced technologies in the future. An Historically Black Colleges and Universities program identifies individuals and groups that have the potential to contribute to the program. Small Business Innovation Research topics are formulated to provide another opportunity for small businesses to participate in the program. University projects provide an extraordinary opportunity to attract new people into the program, benefit from their talent, and prepare them for the new technologies. Basic research programs are often established in cooperation with NSF.

Major FY 1994 Milestones

- Develop system architectures for affordable computing systems, scalable from workstations to teraops systems.

- Demonstrate scalable libraries, components, and tools to support software development on a scalable parallel system.

- Develop and deploy gigabit networking protocols based on experimental testbed results.

- Develop information infrastructure system architectures and early prototype services.



National Science Foundation (NSF)

NSF's HPCC program provides support for basic research in HPCC technologies and applications, national HPCC facilities and services, computational infrastructure, and education and training. The program attempts to strategically guide broad currents of scientific inquiry and discovery into critical application problem domains. In FY 1994, NSF will continue to highlight interdisciplinary research, increased collaboration between computer scientists and application scientists, and cross-sector partnerships. Areas that address pressing interests of science and the broader society will be targeted.

[Figure: 1993 NSF Metacenter hardware]

Major NSF activities are planned in all five components of the HPCC Program. Program activities are implemented and executed through the existing NSF disciplinary program structure. An HPCC Coordinating Group with a representative from each NSF research directorate is responsible for assuring program development and management consistent with the goals and objectives of the HPCC Program. The result is a shared-management, NSF-wide process touching on almost all science and engineering disciplines. In FY 1992 over 1,200 individual awards were made by 75 separate disciplinary program offices. All HPCC awards are subject to the Foundation's merit-based peer review process.

NSF will continue to pursue substantial industry participation and collaboration. Cooperation with industry is achieved programmatically both through the natural alignment of academic basic research interests in key HPCC areas and through the deliberate structuring of selected NSF programs to foster collaboration. These interactions stimulate the growth of shared knowledge and capabilities, improve the rate of technology transfer, and identify new technologies and products of commercial value.

Two national NSF programs, NSFNET and the NSF Supercomputer Centers, could not be conducted at their present scope without significant industry interest and collaboration. Extensive industry involvement with the National Supercomputer Centers includes partnerships and affiliate relationships, cooperative efforts in technology development, and use of the centers' computing resources and training facilities. Affiliation with a center offers an innovative low risk method of exploring and ultimately exploiting the usefulness of high performance computing technologies in a diverse intellectual and interdisciplinary environment. Currently about 130 nonacademic affiliates or partners represent many areas including aerospace, automotive, financial, chemical products, and telecommunications. The centers are also engaged in software and systems development with most of the major U.S. high performance computing hardware vendors.

[Figure: the broad groups into which the approximately 130 industrial affiliates and partners at the NSF Supercomputer Centers fall]

Coordination of NSF activities with other agencies is an integral feature of the program. This entails multiagency planning activities leading to joint sponsorship of large scale research and infrastructure projects, scientific conferences and workshops, and development of shared resources.

NSF Activities by HPCC Program Component

HPCS

The HPCS component focuses on the design and development of heterogeneous and distributed scalable parallel computing systems to advance computational science and engineering research capabilities. Included is research on: systems architecture, including application-specific systems and memory hierarchy; early prototyping and evaluation of hardware and software systems; new tools for systems-level automated design and prototyping; distributed design and intelligent manufacturing capabilities; and optical communications systems and devices.

NREN

NSF is responsible for overall coordination of the NREN component and the broad deployment of network infrastructure and services. NREN component activities support all aspects of the HPCC Program through connection of other agency networks, upgrading of the NSFNET backbone network services, providing for enhanced network security, increasing the number of educational institutions connected to the network, and serving as a primary source of information about access to and use of the network. Activities focus on network infrastructure and services and on gigabit research and development. These activities are designed to develop and implement a wide range of networking infrastructure components and services for the Nation's community of researchers, scholars, and students that will foster interaction and collaboration; and to give researchers and students rapid access to facilities and other resources for use in scholarly endeavors. These facilities and resources include computation centers, laboratories, scientific instruments, and databases and libraries. NSF also participates in the support (under ARPA coordination) of the networking and communications research and development effort leading to early deployment of advanced HPCC systems. This coordinated research effort includes work on very high speed switches and several wide-area research gigabit testbed networks in support of Grand Challenge collaborations and other applications.

NSF centers directly supported within the ASTA component include:

- Cornell Theory Center, Ithaca, NY

- National Center for Supercomputing Applications (NCSA), Champaign-Urbana, IL

- San Diego Supercomputer Center, San Diego, CA

- Pittsburgh Supercomputing Center, Pittsburgh, PA

- Center in Computer Graphics and Scientific Visualization, University of Utah

- Center for Research on Parallel Computation, Rice University, Houston, TX

ASTA

The ASTA component emphasizes the development of algorithms and software technologies and the establishment of research capability to address Grand Challenge scale computational problems. Included in ASTA are: research demonstrating the applicability of HPCS products to critical problems; building interdisciplinary collaboration and exchange; collaborative Grand Challenge and large scale computational research; computational techniques and software technologies for scalable parallel and distributed heterogeneous computing systems; scientific databases; and deploying and configuring new parallel systems in research centers and testbed facilities accessible to researchers, students, and educators nationwide.

NSF's National Supercomputer Centers and Science and Technology Centers are hubs of intellectual and educational activity, serving researchers, students, and teachers from many disciplines and educational levels. They seek to define and provide a premier environment for coordinated approaches to Grand Challenge problems. The centers are currently implementing the concept of a "metacenter": planning and working closely together, the individual centers pool their technological resources, support services, and administrative functions so that the collective resources are viewed and made available to users as a single, powerful computing environment. With the metacenter concept, new systems and services can be incorporated into the overall environment without disturbing the stable computing environment that is in place. The concept can also offer additional flexibility in managing and optimizing the centers' operations.
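One way to read the metacenter concept in code, purely as an illustrative sketch (the broker policy and all names here are invented, not NSF's actual scheduling), is a single submission interface over pooled centers:

```python
# Toy metacenter: users submit to one queue; a broker routes each job
# to whichever member center currently has the most free processors,
# and a new center can join the pool without users changing anything.

class Center:
    def __init__(self, name, processors):
        self.name, self.free = name, processors

class Metacenter:
    def __init__(self, centers):
        self.centers = list(centers)

    def add_center(self, center):
        # New systems join the pool transparently to users.
        self.centers.append(center)

    def submit(self, job_procs):
        """Single submission interface over the pooled resources."""
        best = max(self.centers, key=lambda c: c.free)
        if best.free < job_procs:
            raise RuntimeError("no center can run this job")
        best.free -= job_procs
        return best.name

pool = Metacenter([Center("NCSA", 512), Center("SDSC", 256)])
print(pool.submit(400))   # NCSA
print(pool.submit(200))   # SDSC
pool.add_center(Center("PSC", 1024))
print(pool.submit(600))   # PSC
```

The design point mirrors the paragraph: the user-facing interface (`submit`) is stable, so centers can be added or rebalanced behind it without disturbing the computing environment users see.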



Four sample NCSA Mosaic (described on page 44 in the ASTA section) screens showing access to images contained in public Internet digital data libraries. From top to bottom:

The Mosaic Demo document directly accesses a gopher server at the University of Illinois' Weather Machine containing current weather data (here the Flood of '93).

The Dinosaur fossil exhibit developed by Honolulu Community College. The exhibit uses audio clips, images, and text to educate and entertain. Users can take a narrated tour by selecting the audio links throughout the documents.

Two images from "Rome Reborn: The Vatican Library and Renaissance Culture," an exhibit at the Library of Congress:

One screen from this hypertext exhibit.

Botanical pages from Materia Medica, a 14th century Italian manuscript.

IITA

IITA component activities are designed to expedite the application of research products and services developed in the original four component areas to National Challenges. Efforts in IITA will build on and integrate fundamental HPCC technologies in order to enable widespread progress on information-intensive applications in such areas as the civil infrastructure, digital libraries, education and lifelong learning, the environment, health care, and manufacturing. NSF will focus primarily on building and promulgating intelligent user interfaces, distributed software systems and support environments, standardized data management and communications processes, and enhanced information infrastructure services. IITA activities include:

- Support for development of tools and mechanisms for enhancing human communication and collaboration across the network.

- Development of multimedia software and educational materials.

- Support for basic and strategic research on development of advanced data capture, storage, representation, and retrieval technologies for "digital libraries."

- Training support for users in the generation and use of digital library resources, fostering "lifelong learning" opportunities and development of new network-based training modalities.

- Pilot projects for information-intensive applications.

BRHR

BRHR component activities are designed to encourage research projects and infrastructure and educational activities that ensure the flow of innovative ideas and talented people into high performance computing technologies and application areas. Activities include support for research in fundamental areas of algorithms, data structures, programming languages, operating systems, software engineering, performance evaluation, databases, digital libraries, information science, knowledge processing, language and speech understanding, computer vision and image understanding, and multimedia computing. BRHR also supports postdoctoral awards in high performance computational science and engineering and experimental computer science; undergraduate research experiences; high school teacher and student training experiences at high performance computing centers; and infrastructure acquisition for computational research and education.

On-Going Grand Challenge Applications Groups:

- Biomolecular Design

- Coupled Field Problems and GAFD (Geophysical and Astrophysical Fluid Dynamics) Turbulence

- High Capacity Atomic-Level Simulations for Design of Materials

- Imaging in Biological Research

- Large Scale Environmental Modeling

- Machine Learning

- Radio Synthesis Imaging

Major FY 1993 Accomplishments

- Six to eight new Grand Challenge Applications Groups funded (listed in Side Bar).

This program activity was initiated in FY 1992, with awards made to multidisciplinary groups of scientists and engineers whose activities integrated disciplinary research with the design of models, algorithms, and software for high performance computing systems. See Side Bar for areas in which awards were made.

- Research projects on techniques and environments for scalable parallel and distributed heterogeneous computing, collaboration technologies, evaluation of High Performance Fortran, programming language standards, and user environment components.

- Metacenter implementation in selected areas at the NSF high performance computing centers to provide new opportunities for the combined use and shared management of these systems and requisite supporting technologies.

- Deployment and configuration of scalable parallel systems for mainstream scientific computing at the NSF centers.

- NREN awards for Network Information Services; recompetition of the NSFNET Backbone; promulgation of computer security measures; evaluation of the Network Access Point (NAP) concept; regional networks; and testbeds for teaching and education.


Grand Challenge Application Groups Initiated in FY 1993:

- Adaptive Coordination of Predictive Models with Experimental Observations

- Advanced Computational Approaches to Biomolecular Modeling and Structure Determination

- Black Hole Binaries: Coalescence and Gravitational Radiation

- Earthquake Ground Motion Modeling in Large Basins

- High Performance Computing for Land Cover Dynamics

- Massively Parallel Simulation of Large Scale, High Resolution Ecosystem Models

- Parallel I/O Methodologies for I/O-Intensive Grand Challenge Applications

- The Formation of Galaxies and Large-Scale Structure

- Understanding Human Joint Mechanics Through Advanced Computational Models


FY 1994 Plans

NSF will continue to support HPCC basic research activities, metacenter implementation, Grand Challenge Application Groups, postdoctoral research associateships, and broader access to high performance computing research resources for researchers, students, and educators at all levels. Support for NREN development will continue to meet special NSF obligations and responsibilities set forth in the HPCC Program plan, including the development of gigabit testbeds. New activities will support the migration of technologies and capabilities developed in the original HPCC Program component areas to address specific applications associated with such areas as health care, lifelong learning, digital and electronic libraries, and manufacturing.

HPCS support will focus on:

- Developing new systems level design tools that have increased functionality, are highly automated, and accept high level specifications and algorithms as input.

- Novel computing system design approaches drawing on expertise in architecture, systems software, storage technology, I/O subsystems, and networking technologies.

- Optical communications devices and systems.

- Advanced computer and information technologies to enable distributed design and intelligent manufacturing of objects.

- Systems-level issues in understanding, modeling, and integrating component manufacturing technologies.

NREN activities will include:

- Development of a very high speed backbone.

- Increasing the number of network access points (NAPs).

- Funding, improving, and providing new services for mid-level networks.


- Funding a routing authority.

- Increasing connections for post-secondary education and related organizations.

ASTA activities will include:

- Additional Grand Challenge Application Groups awards.

- Research on computational techniques for distributed and heterogeneous computing environments.

- Continued metacenter development and the adoption of common standards for hardware, software, and networking technologies.

- Deployment of prototype scalable systems in Grand Challenge research settings.

- Collaborative activities on software systems, architectural modifications, new programming language standards, and user environment components.

- Projects demonstrating the usability of High Performance Fortran and other high level languages for solving large scale scientific and engineering problems on massively parallel distributed memory systems.

IITA activities will focus on:

- Application of HPCC technologies to such National Challenge areas as health care, lifelong learning, digital and electronic libraries, and manufacturing.

- Software research relevant to scientific databases, digital libraries, collaboration technologies, computer graphics, and scientific visualization.

- Exploring new applications of virtual reality systems.

- Improvements in accessibility and ease of use of Internet-based data and computing.

- Research on intelligent knowledge capture, representation, extraction, and dissemination.


- Projects aimed at simplifying human interaction with digital information resources through development of user friendly, graphical interfaces for knowledge representation and manipulation.

- Activities addressing vital issues of standards.

- Development of prototype libraries designed to fill new roles in research and education.

BRHR support will focus on:

- Basic research on scalable parallel systems, distributed computing environments, and tools.

- Research in fundamental areas of algorithms, databases, visualization, architecture, performance evaluation and modeling, and computer resource management.

- Increased number of postdoctoral awards in high performance computational science and engineering and computer science, and expansion of the current program to include industry.

- Continued support for HPCC training and education in the form of research experiences for undergraduates, supplements to faculty research awards, high school teacher and student training, and equipment and infrastructure awards.

- Continued acquisition of infrastructure to improve computer science, computer engineering, computational science and engineering, and Grand Challenge research.


Department of Energy (DOE)

DOE's goals in the HPCC Program are to enable effective applications of HPCC technologies and the emerging National Information Infrastructure (NII) to scientific problems that are critical to implementing the Energy Policy Act (PL 102-486), other DOE mission programs, and the national interest. DOE's missions encompass such diverse activities as energy technologies, studies of energy supply and usage in the U.S., environmental remediation, fundamental research, investigations in the health and environmental sciences, and national security.

To better understand combustion processes and the effluent gases produced by these processes, hydrogen/air turbulent reacting flame streams are studied. From left to right, false color renderings of the temperature, OH radical concentration, and NO concentration are shown. Low values are in blue and high values in red. (Temperatures in the left image range from 30 C to 2,800 C and the visible flame is in the red band.) The research and object oriented parallel software are the products of a collaboration among Sandia National Laboratory, Lawrence Berkeley Laboratory, and the University of California at Berkeley.

All five components of the HPCC Program are important to achieving DOE's goals. The DOE program for FY 1994 will continue to build on the foundation of joint interagency, interdisciplinary, and private sector collaborations established during the earlier phases of the HPCC Program.

With its focus on applying HPCC technologies, it is critical that the DOE effectively encourage technology transfer and collaborations among researchers in different disciplines and among researchers and their counterparts in U.S. commercial enterprises. In order to ensure the effectiveness of this approach, the program:

Funds activities in cooperation with other DOEmission programs that will develop and use HPCCtechnology.

- Involves end users of the technology to the maximum extent possible in the initial and ongoing evaluation of projects.

- Carries out work with industrial partners, generally through Cooperative Research and Development Agreements (CRADAs).

The Office of Scientific Computing has been assigned the lead responsibility for Department-wide participation in the HPCC Program.

FY 1993 Accomplishments and FY 1994 Plans

HPCS

A research program has been initiated to investigate the performance of computer systems based on an integrated analysis of algorithms used in Grand Challenge applications. In addition, a researcher at DOE's Ames Laboratory was awarded a patent for developing SLALOM, an innovative benchmarking system for any type of computer, including parallel computers. Two large university projects in parallel systems research, at the University of Illinois and at New York University, were extended. Several projects have been initiated to evaluate the effectiveness of early prototype HPCC systems on scientific problems important to the DOE at the Department's new High Performance Computing Research Centers.

ESnet provides access to Energy Research facilities and resources for DOE and DOE-supported researchers and educators.
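The distinguishing idea behind SLALOM was fixed-time benchmarking: rather than timing a fixed problem, it asked how large a problem a machine could solve within a fixed time budget (for SLALOM, a radiosity calculation in one minute), which scales naturally from workstations to parallel systems. A minimal sketch of that idea, not SLALOM itself (the doubling search and the toy matrix-vector workload are invented for illustration):

```python
import time

def fixed_time_benchmark(work, budget_s=1.0):
    """Fixed-time benchmarking: find the largest problem size a
    machine can complete within a fixed time budget (SLALOM's idea)."""
    n, best = 1, 0
    while True:
        start = time.perf_counter()
        work(n)
        elapsed = time.perf_counter() - start
        if elapsed > budget_s:
            return best
        best = n
        n *= 2  # double the problem size and try again

def matvec(n):
    """Toy O(n^2) workload: dense matrix-vector product in pure Python."""
    a = [[1.0] * n for _ in range(n)]
    x = [1.0] * n
    return [sum(row[j] * x[j] for j in range(n)) for row in a]

if __name__ == "__main__":
    print("largest n within budget:", fixed_time_benchmark(matvec, 0.2))
```

The reported figure of merit is a problem size, not a time, so machines of very different speeds can be compared on the same scale.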

NREN

Internet Support

Efforts continue to build on the activities initiated in FY 1992 and FY 1993. The ESnet FY 1992 fast packet services procurement will provide advanced network capabilities and services to DOE sites and researchers, in addition to accelerating the availability of these public telecommunications vendor-based services to other Federal, state, and research and education organizations. The ESnet is an integral part of the Internet; it provides seamless multiprotocol connectivity to scientific resources and facilities, supports international collaborative science, and supports DOE-sponsored education activities. Other activities and accomplishments include collaborative work in packet video and voice, collaborative distributed work environment tools, multiprotocol support, GOSIP/OSI to TCP/IP gateways for e-mail and other services, and other precompetitive and short-term research and development efforts.

Gigabit Research

DOE participates in the existing gigabit testbeds, the development of high speed local area networks (e.g., HIPPI), tools needed to support remote experimentation and distributed computing, multimedia data manipulation, high speed network management, and the standards associated with each of these technologies.

ASTA

Support for Grand Challenges

In work jointly supported by NIH, DOE, and NSF, researchers at New York University, Sloan Kettering Cancer Center, and Oak Ridge National Laboratory are elucidating the effects of biochemically activated environmental chemicals, such as benzo-a-pyrene, a material present in automobile exhaust gases, bound to DNA. The figures illustrate calculations of two such derivatives that are mirror images of one another, (+) and (-) anti-benzo-a-pyrene-diol-epoxide (BPDE), bound to a segment of normal right-handed DNA. There is a profound difference in the health effects of these two cases: the (+) BPDE case (red figure) is tumorigenic while the (-) BPDE is benign.

DOE set up an interagency panel, comprised of DOE program managers and other HPCC agency participants and chaired by the NASA HPCC program director, to evaluate proposals for the DOE Grand Challenge computational research projects. The selected projects are cofunded by other DOE programs and by industrial partners and include DOE laboratory, university, industry, and other HPCC agency participation. The six multi-year Grand Challenge projects selected in FY 1992 are:

- Computational Chemistry, to effectively parallelize chemistry codes important to the study of several critical environmental problems. This project is cofunded by four industrial partners and the DOE Chemical Sciences program.

- Computational Structural Biology, for research in computational methods important to understanding the components of genomes and to developing a parallel programming environment for structural biology. This project is cofunded by the DOE Health and Environmental Research program, an NSF Science and Technology Center, a biomedical industrial firm, and a university foundation.

- Mathematical Combustion Modeling, to develop adaptive parallel algorithms for computational fluid dynamics and apply these methods to combustion models important to private sector and government scientists and engineers. This project is cofunded with the DOE Applied Mathematics and Chemical Sciences programs.

- Quantum Chromodynamics Calculations, to develop lattice gauge algorithms on massively parallel machines. This project is cofunded by the DOE and NSF high energy physics programs and involves particle physicists from a dozen universities.

- Oil Reservoir Modeling, to construct efficient algorithms for parallel systems to simulate fluid flow through permeable media. The work is based on reservoir models from the University of Texas that are widely used by the oil industry and is cofunded by DOE Environmental Sciences, an industry consortium of 26 companies, two computer vendors, NSF, and the State of Texas.

- The Numerical Tokamak Project, to develop and integrate particle and fluid plasma models on massively parallel machines as part of a multidisciplinary study of Tokamak fusion reactors. The project is cofunded by DOE's Fusion Energy program and is an important component of the National Energy Strategy.

Heterogeneous Distributed Computing

The PVM (Parallel Virtual Machine) software permits a heterogeneous collection of networked Unix machines (serial, vector, or parallel) to be used as a single large parallel computer. This public-domain software is now used at hundreds of sites worldwide.
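The master/worker, message-passing style of computing that PVM popularized — a master splits a problem, farms pieces out to worker tasks, and gathers partial results — can be sketched with Python's standard multiprocessing module as a conceptual stand-in. This is not PVM's API (a C/Fortran library with calls such as pvm_spawn, pvm_send, and pvm_recv); the function names here are invented for the sketch:

```python
from multiprocessing import Pool

def worker(chunk):
    """Each worker computes a partial result on its slice of the data,
    as a PVM worker would after unpacking a message from the master."""
    return sum(x * x for x in chunk)

def master(data, ntasks=4):
    """The master scatters the data, runs the workers, and combines
    the partial results -- the master/worker pattern."""
    chunks = [data[i::ntasks] for i in range(ntasks)]
    with Pool(ntasks) as pool:
        partials = pool.map(worker, chunks)  # scatter work, gather results
    return sum(partials)

if __name__ == "__main__":
    print(master(list(range(1000))))  # sum of squares 0..999
```

In PVM itself the same pattern spans machines of different architectures on a network, with the library handling data conversion between them.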

By the end of FY 1992, DOE supported 14 computational projects. No new computational projects or Grand Challenges were started in FY 1993 due to lack of funds. In FY 1994, the DOE expects to evaluate the existing projects and may initiate new projects to balance the program in response to the goals and objectives of the DOE HPCC Program.

Software Tools

Projects were initiated in object-oriented database technology (applied to and cofunded with the Superconducting Super Collider), large spatial databases, distributed computing technologies, and software environments.

Computational Techniques

The DOE initiated projects on computational techniques applied to catalysis, drug design, and problems in chemistry and the materials sciences, in cooperation with private sector firms.

High Performance Computing Research Centers (HPCRC) and Advanced Computing Resources

Two HPCRCs were created in FY 1992, at Los Alamos National Laboratory (LANL) and at Oak Ridge National Laboratory (ORNL), involving industrial and university partnerships as well as cofunding from other DOE programs. These HPCRCs conduct research in all HPCC Program component areas; operate massively parallel high performance computing prototypes (a Thinking Machines Inc. CM-5 at LANL and an Intel Corp. Paragon at ORNL); perform computational investigations in global climate research, environmental groundwater transport modeling, and materials sciences calculations; and provide HPCS resources for the other computational research projects described above. The HPCRC at LANL is also a partner in an NSF-sponsored Science and Technology Consortium. The Serial #1 Kendall Square system was installed at the ORNL HPCRC, where it will be evaluated on energy applications.


Overview of the architecture of the National Storage Laboratory located at NERSC, in which different networks are used for control of the attached devices and for data transmission. This architecture overcomes the bottleneck in earlier hierarchical storage designs in which all of the data had to pass through the control computer. The technology will first be used in fusion energy and climate modeling applications. The NSL architecture is being extended in a new High Performance Storage System to support scalable parallel I/O and integration with massively parallel computers.


A Cray Research Inc. C-90 supercomputer system was installed in FY 1993 in the National Energy Research Supercomputer Center (NERSC) at Lawrence Livermore National Laboratory to provide high performance computer technology in a full production environment for DOE applications. The C-90 has 16 processors and is being used in a mode that encourages the use of parallel programming techniques.

National Storage Laboratory (NSL)

In concert with NASA, the NSL was established by a CRADA among six industrial firms and NERSC. This collaboration currently involves several DOE Laboratories, about a dozen U.S. storage vendors, and a university. The NSL will help serve the data storage needs of researchers at the DOE and at other Federal agencies and will help participating vendors develop new mass storage products.

The NSL will advance the state of the art in high performance data storage systems capable of storing terabytes of data. Such mass storage is required by applications such as the Grand Challenges and those requiring storage of large amounts of experimental data.
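Hierarchical storage systems of the kind the NSL advances keep recently used data on fast media and migrate cold data to slower, cheaper tiers. A toy sketch of such a migration policy follows; the two-tier model, class name, and least-recently-used rule are invented for illustration and say nothing about the NSL's actual design:

```python
import time

class HierarchicalStore:
    """Hypothetical two-tier hierarchy: a small fast 'disk' cache
    in front of an unbounded slow 'tape' archive."""
    def __init__(self, disk_capacity):
        self.disk_capacity = disk_capacity
        self.disk = {}   # name -> (data, last_access_time)
        self.tape = {}   # name -> data

    def write(self, name, data):
        self.disk[name] = (data, time.monotonic())
        self._migrate()

    def read(self, name):
        if name in self.disk:        # fast path: already on disk
            data, _ = self.disk[name]
        else:                        # stage the file back from tape
            data = self.tape.pop(name)
        self.disk[name] = (data, time.monotonic())
        self._migrate()
        return data

    def _migrate(self):
        # Demote least-recently-used files to tape when disk is full.
        while len(self.disk) > self.disk_capacity:
            lru = min(self.disk, key=lambda k: self.disk[k][1])
            data, _ = self.disk.pop(lru)
            self.tape[lru] = data

store = HierarchicalStore(disk_capacity=2)
store.write("a", b"climate run 1")
store.write("b", b"fusion run 1")
store.write("c", b"climate run 2")   # evicts "a" to the slow tier
print(sorted(store.tape))
```

The NSL's architectural contribution, per the caption above, is orthogonal to this policy: it moves the bulk data over a separate network so it need not pass through the control computer.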

Candidate IITA Activities

DOE brings expertise in information technologies to the task of developing the NII. DOE has developed communication networks and has applied computing technology to a broad range of applications and technical problems addressed in collaborations involving scientists and engineers located across the country as well as around the world. The many DOE CRADAs are evidence of industry's interest in working with the Department. DOE Laboratories have a long history of collaborating with research universities.

The early deployment of the NII is important to the successful implementation of the Energy Policy Act. The NII is critical to the Act's goals of:

- Managing energy supply and demand.

- Developing telecommuting to reduce energy usage and provide more efficient use of the Nation's human resources.

- Training technical personnel in the use of information technologies.

- Providing energy-related information to citizens.

- Supporting DOE initiatives for energy-efficient manufacturing and materials processing.

"Superkids," participants in the High School Science Students Honors Program, with the National Education Supercomputer at NERSC. Each summer this program brings one student from each state, the District of Columbia, Puerto Rico, American Samoa, and the DOD Dependents Schools, plus eight foreign students, to Lawrence Livermore National Laboratory.

In FY 1994, a candidate DOE activity is to begin developing prototype applications to demonstrate national-scale applications and use of the NII in selected energy-related areas. It would work closely with corresponding programs of other agencies in these efforts.

BRHR

DOE supports several educational programs, including the National Education Supercomputer Program (NESP) at NERSC; the Southwest Indian Polytechnic Institute program, in which 30 Native Americans participate; and workshops that train teachers in HPCC technology and its use and prepare them to conduct such workshops themselves. FY 1993 highlights include:

- The NESP completed its eighth High School Science Students Honors in Supercomputing program and an associated teachers' curriculum development workshop, and operated the National Education Supercomputer (NES), a Cray X-MP/18 donated by Cray Research, Inc. This program was recognized in a DOE Public Service Award to the NERSC Education Coordinator.

- A syllabus for teaching computational science at the graduate level has been developed and made available on the Internet. This work involved 24 authors from 10 disciplines. It incorporates graphics using Apple Macintosh and IBM RS/6000 technologies and allows the user to execute examples on Cray Research, IBM, Intel, Kendall Square, and Thinking Machines systems. The syllabus is maintained at Vanderbilt University.

- A Computational Science Graduate Fellowship Program was begun in FY 1992. The fellowships are awarded on one-year renewable terms to support full-time graduate study and thesis research in the U.S. The fellowships are in the applied science and engineering disciplines with applications in high performance computing. In FY 1993 the number of fellowships was increased to 42 at 35 universities and institutes. This program and the syllabus described above have been instrumental in defining the discipline of computational science.

Clockwise from lower left, volume renderings display the density of a bubble being hit by a shock wave. New adaptive gridding and fast, accurate, and robust numerical algorithms are used to simulate the behavior of such highly nonlinear fluid flows in complicated three-dimensional geometries. These techniques are implemented on the latest vector and massively parallel architectures. Applied mathematicians and computational scientists performed this research at Lawrence Livermore National Laboratory, Los Alamos National Laboratory, New York University, and the University of California at Berkeley.

Fundamental Research Programs

The Applied Mathematics program supports a broad range of activities at universities and DOE Laboratories in modeling, analysis, and numerical simulation of physical and biological phenomena that arise in energy and environmental systems. Most of the projects involve applications-driven studies of the mathematical and numerical tools required to understand the behavior of complex discrete and continuous systems, with an emphasis on algorithms for parallel computing. The program has cofunded with NSF the Geometry Science and Technology Center at the University of Minnesota to apply new HPCC technology to traditional mathematics problems. It has initiated new efforts in the complex nonlinear behavior that underlies most natural phenomena, and in graph and group theories related to discrete phenomena and topology, for application to genome sequencing and protein structure.

In computer science the focus is on understanding how parallel and distributed computer systems can be applied more effectively to large-scale scientific problems. Supported projects include research in programming models and tools, improved software libraries for parallel computers, scientific visualization of large data sets, software performance analysis techniques, and message-passing utilities to facilitate distributed computing.

FY 1994 Milestones

HPCS

Begin evaluating one or two additional early prototypesystems in cooperation with vendors.

Expand computer systems performance analysis project, including top-down algorithmic analysis of Grand Challenge codes at HPCRCs.


A single frame from an eight-month simulation for the Pacific Northwest using the Penn State/NCAR Mesoscale Meteorological (MM) model. The simulation was executed on the Intel Touchstone Delta at Caltech. MM model results are used in high resolution regional forecasts, allowing more efficient energy use and helping reduce storm damage. Argonne National Laboratory (ANL) and NSF-supported NCAR researchers used fast fine-grained massively parallel software and PCN, a parallel programming language developed at ANL and Caltech with support from the NSF Science and Technology Center for Parallel Computation.

Continue development of the high performance hierarchical data storage system at the NSL in NERSC.

NREN

Begin prototype project in telecommuting access to DOE experimental facilities.

Complete ESnet upgrades to 45 Mb/s at 20 sites.

Initiate ESnet upgrades to 144-622 Mb/s at selected sites.

Continue gigabit research projects, emphasizing distributed applications.

Implement production-quality packetized workstation-based video over ESnet and other Internet components.

ASTA

Review, evaluate, and adjust the program of Grand Challenge and computational research projects begun in 1992.

Implement fully three-dimensional algorithms based on adaptive mesh refinement techniques for computational fluid dynamics.
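Adaptive mesh refinement concentrates grid resolution where the solution changes rapidly instead of refining everywhere. A one-dimensional sketch of the flag-and-refine step follows; the gradient criterion, threshold, and midpoint insertion rule are illustrative assumptions, not the program's actual refinement scheme:

```python
def flag_for_refinement(u, dx, threshold):
    """Mark grid points where the solution gradient exceeds a
    threshold -- the first step of an adaptive refinement cycle."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        grad = abs(u[i + 1] - u[i - 1]) / (2 * dx)  # centered difference
        flags[i] = grad > threshold
    return flags

def refine(x, flags):
    """Insert a midpoint in every cell touching a flagged point,
    locally halving the mesh spacing there."""
    out = [x[0]]
    for i in range(1, len(x)):
        if flags[i] or flags[i - 1]:
            out.append(0.5 * (x[i - 1] + x[i]))
        out.append(x[i])
    return out

# A step profile: steep near the middle, flat elsewhere.
x = [i * 0.1 for i in range(11)]
u = [0.0] * 5 + [1.0] * 6
flags = flag_for_refinement(u, dx=0.1, threshold=1.0)
fine_x = refine(x, flags)
print(len(x), "->", len(fine_x))  # points are added only near the jump
```

In three dimensions the same idea applies per cell, with the added bookkeeping of nested grid patches and load balancing across parallel processors.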

Deliver prototypes of discipline-oriented computing environments for Grand Challenge applications.

Conduct Grand Challenge workshop together with other HPCC participants, emphasizing progress in software tools and computing environments.

Develop policies, standards, and software engineering methodologies to ease the transition of research tools into commercial products.

Develop software to integrate the high performance hierarchical data storage system at the NSL into advanced computing environments.

Provide detailed feedback to computer vendors on experience gained with prototype computers installed at HPCRCs.


Clockwise from the top, a slice of a three-dimensional model of an 8 kilometer/second impact on a planet the size of the Earth by a planet the size of Mars, at 0, 60, 120, and 200 seconds. The gravity of the larger planet later traps mantle material and forms a moon. Using multidimensional shock physics software developed in a collaboration among several DOE Laboratories and others under DOE and ARPA sponsorship, this is an example of materials modeling under extreme conditions. The same software is used to model other high velocity impact phenomena, including explosives, resulting in fewer experiments, reduced cost, and increased safety.

IITA Candidates

Publish white paper reviewing the applicability of the NII to telecommuting and defining the enabling technologies required for, and the barriers to, widespread implementation of telecommuting and other forms of telework.

Evaluate scope of DOE-generated information for inclusion in NII digital libraries.

Conduct cooperative evaluation of the potential for energy supply and demand management using the NII.

Begin collaborative development of NII technologies for applications testbeds.

BRHR

Refine and make widely available the graduate-level computational science electronic textbook.

Initiate college-level computational science electronic textbook.

Review and evaluate ongoing high school education programs and associated teacher workshops.

Initiate junior high school educational projects.

Rebalance applied math research program based on comprehensive review conducted in 1993.


National Aeronautics and Space Administration (NASA)

Improved design and simulation of aerospace vehicles and increased ability of scientists to model the Earth's climate and to forecast global environmental trends are NASA's strategic goals within the HPCC Program. These goals also include broadening the information applications of HPCC in areas complementary to NASA's expertise and in areas important to the development of the National Information Infrastructure. To reach these goals, NASA participates in the development and use of the advanced high performance computing and communications systems and tools piloted in each of the five components of the HPCC Program.

Simulation of the three-dimensional distribution of the Earth's ozone layer. Latitude and longitude slices illustrate the varying ozone content.


HPCS and ASTA: The Grand Challenge in Aeronautics

Improved design and simulation of advanced aerospace vehicles at reduced cost will enable the U.S. to enhance its leadership in aerospace trade, especially as future aerospace vehicles become more difficult and expensive to simulate. In the HPCS component, NASA procures and evaluates prototype scalable parallel computing systems used to develop advanced algorithms and software tools. This systematic approach is important for developing improved methodologies and software tools to model aerodynamics, aerobraking, heat transfer, combustion, and other engine elements through the full flight envelope of the aerospace vehicle (from vehicle takeoff, to flight at different altitudes and conditions, to landing).

HPCS and ASTA: The Grand Challenges in the Earth's Environment

NASA's Earth and Space Sciences (ESS) project is developing multidisciplinary models of physical, chemical, and biological phenomena that will lead to more accurate environmental simulations. Challenges range from analysis of the interactions among Earth's atmosphere, oceans, and land masses to the reconstruction of planetary evolution. A crucial aspect of this program is the development of software management systems to handle the volumes of scientific data that will be produced late in this decade; multiple terabytes of data will be transmitted and stored each day, relying on the high performance data management, storage, and communications systems developed through the HPCC Program.


The NASA Aeronautics Network (AERONet) provides computer networking facilities to the aerospace community. NASA centers are linked via high speed communications lines, and lower-speed tail circuits connect other members of the aerospace community to NASA. In addition, AERONet provides Internet access and interconnection with many other networks, thereby increasing national and international network connectivity.

NASA Science Internet


The NASA Science Internet (NSINet) provides connectivity among NASA, industry, and academic researchers to facilitate collaboration in Earth and space science research. This network builds on the Internet and includes technology to support the evolution of the Internet.

NREN and BRHR: Support for Grand Challenge Teams

NASA has established Grand Challenge applications interdisciplinary teams in Computational Aerosciences (CAS) and Earth and Space Sciences (ESS). The CAS teams are examining coupled aerodynamics, structures/materials, controls, and combustion modeling for high-speed civil transport, high-performance aircraft, subsonic aircraft, and rotorcraft. Through a NASA Research Announcement, ESS teams have been established in Earth system science, space and solar-terrestrial physics, astronomy and astrophysics, biochemical life cycles, planetary evolutionary processes, and massive data analysis.

The CAS teams use the NASA Aeronautics Network (AERONet) while the ESS teams use the NASA Science Internet (NSINet) for high speed network connectivity among NASA, industry, and academic researchers (see map). In the future, these high performance network highways will be constructed from fast-packet network technology. NASA supports the architecture evolving in the NREN component and participates in the development and deployment of gigabit network technologies and architectures.

Pilot programs with the K-12 educational community have been established to discover the best mechanisms for using space and aeronautics assets to support educational communities in the U.S., and to provide practical models for using sophisticated computational and networking resources in these communities. This is to be accomplished in collaboration with Grand Challenge scientists from the CAS and ESS projects, thereby achieving direct involvement between NASA scientists and the K-12 community.

ASTA: Software Sharing

As lead agency in ASTA's software sharing activities, NASA coordinates the collection of and access to the software base developed in the HPCC Program. The goal of the HPCC Software Exchange is to facilitate the exchange and reuse of software. Its specific objectives are to 1) develop and demonstrate distributed architectures and enabling technologies that support software exchange, 2) implement an initial distributed HPCC Software Exchange that satisfies the needs of the HPCC Program, and 3) specify an open non-proprietary architecture that will facilitate the emergence of a national software exchange.

Simulated temperature profile for a three-dimensional hypersonic re-entry body traveling at an altitude of 100 kilometers, a speed of Mach 24, and an angle of attack of 10 degrees. Performed on the Intel Touchstone Delta at CSCC, the simulation employed 70 million particles and all 512 of the Delta's processors running for 50 minutes.

IITA: Toward a National Information Infrastructure

NASA is committed to broadening its participation in key areas of research and development of prototype solutions to National Challenge applications. NASA is selectively augmenting on-going programs that can contribute to an early start-up of these prototype systems in applications such as education and lifelong learning, digital library technology, manufacturing and industrial design, and access to widely available databases of Earth and space science data. NASA participates in each of the four IITA elements.

BRHR: Fostering High Performance Computing Research

NASA's BRHR component fosters research into new theory and application of high performance computing. These activities will leverage current research efforts in high performance computing at NASA research institutes and universities.

Major FY 1993 Accomplishments and Activities

Concurrent SuperComputing Consortium (CSCC)

NASA played an active role in the Consortium during its first year of operation. Use of the Consortium's Intel Touchstone Delta supercomputer has enabled major advances in several scientific and engineering applications, including the processing of three-dimensional images of Venus taken by the Magellan satellite at the rate of four frames per second, thus putting real-time image processing within reach. Other accomplishments on the Delta include the largest direct numerical simulation of the time-dependent compressible Navier-Stokes equations and the largest three-dimensional compressible turbulence simulations for high Reynolds numbers.


The surface pressure distribution on a High Speed Civil Transport aircraft is simulated using an Intel iPSC/860 high performance computer with 32 processors. Areas of highest pressure are in red; areas of lowest pressure are dark violet.

High Performance Networking

NASA continues to develop and deploy advanced networking technologies to allow researchers and educators to carry out collaborative research and educational activities. NASA has entered into a cooperative effort with DOE to procure network services that will operate at 45 Mb/s using synchronous optical fiber network standards. The two agencies are working closely to provide a nationwide fiber optic network that will meet HPCC research communications needs and serve as the foundation for even faster networks ranging from 155 Mb/s to eventual gigabit speeds.

Software on Parallel Computers

NASA's Ames Research Center has ported single-discipline computational fluid dynamics codes to a number of scalable parallel computers. The objective is to perform multidisciplinary optimization of a High Speed Civil Transport vehicle and the takeoff and landing of a simple powered lift vehicle (described on pages 126-128 in the Case Studies section). The optimization will consider aerodynamic efficiency, structural weight, and propulsion system performance. The multidisciplinary analysis will be performed by solving the governing equations for each discipline concurrently on a parallel computer. Developing scalable algorithms for the solution of this problem will also be central to demonstrating the potential for teraflops execution speed on a massively parallel computer.
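One common way to organize a coupled multidisciplinary analysis like this is a fixed-point iteration in which each discipline solver repeatedly updates its unknowns from the other's latest output until the shared interface values settle. A toy two-discipline sketch of that coupling loop (the linear "aerodynamics" and "structures" updates are invented stand-ins, not NASA's solvers):

```python
def coupled_solve(aero_update, struct_update, tol=1e-10, max_iter=100):
    """Block fixed-point (Gauss-Seidel-style) coupling: alternate the
    two discipline solvers until the interface quantities converge."""
    load, deflection = 0.0, 0.0
    for _ in range(max_iter):
        new_load = aero_update(deflection)        # aero sees the current shape
        new_deflection = struct_update(new_load)  # structure sees the new load
        if abs(new_deflection - deflection) < tol:
            return new_load, new_deflection
        load, deflection = new_load, new_deflection
    raise RuntimeError("coupling iteration did not converge")

# Invented linear stand-ins for the discipline models:
aero = lambda deflection: 1.0 + 0.3 * deflection  # load grows with deflection
struct = lambda load: 0.5 * load                  # deflection proportional to load

load, deflection = coupled_solve(aero, struct)
print(round(load, 6), round(deflection, 6))
```

On a parallel machine, each discipline's update is itself a large solve distributed over processors, and the iteration converges here because the combined update is a contraction; stiffer couplings need relaxation or a fully simultaneous solve.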

High Speed Civil Transport (HSCT)

NASA's Langley Research Center is working to improve the processes for the design and analysis of the HSCT using advanced computational fluid dynamics and structural analyses. A set of highly optimized software tools has been created that can be used to implement irregular computations on massively parallel machines. These tools can be used both manually by users and by distributed memory compilers to automatically parallelize irregular codes.

Benchmarking Multidisciplinary Codes on Parallel Supercomputers

Computational fluid dynamics reveals the interaction between the freestream airflow and engine exhaust of a YAV-8B Harrier Vertical Takeoff and Landing (VTOL) aircraft. The simulation shows how hot gas from the nozzles is ingested by the engines, reducing their effectiveness.

NASA's Ames Research Center has developed and demonstrated the ENSAERO computer algorithm, which couples aerodynamics and structures analysis modules on the Intel iPSC/860 massively parallel computer.

Results involve the coupling of a low-fidelity structures code to a high-fidelity aerodynamics analysis routine. This multidisciplinary research, in conjunction with the increased performance offered by massively parallel computers, will enhance the ability of aircraft manufacturers to quickly analyze different design options and accelerate the prototyping process, thereby reducing design cycle costs while producing vehicles with improved performance.

Computational Aerosciences Consortium

This consortium of U.S. industrial firms works with NASA to facilitate the development of precompetitive software for the implementation of integrated multidisciplinary design, analysis, and optimization systems on heterogeneous computer networks. Incorporation of these systems into the product development process can increase U.S. competitiveness through reduced design cycle time and life cycle costs and increased quality of technology-driven products. Consortium members include:

- Allison Gas Turbines
- Boeing Helicopter
- Ford
- General Electric
- General Motors
- Grumman
- McDonnell Douglas
- Northrop
- Rockwell
- United Technologies
- Vought

High Performance Computing Software Exchange

The inherent size and complexity of HPCC problems willincrease the cost and difficulty of developing and main-taining robust applications software. The HPCCSoftware Exchange Experiment encourages the sharingand reuse of software modules by providing an infras-tructure of interconnected software repositories. TheHPCC Software Exchange Experiment System repre-sents the first time that a number of different agencies

with different philosophies for software management have collaborated to develop an integrated, albeit experimental, system.

The second phase of development, the HPCC Software Exchange Prototype System, was begun in FY 1993. This phase includes the development of 1) an HPCC Software Standards database with initial software submission standards, 2) an HPCC Repository Directory with additional software repositories, 3) an HPCC Union Catalogue populated with mathematical and statistical software packages and programs, 4) a Mathematical and Statistical Software Cross-index, and 5) a Unix-to-Unix client/server system for uniform repository access.

FY 1994 Milestones

HPCS

Install 20- to 50-gigaops testbed (scalable to 100 gigaops) for Computational Aerosciences (CAS) project Grand Challenge teams.

NREN

Complete T3 (45 Mb/s) Level 3 interconnects to NASA HPCC Centers and T3 Level 2 interconnects among NASA HPCC Centers.

BRHR

Evaluate initial K-12 digital educational material.

National Institutes of Health (NIH)

The National Institutes of Health (NIH) contribution to the HPCC Program focuses on biomedical applications of computing and digital communications. Four NIH units have HPCC programs:

NLM National Library of Medicine

NCRR National Center for Research Resources

DCRT Division of Computer Research and Technology

NCI National Cancer Institute

A cross-section of the three-dimensional reconstruction of a herpes simplex virus capsid is shown in the inset with the monoclonal antibody fragments bound to the protruding tips of hexons color-coded red and the original electron micrograph shown in the background.

NIH participates in each of the five components of the overall Federal HPCC Program, as follows:

HPCS: Evaluation for Biomedical Applications

Through DCRT, NCI, and NCRR, NIH applies and evaluates scalable parallel computing systems to problems of biomedical significance. This includes adapting existing molecular analysis algorithms to new computing architectures, and developing entirely new approaches that take advantage of computational parallelism.

NREN: Increasing Access and Transmitting Biomedical Images

DCRT and NLM have complementary investments in extending gigabit speed communications for high data volume scientific applications, as well as lower speed connections for a broad community of research and education institutions.

DCRT's efforts focus on high speed networking to support the intramural research program of the NIH; NLM serves as a national resource, providing support for medical centers to connect to the Internet, and developing prototype biomedical digital image libraries that use the Internet as a high speed distribution channel. An Intelligent Gateways project, co-sponsored by NLM and NSF, is developing methods to link dissimilar databanks over the Internet, using automated "Knowbots" (Knowledge Robots).

ASTA: Molecular Biology and Biomedical Imagery

Each of the four NIH organizational units is developing algorithms and software for advanced, high performance computing environments. The two major themes of this work are molecular biology and biomedical imaging. Molecular biology computing includes comparison of genetic and protein sequences, and early development of algorithms to predict molecular structure and function. Biomedical imaging includes imaging of molecules and new methods for correlating and displaying clinical images in three dimensions.

IITA: Applications for Improved Health Care Delivery

In FY 1994 NIH will support the development of HPCC technologies for health care, with particular emphasis on the following applications:

- Testbed networks for linking hospitals, clinics, doctors' offices, medical schools, medical libraries, and universities to enable health care providers and researchers to share medical data and imagery.

- Software and visualization technology for visualizing the human anatomy and analyzing imagery from X-rays, CAT scans, PET scans, and other diagnostic tools.

- Virtual reality technology for simulating operations and other medical procedures.

- Collaborative technology to allow several health care providers in remote locations to provide real-time treatment to patients.

- Database technology to provide health care providers with access to relevant medical information and literature.

- Database technology for storing, accessing, and transmitting patients' medical records while protecting the accuracy and privacy of those records.

Two to five research and development projects in each of these areas will be supported via an NIH Broad Agency Announcement contract mechanism issued in FY 1993.

The myoglobin protein with a hydration of 350 water molecules.

BRHR: Medical Informatics Fellowships and Biomedical Science Education

NIH provides training and an ongoing investment in fundamental technology development. NCRR and NLM both sponsor formal degree-granting fellowship training in medical informatics, and cross-disciplinary training of established investigators in the use of advanced computing systems and methods. NCRR also supports a pilot project to educate high school science teachers and their students about new computing technologies for biomedical science.

FY 1993 Accomplishments

HPCS: Evaluation of Parallel Systems

DCRT evaluated a 128-processor Intel Touchstone Gamma Prototype Parallel Computer and associated system software on biomedical problems in structural biology, computational chemistry, image processing, and genetic database searching.

DCRT designed and implemented remote file access and communication systems software for this computer.

NCRR funded a Kendall Square Research KSR-1 computer jointly with NSF and ARPA at the Cornell Theory Center, and is evaluating its efficiency and effectiveness for biological problems.

NREN: Medical Connections Program

NLM initiated a Medical Connections program in collaboration with NSF, to provide support for academic medical centers to connect to the Internet. In addition, NLM has provided support for a pilot project to connect rural hospitals in the Northwest to the Internet. This project is coordinated by the University of Washington Health Sciences Center Library, a Regional Medical Library in the National Network of Libraries of Medicine.

NREN: Access to X-ray Images

NLM began developing a system to provide Internet-mediated access to an electronic archive of 20,000 digital X-ray images for medical research purposes.

High performance systems under evaluation for biomedical applications

Location                                     Vendor                    Machine(s)
Cornell Theory Center                        Kendall Square Research   KSR-1 (64 nodes)
Cornell Theory Center                        IBM                       SP1
Pittsburgh Supercomputer Center              Cray                      C90 & T3D
University of Illinois at Urbana-Champaign   Thinking Machines         CM200
University of Illinois at Urbana-Champaign
  and Columbia University                    Thinking Machines         CM5
Columbia University                          Convex                    C210

NREN: UMLS Research

NLM created and published on CD-ROM a prototype Metathesaurus, Semantic Network, and Information Sources Map as part of the Unified Medical Language System research effort to link together computer-based biomedical information resources. These data files support experimentation on advanced systems that translate user information requests into multiple access vocabularies.
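The core idea behind this translation step can be illustrated with a drastically simplified sketch: many surface terms map to one underlying concept, which in turn links to the preferred term in each source vocabulary. All terms, concept IDs, and vocabulary names below are invented for illustration; they are not actual UMLS content.

```python
# Toy model of concept-based vocabulary translation. A synonym table maps
# a user's phrasing to a concept ID; the concept then expands to each
# access vocabulary's own term. (Hypothetical data, not real UMLS files.)

SYNONYMS = {
    "heart attack": "C0001",
    "myocardial infarction": "C0001",
    "mi": "C0001",
    "high blood pressure": "C0002",
    "hypertension": "C0002",
}

SOURCES = {
    "C0001": {"MeSH": "Myocardial Infarction", "ICD": "410"},
    "C0002": {"MeSH": "Hypertension", "ICD": "401"},
}

def translate(user_term):
    """Map one user phrasing onto every known access vocabulary."""
    concept = SYNONYMS.get(user_term.lower())
    if concept is None:
        return {}
    return SOURCES[concept]

# Different phrasings reach the same concept, hence the same translations.
print(translate("Heart Attack"))
print(translate("MI"))
```

The real Metathesaurus is vastly larger and richer (semantic types, inter-concept relationships), but the lookup-then-expand structure is the essential mechanism this paragraph describes.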

NREN: Digital Anatomy Demonstration Project

NLM initiated a demonstration project that will create a digital anatomy workstation that allows a health professions student to browse three-dimensional images generated by a specialized graphics computer at a distant site. The prototype links the NLM's Learning Center for Interactive Technology in Bethesda to the University of Washington at Seattle via the Internet.

NREN: Access to Cancer Treatment Guidelines

NCI implemented remote file access to treatment guidelines from its PDQ (Physician Data Query) database via the Internet.

ASTA: Parallel Methods for Biomedical Applications

NCRR-supported High Performance Computing Resource Centers are evaluating high performance systems for biomedical research applications (see sidebar). Investigators at these Centers have developed applications including:

- Modeling the molecular dynamics of proteins in membranes, in order to enable the study of drug permeability across membranes. This requires simulating approximately 10,000 atoms in a membrane patch.

- Simulating the complex interactions between DNA and protein (specifically Eco RI, a restriction enzyme). This work won the 1993 Computerworld Smithsonian Award.

Calculation of the binding of small molecules to DNA and other biological macromolecules depends upon a complex interaction of electrostatic fields, solvent-accessible surface, and other physicochemical properties. Structure-based drug design is a key biomedical HPCC Grand Challenge.

- Developing a "dynamical model" for the structure of proteins essential for the replication of the human immunodeficiency virus 1 (HIV-1), which may help in generating inhibitors of the protein.

- Determining the three-dimensional structure of proteins by developing algorithms that model how proteins fold based on their amino acid sequences.

- Modeling the ability of molecules to bind to target receptors in solution by calculating their "solvation energies." This is a crucial component of rational drug design.

DCRT developed parallel methods for the following biomedical applications:

- The three-dimensional reconstruction of herpes virus images from two-dimensional projections of the virus obtained from electron micrographs.

- The calculation of the solvent accessible surface area of proteins that is used to predict the conformation of these molecules.

- The generation of primary sequence patterns from sets of related protein sequences contained in a database.

- The within-subject registration of PET (Positron Emission Tomography) images for the correction of roll, yaw, pitch, and translation.

- The processing of NMR (Nuclear Magnetic Resonance) spectroscopy data using the Maximum Entropy Method for determining the structure of proteins.

- Ab initio quantum mechanical calculations for molecules of biological interest.

- Molecular dynamics calculations with a parallel implementation of CHARMM, a molecular modeling program that takes atomic coordinates of molecules and renders graphical images of those molecules that can be manipulated in various ways for experimental chemistry.

Two-dimensional clinical images from computed tomography and magnetic resonance imaging underpin modern medical diagnosis. Transmission of these diagnostic images over wide area networks and their reconstruction to form three-dimensional views are important health care applications of HPCC technologies.

- Human genetic linkage analysis to determine the likely position of a disease gene using LINKAGE, a gene linkage analysis program that calculates the probability of the association between a pattern of gene inheritance and a disease condition. It is used to analyze family pedigrees based on data obtained from gene probes.
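The probability calculation at the heart of linkage analysis can be sketched in drastically simplified form. This is the textbook two-point LOD score (base-10 log of the likelihood of the observed recombinants at a given recombination fraction, relative to free recombination), not the LINKAGE program itself, which handles whole pedigrees with far more general models.

```python
import math

def lod_score(recombinants, meioses, theta):
    """Two-point LOD score for `recombinants` recombinant offspring out of
    `meioses` informative meioses, at recombination fraction theta,
    compared against free recombination (theta = 0.5). A simplified
    illustration of the likelihood arithmetic behind linkage programs."""
    nonrecombinants = meioses - recombinants
    likelihood = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
    null = 0.5 ** meioses          # every meiosis 50:50 under no linkage
    return math.log10(likelihood / null)

# 2 recombinants in 20 meioses: a LOD above 3 is the conventional
# threshold for declaring linkage.
print(round(lod_score(2, 20, 0.1), 2))  # → 3.2
```

A positive score means the data are more likely under linkage at the given theta than under independent inheritance; pedigree programs maximize this over theta and multiply likelihoods across families.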

ASTA: "Visible Human"

NLM initiated a two-year project to acquire the three-dimensional digital representation of entire human beings at millimeter-level resolution, derived from computed tomography, magnetic resonance imaging, and digitized cryosections. This "Visible Human" research data set will become available nationally via the Internet in 1994.

ASTA: Prototype Program for Retrieving Molecular Biology Information

NLM created a prototype advanced molecular biology information retrieval program that provides integrated access to genetic and protein molecular sequences, and the biomedical literature linked to those sequences. Field testing of the system has begun.

ASTA: Faster Molecular Analysis and Imaging Algorithms

NCRR and NLM achieved order-of-magnitude speedups in several existing molecular analysis algorithms.

DCRT and NCRR developed new algorithms for registration and rendering of three-dimensional images from two-dimensional clinical images and micrographs.

ASTA: HIV Research

Research conducted at NCI's Biomedical Supercomputer Center is increasing the understanding of the human immunodeficiency virus (HIV) that causes AIDS and is helping to design and develop new drugs to combat the deadly disease. NCI researchers have successfully predicted the secondary structure of the entire 9,000-unit HIV virus RNA. NCI and NCRR supercomputing applications have assisted in the design of new drugs to inhibit HIV replication.

Digital radiology techniques require high speed networks: a single X-ray film represented by a 2K-by-2K-by-10-bit gray scale generates a 4-megabyte image.
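The arithmetic behind that figure can be sketched quickly: a 2K-by-2K grid is about 4.2 million pixels (consistent with the caption's 4-megabyte figure at roughly one byte per pixel), while packing the full 10 bits per pixel gives closer to 5 MB. The transfer-time estimate below uses the 45 Mb/s T3 rate mentioned elsewhere in this report purely as an illustrative link speed.

```python
# Back-of-the-envelope size of one digitized X-ray film: a 2K-by-2K grid
# of 10-bit gray-scale pixels, as described in the caption above.
width = height = 2048
bits_per_pixel = 10

pixels = width * height                      # about 4.2 million pixels
packed_bytes = pixels * bits_per_pixel // 8  # 10 bits packed tightly
padded_bytes = pixels * 2                    # common 16-bit-per-pixel storage

print(f"{pixels / 1e6:.1f} Mpixels")
print(f"{packed_bytes / 1e6:.1f} MB packed")
print(f"{padded_bytes / 1e6:.1f} MB at 2 bytes/pixel")

# Moving the packed image over a 45 Mb/s (T3) link takes about a second,
# which is why routine digital radiology needs high speed networks.
seconds = packed_bytes * 8 / 45e6
print(f"{seconds:.2f} s over a 45 Mb/s link")
```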

BRHR: Medical Informatics Grants and Other Training

NLM competed the award of 10 Medical Informatics Training Grant programs at academic medical centers. The program supports cross-disciplinary training of health professionals in the use of advanced computing technologies.

NCRR conducted a pilot project to introduce scientific computing methods to high school science teachers and their students.

DCRT and NCRR sponsored "hands on" training of biomedical researchers in the use of new computational biology tools at NSF Supercomputer Centers and on the NIH campus in Bethesda.

BRHR: Basic Research Through Long Distance Microscopy

In the first demonstration of its kind, scientists at a workstation in Chicago viewed high-resolution images of nerve cells in a high-voltage electron microscope located 1,700 miles away at the San Diego Microscopy and Imaging Resource (SDMIR), which is supported by NCRR. Further details are given on pages 123-125 in the Case Studies section.

FY 1994 Milestones

NIH will accelerate the pace of molecular and genetic discovery by enabling the solution of currently intractable problems in molecular structure prediction, drug design, and human genome database analysis.

The program will apply and evaluate new computer architectures to key problems of human health and disease, in a manner that gives early feedback to computer designers on the strengths and limitations of their systems for medical applications. The HPCC Program will rapidly build an electronic community among life science researchers by connecting academic medical centers to the Internet. It will create prototype medical imaging applications that use the Internet and provide a model for distance-independent medical consultation. It will double the pool of computationally trained investigators in biomedicine.

X-ray diffraction spectroscopy is a laboratory method for determining the folded structure of proteins and other biological macromolecules. Parallel computing systems can be used to automate interpretation of X-ray diffraction patterns acquired via two-dimensional array sensors.

HPCS: Evaluation of Parallel Systems

DCRT will obtain a next generation parallel computer that will allow the solution of new computationally intensive problems in biomedicine.

NCRR will assess the efficiency and scalability of emerging massively parallel architectures for Grand Challenge problems.

NREN: More Connections to the Internet

NLM will establish Internet connections to 70 to 100 additional medical centers.

NREN: Digital Anatomy Databases

NLM will create advanced three-dimensional imaging databases for digital anatomy, accessible on the Internet by the nation's health professions schools. Workstations to access and display those images will be developed.

NREN: "Knowbots" for Natural Language Queries

An operational Knowbot-based database retrieval system will be deployed at NLM to provide integrated access to over 50 computerized knowledge sources in biomedicine, underpinned by a fully operational Unified Medical Language System that allows users to state scientific questions in their own language, and have the answer retrieved and synthesized automatically from multiple databases at multiple sites on the Internet.

ASTA: Software Development and Hardware Placement

NCI will expand its advanced software development program to allow research on a broader range of molecular dynamics, structure-function problems, and structure-assisted drug design.

NLM will develop and deploy advanced molecular biology workstations to approximately 1,000 molecular biology laboratories.

Through funding of five biomedical High Performance Computing Resource Center (HPCRC) programs at

NSF- and ARPA-sponsored high performance computing centers, NCRR will support development of algorithms to compare molecular sequences, predict molecular structure from genetic and protein sequences, simulate protein folding, and model complex biological systems such as proteins interacting with membranes in an aqueous solution. Computed images of biological structure, from molecular to whole body, will be an additional area of ASTA software development.

IITA: Health-Related Research and Development

NIH will support the development of HPCC technologies through a Broad Agency Announcement to support two to five research and development projects in each of the six health-related areas listed above.

NCI will provide Xconf, a prototype multimedia (including medical images) group conferencing tool that uses the Internet. Contingent on funding, NCI will 1) connect to the Internet both medical research centers (to facilitate multicenter environmental epidemiology studies) and the Cancer Information Service offices at its cancer centers, and 2) begin developing telemammography over high speed networks.

BRHR: New and Upgraded Centers and Additional Fellowships

In order to meet the needs of biomedical researchers, NCRR will establish at least one additional high performance computing resource center and upgrade the five existing centers. Enhanced cross training of biomedical research scientists will also be possible.

NLM will fund an additional 50 medical informatics training fellowships nationwide.

National Security Agency (NSA)

The goal of NSA's HPCC Program is to accelerate the development and application of the highest performance computing and communications technologies to meet national security requirements and to contribute to collective progress in the Federal HPCC Program.

By integrating processing directly into otherwise standard memory chips fabricated at its Special Processing Laboratory, the Supercomputing Research Center has developed the Terasys workstation, which outperforms one Cray Y-MP processor by 5 to 48 times on a set of nine NSA applications. A mature software environment, available for the workstation, makes the system easy to use.

In support of this goal, NSA:

- Develops algorithms and architectural simulators and testbeds that contribute to a balanced environment of workstations, vector supercomputers, massively parallel computer architectures, and high speed networks.

- Sponsors and participates in basic and applied research and development of gigabit networking technology.

- Develops network security and information security techniques and testbeds appropriate for high speed in-house networking and for interconnection with public networks.

- Develops software and hardware technology for highly parallel architectures scalable to sustained teraops performance.

- Investigates or develops new technologies in materials science, superconductivity, ultra-high-speed switching and interconnection techniques, networking, and mass storage systems fundamental to increased performance objectives of high performance computing programs.

NSA participates in all five components of the HPCC Program as follows.

HPCS: Heterogeneous High Performance Computing; Balanced Architectures

NSA deploys experimental scalable computer capabilities, emphasizing interoperability of massively parallel machines in a highly heterogeneous environment of workstations, vector supercomputers, and mass storage

Time improvement using multiple processors vs. a single processor for a parallel eigensolver algorithm for large dense symmetric matrices, which occur frequently in models of physical phenomena. A parallel algorithm "scales well" when it effectively uses almost all available resources as the problem size and the number of processors increase. The achievable performance improvement is proportional to the increase in the number of processors. Parallel algorithm design and implementation is an iterative process with the goal of achieving the maximum performance possible. SRC research continues seeking ever-faster eigensolvers. These data are from the Intel Touchstone Delta at Caltech. Results are being collected for the IBM SP1, the Thinking Machines CM5, and the Intel Paragon.

devices. NSA's HPCS program also emphasizes an open systems approach to development and maintenance of the HPCC environment and the integration of specialized high speed hardware within this environment.
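The "scales well" criterion in the eigensolver caption above reduces to two standard quantities, speedup and parallel efficiency. A minimal sketch, with timings invented purely for illustration (they are not the Touchstone Delta measurements):

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the single-processor run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Fraction of the ideal p-fold speedup actually achieved; an algorithm
    'scales well' when this stays near 1.0 as the processor count grows."""
    return speedup(t_serial, t_parallel) / processors

# Invented timings (seconds) for a fixed-size problem on 1..64 processors.
timings = {1: 512.0, 4: 136.0, 16: 38.0, 64: 13.0}

for p, t in timings.items():
    print(f"p={p:3d}  speedup={speedup(timings[1], t):6.2f}  "
          f"efficiency={efficiency(timings[1], t, p):.2f}")
```

In this made-up run efficiency falls from 1.00 toward 0.62 as processors are added, the typical fixed-problem-size pattern; scaling the problem up with the machine (as the caption notes) keeps efficiency higher.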

NREN: Network and Security Technology

NSA uses the Internet to provide high speed network connection among NSA, industry, and academic researchers. NSA takes the lead in developing network security and other information system security technology and products for high speed networks. It establishes high speed network testbeds to explore network and security interface technology issues.

ASTA: High Performance Systems Software, Tools, and Algorithm Research

NSA develops systems software to enhance productivity of users of high performance systems. This software includes compilers, simulators, performance monitoring tools, software that is portable among a variety of high performance systems, software for distributing jobs across a network, and visualization tools. New algorithms are developed to map problems to high performance architectures.

IITA: Development of Dual-Use Technology

NSA proposes to investigate developmental technologies to support information infrastructure applications in manufacturing, education, public health, and digital libraries. Research and development in transaction processing, database management systems, and digital data storage and retrieval systems are expected to make strong dual-use technology contributions to both the Federal and the IITA communities.

BRHR: Fundamental High Performance System Research and Education

NSA supports basic research into new technologies, theory and concepts of high performance computing and networking, and promotes research in high performance computing at the Institute for Defense Analyses' (IDA) Supercomputing Research Center (SRC) and at universities. NSA's National Cryptologic School and the SRC develop courses suitable for users of high performance computing environments.

Memory traffic flow from CPU memory ports through a memory arbitration network, and back to CPU input ports. Each colored rectangle represents one physical queue organized in groups. The colors represent the queue state: for example an empty queue is white and a full queue is red. The animation of this display shows memory message traffic through the memory network. Its purpose is to point out possible clogs and busy spots within the network.

Management

The overall coordination of NSA's HPCC activities resides in the office of the NSA Chief Scientist, who reports to the Director of NSA. Responsibilities for execution of major elements of the plan lie with the Technology and Systems Directorate and the Information Systems Security Directorate. NSA sponsors the SRC in exploring massively parallel computer architectures and in developing algorithms and systems software for parallel and distributed systems. The SRC heavily supports NSA's HPCC activities directly and by collaborative efforts with other HPCC participants, especially industry and academia.

FY 1994 Plans

HPCS

Extend existing high performance system simulators and testbeds at NSA and the SRC.

Develop mass archival storage of 10^15 to 10^16 bits.

Define techniques for the interoperability of distributed operating systems spanning large sets of heterogeneous computer assets, including supercomputers.

Integrate major new vector/scalar massively parallel processor as a research system.

Demonstrate a terabit/second (10^12 bit-operations-per-second) deskside SIMD system.

Continue technical cooperation with major vector/massively parallel processor developers.

Investigate, cooperatively with industry, processing-in-memory (PIM) technology in established systems architectures.

Front and rear views of High Speed Network (HNET) Testbed.

These 64 Unix processors attached to 64 custom-chip switch nodes, each with seven ports, serve as a high speed network routing protocol testbed. The system can be configured to be an Asynchronous Transfer Mode (ATM) switch, for example, to allow experimentation with the efficiency of currently evolving commercial offerings.

NREN

Install in-house gigabit network testbed to explore high speed network architectures and techniques.

Explore network security issues for 622 Mb/s and 2.4 Gb/s networks.

Install a gigabit network testbed to explore compatibility of DOD networks with vendor-provided public networks.

Initiate development of a bulk link encryptor for high speed networks.

Develop proof of concept for cell encryption in very high speed switched network products.

ASTA

Develop parallel extensions to high-level programming languages suitable for parallel architectures.

Develop visualization techniques for performance analysis of parallel and vector high-performance architectures.

Develop system modeling tools for heterogeneous computing environments, including high performance systems.

Develop algorithms and implementations for solving eigensystems and manipulating sparse matrices on parallel systems.

Develop evaluation testbed for exploring routing algorithms and topologies for computer interconnects.
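The sparse-matrix work mentioned above builds on representations that store only nonzero entries; a standard example is the compressed sparse row (CSR) matrix-vector multiply, sketched below in serial form as an illustration of the data structure (not NSA's implementation).

```python
# Compressed sparse row (CSR) matrix-vector multiply, the basic kernel
# underlying much sparse-matrix work on parallel machines. Only nonzero
# entries are stored; the outer loop over rows is what a parallel
# implementation would distribute across processors, since rows are
# independent of one another.

def csr_matvec(row_ptr, col_idx, values, x):
    """Compute y = A @ x for an n-row matrix stored in CSR form:
    row_ptr[i]:row_ptr[i+1] slices out row i's nonzeros."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):                              # each row independent
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[2, 0, 1],
#                 [0, 3, 0],
#                 [4, 0, 5]] in CSR form:
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]

print(csr_matvec(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```

Eigensolvers for sparse systems typically iterate this kernel many times, which is why both the storage layout and its parallel distribution matter so much for performance.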

IITA Candidates

Study applicability of heterogeneous database technology program to information infrastructure application needs.

Investigate transferability of digital library technology to the private sector.

Research public sector security technology issues unique to the IITA as well as dual-use technologies

applicable to both national security and public sector communities.

NSA has used parallel processing to greatly accelerate workstation performance. Designed and fabricated at the SRC, this board employs 32 Field Programmable Gate Arrays which are custom programmed for each application. This customization and parallelization yields speedups of 100 to 1,000 times the already impressive workstation performance. (Field Programmable Gate Arrays are also described on pages 154-155 in the Case Studies section.)

BRHR

Initiate collaborative and university research efforts in performance modeling of high performance computing systems for generic problem domains.

Explore innovative parallel computer and network architectures.

Investigate new technologies in materials science, superconductivity, optoelectronics, ultra-high-speed interconnection, and switching.

National Institute of Standards and Technology (NIST)

High performance computing and communications technology is an essential enabling component of NIST's mission to promote U.S. industrial leadership and international competitiveness and to provide measurement, calibration, and quality assurance techniques to support U.S. commercial and technological progress. The objectives of NIST's HPCC program are: to accelerate the development and deployment of high performance computing and networking technologies required for the National Information Infrastructure; to apply and test these technologies in a manufacturing environment; and to serve as coordinating agency for the manufacturing component of the Federal Program.

NIST's MultiKron chip provides low-perturbation measurements for performance evaluation of computer and communication systems.

Specific goals of NIST's program are:

- To apply high performance communications and networking technology to promote improved U.S. product quality and manufacturing performance, to reduce production costs and time-to-market, and to increase competitiveness in international markets.

- To promote the development and deployment of advanced communications technology to support the education, research, and manufacturing communities and to increase the availability of scientific and engineering data via the National Information Infrastructure.

- To advance instrumentation and performance measurement methodologies for high performance computing and networking systems and components to achieve improved system and application performance.

- To develop efficient algorithms and portable, scalable software for the application of high performance computing systems to industrial problems, and to develop improved methods for the public dissemination of advanced software and documentation.

- To support, promote, and coordinate the development of voluntary standards that provide interoperability and common user interfaces among systems.



A computer-controlled coordinate measuring machine determines the exact dimensions of a precision-machined stainless steel part. NIST researchers use this and similar instruments to devise ways to improve machine tool performance.

NIST participates in four components of the HPCC Program:

HPCS: Performance Measurement of Scalable Systems

NIST develops instrumentation and methodology for performance measurement of high performance networks and massively parallel computer systems. Emphasis is on the use of low-perturbation data capture hardware and simplified software-based approaches to performance characterization.

NREN: Networking and Information Infrastructure

NIST supports and coordinates the development of standards within the Federal government to provide interoperability, common user interfaces to systems, and enhanced security. NIST works with other agencies to promote open system standards to aid in the commercialization of technology by U.S. industry. NIST promotes the development of communications infrastructure and the use of the Internet through information technology research, development, and related activities to enhance basic communications capabilities.

ASTA: Application Systems and Software Technology

NIST develops algorithms and generic software for advanced scientific, engineering, and manufacturing applications. Common elements and techniques are encapsulated in software libraries to promote ease of use and application portability. NIST's Guide to Available Mathematical Software (GAMS) provides industry and the public with improved electronic access to reusable software.

IITA: Systems Integration for Manufacturing Applications

NIST will build upon its experience in information technology and manufacturing engineering to accelerate the application of high performance computing and communications technology to manufacturing environments.





A machinist monitors a machine tool retrofitted with a personal computer controller. The machine is located in NIST's Shop of the 90s, where manufacturers can learn how to use open system integration technology and low-cost automation techniques to improve productivity and product quality.

NIST will support expanded programs in advanced manufacturing systems integration technologies; development and testing of prototype components and interface specifications for manufacturing systems; application of high performance computing and networking technologies to integrate design and production processes; and testbeds for achieving cost-effective application of advanced manufacturing systems and networks.

FY 1993 Accomplishments and FY 1994 Plans

Systems Integration for Manufacturing Applications

Beginning in FY 1994, NIST will establish an Advanced Manufacturing Systems and Networking Testbed to support research and development in high performance manufacturing systems and to test high performance computer and networking hardware and software in a manufacturing environment. The testbed will serve as a demonstration site for industrial technology suppliers and users, and will assist industry in the development and implementation of voluntary consensus standards. Research and testing will be conducted at the NIST testbed as well as at testbeds funded through the NIST Advanced Technology Program. A manufacturing systems environment will be developed to support the integration of advanced manufacturing systems and networking software and products. A standards-based data exchange effort for computer integrated manufacturing will focus on improving data exchange among computer aided design, process, and manufacturing activities. Prototype systems and interface specifications will be communicated to appropriate standards organizations. Results will be made available to U.S. industry through workshops, training materials, electronic data repositories, and pre-commercial prototype systems that can be installed by potential vendors for test and evaluation. One role of advanced computing technology in manufacturing process modeling and simulation is described on pages 150-151 in the Case Studies section.

Networking and Information Infrastructure

NIST performance evaluation activities include measurement and characterization of the impact of software protocols on communication performance in order to minimize communication bottlenecks between application programs. NIST researchers have implemented a high speed communications testbed that provides a HIPPI link to a performance-instrumented workstation. Testbed performance instrumentation includes a modified NIST MultiKron performance data capture chip interfaced to the test workstation. The workstation HIPPI interface was designed, implemented, and obtained through a collaborative effort with the VISTANet project, one of the NREN gigabit testbeds.

Since the early 1970s, NIST has been developing cost-effective ways to help protect computerized data. NIST has devised a prototype system for controlling access to a computer system that uses a password, a smart card, a fingerprint reader, and cryptography.

In FY 1994, NIST will employ systems instrumented with the MultiKron chip to investigate the behavior and performance of communications protocols suitable for multi-gigabit/second transmission rates. For high performance networking, NIST will support transition planning and deployment of ISDN and OSI-based protocols in the Internet and undertake research and development activities to support emerging Broadband ISDN standards. NIST will continue to interact with industry by sponsoring and hosting the North American ISDN Users Forum.

NIST activities in the areas of networking and communications technology will be expanded to address applications of information infrastructure including electronic commerce, distributed multimedia environments, and adaptive systems. Support for electronic commerce will focus on enabling electronic exchange technology and protocols to support business transactions and manufacturing techniques. NIST will develop improved electronic data interchange methodology to describe, access, and update data. In FY 1994, an integration facility will be created for manufacturing applications of electronic commerce.

NIST activities in distributed multimedia environments will include the development of methodology to acquire and administer the large amounts of text and multimedia material associated with electronic libraries and the implementation of information retrieval mechanisms for easy access to information by untrained end users. NIST activities in adaptive systems will include research in wireless communication compression and encoding technology. NIST will also conduct workshops to assess network security requirements and develop network security technology suitable for Internet and other networking technologies.






Statistical analysis of video microscope images provides improved precision in determining optical fiber geometry.

Performance Measurement

NIST developed the MultiKron chip to achieve low-perturbation data capture while assessing the performance of scalable high performance computer systems. This technology was transferred to Intel Corporation for application in its Paragon systems. NIST also devised a software-based technique for portable sensitivity measurements of parallel programs. The new method is based upon statistical experimental design principles and applies to any MIMD system. A low-cost setup approach makes the technique practical, and promising preliminary results on shared-memory and distributed-memory systems have been obtained.
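The report does not detail NIST's sensitivity-measurement method beyond "statistical experimental design principles." As a hedged sketch of how such a screening could work, the toy example below runs a two-level full-factorial design over tuning factors of a parallel program and ranks their main effects on runtime. The factor names and the analytic runtime model are invented for illustration; they are not NIST's actual technique.

```python
from itertools import product

# Hypothetical tunable factors of a parallel program (invented for
# illustration), each with a low and a high level.
factors = {
    "processes": (4, 16),
    "block_size": (64, 256),
    "overlap_comm": (0, 1),
}

def run_program(processes, block_size, overlap_comm):
    """Stand-in for timing a real MIMD program run (a toy analytic model)."""
    compute = 1000.0 / processes
    comm = 0.5 * processes * (64.0 / block_size)
    if overlap_comm:
        comm *= 0.5                      # overlapping halves communication cost
    return compute + comm

# Full 2-level factorial design: one run per combination of factor levels.
names = list(factors)
runs = []
for levels in product(*(factors[n] for n in names)):
    setting = dict(zip(names, levels))
    runs.append((setting, run_program(**setting)))

def main_effect(name):
    """Mean runtime at a factor's high level minus at its low level."""
    lo, hi = factors[name]
    hi_times = [t for s, t in runs if s[name] == hi]
    lo_times = [t for s, t in runs if s[name] == lo]
    return sum(hi_times) / len(hi_times) - sum(lo_times) / len(lo_times)

effects = {n: main_effect(n) for n in names}
for n, e in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{n:13s} main effect on runtime: {e:+9.4f}")
```

Ranking factors by the magnitude of their main effects is what makes this a low-cost screening: eight runs characterize three factors at once, instead of varying one factor at a time.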

In FY 1994, NIST will extend its hardware and software tools for system performance monitoring through the acquisition of a MultiKron-instrumented scalable system. Evaluation activities will focus on the performance characterization of application programs running on this system and on other scalable systems.

Software Development, Distribution, and Reuse

The NIST Guide to Available Mathematical Software (GAMS) project develops techniques and tools to help scientists and engineers locate and use computer software to solve mathematical and statistical problems. The GAMS system has been incorporated in the Software Exchange Experiment coordinated by the NASA Goddard Space Flight Center. The status of GAMS and future plans are described on pages 148-149 in the Case Studies section.

In FY 1994, NIST algorithm and software development activities will address needs for improved computational performance and visualization capability in application areas drawn from biotechnology and from chemical and materials process design and simulation.



National Oceanic and Atmospheric Administration (NOAA)

NOAA's Grand Challenge research in climate prediction and weather forecasting is critical to its mission to describe and predict changes in the Earth's environment, manage the Nation's ocean and coastal resources, and promote global stewardship of the world's oceans and atmosphere. This research depends on advances in high-end computing and on the collection and dissemination of environmental data.

NOAA Grand Challenges Requiring HPCC Resources

Environmental assessment and prediction

- Global and regional modeling to support short-term forecasting and warnings

Weather

River flow and water resources

Solar and space

Living marine resource forecasts

- Assessment of seasonal, interannual, and long-term global environmental change

Global change modeling

Ocean modeling

Coastal analysis and assessment

Environmental data and information dissemination

- Climate and weather

- Oceanographic

- Geophysical

- Navigational, including charts of U.S. waters and airspace

Increased computing power will enable higher resolution in the current models of the Earth's atmosphere-ocean system. Increased resolution will enable accurate representation of key features such as weather fronts and ocean eddies, and eliminate distortions due to clouds. More accurate NOAA models will improve the understanding of the behavior of climate and weather systems, making possible better decision making by government and industry on issues that affect both the environment and the economy.
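The computing power needed for higher resolution grows quickly: in an explicit grid-point model, refining the horizontal grid also forces a proportionally smaller timestep (the CFL stability condition). The back-of-the-envelope calculator below uses those textbook scaling generalities, not figures from the report.

```python
def relative_cost(refinement, horizontal_dims=2, cfl_limited=True):
    """Rough relative cost of refining an explicit grid model.

    Assumes cost scales with the number of horizontal grid points times the
    number of timesteps; a CFL-limited explicit scheme must also shrink the
    timestep in proportion to the grid spacing.
    """
    cost = refinement ** horizontal_dims   # more grid points per level
    if cfl_limited:
        cost *= refinement                 # more (smaller) timesteps
    return cost

# Halving the grid spacing (refinement factor 2) in a 2-D horizontal grid:
print(relative_cost(2))        # 8x the work
# A 4x finer regional grid:
print(relative_cost(4))        # 64x the work
```

This is why a modest resolution increase can demand the jump from conventional supercomputers to scalable parallel systems.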

NOAA is the Federal agency responsible for archiving and disseminating the Nation's environmental data, and is the principal repository for the Nation's climatic data for the U.S. Global Change Research Program. By increasing NOAA connectivity to the Internet, researchers and other users will have better access to this growing collection of large data sets located at more than a score of sites.

Through HPCC efforts in climate modeling, NOAA will provide better simulations of atmosphere-ocean coupling and a first-ever direct attack on the regional climate change problem. More accurate and more timely assessment of the future impact of climate change will make it possible to avoid "false choices" between the economy and the global environment. In weather forecasting, finer resolution in global and regional models will result in better weather forecasting and warning services, especially for hazardous weather and flight safety.

NOAA participates in the NREN, ASTA, IITA, and BRHR components of the HPCC Program as follows.




NOAA Laboratories Involved in HPCC

- Geophysical Fluid Dynamics Laboratory (GFDL), Princeton, New Jersey

- Forecast Systems Laboratory (FSL), Boulder, Colorado

- National Meteorological Center (NMC), Camp Springs, Maryland

NREN: Improving Access to Systems and Data

By using the Internet, NOAA researchers can easily access massively parallel systems at distant locations. Geographically distributed researchers can collaborate and easily share NOAA computing resources and data by accessing these systems remotely from high performance workstations.

NOAA plans to make its vast environmental data archives more accessible to the scientific community, while preserving the integrity of operational NOAA systems. Greater NOAA Internet connectivity will substantially improve access to these data. To support this increased Internet use, a NOAA Internet Network Information Center has been established. The Center will provide information and assistance to all NOAA Internet users and to scientists at other agencies and in academia with whom they collaborate.

ASTA: Environmental Grand Challenges

NOAA is developing advanced algorithms and redesigning climate prediction and weather forecasting models to use new parallel programming paradigms.

NOAA will conduct a phased acquisition of scalable parallel systems for use in climate prediction and weather forecast modeling. The new models will be installed and evaluated on these systems.

IITA: Environmental Applications and Improved Data Accessibility

NOAA proposes to investigate environmental monitoring, prediction, and assessment applications, and to expand efforts to make its environmental data more accessible.

BRHR: Increasing the Base of Researchers and Users

The number of NOAA users who can effectively use scalable systems will grow considerably. In addition, substantially more visiting scientists will work with NOAA researchers on Grand Challenge problems, encouraging both evolutionary improvements and creative breakthroughs in climate prediction and weather forecasting.



NOAA National Data Centers

- National Climatic Data Center, Asheville, North Carolina

- National Oceanographic Data Center, Washington, D.C.

- National Geophysical Data Center, Boulder, Colorado

FY 1993 Activities and Accomplishments and FY 1994 Plans

Network Access

NOAA's pilot Internet Network Information Center has begun to provide assistance in using the Internet, network management for connected NOAA facilities, mail and other server capabilities, and protected access to selected NOAA operational data sets, such as NMC model output data.

In FY 1994, at least 10 more NOAA computational and data archiving facilities will be connected to the Internet, and connection bandwidth will be increased.

Global Atmosphere and Global Ocean Modeling

Over the past two years, NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) has begun redesigning its most important atmospheric and oceanic models to make them modular and parallel in design. This activity includes a collaborative effort by GFDL with DOE/Los Alamos National Laboratory (LANL) scientists that has led to the successful design of a model shell version of the GFDL Modular Ocean Model (MOM) in a form that is amenable to highly parallel computers and that maintains its modular coding structure. High resolution experiments using this new code were performed on the 1,024-node Thinking Machines CM-5 at LANL.
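The report does not show the MOM code itself. As an illustrative sketch only, the fragment below mimics the domain-decomposition-with-halo-exchange pattern that grid-point models typically use on highly parallel machines: each "rank" owns a slab of the grid plus one ghost cell per side. All names here are invented, and a real model would exchange halos by message passing rather than by reading a shared list of slabs.

```python
# Minimal 1-D domain decomposition with halo exchange (invented sketch).

def decompose(field, nranks):
    """Split a 1-D field into equal slabs, one per rank."""
    n = len(field) // nranks
    return [field[r * n:(r + 1) * n] for r in range(nranks)]

def exchange_halos(slabs):
    """Return (left_ghost, right_ghost) per slab; domain edges replicate."""
    halos = []
    for r, slab in enumerate(slabs):
        left = slabs[r - 1][-1] if r > 0 else slab[0]
        right = slabs[r + 1][0] if r < len(slabs) - 1 else slab[-1]
        halos.append((left, right))
    return halos

def smooth_step(slabs):
    """One 3-point averaging sweep applied independently on each rank."""
    halos = exchange_halos(slabs)
    new = []
    for (left, right), slab in zip(halos, slabs):
        padded = [left] + slab + [right]
        new.append([(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
                    for i in range(1, len(padded) - 1)])
    return new

field = [0.0] * 8 + [9.0] + [0.0] * 7      # a spike in the middle of the grid
slabs = smooth_step(decompose(field, nranks=4))
result = [x for slab in slabs for x in slab]
print(result)
```

Because each slab's ghost cells carry the neighboring rank's edge values, the decomposed computation reproduces the serial stencil result even where the spike straddles a rank boundary.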

GFDL's complete SKYHI global atmospheric grid-point model, which is used to study climate, middle atmosphere dynamics, and atmospheric ozone, has been redesigned for parallel execution and ported to the LANL CM-5 as part of the GFDL-LANL collaboration.

The GFDL radiation physics package, which is a critical part of all GFDL atmospheric climate models, was redesigned to a modular form that is suited to highly parallel computer systems.

In FY 1994, high resolution atmospheric modeling experiments using a simplified-physics version of a parallel atmospheric grid model will be performed to test model sensitivity to grid resolution.


A computer model representation of a blizzard over the Colorado Front Range area, produced jointly by NOAA's Forecast Systems Laboratory (FSL) and Colorado State University. The image represents a six-hour forecast, looking over Colorado from the southeast. The white/gray area depicts the forecast region for significant clouds, and the red area within the clouds represents the region in which aircraft icing is likely to occur.

In FY 1994, a parallel version of the GFDL limited-area non-hydrostatic model will be developed for use in investigating the interaction between radiation and clouds.

Weather Forecast Modeling

An adiabatic shell for the NOAA National Meteorological Center (NMC) spectral model has been developed and executed on a Thinking Machines CM-200 at DOD/Naval Research Laboratories (NRL) in collaboration with NRL.

A regional atmospheric model has been restructured for execution on highly parallel systems by NMC, in collaboration with NOAA's Forecast Systems Laboratory (FSL).

The initial restructuring and recoding of the regional/mesoscale Optimum Interpolation (OI) analysis code for execution on the CM-200 at DOD/NRL has been completed.

FSL has developed a parallel version of the Mesoscale Analysis and Prediction System (MAPS) as a functional prototype system for both the Federal Aviation Administration and the National Weather Service.

MAPS is the first of several strategic weather models to be parallelized for the Aviation Weather Program. These models will provide high resolution forecasts on both the national and regional levels to support operational and aviation meteorology. The MAPS model was engineered using an FSL-specified layered software approach to provide portability among scalable parallel systems.

NOAA HPCC Systems

A scalable architecture system, a 208-node Intel Paragon, is being installed at NOAA/FSL in conjunction with the ARPA-sponsored National Consortium for High Performance Computing and as a part of the Boulder Front Range Consortium. Consortium members include NCAR (under NSF sponsorship), the University of Colorado (under ARPA sponsorship), and FSL.

FSL has developed a benchmark suite to evaluate parallel processors for use in weather analysis and prediction applications. FSL has also specified and is implementing a layered software approach to support the portability of such applications between various scalable systems and networks of workstations, and has taken the initial steps toward implementing a meteorological software library using this approach.

Contours representing the wind flow in the middle atmosphere, produced by a weather forecast model running on the Cray Y-MP8 at NOAA's National Meteorological Center, superimposed on a NOAA GOES satellite infrared cloud image.

GFDL has solicited technical input from massively parallel computer vendors through a public comment process as part of its effort to redesign its primary atmospheric climate model into a suitable parallel form. The resulting technical collaborations are leading to a more effective code design for this and other models that are being rewritten for parallel systems.



Environmental Protection Agency (EPA)

Central to EPA's mission are its Grand Challenges in air and water pollution management and in ecological assessment. The complex nature of these Grand Challenge problems demands use of the most comprehensive and integrated computational assessment tools available. By the mid-1990s, the achievement of EPA's HPCC Program goals will provide more reliable and useful tools to develop improved national pollution control and prevention strategies involving billions of dollars in control costs.



EPA has three main goals for its HPCC Program activities:

- Advance the capability of environmental assessment tools by adapting them to a distributed heterogeneous computing environment that includes scalable massively parallel architectures.

- Provide more effective solutions to complex environmental problems by developing the capability to perform multipollutant and multimedia pollutant assessments.

- Provide a computational and decision support environment that is easy to use and responsive to the environmental problem-solving needs of key Federal, state, and industrial policy-making organizations.

EPA participates in the NREN, ASTA, IITA, and BRHR components of the HPCC Program as follows.

NREN: Increasing Access to a Heterogeneous Computing Environment

EPA will expand Internet connectivity to a critical mass of environmental problem-solving groups at the Federal, state, and industrial levels and to environmental Grand Challenge research teams, enabling distributed computing approaches for linking complex ecological models or multimedia environmental models. EPA is establishing a research network to support geographically distributed collaborative development of prototype environmental assessment frameworks using a distributed, heterogeneous computing environment that includes massively parallel components and distributed data access. EPA will also provide environmental applications software for use in performance and reliability testing of Internet technology.

Researchers evaluate molecular properties of carcinogens from pollution sources. The image shows electric field vectors for benzopyrene.

ASTA: Environmental Assessment Grand Challenges

EPA is integrating advanced environmental assessment tools into distributed, heterogeneous computing environments. A major focus is on the development of parallel algorithms for atmospheric chemistry, transport, and molecular models to enable more effective study of multipollutant air quality issues, the impact of man-made chemicals on the environment, and the relationship between air and water pollution. In the latter case, the linkage of distinct air and water models will enhance the knowledge of complex interactions of pollutants crossing media boundaries, enable integrated assessment of pollutant impacts, and form the foundation for further integration with ecological process models. Such work will ensure that the interaction of multiple pollutants in several media (such as air and water) is considered when assessing pollutant abatement strategies. These assessments are not feasible without HPCC technology.

Environmental scientists and regulatory analysts lack adequate assessment tools because computational complexities and slow response time inhibit effective and extensive use of the most advanced environmental assessment tools. Development of a user friendly framework that puts the power of HPCC technology and advanced multipollutant environmental models directly in the hands of Federal, state, and industry groups charged with solving environmental pollution problems is a primary EPA goal.

IITA: Enhancing User Access to Environmental Data and Systems

EPA, in concert with NASA and NOAA efforts to disseminate environmental information, proposes to expand public access to a variety of environmental databases, such as ecological measurements, air and water quality model predictions, and population exposure to pollutants. Development of intelligent user interfaces for interactive browsing, retrieval, analysis, and on-line multimedia tutorials for inexperienced users will facilitate routine use of environmental models for regulatory analysis and environmental education. In addition to these access and training systems, EPA proposes to support curriculum development for environmental education for grades 9-12 as well as university undergraduate and graduate programs.

Regional acid deposition model predictions of nitric acid over the eastern U.S.

BRHR: Broadening the User Community

EPA will develop and evaluate methods and materials for training Federal, state, and industrial users of advanced environmental assessment tools.

It has initiated a program to train analysts who support environmental decision making in advanced computing technology. The scope of this program ranges from making it as easy as possible to use the models to developing interactive software for on-line training.

EPA also sponsors fellowships, graduate student programs, and high school computational science programs to develop the interdisciplinary skills required for air and water quality modeling.

Major FY 1993 Activities and Accomplishments and FY 1994 Plans

Multipollutant Air Quality Management

Working with the North Carolina Supercomputing Center and several universities, EPA has initiated a Grand Challenge air quality management project in which a user friendly, multipollutant, multiscale air quality modeling system is being developed for modeling research, evaluation, and regulatory assessment.

The first prototype of the system will address regional and urban-scale ozone and regional acid deposition issues using generic scalable algorithms for key physical and chemical processes, and advanced technology for data management, interprocess communications, analyses, and visualization.

Three-dimensional rendition of salinity in Chesapeake Bay.

EPA has also begun to evaluate the performance of both low and high end massively parallel computers on atmospheric chemistry and transport algorithms and molecular modeling codes critical in environmental assessment. The Regional Oxidant Model has been ported to a 1,000-processor MasPar for benchmarking. Molecular mechanics and dynamics algorithms are also being evaluated on a 32-node Kendall Square Research KSR-1 and an Intel iPSC/860 at the National Institutes of Health. A variety of partial differential equation solvers are also being implemented for performance evaluation on both the MasPar and KSR machines.
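The report names the PDE solvers only generically. As an illustration, Jacobi relaxation for the 2-D Laplace equation is one classic kernel of the kind used to benchmark data-parallel machines; the grid size and boundary values below are invented, and real benchmarks would run far larger problems.

```python
def jacobi_laplace(grid, iterations):
    """Jacobi relaxation for the 2-D Laplace equation with fixed boundaries.

    Each sweep replaces every interior point with the average of its four
    neighbors -- a regular, data-parallel stencil that maps naturally onto
    SIMD and shared-memory parallel machines.
    """
    rows, cols = len(grid), len(grid[0])
    for _ in range(iterations):
        new = [row[:] for row in grid]          # boundaries copied unchanged
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                    grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# Hot top edge (value 100), cold everywhere else, on a small invented grid:
n = 6
grid = [[100.0] * n] + [[0.0] * n for _ in range(n - 1)]
solution = jacobi_laplace(grid, iterations=500)
print(round(solution[1][2], 2))   # an interior value near the hot edge
```

In Jacobi's method every interior point of a sweep can be updated independently, which is exactly what makes the kernel useful for comparing parallel machines.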

To foster communication among scientists currently performing molecular modeling research on highly parallel machines, EPA and the Office of Naval Research jointly sponsored the "Molecular Modeling on Parallel Computers" workshop session at the 1993 Sanibel Symposium.

Multimedia Modeling

A multimedia project has been initiated by porting three single-medium (or single-disciplinary) models to supercomputers: specifically, a comprehensive regional air quality model, a watershed-water quality model, and a three-dimensional bay hydrologic-water quality model. These models are used to address the nitrogen eutrophication of Chesapeake Bay, an environmental problem involving two major pollutant pathways of major concern to the Nation.

The three models have been ported to a Cray Y-MP, and the air quality model has been optimized there. Current research focuses on linking these models together when there is only minor interaction among the different media, thereby maintaining the full disciplinary complexity of known and tested individual models while demonstrating the added benefit of multimedia modeling for environmental decision making.
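One common way to link separately developed single-medium models when their interaction is minor is loose coupling: each model keeps its own internal formulation, and the models exchange only a small flux once per coupling interval. The sketch below uses invented toy "air" and "water" models to illustrate the coupling pattern only; it is not EPA's code, and the rate constants are arbitrary.

```python
# Loose coupling of two toy single-medium models (all numbers invented).

class AirModel:
    def __init__(self, load):
        self.load = load                # pollutant mass aloft

    def step(self):
        self.load *= 0.95               # internal chemistry/transport (toy)
        deposited = 0.10 * self.load    # fraction deposited this interval
        self.load -= deposited
        return deposited                # flux handed to the water model

class WaterModel:
    def __init__(self):
        self.load = 0.0

    def step(self, deposition):
        self.load = 0.99 * self.load + deposition   # toy decay plus input

air, water = AirModel(load=100.0), WaterModel()
for _ in range(10):                     # ten coupling intervals
    flux = air.step()                   # air model advances on its own
    water.step(flux)                    # water model sees only the flux

print(round(air.load, 3), round(water.load, 3))
```

Because each model advances with its own internals and only the deposition flux crosses the interface, the full disciplinary complexity of each model is preserved, which is the point the paragraph above makes.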

Visualization

As computer capability increases, interaction among the different media and synergisms involving scores of pollutants are being introduced into environmental models. As a result, these models are becoming increasingly complex. Visualization groups supported by EPA's HPCC Program have developed high-quality videos of a regional acidic deposition model and an air quality model, demonstrating the power that visualization has to open up the workings of these models to both scientists and non-scientists.

Ozone concentrations seen through an AVS interface.

This effort was recently expanded as visualization experts began working with water quality scientists to display the output of increasingly complex sediment transport water quality models. As part of these efforts, a new volume-rendering algorithm has greatly enhanced the realism of the three-dimensional model data, and more visualization algorithms are under development.
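The report does not describe the new volume-rendering algorithm itself. As a generic illustration, the sketch below implements front-to-back alpha compositing, the core of many volume renderers: each ray through the data accumulates emission until opacity saturates. The tiny scalar field and the opacity mapping are invented for the example.

```python
# Generic front-to-back compositing volume renderer (illustrative sketch).

def composite_ray(samples, opacity_scale=0.3):
    """Front-to-back alpha compositing of scalar samples along one ray."""
    color, alpha = 0.0, 0.0
    for s in samples:
        a = min(1.0, s * opacity_scale)   # map scalar value to sample opacity
        color += (1.0 - alpha) * a * s    # emission weighted by transparency
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color

def render(volume):
    """Cast one axis-aligned ray per (x, y) column of a volume[z][y][x] grid."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[composite_ray([volume[z][y][x] for z in range(nz)])
             for x in range(nx)] for y in range(ny)]

# Tiny invented volume: one dense "sediment plume" voxel in clear water.
volume = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
volume[1][2][1] = 2.0
image = render(volume)
print(image[2][1] > image[0][0])   # the plume column renders brighter
```

Compositing (rather than simply projecting the maximum value) is what gives rendered model data its sense of depth and translucency, which is presumably the "realism" the report refers to.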

Networking

EPA established a 100 Mb/s (FDDI) research network and has provided T1 connectivity to the Environmental Research Center in Georgia and to the Chesapeake Bay Office in Maryland. This connectivity provides the bandwidth needed to satisfy EPA's heterogeneous computing, data management, and collaborative visualization requirements. In FY 1994 EPA will establish a T3 Internet interconnect to EPA's National Environmental Supercomputer Center.
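The link rates involved span almost two orders of magnitude (T1 is about 1.544 Mb/s, T3 about 45 Mb/s, and FDDI 100 Mb/s), which is what makes the T3 upgrade significant for moving model output. A rough transfer-time calculation, ignoring protocol overhead and congestion; the 100 MB dataset size is an invented example.

```python
# Back-of-the-envelope transfer times over an idle link (overhead ignored).

LINKS_MBPS = {"T1": 1.544, "T3": 44.736, "FDDI": 100.0}

def transfer_seconds(megabytes, link):
    """Seconds to move `megabytes` of data over the named link."""
    megabits = megabytes * 8
    return megabits / LINKS_MBPS[link]

dataset_mb = 100   # e.g. one model-output file (invented size)
for link in ("T1", "T3", "FDDI"):
    print(f"{link:5s} {transfer_seconds(dataset_mb, link):8.1f} s")
```

The same 100 MB file that ties up a T1 line for roughly eight and a half minutes moves in well under half a minute on T3 or FDDI.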

Technology Transfer

EPA is funding the establishment of a prototype training center and an on-site training program for Federal, state, and industrial personnel, initially to support air quality regulatory decision-making policies. Hands-on training with state-of-the-art environmental assessment techniques will foster competency in high performance computing; users will observe the increasing speed, accuracy, scope, and improved management of environmental assessments available through high performance computing.

EPA is evaluating the usefulness of multimedia electronic tutor and help approaches, and of collaborative computing and visualization to provide expert advice to remote locations. A prototype interactive analysis and visualization system for a workstation environment has been developed to support regulatory decision making at the state level. It uses the Urban Airshed photochemical model and the Applications Visualization System (AVS) software. The pilot user group, the North Carolina Department of Environment, Health, and Natural Resources, accesses the North Carolina Supercomputing Center at Research Triangle Park via a newly established T1 link. In FY 1994 the pilot program will be extended to additional Federal, state, and industrial environmental organizations.

Computational Education

EPA's HPCC Program helps support Earth Vision, EPA's computational science educational program for high school students and teachers. Students prepare proposals during Saturday tutorials at EPA's National Environmental Supercomputing Center in Bay City, Michigan. Teams are selected to participate in a three-week summer educational program and to conduct environmental research using scientific workstations and EPA's supercomputer during the academic year.

Getting user feedback on a prototype user interface as part of EPA's technology transfer pilot projects.

EPA Milestones

NREN FY 1994

- Complete T3 Internet interconnect to EPA HPCC Center.

- Increase bandwidth and number of state connections to the Internet.

ASTA FY 1993 - 1994

- Establish Grand Challenge collaborations.

- Demonstrate key environmental chemistry algorithms on scalable massively parallel testbeds.

ASTA Beginning in FY 1994

- Demonstrate linked air and water models in a distributed computing environment.

- Install 10 gigaops testbed for Grand Challenge teams.

BRHR FY 1992 - 1994

- Establish a key set of collaborations for technology transfer of advanced HPCC environmental assessment capabilities to Federal, state, and industrial users.

BRHR Beginning in FY 1993

- Establish fellowships and graduate level support for Grand Challenge teams.



Department of Education (ED)

The Department joined the HPCC Program in FY 1992. Through program initiatives and through activities sponsored by the regional laboratories and the research and development centers, educators and students will have enhanced access to information and communications resources that will lead to improved teaching and learning.


Fairfax County Public School teacher and students use PATHWAYS database and STAR SCHOOLS distance learning materials.

ED has supporting activities in the NREN component of the Program.

NREN

The Department's on-going activities include:

- Development of PATHWAYS, a computer-based information system designed to use the Internet to provide easy access by teachers, administrators, parents, students, and community members to educational information.

- Expansion of research and development initiatives in networked applications for curricula, instruction, and administration.

- Support for educational technology activities through:

  - Grants for equitable access to information resources

  - Star Schools distance learning grants for student instruction and teacher staff development

ED will work with other agencies to define and coordinate educational restructuring initiatives to take advantage of HPCC resources.



The speed of currents at the ocean surface simulated with a three-dimensional global ocean model developed for the Thinking Machines CM-2 and CM-5 at Los Alamos National Laboratory. High speeds are in red, low in blue. Eddies are evident in the Gulf Stream along the east coast of North America and in the tropics. The spatial resolution of the computer model is 0.5 degrees in latitude and longitude with 20 vertical levels; realistic ocean bottom topography is used. The work was jointly sponsored by the HPCC and the DOE CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Program.


Introduction to Case Studies

This section presents examples of on-going Grand Challenge applications research and emerging HPCC hardware and software technologies being developed in support of real-world problems and the HPCC Program.

Only within the past several decades have advances in high performance computing and networking technologies merged with long standing theoretical and experimental methods to yield a powerful new computational approach to scientific inquiry. This approach has enabled scientists and engineers to transcend many of the limitations inherent in the more traditional practices. The demonstrated success of computational science in a wide variety of problem areas has led to an infusion of high performance computing technologies and techniques into mainstream scientific practice. An increasing number of researchers, representing almost every discipline, continues to probe new and complex problem areas, many of pressing societal concern.

The computational approach adds another dimension to research methods by allowing the researcher to create a mathematical model of some aspect of reality. Solving the model entails translating its equations into a form capable of being programmed and executed on a high performance computing system. Algorithms that structure input data and specify the manner in which the calculations are to be performed are the primary building blocks of the computational model. By exercising the model over broad data ranges and parameter spaces, a picture, albeit a simulated one, of the real phenomena emerges. To the extent that it is complete and accurate, this picture is useful in describing reality as it exists and, perhaps more importantly, predicting change. In many cases, such as those described in this section, scientists are able to reach and extend the understanding of phenomena far beyond what is possible through pure reasoning and observation.
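The cycle just described (translate a model's equations into an algorithm, then exercise it across a parameter space) can be sketched in a few lines of Python. The logistic-growth model and every parameter value below are purely illustrative, not drawn from any HPCC code:

```python
def simulate_logistic(r, K, p0=1.0, dt=0.01, steps=1000):
    """Explicit Euler integration of the model dP/dt = r * P * (1 - P/K)."""
    p = p0
    for _ in range(steps):
        p += dt * r * p * (1.0 - p / K)
    return p

# "Exercising the model": sweep a (tiny) parameter space of growth rates
# and collect one solution point per run.
results = {r: simulate_logistic(r, K=100.0) for r in (0.5, 1.0, 2.0)}
for r, p in sorted(results.items()):
    print(f"r={r}: P(T)={p:.2f}")
```

Real Grand Challenge sweeps differ only in scale: billions of such solution points over continuous parameter ranges, which is what drives the boundless computing requirements noted below.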


The success of this approach directly depends upon computing and data management capability. Input data ranges and parameter spaces may be mathematical continua, infinite in dimension. It may require billions of calculations to produce a single solution point, and some phenomena require billions of solution points to approach a useful level of completeness and accuracy. Thus the computational requirements of many applications are potentially boundless. Similarly, the "answers" associated with a simulation may comprise terabytes of data. Scientific visualization, with its ability to store and interpret data, has come to play an indispensable role in the overall success of computational research.

The strengths and limitations of the computational approach are readily evident: a simulation can represent reality only to the degree that it both holds to the physical laws of nature and captures the inherent complexity and detail of that which it attempts to represent. However, for many problems this is the only approach available. Scientific instruments for observation and measurement confront limits, either those imposed by the nature or location of the object of interest, or by the economics of producing the instrument or conducting the experiment. There are few alternatives to the computational approach for studying aspects of nature lying at the extremes of measurability: those that are very small, very large, very fast, very slow, very close, very far, and so on. Yet the universe is filled with such phenomena that affect our daily lives.

The examples presented here are only a small subset of the early achievements to come out of the High Performance Computing and Communications Program, a hint of the promise of HPCC methods and technologies applied to long-standing problems of science and engineering and humanity.



Case Study 1

Climate Modeling


Sea surface temperature from a multi-year simulation with the UCLA coupled atmosphere-ocean model. The atmospheric component of the model is the UCLA General Circulation Model and the oceanic component is the GFDL Modular Ocean Model. The simulation was performed on the Cray Y-MP at the San Diego Supercomputer Center.

Understanding the Earth's climate system and its trends is one of the most challenging problems facing the scientific community today. Better understanding of our climate system is critical for the Nation as it prepares for the 21st century. The Earth's atmosphere-ocean system and the physical laws that control its behavior are complex and contain subtle details. This system is only crudely represented by the most comprehensive of present-day climate models. Improvements in the computational modeling of many of the component physical processes, such as cloud-radiation interaction, will require long-term effort. Climate model improvements will necessitate a hundred to thousand-fold increase in computing, communications, and data management capabilities before these goals can be met. In addition, large increases in model detail and sophistication are necessary for regional climate change forecasting.

Under the sponsorship of the Federal HPCC Initiative and other programs, new computing and communications resources are being developed for climate research. Current state-of-the-art climate models are being redesigned to execute efficiently on promising new scalable architecture systems. Scientists are also investigating distributed computing strategies that will use gigabit-per-second data transfer between distant supercomputers.




The coupled atmosphere-ocean model is the primary tool by which climate scientists simulate the behavior of the Earth's climate system. As a first step in porting a fully coupled model to massively parallel processor systems, scientists at several sites are redesigning separate atmosphere and ocean model elements for execution on scalable parallel systems.

A parallel ocean model, referred to as the Parallel Ocean Program (POP), has demonstrated gigaflop performance during early data-parallel experiments on the 1,024-node Thinking Machines CM-5 at DOE's High Performance Computing Research Center of Los Alamos National Laboratory (LANL). This research effort is being funded under DOE's Computer Hardware Advanced Mathematics and Model Physics (CHAMMP) program. LANL scientists collaborate on the project with scientists at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), the Naval Postgraduate School (NPS), and NSF's National Center for Atmospheric Research (NCAR).

Over the past decade, the U.S. academic community has made extensive use of NCAR's Community Climate Model (CCM) for global climate research. Recently, researchers at two DOE laboratories, Argonne National Laboratory and Oak Ridge National Laboratory, also funded under the CHAMMP program, have been collaborating with NCAR scientists to develop parallel versions of the latest CCM code to run on next-generation massively parallel systems. This model, referred to as the Parallel Community Climate Model 2 (PCCM2), is now demonstrating near-gigaflop performance on the 512-processor Intel Touchstone Delta. A data parallel version for the Thinking Machines CM-5 is also being prepared. In ongoing investigations, researchers are exploring new parallel algorithms designed to improve performance on message passing systems and incorporating new numerical methods that will improve climate simulations.

Scientists at the Lawrence Livermore National Laboratory (LLNL) are simultaneously developing a message passing version of the models described above. These versions treat the inherent parallelism of climate problems in a different fashion and can be executed on the class of parallel systems built around the distributed memory architectural approach.
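Both the data-parallel and message passing versions mentioned above share a common structure: the global model grid is partitioned among processors, and values at subdomain edges are exchanged explicitly each step. A minimal serial sketch of that halo-exchange pattern follows; the smoothing update and grid size are invented for illustration and are not drawn from POP or the CCM codes:

```python
def step_serial(u):
    """One explicit smoothing/diffusion step on the full grid (fixed ends)."""
    return ([u[0]]
            + [0.25 * u[i-1] + 0.5 * u[i] + 0.25 * u[i+1]
               for i in range(1, len(u) - 1)]
            + [u[-1]])

def step_decomposed(u):
    """The same update, computed the way two distributed-memory
    'processors' would: each owns half the grid and receives a single
    halo (ghost) cell from its neighbor before updating."""
    mid = len(u) // 2
    left, right = u[:mid], u[mid:]
    # The halo exchange: in a real code these are network messages.
    left_ext = left + [right[0]]       # left receives right's edge cell
    right_ext = [left[-1]] + right     # right receives left's edge cell
    new_left = [left_ext[0]] + [
        0.25 * left_ext[i-1] + 0.5 * left_ext[i] + 0.25 * left_ext[i+1]
        for i in range(1, len(left_ext) - 1)]
    new_right = [
        0.25 * right_ext[i-1] + 0.5 * right_ext[i] + 0.25 * right_ext[i+1]
        for i in range(1, len(right_ext) - 1)] + [right_ext[-1]]
    return new_left + new_right

# The decomposed update reproduces the serial result exactly.
u = [0.0] * 8
u[3] = 10.0
assert step_decomposed(u) == step_serial(u)
```

The point of the pattern is that each processor touches only its own cells plus a thin halo, so communication cost grows with subdomain surface area while computation grows with volume.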

A high-resolution atmospheric general circulation model, known as SKYHI, has been developed and used by scientists at NOAA's GFDL to investigate stratospheric circulation and ozone depletion for a number of years. The ozone depletion study was presented in "Grand Challenges 1993" and is an example of some of the scientific results obtained from this model. GFDL scientists are now working with LANL scientists to reconstruct the programs that make up the SKYHI model in order to execute it on massively parallel systems that support data parallel and message passing programming paradigms.

These various model development efforts all have the same ultimate objective: to develop a coupled atmosphere-ocean climate model that will execute efficiently on scalable parallel computers.

Just as demand for more computational resources is growing within the climate research community, so also is the need to distribute the computing load among different computers, both at one site and across the country when appropriate. With this objective in mind, researchers at the University of California, Los Angeles (UCLA) are collaborating with scientists at NSF's San Diego Supercomputer Center (SDSC), the California Institute of Technology (Caltech), and NASA's Jet Propulsion Laboratory (JPL) to demonstrate the feasibility of distributed supercomputing of a coupled climate model. In this research, separate components of the atmosphere and ocean codes in a single climate computation are distributed over multiple supercomputers connected via a high speed network. Initial prototype experiments have been performed between Cray Research Y-MP systems at SDSC and NCAR, connected by a 1.5-megabit-per-second T1 data link. The imminent availability of a gigabit-per-second network under the CASA gigabit testbed project will soon allow for a much more advanced communications capability. Plans then call for computations to be distributed between a Cray Y-MP at either SDSC or JPL and an Intel supercomputer at either SDSC or Caltech. Such distributed computing experiments provide realistic tests for the gigabit-per-second connections that are anticipated on the NREN of the future.
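The coupling pattern behind these experiments can be illustrated with a toy: two model components advance independently each step and exchange boundary fields, which is exactly where a distributed implementation would insert network transfers between machines. The relaxation equations and coefficients below are invented for illustration and bear no relation to the UCLA or GFDL model physics:

```python
def atmosphere_step(air_t, sst):
    """Toy atmosphere: relax air temperature toward the sea surface
    temperature and compute a heat flux to hand to the ocean."""
    air_t += 0.1 * (sst - air_t)
    flux = 0.05 * (air_t - sst)   # the field "sent" to the ocean component
    return air_t, flux

def ocean_step(sst, flux):
    """Toy ocean: adjust the sea surface temperature by the received flux."""
    return sst + flux

air_t, sst = 10.0, 20.0
for _ in range(200):
    # In a distributed run, each call executes on a different machine and
    # the two exchanged scalars (sst, flux) cross the network.
    air_t, flux = atmosphere_step(air_t, sst)
    sst = ocean_step(sst, flux)
```

The exchanged fields are tiny compared with each component's internal state, which is why such couplings are attractive candidates for wide-area distribution, provided the link is fast enough to keep both machines busy.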

SPONSORING AGENCIES AND PROGRAMS
ARPA
DOE
NASA
NOAA
NSF

PERFORMING ORGANIZATIONS
Argonne National Laboratory
California Institute of Technology
Geophysical Fluid Dynamics Laboratory
Jet Propulsion Laboratory
Los Alamos National Laboratory
National Center for Atmospheric Research
Naval Postgraduate School
Oak Ridge National Laboratory
San Diego Supercomputer Center
University of California-Los Angeles Climate Dynamics Center



Case Study 2

Sharing Remote Instruments

Specimens of chick cerebellum stained with osmium and reduced with potassium ferrocyanide. Image B shows a higher magnification of a portion of Image A. The images were obtained in a study of the internal membrane structure of the Purkinje cell.

HPCC-supported advances in computer networks and visualization software are allowing scientists to control and share remote microscopes, telescopes, and other scientific instruments in an interactive, real-time fashion. These types of projects have opened the doors of the laboratory to a new concept, the "distributed laboratory," which integrates laboratory equipment, high performance computing systems, and data visualization tools over a high speed network, resulting in a more comprehensive and scientifically valuable investigative environment.

Remote Microscopy

Researchers at the University of California-San Diego Microscopy and Imaging Resource have implemented a sophisticated, computer-controlled high-voltage transmission electron microscope (HVEM). Working in collaboration with staff scientists at the San Diego Supercomputer Center and the Scripps Research Institute, the research group has coupled the HVEM via a high speed network to high performance computing systems and interactive visualization software running on scientists' workstations. The microscope is a unique resource, one of only a few such microscopes in the United States in use in biological science. More powerful than ordinary electron microscopes, it can accommodate much thicker laboratory specimens, yielding greater amounts of biological information. Using computer tomography and other visualization techniques, the images collected can be used to produce three-dimensional animations, allowing scientists to look at many previously uninterpreted areas of biomedical science relating biological function with structure.

The subjects of such study include the disruption of nerve cell components resulting from Alzheimer's disease, the structural relations of protein molecules involved in the release of calcium inside neurons, and the three-dimensional form of the Golgi apparatus, where sugars are added to proteins.

This project is an example of the application of diverse technologies to a single class of scientific problems. The capabilities of the microscope for scientific investigation will soon be extended, combining image data acquisition with computing resources to render, view, and animate the images for real-time analysis. By connecting the microscope to a high speed network, the instrument will someday be made available to investigators located in any geographic region, extending the accessibility of the resource to a broader scientific community in a collaborative environment.

Scientists no longer need to be in the same room with their laboratory equipment. Instead, they can control it over the network from their desktop computers. Pictured is a scientist controlling a high-voltage electron microscope (useful for imaging thick three-dimensional biological tissues) from her Sun workstation.

Future plans call for extending this environment to the Apple Macintosh; implementing automatic focusing and calibration of the microscope (now handled by a human operator); developing remote image analysis tools; optimizing the tomography reconstruction code currently running on a Cray Y-MP system; and implementing the code on a parallel computing platform.

Real-time Radio Telescope Observatory

The application of the operational concepts described above is not limited to microscopes. A research group affiliated with the National Center for Supercomputing Applications (NCSA) in Champaign-Urbana, IL is investigating how to apply this model to real-time radio telescope observation. This group is seeking to demonstrate the feasibility of connecting the Berkeley-Illinois-Maryland Array (BIMA) radio telescope array, located at the Hat Creek Observatory in northern California, to NCSA supercomputers via high speed networks.

One of their needs calls for transferring a gigabyte-size observed visibility dataset from the telescope to NCSA for processing, and then returning the processed data to Berkeley for analysis on a workstation. In this way, the astronomer can judge the quality of the data, see if the signal is strong enough to proceed with the observations, judge whether the area of the sky being mapped is correct, and experiment with processing parameters. By connecting BIMA directly to BLANCA, one of five HPCC-supported gigabit testbeds, and increasing the number of present antennae from three to six, the group expects to achieve a data transfer rate 10 to 100 times higher than current rates. This could allow real-time steering of the telescope, thereby greatly enhancing the value of an observational session and revolutionizing the way in which astronomy research is performed.
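Rough arithmetic shows why link speed dominates this workflow. Assuming a gigabyte-size dataset, a 1.5 Mb/s T1 link, and an idealized 1 Gb/s testbed link (round numbers for illustration, ignoring protocol overhead):

```python
BITS = 1e9 * 8   # a gigabyte-size visibility dataset, in bits

def transfer_seconds(bits, rate_bps):
    """Idealized transfer time: size divided by link rate, no overhead."""
    return bits / rate_bps

t1_time = transfer_seconds(BITS, 1.5e6)      # T1 link, ~1.5 Mb/s
gigabit_time = transfer_seconds(BITS, 1e9)   # gigabit testbed link

print(f"T1: {t1_time / 3600:.1f} hours; gigabit: {gigabit_time:.0f} seconds")
```

Moving one dataset thus drops from over an hour to seconds, which is the difference between overnight batch turnaround and the interactive, mid-observation feedback loop described above.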

REMOTE MICROSCOPY

SPONSORING AGENCIES AND ORGANIZATIONS

Datacube, Inc.
Network Systems Corporation
NIH
NSF
Photometrics LTD
State of and University of California
Sun Microsystems Computer Corporation


PERFORMING ORGANIZATIONS
San Diego Supercomputer Center
The Scripps Research Institute
University of California-San Diego
  San Diego Microscopy and Imaging Resource
  Computer Science and Engineering Department

REAL-TIME RADIO TELESCOPE OBSERVATORY

SPONSORING AGENCIES AND ORGANIZATIONS

NSF
State of and University of California
State of and University of Illinois
State of and University of Maryland

PERFORMING ORGANIZATIONS
National Center for Supercomputing Applications
University of California-Berkeley
University of Illinois-Champaign-Urbana
University of Maryland




Case Study 3

Design and Simulation of Aerospace Vehicles

Airflow is simulated over and past the wing of a high performance aircraft that is using vectored thrust while descending to a few feet above the ground (in ground effect).

Improved design and simulation of advanced aerospace vehicles is a primary goal of the NASA HPCC Program. Massively parallel processing and related high performance computing technologies are enabling new simulation and optimization methodologies for flight vehicle design. Those methodologies combine multiple physical disciplines and integrate vehicle components such as the airframe, propulsion systems, and controls. Improved aircraft design resulting from high performance computing promises to give the U.S. aerospace industry a critical competitive edge. Early and continuous coordination between NASA research centers, other Federal agencies, academia, and industry is crucial to the success of this endeavor.

Scientists at NASA's Ames Research Center have developed computer simulations of air flow past a delta wing at takeoff and landing that will help design more efficient and cost-effective aircraft. By porting a single discipline computational fluid dynamics code to a number of scalable parallel computers, the scientists simulated applications involving moderately complex geometries. For example, a simple powered lift vehicle that includes a wing and two lifting jets is shown on the previous page. Navier-Stokes computations were performed to simulate flow past a delta wing with thrust reverser jets in a ground effect environment (takeoff and landing). These preliminary simulations will provide insight into the computational interaction of thrust elements, and upwash and reingestion of hot air and gases, leading to significant improvements in aerodynamic performance.

Improved design processes for advanced aircraft and spacecraft through the use of advanced computational fluid dynamics and structural analyses are also the subject of research by scientists at Langley Research Center. Scientists are focusing on the design and modeling of a High Speed Civil Transport (HSCT). One of the greatest challenges in modeling the fluid dynamics of these vehicles is developing mathematical equations that accurately simulate turbulent airflow. Turbulent flow simulations currently used lose accuracy at high speeds. Development of an optimal airframe design for a HSCT involves thousands of design iterations, each requiring a large amount of computing time to recalculate how changes in one part of the design affect the rest of the structure. Langley scientists have developed a method to streamline this part of the overall process and reduce the amount of computing time needed for each iteration.

Airspeed contours around a three-dimensional aircraft traveling at Mach 0.77 are revealed by using an advanced simulation method (lower figure). Blue areas represent slower moving air, yellow and red the fastest moving air. Flow calculations for this complex geometry were based on the unstructured mesh shown in the upper figure.

Airflow through several components of a jet engine is simulated on a cluster of IBM RISC 6000 workstations.

Researchers at NASA Lewis Research Center are developing a prototype Numerical Propulsion System Simulator (NPSS) that will allow dynamic numerical and visual simulation of engine components. The prototype is intended to simplify the process of dynamically connecting engine component software across various machine architectures. This will allow industry to design, engineer, and fabricate tomorrow's propulsion systems. This multidisciplinary research, coupled with the improved performance offered by massively parallel computers, will greatly enhance the ability of aircraft manufacturers to analyze different design options rapidly and then efficiently produce vehicle propulsion systems with optimal performance and reduced design cycle costs.



Case Study 4

High Performance Life Science: From Molecules to MRI

Computer modeling of the Purkinje neuron helps explain its branching network of nearly 200,000 connections to other brain cells.

High performance computing and communications is providing valuable insights into the mysteries of human health and disease. These sophisticated new techniques allow for unprecedented study and understanding of the remarkable self-assembly of atoms into molecules, molecules into cells, cells into tissues, and organ systems into the complete organisms that constitute humanity itself. Important new medical knowledge and new diagnostic tools have resulted from the HPCC Program.


At the molecular level, supercomputing and scientific visualization have provided valuable insights into the molecular basis of asthma. Corporate scientists at Eli Lilly collaborated with visualization specialists at the National Center for Supercomputing Applications to model the three-dimensional molecular motions of leukotrienes, naturally occurring "messenger molecules" that induce the lungs to become stiff and inflamed. With the goal of designing new medicines to block the leukotriene receptor in the lungs, these researchers used supercomputers to calculate the position of each atom in several related leukotriene molecules; three-dimensional scientific visualization then displayed the subtle differences in atom positioning that are crucial to biological activity. More effective medications with fewer side effects are the goal of this high performance molecular dynamics computing.

Molecules assemble themselves into cells and tissues, and at this level also computerized analysis has expanded our knowledge. One of the most complicated cells in the brain is a neuron called the Purkinje cell; each one of these cells may have up to 200,000 electrical connections with other brain cells. Researchers at the California Institute of Technology used an experimental massively parallel computer to model the response of Purkinje cells to chemical and electrical stimuli. Their model emulates the observed behavior of the living cell, and provides evidence that although the connections to other neurons are voluminous, there is an elegant simplicity in the spatial arrangement of these brain interconnections.

Cells and tissues organize into body systems, which can also be better understood using HPCC technologies. Magnetic Resonance Imaging (MRI) is a widely used method for imaging internal structure of the human body that produces two-dimensional cross section views. Researchers at Sandia National Laboratories, in collaboration with Baylor University Medical Center in Dallas and the Department of Veterans Affairs Medical Center in Albuquerque, have used massively parallel supercomputers to turn two-dimensional MRI images into three-dimensional views, and revealed previously hidden information. Communicating their data via the Internet, these groups compared three-dimensional MRI to standard X-ray mammography for the detection of early breast cancer, and found that the new technique revealed early tumors that could not be detected by mammography.

Massively parallel supercomputing turns two-dimensional magnetic resonance images into three-dimensional maps that can identify early breast cancers.

The molecular dynamics of three different leukotriene (LTC4, LTD4, and LTE4) molecules portrayed with a new supercomputing visualization technique called "ghosting." Dense shadows represent the configurations in which the molecules spend a greater amount of time, while the range of molecular movement is shown by the extent of the dot patterns.
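The core idea of assembling two-dimensional cross sections into an explorable three-dimensional volume can be sketched with toy data. This is a schematic illustration only, not the Sandia reconstruction code:

```python
# Build fake "axial slices": 4x4 grids with a bright feature that moves
# one cell per slice (purely synthetic data for illustration).
volume = []
for z in range(4):
    img = [[0] * 4 for _ in range(4)]
    img[z][z] = 9            # hypothetical feature at (row=z, col=z)
    volume.append(img)       # volume is indexed volume[z][row][col]

def coronal(volume, row):
    """Re-slice the stacked volume at a fixed row, gathering one row
    from every axial slice: a view no single 2-D image contained."""
    return [axial[row] for axial in volume]

view = coronal(volume, 2)
```

In practice the slices are large MRI cross sections and the re-slicing, interpolation, and rendering over the whole volume are what demand massively parallel hardware, but the data reorganization is the same.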

SPONSORING AGENCIES AND PROGRAMS
Department of Veterans Affairs
DOE
Eli Lilly & Co.
NIH
NSF

PERFORMING ORGANIZATIONS
Baylor University Medical Center
California Institute of Technology
Department of Veterans Affairs Medical Center, Albuquerque
Eli Lilly & Co.
National Center for Supercomputing Applications
Sandia National Laboratories

Page 144: 94DOCUMENT RESUME ED 368 330 IR 016 567 TITLE High Performance Computing and Communications: Toward a National Information Infrastructure. INSTITUTION Federal Coordinating Council

Case Study 5

Energy Resource Recovery


A numerically intense geological database that describes the physical and chemical conditions of the Gulf Coast Basin is assisting the scientists of the Global Basins Research Network to understand the processes that control the movement of oil and gas in sedimentary basins. By visualizing the data, scientists are able to observe sedimentary conditions that would otherwise take much longer to investigate. This image depicts the various structures beneath the ocean floor in an area off the shore of Louisiana. Two faults are shown, in addition to three geologic layers: (from bottom) top-of-salt, shale, and sand and shale.

High performance computing plays an important role in the recovery of non-renewable energy resources. About two-thirds of the U.S. energy supply comes from oil and gas, and although much oil continues to be imported, there are profound advantages to domestic production: for example, ensuring a stable supply and price to the consumer. While most large U.S. oil reservoirs have already been discovered, two-thirds of the oil still remains in old fields after conventional recovery technology has been applied. Enhanced oil recovery (EOR) using advanced technologies has the potential to recover another 100 billion barrels, worth about two trillion dollars at today's prices.

Domestic oil and gas producers are working with scientists from other sectors to optimize recovery methods for existing petroleum reservoirs. Petroleum industry scientists use reservoir simulations run on high performance computers to determine the production potential of reservoirs and the most efficient methods to extract petroleum resources before investment in field operations is made. The simulations model large complex field problems quickly, accurately, and efficiently, leading ultimately to reduced recovery costs. Research projects combining the expertise of researchers from universities, industry, and government focus on ways to improve reservoir simulation methods for flow through porous media, pore-scale multiphase flow, and hydrocarbon migration. As a technological spin-off, EOR simulation can be applied to remediation strategies for underground, contaminated sites.

Parallel Algorithms for Modeling Flow in Porous Media

New parallel algorithms for computational models describing the flow of oil and other organic chemicals in porous media are an important technical development. Models employ stochastic and conditional simulation and take into account numerous variables associated with reservoir behavior caused by complex physical and chemical phenomena. Parallel algorithms help researchers utilize the computational power of massively parallel computers to perform the simulations. Researchers are also writing parallel versions of two widely used reservoir simulation codes, UTCHEM and UTCOMP. UTCHEM has been applied to study both surfactant EOR and surfactant remediation of groundwater contaminated by dense nonaqueous phase liquids found at weapons sites and other locations.
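The report does not reproduce any of this software, but the stochastic element of reservoir modeling can be illustrated in miniature: draw a random (log-normal) permeability field and solve steady single-phase Darcy flow through it. Everything below (grid size, field statistics, boundary pressures) is an illustrative assumption; UTCHEM and UTCOMP solve far richer multiphase, multicomponent systems.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
# Log-normal permeability field: a common stochastic model of heterogeneous rock
k = np.exp(rng.normal(0.0, 1.0, n))

# Steady incompressible 1-D Darcy flow between fixed pressures p=1 (inlet) and
# p=0 (outlet). Interface transmissibility is the harmonic mean of the two
# neighboring cell permeabilities, as in standard finite-volume discretizations.
T = 2.0 / (1.0 / k[:-1] + 1.0 / k[1:])   # n-1 interior interfaces
Tb0 = 2.0 * k[0]                         # half-cell connection to inlet
Tb1 = 2.0 * k[-1]                        # half-cell connection to outlet

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    left = Tb0 if i == 0 else T[i - 1]
    right = Tb1 if i == n - 1 else T[i]
    A[i, i] = left + right
    if i > 0:
        A[i, i - 1] = -left
    if i < n - 1:
        A[i, i + 1] = -right
b[0] = Tb0 * 1.0                         # inlet pressure = 1, outlet = 0
p = np.linalg.solve(A, b)
flux = Tb0 * (1.0 - p[0])                # steady flow rate through the medium
```

Because the field is random, each realization gives a different pressure profile and flow rate; conditional simulation averages many such realizations against whatever field data are available.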

Pore-scale Multiphase Flow Modeling

To better understand and predict the interactions between fluids and rock, field simulation programs must extrapolate results from laboratory samples. To assist in interpreting experimental results, researchers are developing a basic theoretical framework for multiphase flow systems that incorporates current knowledge of displacement processes and the understanding of oil and rock chemistry. One technique in this framework, the Lattice-Boltzmann method, is essential for solving the Navier-Stokes equations, the basic equations of fluid flow. These equations are the basis of the mathematical models that simulate the movement of a multiphase system of organic materials through rock pores.
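To give a flavor of the Lattice-Boltzmann idea, the following minimal single-phase D2Q9 sketch relaxes a small density bump on a periodic grid. The lattice weights and velocities are the standard D2Q9 values; the grid size, relaxation time, and initial condition are illustrative assumptions, not the researchers' actual multiphase framework.

```python
import numpy as np

def lbm_step(f, w, c, tau):
    """One BGK collision + streaming step of the D2Q9 Lattice-Boltzmann method."""
    rho = f.sum(axis=0)                        # macroscopic density
    u = np.einsum('ai,axy->ixy', c, f) / rho   # macroscopic velocity (2, nx, ny)
    # Equilibrium distribution (standard low-Mach-number expansion)
    cu = np.einsum('ai,ixy->axy', c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)
    f = f + (feq - f) / tau                    # relax toward equilibrium
    # Streaming: shift each population along its lattice velocity (periodic)
    for a in range(9):
        f[a] = np.roll(np.roll(f[a], int(c[a, 0]), axis=0), int(c[a, 1]), axis=1)
    return f

# D2Q9 lattice: weights and discrete velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

nx = ny = 32
rho0 = np.ones((nx, ny))
rho0[nx // 2, ny // 2] = 1.5                   # small density bump to relax
f = w[:, None, None] * rho0                    # initialize at rest
for _ in range(100):
    f = lbm_step(f, w, c, tau=0.8)
```

The scheme recovers the Navier-Stokes equations in the macroscopic limit, and because each site updates from local information only, it maps naturally onto massively parallel machines; multiphase variants add interaction forces between fluid components.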

Imaging of Present-day Hydrocarbon Migration

Scientists are developing computer visualizations to chart fluid movements below the ocean floor off the Louisiana coast. Observations in this area are providing new insights into how over-pressured oil and gas move in sedimentary basins. Because the study area encompasses the largest oil field in the continental United States, these insights may have important economic consequences. Researchers are developing visual models that reflect changes in temperature, pressure, and strata of sand and shale to allow qualitative evaluation of the rate of fluid flow required to produce these changes.

Injectivity History During Carbon Dioxide Flooding of a West Texas Field Test (injectivity plotted against pore volumes injected)

This figure shows a comparison between the computed injectivity of carbon dioxide and field data from an oil reservoir in Texas operated by Texaco, Inc. The University of Texas compositional reservoir simulator UTCOMP was used to make this calculation. The capability to predict the rate at which carbon dioxide can be injected into these old wells to increase the oil production from the field is crucial for the economic success of such enhanced oil recovery operations. This comparison shows close agreement with the field data from a geologically complex San Andres formation.

SPONSORING AGENCIES AND ORGANIZATIONS

ARPA
Cornell Theory Center
DOE
EPA
IBM Corporation
New York State
NIH
NSF
41 Affiliates at Rice University
State of Texas
University of Texas at Austin


PERFORMING ORGANIZATIONS
AGIP
Chevron
Computational Mechanics Corporation
Conoco
Cornell University
Elf Aquitaine
Exxon
perMedia Corporation
IBM Corporation
Lamont-Doherty Geological Observatory
Landmark Graphics
Lawrence Livermore National Laboratory
Los Alamos National Laboratory
Louisiana State University
Michigan Technological University
Mobil Exploration and Production Technology and Engineering Center
Pennzoil
Rice University
Texaco
University of Houston
University of Texas
Woods Hole Oceanographic Institution


Case Study 6

Groundwater Remediation


Map of Oak Ridge Waste Storage Facility

Modeling the Effect of Capping WAG 6 Waste Burial Trenches

Contamination of groundwater by potentially toxic organic liquids is one of the more serious environmental problems facing the Nation. The release of such liquids has been widespread, originating from sources ranging from gasoline service stations to large industrial facilities. Once trapped in the subsurface, these liquids serve as long-term sources of aquifer contamination. Remediation schemes include optimal liquid recovery pumping schemes, solvent and surfactant flushing, and steam injection.

Hierarchical models of groundwater flow and associated chemical transport are needed at the local, basin, and regional scales to assess waste transport and remediation schemes. Current models are limited computationally in two ways: first, models can only include a limited number of physical and chemical processes before computing runtimes become prohibitive; second, models lack adequate field data to characterize geologic and chemical factors that influence contaminant behavior. High performance computing systems are needed both to model complex transport processes and to overcome data limitations by using computationally intensive parameter estimation techniques.

Computational Grid for Waste Area Grouping 6

Computed Water Table Elevations after Capping

The Partnership in Computational Science (PICS) consortium project funded by the DOE HPCC program is exploring new approaches to modeling groundwater flow at waste sites in extremely complex hydrogeologic settings. The consortium consists of members from Brookhaven National Lab, Oak Ridge National Lab, Rice University, SUNY-Stony Brook, Texas A&M, and the University of South Carolina. A map of the site most extensively studied (WAG 6) is given in Figure A. Researchers at ORNL have adapted an existing finite element code to run on an Intel scalable parallel distributed-memory system. By using a preconditioned conjugate gradient solver, users were able to distribute portions of the model grid to different processors and greatly reduce solution time. Using a 40,000 node grid (Figure B), they are obtaining anisotropy factors and boundary conditions that can be used to model smaller areas and test the effects of remediation schemes on models of the waste areas. One such remediation scheme is to cover areas with a water-impermeable membrane (cap) to eliminate surface recharge (Figure C). A calculation of the water table following capping is given in Figure D. Such results point to the specific piezometric wells for which experimental indications of capping effects should be sought.
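The ORNL code itself is not given in the report, but the heart of the approach, a preconditioned conjugate gradient solve of the symmetric positive-definite system arising from the finite element discretization, can be sketched. The model matrix below (a 1-D diffusion stencil), the Jacobi preconditioner, and the tolerance are all illustrative stand-ins for the real 40,000-node groundwater system.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv * r                 # apply the diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length along the search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # new A-conjugate search direction
        rz = rz_new
    return x

# Model problem: 1-D diffusion (tridiagonal Laplacian), a stand-in for the
# finite element groundwater system described in the text
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of the diagonal
x = pcg(A, b, M_inv)
```

The parallel payoff comes from the structure of the iteration: the dominant costs are a sparse matrix-vector product and dot products, both of which decompose naturally when portions of the grid are distributed across processors.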

Pacific Northwest Laboratory is also exploring the use of scalable massively parallel quantum chemistry algorithms to improve methods for redesigning enzymes to better degrade pollutants, for extracting contaminants from soils, and for burning halohydrocarbons.

Researchers at the EPA's Robert S. Kerr Environmental Research Laboratory and collaborators at various universities have focused efforts on developing process understanding using complementary numerical models and laboratory studies. Implementation of numerical models has followed approaches originally developed in the petroleum industry and advanced multiprocessor techniques. A large scale physical model is currently being used to evaluate the numerical model. Release of an organic liquid is planned for the physical model, followed by testing of remedial technologies. A remediation approach involving surfactant flooding, followed by vapor extraction above the water table, is being designed by bench scale laboratory studies and adaptation of a sophisticated numerical model developed by the University of Texas at Austin. Other remediation models are also under development. Joint field and modeling studies in cooperation with the USDA are underway to assess the impact of agricultural chemicals on water resources in the Midwest.

Groundwater models will not eliminate the need for adequate field data but will continue to assist scientists in understanding these complex systems, thus contributing to reducing the risks to public water supplies.

SPONSORING AGENCIES AND ORGANIZATIONS

DOE
EPA

PERFORMING ORGANIZATIONS
EPA's Robert S. Kerr Environmental Research Laboratory
Pacific Northwest Laboratory
Partnership in Computational Science (PICS) Consortium:
Brookhaven National Lab
Oak Ridge National Lab
Rice University
SUNY-Stony Brook
Texas A&M
University of South Carolina
University of Texas at Austin


Case Study 7

Improving Environmental Decision Making


Concentration of sediments in Lake Erie resulting from a large storm. Concentrations are color coded, with the highest shown in red. Winds from storms produce waves, which in turn create bottom currents and turbulence that can cause contaminated bottom sediments to be resuspended in the water column. The shallower the water, the stronger the effects of the wave action. Consequently, sediment concentrations tend to be largest in shallow sections.

High performance computing is crucial to improve decision making on environmental issues. Improved numerical computer models provide more accurate predictions of pollution so that our leaders will be in a better position to make policy decisions involving the environment and economic growth. The accuracy of these models depends upon descriptions of physical, chemical, and biological processes that adequately incorporate important causal interactions, nonlinearities, synergies, and feedbacks, as well as capture nonintuitive interactions. The linkage of single media models and the development of megamodels are essential for more reliable management of the environment.

Three areas of environmental decision support have been established as part of EPA's HPCC Program: air pollution, water pollution, and the combined effects of air and water pollution on coastal estuaries that lead to de-oxygenation.

Air Pollution

The Clean Air Act amendments mandate State use of air quality models to demonstrate the effectiveness of proposed approaches for reducing air pollution in highly polluted areas. Researchers at EPA and the North Carolina Supercomputing Center in Research Triangle Park, NC are developing a decision support system to satisfy a wide range of related air quality modeling, information, and analysis needs. New computational capabilities enable improved air quality models to address multipollutant interactions among acidifying species, oxidants, and aerosols, and interactions with meteorology. These complex, three-dimensional, time-dependent models are used to evaluate alternative pollution control strategies where interactions at different scales, from urban to regional, are critical and require new approaches made possible only through high performance computing. The North Carolina Department of Environment, Health and Natural Resources is testing an early prototype of an urban airshed modeling system.

Water Pollution

Contaminated bottom sediments and the associated decrease in water quality are a major problem in hundreds of rivers, lakes, harbors, estuaries, and near-shore areas of oceans. In order to remediate this problem and evaluate possible management alternatives for the disposition of these toxic sediments, EPA-supported researchers at the University of California-Santa Barbara are developing computer intensive three-dimensional, time-dependent models of the hydrodynamics, particle transport, and sediment bed dynamics coupled with meteorology. Significant biochemical reactions that affect the fate of contaminants are also included in the models. Work currently focuses on the Great Lakes region of the United States. Figure 1 shows that redistribution of toxic sediments in Lake Erie due to a large storm is highly dependent on wind direction and speed, and water depth.

Linked Air and Water Pollution

Interactions between pollutant media, such as air and water, often significantly affect the environment. Air and water pollution both contribute to an overabundance of nutrients in coastal estuaries, causing lack of oxygen and eventual decline of commercial productivity of the estuaries. Researchers at EPA's Environmental Research Laboratories in North Carolina and Georgia, and the Chesapeake Bay Program Office in Maryland, are developing loose linkages between air and water quality models to study the impact of nitrogen deposition by air and water sources on the decline of coastal estuaries of the Chesapeake Bay.

In some cases, only minor feedbacks exist between the different media. Loosely linking current single-medium models retains the known features and complexity of each model. This linkage provides additional information for more effective policy determination by individual pollutant media decision making communities.

SPONSORING AGENCIES AND ORGANIZATIONS

EPA

PERFORMING ORGANIZATIONS
EPA's Chesapeake Bay Program Office
EPA's Environmental Research Laboratories
North Carolina Department of Environment, Health and Natural Resources
North Carolina Supercomputing Center
University of California-Santa Barbara


Case Study 8

Galaxy Formation


Visualization of a small portion of the computational results of an astrophysical simulation of 8.8 million gravitating bodies run on 512 nodes of the Concurrent Supercomputing Consortium's Intel Touchstone Delta System. The image shows about 137,000 gravitating bodies that form a single halo about 400 megaparsecs across. Color represents projected density along the line of sight, i.e., different colors correspond to pixels that "contain" different numbers of bodies. Densities are much higher at the center of the object. The entire simulation contained about 700 halos like the one depicted here.

The process by which galaxies form is among the most important unsolved problems in physics. There is a wealth of observational data; modern observations span the electromagnetic spectrum from radio frequency to gamma rays. However, a firm theoretical understanding is yet to be acquired. Questions as simple as "Why are there two families of galaxies, spiral and elliptical?" remain unanswered.

The first step in understanding how galaxies and stars form is to understand the environment in which their formation occurs. Scientists infer that approximately 90 percent of the mass of the Universe is in the form of dark matter: that which can be observed only indirectly, through its gravitational influence on observable matter such as stars, gas clouds, and nebulae. Simulations are used to study the shapes and dynamics of dark matter halos that are known to surround observed galaxies.

One obstacle to the development of efficient codes to simulate the galaxy formation process is the sheer size of the problem. To obtain useful information, the software must simulate the interactions of millions of bodies. Improved algorithms for N-body calculations have been developed and adapted to this problem. Even with the new algorithms, however, conventional supercomputers lack the computational power to perform simulations in a reasonable period of time at an acceptable level of resolution. A few hundred thousand bodies is the most that conventional systems can model at a time, about two orders of magnitude too few to obtain new scientific results. Hence, scientists must turn to parallel computers, and very large ones at that, to perform the computations.
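For a sense of scale, a direct-summation gravitational N-body step can be sketched as below. Its O(N^2) cost per step is exactly what the improved algorithms avoid (tree-style methods such as Barnes-Hut reduce the work to roughly O(N log N); the report does not name the specific algorithm used). The particle count, softening length, and timestep here are illustrative only.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, eps=1e-2):
    """One leapfrog-style step of a direct-summation N-body model (G = 1).

    Direct summation evaluates all N^2 pairwise forces, which is why
    multimillion-body runs require cheaper hierarchical algorithms.
    """
    d = pos[None, :, :] - pos[:, None, :]          # separation vectors r_j - r_i
    r2 = (d ** 2).sum(axis=-1) + eps ** 2          # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-interaction
    acc = (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(0)
n = 200                                            # tiny stand-in for 8.8 million
pos = rng.standard_normal((n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
for _ in range(10):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

Because the pairwise forces are equal and opposite, total momentum is conserved to roundoff, a useful sanity check on any N-body integrator.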

Parallel versions of simulation code are generally more difficult to develop than those for sequential computing systems because of the inherent complexity of parallel systems. In this particular case, code development was further complicated by the requirement that internal data organization in the program adapt as the computation progressed to reflect changes in the structure of the evolving universe.

Concurrent Supercomputing Consortium (CSCC) scientists overcame these programming difficulties and recently developed a parallel simulation code for the 512-node Intel Touchstone Delta System, which achieved speedups in excess of 400 over the single processor speed.

In March 1992, researchers ran a simulation with almost 8.8 million bodies for 780 timesteps on 512 processors of the Touchstone Delta system. The simulation was of a spherical region of space of diameter 10 megaparsecs (about 30 million light years), a region large enough to contain several hundred typical galaxies. The simulation ran continuously for 16.7 hours and carried out 3.24 x 10^14 floating point operations, for a sustained rate of 5.4 gigaflops (5.4 x 10^9 floating point operations per second). Algorithm improvements accounted for nearly a 3,000-fold improvement in execution time over traditional calculation methods. Subsequently, the research team ran two 17.15 million body simulations using the Cold Dark Matter model of the Universe. These simulations, the largest N-body simulations ever run, took into account recently acquired microwave background measurements gathered by the COBE satellite.

SPONSORING AGENCIES AND ORGANIZATIONS

Concurrent Supercomputing Consortium
DOE
NASA
NSF

PERFORMING ORGANIZATIONS
California Institute of Technology
Center for Research on Parallel Computation
Los Alamos National Laboratory
University of California-Santa Barbara


Case Study 9

Chaos Research and Applications

Simple visual example of the patterns that Chaos finds in complexity.

"Chaos" is used here as a term that refers to a variety of new methods for dealing with nonlinear interdependent systems. This class of complex systems, noted for sensitivity to small effects and seemingly erratic behavior, has been difficult to comprehend using traditional methods and conceptualizations. The methods of chaos have proven to be remarkably effective in revealing new forms of simplicity hidden within complexity. This in turn has led to new understandings of how such systems operate. Nonlinear systems phenomena are present in many fields, from physics, engineering, and meteorology to medicine, psychology, and economics. Consequently, there are many opportunities for application of the methods of chaos.

Computing capability makes the new approaches possible. The methods of chaos focus on the geometry of behavior of a system as a whole and are computationally intensive. Underlying patterns of behavior are best revealed through the use of graphical representation of data. It is only with the availability of high speed processing and improved technologies for graphical display of data that it has become possible to explore a range of behavior of nonlinear systems sufficiently large to permit observation of patterns within complexity. The need for geometric approaches to such problems has been known for many years; it was first demonstrated by Poincare in 1892 in his proof that the three-body problem could not be solved by linear or simple curve approximation methods. However, the computational complexity of the problems prevented Poincare's methods from being widely applied until the advent of high performance computing.
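Sensitive dependence on initial conditions, the hallmark of such nonlinear systems, is easy to demonstrate numerically. The sketch below iterates the logistic map from two starting points that differ by one part in a billion and records how fast the trajectories diverge (the map and its parameter are a standard textbook example, not one drawn from the report).

```python
# Iterate the logistic map x_{n+1} = r * x * (1 - x), a classic chaotic system
# at r = 4, from two nearly identical starting points.

def logistic_orbit(x0, r=4.0, steps=50):
    """Return the orbit of the logistic map started at x0."""
    orbit = [x0]
    for _ in range(steps):
        x0 = r * x0 * (1 - x0)
        orbit.append(x0)
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)       # perturb the start by one part in a billion
gap = [abs(x - y) for x, y in zip(a, b)]
# Within a few dozen iterations the billionth-part perturbation grows to order one.
```

The initially negligible gap roughly doubles each iteration, which is why long-range point predictions fail for such systems while the geometry of the behavior as a whole remains analyzable.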

As a field, chaos is less than 30 years old, but its methods are now being applied to problems in many fields of science and engineering. For example, in physics chaos has been used to refine the understanding of planetary orbits, to reconceptualize quantum level processes, and to forecast the intensity of solar activity. In engineering, chaos has been used in the building of better digital filters, the control of sensitive mechanisms such as ink-jet printers and lasers, and to model the structural dynamics in such structures as buckling columns. In medicine it has been used to study cardiac arrhythmias, the efficiency of lung operations, EEG patterns in epilepsy, and patterns of disease communication. In psychology it has been used to study mood fluctuations, the operation of the olfactory lobe during perception, and patterns of innovation in organizations. In economics it is being used to find patterns and develop new types of econometric models for everything from the stock market to variations in cotton prices.

As an example of the multi-applicability of chaos research products, a research group at the University of Maryland is developing control methods for highly sensitive chaotic processes. The control methods are to be used in such diverse applications as laser control, arrhythmical cardiac tissue, buckling magnetoelastic ribbon, and, in conjunction with Oak Ridge National Laboratory, fluidized bed devices used in chemical and energy applications.

Because chaos is new, growing, and deeply interdisciplinary, it benefits greatly from the emerging NREN services for access to remote systems, for sustaining collaborative research activities, and for initiating and maintaining scientific dialogue.

SPONSORING AGENCIES AND ORGANIZATIONS

DOE

PERFORMING ORGANIZATIONS
Oak Ridge National Laboratory
Triangle Center for the Study of Complex Systems
University of Maryland at College Park


Case Study 10

Virtual Reality Technology


The photograph shows an investigator, with a head-mounted display and a force-feedback manipulator controlling the tip of a scanning tunneling microscope (STM) probe, exploring the surface of a sample of material. The image projected on the wall indicates the view that the investigator sees and changes as his head position and orientation change. The region of the surface shown is 25 Angstroms on a side.

Virtual reality (VR) is the human experience of perceiving and interacting, through sensors and effectors, with a synthetic (simulated) environment containing simulated objects as if it were real. It is supported by advances in simulation technology that allow linking human capabilities and computational resources, sensor systems, and robotic devices for real-time tasks. VR technology can be applied to many tasks that would be more difficult to do by other methods.

In this closeup of the sample on the preceding page, atomic-level structures of carbon rings at the surface are seen. A button permits energy to be supplied on demand through the STM tip to modify the surface by breaking bonds, displacing various structures, and enabling physical and chemical reactions.

The view of the simulation that a person sees is an encompassing "immersion" perspective that appears to be from a position inside the simulation itself. By changing the view with head motion and orientation, VR "engages" a human's visual perception, vestibular system, sense of balance, cognition, and motor reactions to such an extent that the immersion scenario is experienced as real.

With VR technology, a human and a computer operate together as a combined system, one that is far more capable than either taken individually when performing a large variety of tasks. For example, the environment is particularly well-suited to support reaction to unanticipated scenarios, ones that can exploit the corrective judgment of a person or of skilled teams of experts.

This enabling technology can be used to improve productivity in many important areas. There are numerous applications in the domains of health care, education and lifelong learning, manufacturing, and other areas where this technology shows great promise. Early results have shown increased productivity and a dramatic reduction of resource requirements in many instances. Examples of current use include:

- Manipulation of molecules for development of nanotechnology devices and chemical systems.

- Shared surgical interventions.

- Exploration of networked databases and digital libraries for learning and research.

- Modeling, simulation, and analyses.

- Scientific and technical visualization applications.

- Prototyping and planning.

- Training for and monitoring of complex human-computer tasks.

In order to keep pace with real-time interaction, VR technology must be supported by high performance computers, associated software, and high bandwidth network capabilities. VR also requires developing new technologies in displays that update in real-time with head motion; advances in sensory feedback such as force, touch, texture, temperature, and smell; and intelligent models of environments.

With future increases in technological capability, VR holds the promise to provide significant improvements in many new application areas.

SPONSORING AGENCIES AND ORGANIZATIONS

NASA
NIH
NSF

PERFORMING ORGANIZATIONS
University of California-Los Angeles
University of North Carolina


Case Study 11

HPCC and Education


HPCC, science education, and science research are ushering in a new era of learning. Through hands-on activities, state-of-the-art experiments, and novel interactive advanced computing and communications technologies, high school students immerse themselves in educational materials based on cutting-edge, truly modern science research. Closer linkages and shared methods between students and research scientists help attract students of many backgrounds to science as a subject and as a profession.

The major impact of HPCC technologies on education will follow the growing integration of HPCC technologies with inquiry-based science learning in the classroom. Direct collaborations with professional scientists and engineers, carried over high bandwidth networks, allow teachers and students to dynamically integrate new scientific advances and methodologies with their own education. In addition, a growing number of HPCC funded programs such as SuperQuest provide individual students with access to high performance computers for student-directed research projects. The two research projects described below illustrate the usefulness of both approaches. In many cases, these activities lead to science and technology careers for students who previously considered these careers outside of their reach.

Learning Through Collaborative Visualization

The Learning Through Collaborative Visualization testbed explores educational uses of scientific visualization and collaborative software tools through shared computer workspaces and two-way audio/video connections. In this project, students and teachers near Chicago (shown on the previous page) are working on meteorology with atmospheric scientists and graduate students in Champaign-Urbana, and will be joined in 1993 by students and teachers in Michigan. Together, they will study the modeling of unusual weather patterns in their area, utilizing real-time weather data from satellites and other sources, such as clouds, temperature, and moisture. Any current information on weather, such as the impending conditions giving rise to a developing storm, can be accessed using networks and displayed using local visualization tools, by researchers and students alike.

Diffusion Limited Aggregation

Diffusion limited aggregation (DLA) structures arise naturally in fields of science ranging from electrochemical deposition and dendritic solidification to various breakdown phenomena such as dielectric breakdown, viscous fingering, chemical dissolution, and the rapid crystallization of lava. It is not surprising, then, that there is much interest in the computer modeling of DLA and its variants. Using the computer simulation of DLA, the student can discover new structures that are aesthetically beautiful and at the same time have physical meaning. Students model structures ranging from snowflakes to cancer cells. In the illustration, a student models straight DLA and ascertains its potential energy distribution along the perimeter. As a consequence, the student discovers that the most active regions on the cluster are near the tips.

Color-coded electric field in the vicinity of an electrical conductor molded in the shape of a recently discovered DLA fractal object. The high school student performing the research was a finalist in the 1993 Westinghouse Science Talent Search.
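A minimal on-lattice DLA simulation in the spirit of the student projects described above can be sketched as follows. Random walkers are launched outside the cluster and stick to it on contact; the launch and kill radii, particle count, and sticking rule are illustrative choices, not the students' actual code.

```python
import math
import random

def grow_dla(n_particles=150, seed=0):
    """Grow a small on-lattice DLA cluster with sticky random walkers."""
    random.seed(seed)
    cluster = {(0, 0)}            # seed particle at the origin
    r_max = 1                     # Chebyshev radius of the cluster so far

    def launch():
        # Start each walker on a circle safely outside the current cluster.
        ang = random.uniform(0.0, 2.0 * math.pi)
        r = 1.5 * r_max + 6
        return round(r * math.cos(ang)), round(r * math.sin(ang))

    for _ in range(n_particles):
        x, y = launch()
        while True:
            x += random.choice((-1, 0, 1))        # one lattice random-walk step
            y += random.choice((-1, 0, 1))
            if max(abs(x), abs(y)) > 2 * r_max + 20:
                x, y = launch()                   # strayed too far: relaunch
                continue
            # Stick as soon as the walker touches the cluster (8-neighborhood)
            if any((x + dx, y + dy) in cluster
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)):
                cluster.add((x, y))
                r_max = max(r_max, abs(x), abs(y))
                break
    return cluster

cluster = grow_dla()
```

Even this toy version reproduces the qualitative physics in the text: growth concentrates at the tips, because inbound walkers are far more likely to touch protruding arms than to diffuse into the shielded interior.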

SPONSORING AGENCIES AND ORGANIZATIONS

ED
NSF

PERFORMING ORGANIZATIONS
Bellcore
High Schools in Illinois and Michigan
Learning Through Collaborative Visualization
National Center for Supercomputing Applications
Northwestern University, Department of Psychology
San Francisco Exploratorium
University of Illinois at Champaign-Urbana
University of Michigan, Computer Science

Technology-Dissemination Participants
Ameritech
ScienceKit Publishers
Technical Education Research Center and affiliated (national) LabNet teachers

Diffusion Limited Aggregation
Akron High School, Akron, OH
Boston University, Boston, MA


Case Study 12

HPCC Community Resources

Sample screens from the Guide to Available Mathematical Software (GAMS) interface for browsing and selecting software modules and packages.

Making effective use of high performance computers requires considerable expertise in diverse areas such as computer architecture, algorithms, numerical analysis, and applied mathematics. The need for insights in all of these arenas is a serious stumbling block to a researcher whose primary interest is in a particular physical application. An essential key to making high performance computers generally usable in today's fast-paced research environment lies in the development and dissemination of reusable software that encapsulates the expertise of computer scientists and numerical analysts.




A wealth of reusable software for solving recurring mathematical and statistical problems is already emerging. Unfortunately, the volume of this software and its distribution over labyrinthine computer networks has made it quite difficult for the average user to locate appropriate tools.

The Guide to Available Mathematical Software (GAMS) is a centralized software resource maintained for scientists at the National Institute of Standards and Technology (NIST). GAMS functions like a management and retrieval system for a large central software repository. However, instead of maintaining the cataloged software itself, it provides a seamless cross-index to multiple repositories managed by others. The term virtual software repository has been used to describe systems of this type.

The steadily increasing collection of software indexed by GAMS now numbers more than 9,200 software modules in about 60 mathematical and statistical software packages available from four physically distributed software repositories, including the well-known netlib collection of public-domain research software maintained jointly by Oak Ridge National Laboratory, AT&T Bell Laboratories, and the University of Tennessee at Knoxville. Users access GAMS anonymously by logging in to a server system on the Internet. Searches based upon problem classification, keyword, or name may be used to locate appropriate software, after which items such as abstracts, documentation, examples, and source code can be retrieved. Both generic and networked (X11) graphical user interfaces to GAMS have been developed.
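A classification-based search of a virtual repository can be sketched as a filter over a cross-index. The catalog entries, class codes, and module names below are hypothetical illustrations, not records from the actual GAMS index:

```python
# Hypothetical sketch of a GAMS-style virtual repository cross-index.
# Class codes, module names, and repository names are illustrative only.
CATALOG = [
    {"module": "dqag",  "class": "H2a", "repo": "netlib", "summary": "adaptive quadrature"},
    {"module": "sgesv", "class": "D2a", "repo": "netlib", "summary": "solve dense linear system"},
    {"module": "vfftf", "class": "J1a", "repo": "nist",   "summary": "forward FFT of a real sequence"},
]

def search(classification=None, keyword=None, name=None):
    """Return module names matching a class-code prefix, a keyword, or an exact name."""
    hits = []
    for entry in CATALOG:
        if classification and not entry["class"].startswith(classification):
            continue
        if keyword and keyword not in entry["summary"]:
            continue
        if name and entry["module"] != name:
            continue
        hits.append(entry["module"])
    return hits

print(search(keyword="FFT"))  # modules whose summaries mention FFT
```

Because the index stores only metadata plus a pointer to the managing repository, the same lookup works no matter which physical site actually holds the source code.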

The classification-based retrieval technology used in GAMS is not unique to mathematics and statistics. The system itself can be easily reconfigured to foster the reuse of software in any application field.

Future directions for the GAMS system include support of the system as a public Internet resource, development of networked GAMS clients and servers, development of improved classification schemes, and research in alternate search mechanisms, including domain-dependent expert system methodologies.

GAMS is a component of an experimental software exchange being coordinated for the HPCC by the NASA Goddard Space Flight Center. The experiment facilitates the development and evaluation of multiple approaches to the problem of software reuse in pursuit of an open non-proprietary architecture to facilitate the emergence of a national software exchange.

SPONSORING AGENCIES AND ORGANIZATIONS

NASA
NIST

PERFORMING ORGANIZATIONS
AT&T Bell Laboratories
NASA
NIST
Oak Ridge National Laboratory
University of Tennessee - Knoxville



Case Study 13

Process Simulation and Modeling

Results of a computer simulation of a liquid molding process for manufacturing high strength polymer composite automotive body panels. The simulation shows how the liquid polymer is injected into the mold and predicts the flow front position as a function of time. The results of the simulation are used to better understand the design of a particular mold and part and to select polymers and preplaced fiber preforms.

In the future, one of the keys to optimizing new product designs and manufacturing processes will be the ability to model and simulate production methods using advanced computer hardware and software. Today, the development of many products, including those that require castings, forgings, and injection-molded parts, typically requires lengthy and expensive prototyping and experimentation to refine details of the product design, tooling, and manufacturing parameters. The use of computerized process modeling and simulation will eliminate much of this prototyping and dramatically reduce product development times and costs.

The traditional approach of using experimentation and prototyping to refine production processes is indicative of the fact that manufacturing practices have historically been based as much on "art" as on sound scientific understanding of the processes involved. This is changing rapidly as a result of research to achieve better theoretical understanding and predictability of specific processes and materials and their effect on product performance and costs. Improved theoretical frameworks are advancing progress on a wide range of manufacturing applications, processes, and materials, including stamping and forming, machining, powder compaction processes, and the assembly of parts.

Process modeling and simulation is being pursued by a number of organizations, including the National Institute of Standards and Technology (NIST). For example, as part of a cooperative agreement with Ford, Chrysler, and General Motors, NIST is conducting research into the use of polymer composites for structural applications in automobiles. As part of the project, NIST has developed a finite element simulation of the mold filling process used to manufacture high strength body components. In a similar manner, NIST is developing process models and simulations for robotic manufacturing operations and production of powder metal particles.

Much of the ability to apply new theoretical understanding depends on the performance and cost of advanced computing technologies. Process modeling and simulation draws heavily on computationally intensive techniques in such areas as heat transfer and solidification kinetics, coupled fluid flow, finite difference approximations, turbulent viscosity calculations, fracture mechanics, stress analyses, and three-dimensional graphics and visualization techniques. Advanced computing technology directly affects the speed and level of detail that can be incorporated into the models, and therefore the realism and benefits that can be obtained.
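To make the finite difference techniques concrete, here is a minimal sketch of one explicit time step for one-dimensional heat conduction; the grid, coefficient, and boundary treatment are illustrative and are not taken from the NIST models:

```python
def heat_step(u, alpha=0.25):
    """One explicit finite-difference step of the heat equation u_t = u_xx.

    alpha is the dimensionless ratio dt/dx**2, which must be <= 0.5
    for this explicit scheme to be stable. End points are held fixed
    (fixed-temperature boundary conditions).
    """
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i-1] - 2.0 * u[i] + u[i+1])
    return new

# A hot spot in the middle of a cold rod spreads to its neighbors:
u = [0.0, 0.0, 1.0, 0.0, 0.0]
print(heat_step(u))  # [0.0, 0.25, 0.5, 0.25, 0.0]
```

Production process models couple many such discretized equations (flow, solidification, stress) on three-dimensional meshes, which is what drives the demand for high performance hardware.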

SPONSORING AGENCIES AND ORGANIZATIONS

Automotive Composites Consortium
NIST

PERFORMING ORGANIZATIONS
Automotive Composites Consortium
NIST


Case Study 14

Semiconductor Manufacturing for the 21st Century

The results of applying a fully-scalable parallel iterative matrix solver to create a three-dimensional simulation of current flow in a bipolar transistor are shown. The transistor was simulated on the Intel Delta at Caltech using software developed at Stanford University. This simulation is the largest yet in this field and represents the state of the art in three-dimensional device modeling. Such modeling efforts enable process design in the "Virtual Factory."

Semiconductor manufacturing plants today are characterized by high capital costs and low flexibility. They are built to serve one or two generations of technology and usually a narrow product base. New technology development is very expensive and largely empirically done. To be competitive in such an environment requires high capital investment.

In the 1970s, chip designers faced similar investment problems because new designs were largely hand crafted. As a result, few VLSI (Very Large Scale Integration) applications were viewed as being able to justify design costs. The CAD (Computer-Aided Design) revolution changed this situation by empowering a vastly increased number of creative people with the ability to design chips at low economic risk. The CAD revolution also provided first pass success in most cases on new designs.

A current view, widely held, is that a similar revolution using the same general approach can revolutionize manufacturing. The concept is to build a highly flexible computer controlled manufacturing facility; a "Programmable Factory." In parallel with this factory, a suite of simulation tools would be constructed; a "Virtual Factory" capable of emulating all functions of the real factory. Both of these facilities would be controlled and integrated through a Manufacturing Automation Framework (MAF). The comparison of actual results from the real factory with predicted results from the Virtual Factory provides a means for improving models.

One example of work in progress is ongoing at Stanford University supported by ARPA. Here, the overall aim is to implement a basic version of these constituents in a working IC (Integrated Circuit) facility and to demonstrate the power of this approach in developing and applying new technologies. Much of the work in the first two years of this project has focused on developing specific software tools needed for the Virtual Factory such as simulators and design tools; developing the software needed for the Automation Framework including information storage methods and frameworks for linking software tools; and putting into place the hardware and software tools needed to connect the Virtual Factory to the Programmable Factory.

Recent accomplishments include an integrated demonstration of the concept including operational prototypes of many of these tools. More than 20 different projects in the research program were integrated together to show the power of connecting the Virtual and Real Factories to an audience of more than 100 people from industry, government, and universities. Scalable algorithms, developed on a workstation, were used to demonstrate an extremely large three-dimensional device simulation; a major new software tool, SPEEDIE, which simulates thin film etching and deposition steps in IC manufacturing, has been released to industry; and a new multiprocessing reactor designed to perform multiple manufacturing steps in a single machine is being used to test new manufacturing concepts in the real factory.

SPONSORING AGENCIES AND ORGANIZATIONS

ARPA
Digital Equipment Corporation
Semiconductor Research Corporation
Stanford Center for Integrated Systems industrial sponsors
Texas Instruments

PERFORMING ORGANIZATIONS
Stanford University



Case Study 15

Advances Based on Field Programmable Gate Arrays


Splash 2 board, containing Field Programmable Gate Arrays, which attaches to a workstation. It is programmed in the DOD standard VHDL language and can achieve high performance on problems such as DNA and protein sequence comparison.

Since computers were first built in the 1940s and 1950s, the structure of their processors has been fixed at the time of their construction. As a result, even the fastest computers will perform poorly on a computation whose structure is a bad match for that of the processor.


This basic feature of processor architecture is changing with the use of Field Programmable Gate Arrays (FPGAs) as processor elements. The hardware of an FPGA-based computer is reconfigured for each application. In this way, the structure of the processor can be made to match the structure of the computation, and an FPGA-based computer can achieve supercomputer performance on a range of applications at a fraction of the cost. Importantly, the reconfiguration of the processor structure is done not by designing new computer hardware but by programming the FPGA-based computer in a standard high-level language.

The Splash 1 and Splash 2 attached processor systems, which were designed and built at the Supercomputing Research Center (SRC) using FPGAs from Xilinx, Inc., are just this kind of FPGA-based reconfigurable computer. Together with the National Cancer Institute, the SRC is writing software that will perform DNA and protein sequence comparisons and alignment on existing databases at speeds equal to or greater than those of supercomputers. Applications for Splash 2 are written in VHDL (the DOD standard Very High Speed Integrated Circuit Hardware Description Language). On applications running on Splash 2, performance equal to or as much as 100 times that of some conventional supercomputers has been achieved.
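Sequence comparison of this kind is, at its core, a dynamic programming recurrence. The sketch below uses generic edit distance as a stand-in; it is not the SRC/NCI software, only an illustration of the neighbor-dependent recurrence, whose cells depend only on three adjacent cells and therefore pipeline naturally across a linear array of FPGA processing elements:

```python
def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two sequences.

    Each cell depends only on its left, upper, and upper-left neighbors,
    which is what lets the recurrence be pipelined across hardware
    processing elements one anti-diagonal at a time.
    """
    prev = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]                            # cost of deleting i characters
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # match / substitution
        prev = cur
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCA"))
```

Biological scoring schemes replace the unit costs with substitution matrices and gap penalties, but the data-flow structure, and hence the suitability for FPGA pipelines, is the same.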

The CM-2X, built by Thinking Machines, Inc., for the SRC, is a CM-2 in which the Weitek floating point accelerator chips have been replaced by Xilinx FPGAs, which can be programmed with the usual CM-2 languages and utilities. It is common on multiprocessor machines to find such floating point accelerators; the use of FPGAs in their place allows computationally intensive kernels of an application to be executed on FPGA-based hardware specifically configured for the application. On complete applications programmed at the SRC, the CM-2X has achieved three to four times the performance of the CM-2.

Similar approaches are being evaluated for other architectures. For example, the CM-5, also built by Thinking Machines, Inc., has from 32 to 1,024 network-connected nodes, each consisting of a high performance microprocessor and four custom designed vector processors. Embedding FPGAs into its nodes and network offers the potential for producing dramatic improvement on specific applications and, by altering the FPGA programming, observation of internal system states to assist in program tuning and debugging.

SPONSORING AGENCIES AND ORGANIZATIONS

NSA



Case Study 16

High Performance Fortran and Its Environment

In an effort to provide a machine-independent programming interface, a working group of representatives from industry, academia, and government laboratories has defined an informal standard called High Performance Fortran (HPF). HPF consists of language extensions to Fortran 90 that support data-parallel programming by permitting specification of the distribution of data structures across the processors of a scalable parallel computer system. The ideas behind HPF have been taken in large part from earlier research on a language called Fortran D. This work, which was conducted at the Center for Research on Parallel Computation at Rice University and at Syracuse University, was sponsored under the HPCC Program by NSF and ARPA. The Fortran D project designed a scalable language on the basis of analysis of existing parallel applications, prototype compiler implementations, and an extensive benchmark suite that are all available to the HPCC community. The HPF draft standard, which was produced over a single year, promises to dramatically reduce the burden of specifying parallel scientific programs for the new generation of massively parallel computer systems. Four companies have already announced HPF compilers that will be available in 1993. HPF will accelerate the use of parallel machines, as now users can use high level software with the assurance that their applications will be supported by future as well as current systems.
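HPF's distribution directives (such as BLOCK) map array columns to processors by a simple contiguous rule; the compiler then applies an "owner computes" strategy. The following sketch shows that arithmetic with illustrative sizes, and assumes the column count divides evenly by the processor count:

```python
def block_owner(j, n, p):
    """Processor (0-based) that owns column j (1-based) of an n-column
    array distributed BLOCK-wise over p processors; assumes p divides n."""
    return (j - 1) // (n // p)

def block_columns(pid, n, p):
    """Range of columns (1-based, inclusive) held by processor pid."""
    size = n // p
    return (pid * size + 1, (pid + 1) * size)

# 16 columns over 4 processors: processor 0 owns columns 1-4, and so on.
print(block_owner(5, 16, 4))    # 1
print(block_columns(0, 16, 4))  # (1, 4)
```

An HPF compiler generates exactly this kind of index arithmetic, plus the boundary communication, that a programmer would otherwise write by hand in the message-passing style shown later in this case study.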

Since HPF hides many of the details of parallel programming for a specific target machine, the programmer will need assistance in preparing, debugging, and tuning HPF programs. To this end, ARPA is sponsoring a research project at Rice University and the University of Illinois at Champaign-Urbana to design and build a collection of programming tools for HPF, including an intelligent editor to help construct efficient data distributions, a debugger to locate data races and other programmer errors, and a performance analysis and visualization system. This project has a secondary goal of providing a platform for the development of other software tools for HPF. When complete, this software platform will be distributed throughout the computational science community in source form.

The current version of HPF supports data parallel programming for about 50-75 percent of applications but has limited support for "unstructured" or "irregular" problems. Since this class includes many important science and engineering problems, NASA, ARPA, ONR, and NSF are jointly sponsoring a research project at Rice, Syracuse, and the University of Maryland to design and implement extensions to HPF to support this important problem class. This work will affect the next round of HPF standardization, scheduled for calendar year 1994.

SPONSORING AGENCIES AND ORGANIZATIONS

ARPA
NASA
NSF
ONR

PERFORMING ORGANIZATIONS
Center for Research on Parallel Computation
Syracuse University
University of Illinois at Champaign-Urbana
University of Maryland



Jacobi iteration, using HPF:

      REAL X(N,N), Y(N,N)
!HPF$ DISTRIBUTE (*,BLOCK) :: X, Y

      FORALL ( J = 2 : N-1 )
        FORALL ( I = 2 : N-1 )
          X(I,J) = 0.25 * (Y(I-1,J) + Y(I+1,J) + Y(I,J-1) + Y(I,J+1))
        END FORALL
      END FORALL

Jacobi iteration, using message passing:

      REAL X(N,N_OVER_P), Y(N,0:N_OVER_P+1)

      IF (MY_ID .NE. 0) THEN
        SEND( Y(1:N,1), MY_ID-1 )
      END IF
      IF (MY_ID .NE. P-1) THEN
        SEND( Y(1:N,N_OVER_P), MY_ID+1 )
      END IF
      IF (MY_ID .NE. P-1) THEN
        RECEIVE( Y(1:N,N_OVER_P+1), MY_ID+1 )
      END IF
      IF (MY_ID .NE. 0) THEN
        RECEIVE( Y(1:N,0), MY_ID-1 )
      END IF
      LOW = 1
      IF (MY_ID .EQ. 0) LOW = 2
      HIGH = N_OVER_P
      IF (MY_ID .EQ. P-1) HIGH = N_OVER_P-1
      DO J = LOW, HIGH
        DO I = 2, N-1
          X(I,J) = 0.25 * (Y(I-1,J) + Y(I+1,J) + Y(I,J-1) + Y(I,J+1))
        END DO
      END DO

These code segments both perform square mesh Jacobi relaxation on a distributed memory parallel com-puting system.
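Both segments express the same five-point update. As a cross-check, here is a plain serial sketch (mesh size illustrative) of one relaxation sweep:

```python
def jacobi_sweep(y):
    """One Jacobi relaxation sweep on a square mesh: each interior point
    becomes the average of its four neighbors; boundary rows and columns
    are left unchanged, matching the loop bounds in the Fortran versions."""
    n = len(y)
    x = [row[:] for row in y]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            x[i][j] = 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1])
    return x

y = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]   # hot top edge, cold interior
print(jacobi_sweep(y)[1][1])  # 0.25: one hot neighbor out of four
```

The HPF version states only this computation and the data distribution; the message-passing version additionally spells out the ghost-column exchange and the per-processor loop bounds by hand.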




Contributing Writers

The editorial group wishes to thank the following people who contributed to this section.

Ron Boisvert, NIST
Duncan Buell, Supercomputing Research Center, NSA
Linda Callahan, Cornell Theory Center
Y. T. Chien, NSF
Richard Crutcher, University of Illinois - Champaign-Urbana
Eduardo D'Azevedo, Oak Ridge National Laboratory, DOE
Robin Dennis, EPA
Mark Ellisman, San Diego Microscopy and Imaging Resource
Sally Goerner, Triangle Center for the Study of Complex Systems
John Hestenes, Drexel University
Jim Jenkins, NASA
Chuck Koelbel, Rice University
James Lick, University of California - Santa Barbara
Wilbur Lick, University of California - Santa Barbara
Dan Masys, NLM/NIH
Paul Messina, California Institute of Technology
John Meyer, NIST
Joan Novak, EPA
Bob Parker, ARPA
Timi Pauna, Concurrent Supercomputing Consortium
Kathryne Pickens, University of California - Santa Barbara
Fred Phelan, NIST
Cal Ramos, NCO/HPCC, NASA
Ann Redelfs, Center for Research on Parallel Computation
John Riganati, Supercomputing Research Center, NSA
Charles Romine, Oak Ridge National Laboratory, DOE
Bruce Ross, GFDL/NOAA
Nora Sabelli, NSF
John Salmon, Center for Research on Parallel Computation, California Institute of Technology
Paulette Sancken, National Center for Supercomputing Applications
Stephanie Sides, San Diego Supercomputer Center
Kevin Timson, Center for Research on Parallel Computation
John Toole, ARPA
Laura Toran, Oak Ridge National Laboratory, DOE
Robert Voigt, NSF
Michael S. Warren, University of California - Santa Barbara, Los Alamos National Laboratory, DOE
James Weaver, Environmental Research Laboratory, EPA
Gil Weigand, ARPA
Olivia West, Oak Ridge National Laboratory, DOE
Mary Wheeler, Rice University
Patricia N. Williams, NCO/HPCC





Glossary


ACTS
Advanced Communications Technology Satellite, part of a joint NASA-ARPA NREN collaboration that will provide high speed ATM/SONET transmission, and will provide interface and operations experience in mating high speed terrestrial communications systems with high speed satellite communications systems

Algorithm
A procedure designed to solve a problem. Scientific computing programs contain algorithms.

ARPA
Advanced Research Projects Agency, part of DOD (formerly DARPA, Defense Advanced Research Projects Agency)

ASTA
Advanced Software Technology and Algorithms, a component of the HPCC Program

ATDnet
Advanced Technology Demonstrations Network, an ARPA project that will make use of technology developed for and by gigabit testbeds

ATM
Asynchronous Transfer Mode, a new telecommunications technology, also known as cell switching, which is based on 53-byte cells

Backbone Network
A high capacity electronic trunk connecting lower capacity networks, e.g., NSFNET backbone

Bit
Acronym for binary digit

BRHR
Basic Research and Human Resources, a component of the HPCC Program

Byte
A group of adjacent binary digits operated upon as a unit (usually connotes a group of eight bits)

CLNP
Connectionless Network Protocol

Computer Engineering
The creative application of engineering principles and methods to the design and development of hardware and software systems



Computer Science
The systematic study of computing systems and computation. The body of knowledge resulting from this discipline contains theories for understanding computing systems and methods; design methodology, algorithms, and tools; methods for the testing of concepts; methods of analysis and verification; and knowledge representation and implementation.

Computational Science and Engineering
The systematic application of computing systems and computational solution techniques to mathematical models formulated to describe and simulate phenomena of scientific and engineering interest

CPMES
Committee on Physical, Mathematical, and Engineering Sciences

DOC
Department of Commerce

DOD
Department of Defense

DOE
Department of Energy

ED
Department of Education

EPA
Environmental Protection Agency

ESnet
Energy Sciences Network

FCCSET
Federal Coordinating Council for Science, Engineering, and Technology

FDDI
Fiber Distributed Data Interface

Flops
Acronym for floating point operations per second. The term "floating point" refers to the format of numbers that is most commonly used for scientific calculation. Flops is used as a measure of a computing system's speed of performing basic arithmetic operations such as adding, subtracting, multiplying, or dividing two numbers.

FNC
Federal Networking Council

Giga-
10^9 or billions of ... (e.g., gigabits, gigaflops, gigaops)


GOSIP
Government Open Systems Interconnection Profile

Grand Challenge
A fundamental problem in science and engineering, with broad economic and scientific impact, whose solution can be advanced by applying high performance computing techniques and resources

Heterogeneous system
A distributed system that contains more than one kind of computer

High performance computer
At any given time, that class of general-purpose computers that are both faster than their commercial competitors and have sufficient central memory to store the problem sets for which they are designed. Computer memory, throughput, computational rates, and other related computer capabilities contribute to performance. In addition, performance is often dependent on details of algorithms and on data throughput requirements. Consequently, a quantitative measure of computer power in large-scale scientific processing is difficult to formulate and may change as the technology progresses.

High performance computing
Encompasses advanced computing, communications, and information technologies, including scientific workstations, supercomputer systems, high speed networks, special purpose and experimental systems, the new generation of large scale parallel systems, and systems and applications software with all components well integrated and linked over a high speed network

HIPPI
High Performance Parallel Interface

HHS
Department of Health and Human Services

HPCC
High Performance Computing and Communications

HPCCIT
High Performance Computing, Communications, and Information Technology Subcommittee

HPCS
High Performance Computing Systems, a component of the HPCC Program

IINREN
see Interagency Internet

IITA
Information Infrastructure Technology and Applications, a component of the HPCC Program

Interagency Interim NREN
see Interagency Internet


Interagency Internet
The federally funded part of the Internet that is directly used by the HPCC Program. It includes NSFNET, ESnet, and NSI. The Interagency Internet is also the name of an element of the NREN component of the HPCC Program, and is the successor to the Interagency Interim NREN (IINREN).

Internet
The global collection of interconnected, multiprotocol computer networks including Federal, mid-level, private, and international networks. The Interagency Internet is a part of the Internet, and NSFNET, ESnet, and NSI are part of the Interagency Internet.

InterNIC
Internet Network Information Center

Interoperability
The effective interconnection of two or more different computer systems, databases, or networks in order to support distributed computing and/or data exchange

ISDN
Integrated Services Digital Network

Kb/s
Kilobits per second or thousands of bits per second

LAN
Local Area Network

Mb/s
Megabits per second or millions of bits per second

MIMD
Multiple Instruction Multiple Data

MPP
Massively parallel processor

NAPs
Network Access Points, a set of nodes connecting mid-level or regional networks and other service providers to NSFNET

NASA
National Aeronautics and Space Administration

National Challenge
A fundamental application that has broad and direct impact on the Nation's competitiveness and the well-being of its citizens, and that can benefit by the application of HPCC technology and resources


National Information Infrastructure (NII)
The integration of hardware, software, and skills that will make it easy and affordable to connect people with each other, with computers, and with a vast array of services and information resources.

NCO
National Coordination Office for High Performance Computing and Communications

Network
Computer communications technologies that link multiple computers to share information and resources across geographically dispersed locations

NIH
National Institutes of Health, part of HHS

NIST
National Institute of Standards and Technology, part of DOC

NLM
National Library of Medicine, part of NIH

NOAA
National Oceanic and Atmospheric Administration, part of DOC

NREN
National Research and Education Network. The NREN is the realization of an interconnected gigabit computer network system devoted to HPCC. NREN is also a component of the HPCC Program.

NSA
National Security Agency, part of DOD

NSF
National Science Foundation

NSFNET
NSF computer network

NSI
NASA Science Internet

NTIA
National Telecommunications and Information Administration

OC-3
Network transmission speed of 155 Mb/s

OC-12
Network transmission speed of 622 Mb/s



OMB
Office of Management and Budget


Ops
Acronym for operations per second. Ops is used as a measure of the speed of computer systems and components. In this report ops is generally taken to mean the usual integer or floating point operations depending on what functional units are included in a particular system configuration.

OSI
Open Systems Interconnection

OSTP
Office of Science and Technology Policy

Parallel processing
Simultaneous processing by more than one processing unit on a single application

Peta-
10^15 of ... (e.g., petabits)

Port
Transport a computer program from one computer system to another

Portable
Portable computer programs can be run with little or no change on many kinds of computer systems.

Prototype
The original demonstration model of what is expected to be a series of systems. Prototypes are used to prove feasibility, but often are not as efficient or well-designed as later production models.

R&D
Research and development

Scalable
A system is scalable if it can be made to have more (or less) computational power by configuring it with a larger (or smaller) number of processors, amount of memory, interconnection bandwidth, input/output bandwidth, and amount of mass storage.

SIMD
Single Instruction Multiple Data

SONET
Synchronous Optical Network

Tera-
10^12 or trillions of ... (e.g., terabits, teraflops)



T1
Network transmission of a DS1 formatted digital signal at a rate of 1.5 Mb/s

T3
Network transmission of a DS3 formatted digital signal at a rate of 45 Mb/s

vBNS
Very high speed Backbone Network Services

WAN
Wide Area Network




Contacts

NCO (National Coordination Office for High Performance Computing andCommunications)

Bldg. 38A/B1-N30
8600 Rockville Pike
Bethesda, MD 20894
301-402-4100
fax: [email protected]

Donald A. B. Lindberg, M.D.
Director
[email protected]

Patricia R. Carson
Executive Assistant
[email protected]

Sally F. Howe, Ph.D.
Assistant Director for Technical Programs
NIST 1992-93 Commerce Science and Technology (ComSci) Fellow
[email protected]

Charles R. Kalina
Executive Secretary, HPCCIT
[email protected]

Calvin T. Ramos
Program Associate
NASA 1992-93 Professional Development Program

Shannon N. Uncangco
Staff Assistant
[email protected]

Patricia N. Williams
Information Officer
[email protected]

HPCC Internet Server

www.hpcc.gov



InterNIC Information Services

General Atomics
PO Box 85608
San Diego, CA 92186-9784
800-444-4345
619-455-4600
fax: 619-455-3990
[email protected]

ARPA

Lt Col John C. Toole
Acting Director, Computing Systems Technology Office
Advanced Research Projects Agency
3701 N. Fairfax Dr.
Arlington, VA 22203
703-696-2241
[email protected]

Stephen L. Squires
Special Assistant to the Director
Advanced Research Projects Agency
3701 N. Fairfax Dr.
Arlington, VA 22203
703-696-2226
[email protected]

Randy Katz, Ph.D.
Program Manager for Information Infrastructure Technology
Advanced Research Projects Agency
3701 N. Fairfax Dr.
Arlington, VA 22203
703-696-2228
[email protected]



DOE

HPCC Program Coordinator
Office of Scientific Computing
ER-30, GTN
Department of Energy
Washington, DC 20585
301-903-5800
[email protected]

ED

James A. Mitchell, Ph.D.
Special Assistant for Technology
Office of the Assistant Secretary
Department of Education
555 New Jersey Ave., NW #604B
Washington, DC 20208
202-219-2053
[email protected]

EPA

General information:

Joan H. Novak
HPCC Program Manager, MD-80
Environmental Protection Agency
Research Triangle Park, NC 27711
919-541-4545
[email protected]

Robin L. Dennis, Ph.D.
Senior Science Program Manager
Environmental Protection Agency
Research Triangle Park, NC 27711
919-541-2870
[email protected]

Internet use:

Bruce P. Almich
EPA Federal Network Council Representative, MD-90
Environmental Protection Agency
Research Triangle Park, NC 27711
919-541-3306
[email protected]


NASA

Lee B. Holcomb
Director, High Performance Computing and Communications
Office of Aeronautics
National Aeronautics and Space Administration
Code RC
Washington, DC 20546
202-358-2747
[email protected]

Paul H. Smith, Ph.D.
Program Manager, HPCC
National Aeronautics and Space Administration
Code RC
Washington, DC 20546
202-358-4617
[email protected]

Paul E. Hunter
HPCC Program Manager for Basic Research and Education
National Aeronautics and Space Administration
Code RC
Washington, DC 20546
202-358-4618
[email protected]

NIH

NIH HPCC Coordinator and NLM contact:

Daniel R. Masys, M.D.
Director, Lister Hill National Center for Biomedical Communications
National Library of Medicine
Bldg. 38A/7N707
Bethesda, MD 20894
301-496-4441
[email protected]

NCRR program:

Judith L. Vaitukaitis, M.D.
Director, National Center for Research Resources
National Institutes of Health
Bldg. 12A/4007
Bethesda, MD 20892
301-496-5793
jud@nihrr12a.bitnet



Caroline Holloway, Ph.D.
Director, Office of Science Policy and Communication
National Center for Research Resources
National Institutes of Health
Bldg. 12A/4057
Bethesda, MD 20892
301-496-2992
CarolineHo@nihrr12a.bitnet

DCRT program:

David Rodbard, M.D.
Director, Division of Computer Research and Technology
National Institutes of Health
Bldg. 12A/3033
Bethesda, MD 20892
301-496-5703
[email protected]

William L. Risso
Deputy Director, Division of Computer Research and Technology
National Institutes of Health
Bldg. 12A/3033
Bethesda, MD 20892
301-496-5703
[email protected]

Robert L. Martino, Ph.D.
Chief, Computational Bioscience and Engineering Laboratory
Division of Computer Research and Technology
National Institutes of Health
Bldg. 12A/2033
Bethesda, MD 20892
301-496-1111
[email protected]

NCI BSC program:

Jacob V. Maizel, Ph.D.
Biomedical Supercomputer Center
National Cancer Institute
Frederick Cancer Research Facility
301-846-5532
jmaizel@lcrls1



NIST

General information:

James H. Burrows
Director, Computer Systems Laboratory
National Institute of Standards and Technology
Technology Building, Room B154
Gaithersburg, MD 20899
301-975-2822
[email protected]

NOAA

General information:

Thomas N. Pyke, Jr.
Director for High Performance Computing and Communications
National Oceanic and Atmospheric Administration
1315 East-West Highway, Room 15300
Silver Spring, MD 20910
301-713-3573
[email protected]

Directors of high performance computing centers:

Jerry Mahlman, Ph.D.
Director, Geophysical Fluid Dynamics Laboratory
National Oceanic and Atmospheric Administration
P.O. Box 308
Princeton, NJ 08542
609-451-6511
gfdl.gov

Ronald McPherson, Ph.D.
Director, National Meteorological Center
National Oceanic and Atmospheric Administration
5200 Auth Road, Room 101
Camp Springs, MD 20746
301-763-801
[email protected]



Sandy MacDonald, Ph.D.
Director, Forecast Systems Laboratory
National Oceanic and Atmospheric Administration
325 Broadway
Boulder, CO 80303
303-497-6378
[email protected]

Internet Use:

Robert L. Mairs
Information Processing Division
Office of Satellite Data Processing and Distribution
National Oceanic and Atmospheric Administration
FB4 Room 0301
Suitland and Silver Hill Roads
Suitland, MD 20746
301-763-5687
[email protected]

NSA

George Cotter
Chief Scientist
National Security Agency
9800 Savage Rd.
Ft. Meade, MD 20755-6000
301-688-6434
[email protected]

NSF

Merrell Patrick, Ph.D.
HPCC Coordinator
National Science Foundation
Directorate for Computer and Information Science and Engineering
4201 Wilson Blvd., Room 1105
Arlington, VA 22230
703-306-1900
[email protected]



Index

This is an index to major sections and selected key words and phrases that appear in this document. It complements the Table of Contents (pages v-vi), List of Tables (page vii), List of Illustrations, and Glossary (pages 159-165).

Academia, Technology Collaboration with 24
Aerospace Vehicles, Design and Simulation of Case Study 126-128
ARPA (Advanced Research Projects Agency) 59-63
ASTA (Advanced Software Technology and Algorithms) 41-46

BRHR (Basic Research and Human Resources) 55-58
Budgets by HPCC Program Components, Agency

CAMRAQ (Consortium on Advanced Modeling of Regional Air Quality) 45-46
Case Studies 118-158
Chaos Research and Applications Case Study 141-142
Climate Modeling Case Study 120-122

Collaboration with Industry and Academia 24
Components, Overview of the Five HPCC 11
Computational Aerosciences Consortium 85
Conferences and Workshops 18-19
Contacts (at NCO and at participating agencies) 166-172
Coordination Among Agencies 21
CPMES (Committee on Physical, Mathematical, and Engineering Sciences) i, 4
CSCC (Concurrent SuperComputing Consortium) 44

DCRT (Division of Computer Research and Technology) 87-95
DHHS (Department of Health and Human Services)
NIH (National Institutes of Health) 87-95
Digital Anatomy 90, 94
Digital Libraries National Challenge 51-53
DOC (Department of Commerce)
NIST (National Institute of Standards and Technology) 101-105
NOAA (National Oceanic and Atmospheric Administration) 106-110


DOD (Department of Defense)
ARPA (Advanced Research Projects Agency) 59-63
NSA (National Security Agency) 96-100
DOE (Department of Energy) 71-80

ED (Department of Education) 117
Environmental Decision Making, Improving Case Study 137-138
Education and Lifelong Learning National Challenge 53
Education, HPCC and Case Study 145-147
Energy Resource Recovery, Non-Renewable Case Study 132-134
Environmental Assessment Grand Challenges 112



Environmental Grand Challenges 107

EPA (Environmental Protection Agency) 111-116

Evaluation Criteria for the HPCC Program 12-13

Executive Summary 1-5

Fast Packet Services, Procurement of 35-36

FCCSET (Federal Coordinating Council for Science, Engineering, and Technology) i, 4

Field Programmable Gate Arrays, Advances Based on Case Study 154-155

FNC (Federal Networking Council) 15

FNCAC (Federal Networking Council Advisory Committee) 15

Galaxy Formation Case Study 139-140

GAMS: An Example of HPCC Community Resources Case Study 148-149

Gigabit Research Projects 37-38

Gigabit Testbeds 39

Global Atmosphere and Global Ocean Modeling 108

Glossary 159-165

Grand Challenge(s)
Contrasts Between Grand Challenges and National Challenges 49

Definition 161

Grand Challenge Research 43

HPCC Agency Grand Challenge Research Teams 43

Some Grand Challenge Problems 5

Groundwater Remediation Case Study 135-136

Health Care National Challenge 53-54

HHS (Department of Health and Human Services)

NIH (National Institutes of Health) 87-95

High performance computer - Definition 161

High performance computing - Definition 161

High Performance Computing Act of 1991 (P.L. 102-194) 15

High Performance Fortran (HPF) and its Environment Case Study 156-157

HIV Research 92

HPCC Agencies 12

HPCC Contacts (at NCO and at participating agencies) 166-172

HPCC Program Components, Overview of the Five 11

HPCC Program Goals 12

HPCC Program Management 15-19

HPCC Program Strategies 13

HPCC Research Areas 14

HPCCIT (High Performance Computing, Communications, and Information Technology) Subcommittee
Evaluation Criteria for the HPCC Program 12-13

Membership on the HPCCIT Subcommittee 22-23

Roster of Subcommittee Members
HPCS (High Performance Computing Systems) 29-31

IITA (Information Infrastructure Technology and Applications) 47-54

Industry, Technology Collaboration with 24

Instruments, Sharing Remote Case Study 123-125



Interdependencies Among Components 20
Internet
Definition 162
Interagency Internet (Element of NREN Component) 31-33
Interagency Internet Connectivity and Usage 34-35
InterNIC (Internet Network Information Center) 36-37
Major Federal Components of the Interagency Internet 33

"Knowbots" 94

Life Science: From Molecules to MRI Case Study 129-131

Management, HPCC Program 15-19
Manufacturing National Challenge 54
Medical Connections Program 89
Medical Informatics Fellowships 89
Membership on the HPCCIT Subcommittee 22-23
Multipollutant Air Quality Management 113

NASA (National Aeronautics and Space Administration) 81-86

National Challenge(s)
Contrasts Between Grand Challenges and National Challenges 49

Definition 162

Digital Libraries 51-53

Education and Lifelong Learning 53

Health Care 53-54
Manufacturing 54

Some National Challenge Application Areas 5

National Consortium for High Performance Computing 44-45

National Storage Laboratory (NSL) 76

NCI (National Cancer Institute) 87-95

NCO (National Coordination Office) 4, 15, 166

NCRR (National Center for Research Resources) 87-95

NIH (National Institutes of Health) 87-95

NII (National Information Infrastructure) 1, 8-10, 47-54, 163

NIST (National Institute of Standards and Technology) 101-105

NLM (National Library of Medicine) 4, 15, 87-95

NOAA (National Oceanic and Atmospheric Administration) 106-110

NREN (National Research and Education Network) 32-40

NSA (National Security Agency) 96-100

NSF (National Science Foundation) 64-71

NSFNET. Solicitation of the Next Generation 37

NTIA (National Telecommunications and Information Administration) 9
Overview, HPCC Program 7-14
Process Simulation and Modeling Case Study 150-151
HPCC Components, Agency 26-27


Scalable Definition 44, 164
Semiconductor Manufacturing for the 21st Century Case Study 152-153
Software Sharing 82-83, 85-86
Software Exchange, High Performance Computing 85-86
Supercomputing Centers and Consortia 44-46
UMLS (Unified Medical Language System) 90
Virtual Reality Technology Case Study 143-144
"Visible Human" Project 92
Weather Forecast Modeling 109