Coupling Climate and Hydrological Models: Interoperability Through Web Services
Posted on 25-Feb-2016
Coupling Climate and Hydrological Models
Interoperability Through Web Services
Outline
• Project Objective
• Motivation
• System Description
  – Components
  – Frameworks
  – System Driver
• Logical Workflow
• Data Flow
• Architecture
• Future Directions
Project Objective
The development of an end-to-end workflow that executes, in a loosely coupled mode, a distributed modeling system composed of an atmospheric climate model (using ESMF) and a hydrological model (using OpenMI)
Motivation
• Hydrological impact studies can be improved when forced with data from climate models [Zeng et al., 2003; Yong et al., 2009]
• A technology gap exists:
  – Many hydrological models run on personal computers
  – Most climate models run on high-performance supercomputers
• Leveraging ESMF and OpenMI can mitigate the communication difficulties between these modeling types
  – ESMF contains web services interfaces that can be used to communicate across a distributed network
  – Both ESMF and OpenMI are widely used within their respective communities
System Description
[Diagram: Driver, SWAT, OpenMI, and the CAM OpenMI Wrapper on a Personal Computer; the ESMF CAM Component behind ESMF Web Services on a High Performance Computer]
• SWAT (hydrology model) runs on the PC
• CAM (climate model) runs on the HPC
• Wrappers for both SWAT and CAM provide an OpenMI interface to each model
• The Driver (OpenMI Configuration Editor) uses the OpenMI interface to timestep through the models via the wrappers
• Access to CAM across the network is provided by ESMF Web Services
• CAM output data is streamed to the CAM wrapper via ESMF Web Services
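The driver's pull-based timestep loop can be sketched as follows. This is a minimal Python sketch: the class and method names are illustrative stand-ins, not the actual OpenMI (.NET) or ESMF interfaces.

```python
# Sketch of the driver's pull-based loop: for each SWAT timestep, climate
# forcing is pulled from CAM first. Names are illustrative, not the real APIs.
class ModelStub:
    """Minimal stand-in for a model behind an OpenMI-style wrapper."""
    def __init__(self, name):
        self.name = name
        self.calls = []          # record of life-cycle calls, for inspection
    def initialize(self):
        self.calls.append("initialize")
    def get_values(self, step):
        self.calls.append(f"get_values:{step}")
        return f"{self.name}@{step}"
    def finish(self):
        self.calls.append("finish")

def run_coupled(cam, swat, n_steps):
    """One-way coupling: each SWAT step pulls CAM forcing data first."""
    cam.initialize()
    swat.initialize()
    results = []
    for step in range(n_steps):
        forcing = cam.get_values(step)   # remotely triggers a CAM timestep
        results.append((forcing, swat.get_values(step)))
    swat.finish()
    cam.finish()
    return results
```

The pull model keeps the driver simple: it never pushes data, it only asks each model for values and lets the wrappers decide what remote work that implies.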
Components: SWAT
• The hydrological model chosen for this project is the Soil and Water Assessment Tool (SWAT)
• It is a river basin scale model developed to quantify the impact of land management practices in large, complex watersheds
• It was chosen for this project because it is widely used, is open source, and runs on a Windows platform
Components: CAM
• The atmospheric model chosen for this system is the Community Atmosphere Model (CAM5), part of the Community Earth System Model (CESM 1.0.3)
• It was chosen because:
  – It has ESMF component interfaces
  – Our group has an ongoing collaboration with CESM
  – It is open source
Frameworks: Earth System Modeling Framework
• ESMF is a high-performance, flexible software infrastructure that increases the ease of use, performance portability, interoperability, and reuse of Earth science applications
• Provides an architecture for composing complex, coupled modeling systems and includes array-based, multi-dimensional data structures
• Has utilities for developing individual models including utilities to make models self-describing
• Web services included in the ESMF distribution allow any networked ESMF component to be available as a web service.
Frameworks: OpenMI
• The OpenMI Software Development Kit (SDK) is a software library that provides a standardized interface that focuses on time dependent data transfer
• Primarily designed to work with systems that run simultaneously, but in a single-threaded environment [Gregerson et al., 2007]
• The primary data structure in OpenMI is the ExchangeItem, which comes in the form of an InputExchangeItem and an OutputExchangeItem (single point, single timestep)
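The ExchangeItem pattern above can be sketched as a minimal Python analogue. The real OpenMI SDK is a .NET library with a much richer interface; the names and fields here are simplified for illustration only.

```python
from dataclasses import dataclass

# Minimal analogue of the OpenMI exchange-item pattern: one quantity at one
# point for one timestep. Simplified illustration, not the real OpenMI SDK.
@dataclass
class ExchangeItem:
    quantity: str        # e.g. "precipitation"
    unit: str            # e.g. "mm/day"
    element: tuple       # a single point: (lat, lon)

class OutputExchangeItem(ExchangeItem):
    """Provided by the source model (e.g. the CAM wrapper)."""

class InputExchangeItem(ExchangeItem):
    """Consumed by the target model (e.g. SWAT)."""

def compatible(out_item, in_item):
    """A link is valid when quantity, unit, and location all match."""
    return (out_item.quantity, out_item.unit, out_item.element) == \
           (in_item.quantity, in_item.unit, in_item.element)
```

Pairing one OutputExchangeItem with one InputExchangeItem is what lets the framework check, before the run starts, that the two models agree on what is being exchanged.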
The System Driver
• Controls the application flow
• Implemented using OpenMI's Configuration Editor
• A convenient tool for testing the OpenMI implementations and model interactions
Hardware Architecture
[Diagram: Personal Computer (Windows) → Virtual Linux Server → Login Nodes (kraken) → Compute Nodes (kraken)]
• The client contains the OpenMI and SWAT software, which run on a Windows platform
• The atmospheric model runs on an HPC platform
• Access to the HPC compute nodes must be through the login nodes
• Access to the login nodes is through the virtual server (web services)
Software Architecture: Client
[Diagram: Personal Computer (Windows) running the OpenMI Configuration Editor, SWAT 2005, and the CAM OpenMI Wrapper, all linked through OpenMI; the wrapper connects out to the web services]
• The Configuration Editor is the driver; it is used to link the models and trigger the start of the run
• The hydrological model (SWAT 2005) is a version modified to work with OpenMI
• Access to the atmospheric model (CAM) is through "wrapper" code that accesses ESMF Web Services via an OpenMI interface
Software Architecture: Server
[Diagram: Linux web server running Tomcat/Axis2 with SOAP services → HPC login nodes (Job Scheduler, Process Controller, Registrar) → HPC compute nodes (Component Services, each running CAM)]
• In some HPC systems, access to nodes is restricted. In XSEDE, only the Login Nodes can communicate with the Compute Nodes.
• Access to/from external systems can be controlled via “gateway” systems using Web Services.
• Running applications (such as CAM Component Svc) on Compute Nodes must be handled by a Job Scheduler.
Logical Workflow: One-Way Coupling
[Sequence diagram — participants: Driver, SWAT/OpenMI, ATM/OpenMI Wrapper, ESMF Web Services, ESMF Component]
• The Driver calls Initialize, Prepare, GetValues, Finish, and Dispose on each model
• The ATM wrapper maps these to web-service calls: NewClient, Initialize, RunTimestep, GetData, Finalize, EndClient
• ESMF Web Services invoke the component entry points: ESMF_GridCompInitialize, ESMF_GridCompRun, ESMF_GridCompFinalize
• Each GetValues request returns a ValueSet
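The ATM wrapper's side of this one-way sequence can be sketched as follows. Method names mirror the slide labels; the real interactions are SOAP calls handled by ESMF Web Services, stubbed here by a plain object.

```python
# Sketch of the CAM/OpenMI wrapper life cycle. Method names mirror the slide
# labels; `svc` stands in for the ESMF Web Services client (really SOAP calls).
class CamWrapper:
    def __init__(self, svc):
        self.svc = svc
        self.client_id = None
    def initialize(self):
        self.client_id = self.svc.new_client()
        self.svc.initialize(self.client_id)    # -> ESMF_GridCompInitialize
    def get_values(self, step):
        self.svc.run_timestep(self.client_id)  # -> ESMF_GridCompRun
        return self.svc.get_data(self.client_id, step)
    def finish(self):
        self.svc.finalize(self.client_id)      # -> ESMF_GridCompFinalize
        self.svc.end_client(self.client_id)
```

The wrapper is what lets the OpenMI driver stay oblivious to the network: every OpenMI call maps onto one or more remote requests.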
Logical Workflow: Two-Way Coupling
[Sequence diagram — same participants and call sequence as one-way coupling]
• In addition to the one-way sequence, the ATM wrapper issues its own GetValues back to SWAT, so ValueSets flow in both directions
• Extrapolate is invoked on the first request to break the circular dependency between the two GetValues calls
Data Flow: One-Way Coupling
[Diagram: ESMF Component/CAM → ESMF State → CAM/OpenMI Wrapper (OutputExchangeItem) → SWAT/OpenMI (InputExchangeItem); the wrapper's GetData call crosses from the High Performance Computer to the Personal Computer, triggered by GetValues]
• Data is pulled from the CAM component to SWAT via the wrapper, initiated by the OpenMI GetValues call; this call is made once per timestep
• Data is exchanged between CAM and SWAT using the OpenMI Exchange Item structures, which handle the translation from grid to point values
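The grid-to-point translation step can be sketched as below. Nearest-neighbour selection is shown purely for illustration; the slides do not specify which interpolation scheme the exchange items actually use.

```python
# Sketch of grid-to-point translation: map a gridded CAM field to a single
# SWAT weather-station location. Nearest-neighbour is an assumption here;
# the actual translation scheme is not specified in the slides.
def grid_to_point(grid, lats, lons, point):
    """Return the grid value whose cell centre is nearest the station point."""
    plat, plon = point
    i = min(range(len(lats)), key=lambda k: abs(lats[k] - plat))
    j = min(range(len(lons)), key=lambda k: abs(lons[k] - plon))
    return grid[i][j]
```

Any scheme with this signature (bilinear interpolation, conservative remapping) could be swapped in without the exchange-item interface changing.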
Data Flow: Two-Way Coupling
[Diagram: as in one-way coupling, with a return path added — SWAT/OpenMI OutputExchangeItem → CAM/OpenMI Wrapper InputExchangeItem → SetInputData → ESMF Import State; CAM exports through the ESMF Export State]
• In two-way coupling, each model pulls data from the other model using the OpenMI GetValues method. Extrapolation is used on the first timestep to break the deadlock between the two model requests.
• OpenMI Input and Output Exchange Items are again used to exchange and translate the data.
Model Configurations
• SWAT
  – Hydrology science information provided by Jon Goodall of the University of South Carolina
  – Lake Fork Watershed (TX)
  – Watershed area: 486.830 km²
  – Model run: 2 years, 1977–1978
  – Timestep: 1 day
  – Weather stations:
    • wea62 (33.03 N, 95.92 W)
    • wea43 (33.25 N, 95.78 W)
• CAM
  – Global atmospheric model
  – Model run: 1 day
  – Timestep: 1800 sec
  – Dynamical core: finite volume
  – Horizontal grid: 10x15
  – Export data variables:
    • surface air temperature
    • precipitation
    • wind speed
    • relative humidity
    • solar radiation
Scaling Analysis
• 4 areas of increasing size
• 3 variations of CAM resolution (0.25, 0.5, and 1 degree)
• CAM is almost always the gating factor in run times
• Data transfer rates are minimal:
  – 5 data values CAM to SWAT
  – 1 data value SWAT to CAM
Future Tasks
• Additional SWAT configurations for larger scales
• Possible integration with other models
  – Currently working on replacing CAM with WRF
• Abstraction of the data exchange within the ESMF wrapper code to accommodate configuration of different variables for different model implementations
Logical Flow - Startup
[Diagram: Config Editor, CAM Wrapper, SWAT 2005, and OpenMI on the Personal Computer (Windows); Web Svcs on the Linux Server; Job Scheduler, Process Controller, and Registrar on the HPC login nodes; Comp Svc and CAM on the HPC compute nodes]
1. Initialize
2. New Client
3. New Client
4. Submit Job
5. Status = SUBMITTED
6. Instantiate Job (Comp Svc)
7. Status = READY
When loading models into the Configuration Editor, each model is initialized. For CAM, this involves starting a “New Client” in the Process Controller, which submits a new CAM Component Service using the Job Scheduler.
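The startup sequence can be sketched as follows. All components are stubs whose names follow the slide labels; the real Process Controller and Job Scheduler live on the HPC login nodes and communicate via SOAP services and a batch system.

```python
# Sketch of startup: New Client -> Submit Job -> Status = SUBMITTED.
# Names follow the slide labels; every component here is a stub.
class Registrar:
    """Stores the last reported status of each component service."""
    def __init__(self):
        self.states = {}
    def set_state(self, cid, state):
        self.states[cid] = state
    def get_state(self, cid):
        return self.states.get(cid, "UNKNOWN")

class ProcessController:
    def __init__(self, submit_job, registrar):
        self.submit_job = submit_job   # callable standing in for the scheduler
        self.registrar = registrar
        self.next_id = 0
    def new_client(self):
        self.next_id += 1
        cid = self.next_id
        self.submit_job(cid)                         # 4. Submit Job
        self.registrar.set_state(cid, "SUBMITTED")   # 5. Status = SUBMITTED
        return cid
```

Once the scheduler actually starts the Component Service on a compute node, the service itself reports READY to the Registrar (step 7 above).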
Logical Flow - Status
[Diagram: same components as the startup slide]
1. Get Status
2. Get Status
3. Get State
The status of the CAM Component Service is checked often throughout the workflow. The status is stored in the Registrar, so it can be retrieved via the Process Controller.
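Client-side status checking amounts to polling the Registrar until a target status appears. A sketch, with status names taken from the slides (the real Get Status / Get State calls are web-service requests, stubbed here by a callable):

```python
import time

# Sketch of status polling against the Registrar. Status names come from the
# slides; `get_state` stands in for the Get Status web-service round trip.
def wait_for_state(get_state, target, poll=0.01, timeout=1.0):
    """Poll until the component reaches the target status, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == target:
            return True
        time.sleep(poll)
    return False
```

Polling with a timeout keeps the client responsive even if a batch job sits in the scheduler queue for a long time.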
Logical Flow - Initialize
[Diagram: same components as the startup slide]
1. Prepare
2. Initialize
3. Initialize
4. Initialize
5. Status = INITIALIZING
6. Status = INIT_DONE
Before the models can be run, they need to be initialized. For CAM, the Initialize call is sent to the CAM Component Service via Web Services and the Process Controller. The CAM Component Service updates its status in the Registrar before and after initialization.
Logical Flow – Timestep (Run)
[Diagram: same components as the startup slide]
1. Get Values
2. Get Values
3. Run Timestep
4. Run Timestep
5. Run Timestep
6. Status = RUNNING
7. Set Output Data
8. Status = TIMESTEP_DONE
For each timestep in SWAT, the trigger to run a timestep in CAM is a Get Values request in the OpenMI interface. The Run Timestep request is passed to the CAM Component Service, which sets the output data, making it available for later retrieval (see Get Data).
Logical Flow – Timestep (Get Data)
[Diagram: same components as the startup slide]
1. Get Data Desc (one time only)
2. Get Data Desc (one time only)
3. Get Data Desc (one time only)
4. Get Data
5. Get Data
6. Get Data
After each timestep runs, the output data is fetched from the CAM Component Service via the Web Services and Process Controller. The first time data is fetched, a description of the data structure is requested; this description is then reused for the remaining timesteps.
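The fetch-description-once pattern can be sketched as a small caching client. The two service methods are illustrative stand-ins for the Get Data Desc and Get Data web-service calls.

```python
# Sketch of the "one time only" data-description fetch: the description is
# requested on the first Get Data and cached for all later timesteps.
# `get_description` and `get_data` stand in for the web-service calls.
class DataFetcher:
    def __init__(self, get_description, get_data):
        self.get_description = get_description
        self.get_data = get_data
        self._desc = None                      # cached variable names
    def fetch(self, step):
        if self._desc is None:                 # first call only
            self._desc = self.get_description()
        return dict(zip(self._desc, self.get_data(step)))
```

Caching the description matters here because every extra round trip crosses the PC-to-HPC network path once per timestep.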
Logical Flow - Finalize
[Diagram: same components as the startup slide]
1. Finish
2. Finalize
3. Finalize
4. Finalize
5. Status = FINALIZING
6. Status = FINAL_DONE
End Client (Next Slide)
After all timesteps have completed, the models need to be finalized. For CAM, the Finalize call is sent to the CAM Component Service via Web Services and the Process Controller. The CAM Component Service updates its status in the Registrar before and after finalization.
Logical Flow – End Client
[Diagram: same components as the startup slide]
1. End Client
2. End Client
3. Kill Server
4. Exit Service Loop
5. Status = COMPLETED
After the Finalize call, the CAM Component Service is done, so the CAM Wrapper closes it out by calling End Client. This call results in the CAM Component Service completing its loop and exiting, and in the Process Controller removing all references to the client.