
Upload: ricardo-rodriguez

Post on 07-Apr-2015


Python API for Geoserver Use Cases

Python API for GeoNode

Background

Risk and impact modelling are required by governments around the world to reduce loss of life and financial loss caused by disasters. These analyses are in general static documents/maps built in silos and forgotten on shelves. What is needed is a dynamic way to push the model output to online databases to effectively communicate the results.

In general, impact assessments are about applying damage levels from a hazard map to exposed infrastructure according to specific damage curves and then aggregating the results according to some political jurisdiction. If this data is stored in GeoNodes, an API is needed that will allow hazard and risk modelling software to acquire and publish such data sets.

The API should provide a means for extracting vector and raster data from GeoNodes into Python, where computations reflecting the desired impact analysis are carried out, and then publish the resulting spatial data on a GeoNode, all from within the Python program. The API should provide this data in data structures that allow for manipulation, computation and analysis; the raster attribute values should, for example, reside in numpy arrays.

The fundamental task is to develop a module that allows Python scripts to download raster and vector data from Geoserver. This would allow clients to perform a range of numerical processing tasks such as those needed for impact modelling.
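As a sketch of what such a module might look like (all names here are hypothetical illustrations, not an existing library), the download calls could return numpy arrays plus metadata:

```python
import numpy as np

# Hypothetical interface sketch for the proposed module. The function
# names and return shapes are illustrative assumptions; a real
# implementation would issue WCS/WFS requests to Geoserver.

def get_raster(layer_name):
    """Download a raster layer as (numpy array, metadata dict)."""
    # Stub: returns synthetic data in place of a network request.
    data = np.zeros((10, 10))
    metadata = {"name": layer_name, "projection": "EPSG:4326"}
    return data, metadata

def get_vector(layer_name):
    """Download a vector layer as a list of feature dicts."""
    # Stub: a single placeholder feature.
    return [{"geometry": None, "attributes": {"layer": layer_name}}]

grid, metadata = get_raster("population_density")
regions = get_vector("admin_boundaries")
```

The key point is that raster values arrive as numpy arrays, so the impact computations downstream can use ordinary array arithmetic.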

Use Case 1

This is about publishing a hazard map.

a) A Python program runs a flood model and generates raster data for maximum expected depth and velocity. This data is uploaded to a GeoNode along with required metadata such as identity, projection information, etc.

b) A Python program runs an earthquake model and generates raster data for peak ground acceleration. This data is uploaded to a GeoNode along with required metadata such as identity, projection information, etc.
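The upload step in (a) and (b) could then look like the following sketch; `upload_raster` and its metadata fields are hypothetical stand-ins for whatever the API ends up providing, not an existing GeoNode call:

```python
import numpy as np

def upload_raster(data, metadata):
    """Hypothetical upload call: checks required metadata is present.
    A real implementation would push the array and metadata to a GeoNode."""
    required = {"name", "projection"}
    missing = required - metadata.keys()
    if missing:
        raise ValueError("missing metadata: %s" % sorted(missing))
    return {"layer": metadata["name"], "cells": data.size}

# Synthetic model output: maximum expected flood depth in metres.
depth = np.random.rand(100, 100) * 5.0

receipt = upload_raster(depth, {"name": "flood_depth_max",
                                "projection": "EPSG:4326"})
```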

Use Case 2

A very simple use case that would test the tool is as follows. A Python script downloads from Geoserver:

• Raster data representing number of residents per square km (Exposure Data)
• Vector data representing political boundaries (Aggregation Regions)

For each aggregation region:

• select raster points that fall within the corresponding polygon
• add up the associated population counts to compute the total population in the region

Create a new layer in Geoserver with each region color coded by its population count. This is of course a very simple process, but it has all the ingredients that would allow general impact modelling computations.
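With the raster values in numpy arrays, the per-region population sum reduces to array masking. The sketch below stands in for the point-in-polygon step with a precomputed region-membership grid (an assumption made for brevity):

```python
import numpy as np

# Synthetic exposure raster: residents per square km on a 4x4 grid.
population = np.array([[10, 20, 30, 40],
                       [50, 60, 70, 80],
                       [ 5, 15, 25, 35],
                       [45, 55, 65, 75]])

# Region id per cell, standing in for point-in-polygon tests against
# the aggregation-region polygons.
region_id = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])

# Total population per aggregation region via boolean masking.
totals = {int(r): int(population[region_id == r].sum())
          for r in np.unique(region_id)}
```

The resulting totals would then drive the color coding of the new layer.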

Use Case 3


A Python program extracts from different GeoNodes:

• Raster data representing earthquake ground acceleration at each point (Hazard Map)
• Raster data representing number of residents per square km (Exposure Data)
• Vector data representing political boundaries (Aggregation Regions)

For each aggregation region:

• Create a surface of ground acceleration from the hazard map using a smooth interpolation algorithm (this is not the subject of this API, but necessary if the hazard map and exposure data are of different resolution)
• For each population point, obtain the associated ground acceleration from the hazard surface.
• Calculate the expected number of fatalities at each point and sum up (not the subject of this API)

Publish a map with the associated cumulative expected fatalities for each aggregation region (Impact Map).
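The steps above can be sketched with synthetic arrays. Nearest-neighbour upsampling stands in for the smooth interpolation, and the linear fatality-rate curve is purely illustrative, not a real casualty model:

```python
import numpy as np

# Coarse hazard map: peak ground acceleration (g) on a 2x2 grid.
pga_coarse = np.array([[0.1, 0.3],
                       [0.2, 0.4]])

# Upsample to the 4x4 exposure resolution. np.kron repeats each cell,
# a nearest-neighbour stand-in for a smooth interpolation algorithm.
pga = np.kron(pga_coarse, np.ones((2, 2)))

population = np.full((4, 4), 100.0)        # residents per cell
region_id = np.array([[0, 0, 1, 1]] * 4)   # two aggregation regions

# Illustrative fatality-rate curve, linear in acceleration.
fatalities = population * (0.05 * pga)

# Cumulative expected fatalities per aggregation region (Impact Map values).
impact = {int(r): float(fatalities[region_id == r].sum())
          for r in np.unique(region_id)}
```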

Use Case 4

A Python program extracts from different GeoNodes:

• Raster data representing volcanic ash thickness at each point (Hazard Map)
• Point data representing buildings in the affected region (Exposure Data). Each point has a type associated with it (e.g. masonry, timber, reinforced concrete, etc.)
• Vector data representing political boundaries (Aggregation Regions)

For each aggregation region:

• Create a surface of volcanic ash thickness from the hazard map as in use case 3.
• For each building point, obtain the associated ash thickness from the hazard surface.
• Calculate the expected damage level for each building based on its type and the ash thickness. Convert this to estimated dollar loss and sum up.

Publish a map with the associated cumulative expected dollar loss for each aggregation region (Impact Map).

Use Case 5

Like use case 3 except that this case will go through the steps for 1000 realisations of the hazard map. This is important to capture uncertainty. The result is an impact map with error bounds.
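A minimal sketch of the realisation loop, assuming synthetic lognormal hazard realisations and an illustrative linear loss model (neither is part of the API itself):

```python
import numpy as np

rng = np.random.default_rng(42)
population = np.full((4, 4), 100.0)   # synthetic exposure grid

# 1000 hazard realisations: lognormal scatter around a 0.2 g mean PGA.
realisations = 0.2 * rng.lognormal(mean=0.0, sigma=0.3, size=(1000, 4, 4))

# Impact per realisation using an illustrative linear fatality rate.
impacts = (population * 0.05 * realisations).sum(axis=(1, 2))

# Central estimate and error bounds for the impact map.
mean_impact = impacts.mean()
lo, hi = np.percentile(impacts, [5, 95])
```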

Background docs

Here's a note that describes the process mathematically: Notes for impact modelling framework

To appear: Ole will create some test data. Does anyone know where such data can be hosted?