
A Web-Based 3D Mapping Application using WebGL allowing Interaction with Images, Point Clouds and Models

Alexandre Devaux
Université Paris Est, IGN, Laboratoire MATIS
[email protected]

Mathieu Brédif
Université Paris Est, IGN, Laboratoire MATIS
[email protected]

Nicolas Paparoditis
Université Paris Est, IGN, Laboratoire MATIS
[email protected]

ABSTRACT
Nowadays we see a lot of Rich Internet Applications (RIA), but truly interactive 3D applications appeared only a few years ago, and it was only in 2011 that a GPU library was integrated directly into the major internet browsers. This opened a new way of creating 3D applications on the web and reached a new public, as many more users can access them without complicated plugin installations; this is just what 3D web mapping was waiting for.

We present our street-view application, which allows moving through cities from a pedestrian point of view and interacting with them in order to create and update precise urban maps, enabling accessibility diagnostics, services and applications around mobility. We took care of precision in the data acquisition (images and laser) in order to be able to make accurate 3D measurements for professional usage. The user can display large laser clouds in his web browser, access information on any point, draw 3D bounding boxes and export selections to databases. He can also make measurements from images, integrate Collada models, display OpenData layers, etc. Finally, we use projective texture mapping in real time to texture any mesh with our images.

Categories and Subject Descriptors
H.4 [Information Systems Applications]: Miscellaneous; H.5.2 [Information Interfaces and Presentation]: User Interfaces

General Terms
Algorithms, Design

Keywords
WebGL, HTML5, Shader, City Modeling, Street view, Panoramic, Mapping

1. INTRODUCTION


Serving street-level imagery on a large scale is a challenge. Interacting with it and manipulating real 3D data is even more challenging. [1] presented great work on next-generation map making based on multi-source acquisition, but this data is not available on the web. In order to offer web users large textured city models, millions of laser points with an interactive interface, and other visualization services, we had to work on 3D rendering in a web context. Although many multimedia web applications have needed GPU acceleration for a long time, developers had to create sophisticated libraries to make 3D rendering possible using only the CPU (Papervision, Away3D, Sandy, etc.). In 2009, Google launched O3D, an open-source JavaScript API (plugin) for creating interactive 3D graphics applications that run in a web browser. A year and a half later WebGL was functional and O3D was integrated into it.

The ability to put hardware-accelerated 3D content in the browser provides a means for the creation of new web-based applications that were previously the exclusive domain of the desktop environment. The new HTML5 standard allows WebGL to use its canvas element, so it leverages the power of OpenGL to present accelerated 3D graphics on a web page. [3] have shown that 3D in the browser using WebGL makes possible new approaches and strategies allowing motion capture studios to communicate with each other. [2] worked on both the performance and the scalability of volume rendering by WebGL ray-casting in two different but challenging application domains: medical imaging and radar meteorology.
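As a point of reference (this is an illustrative sketch, not code from the application; the canvas element id is hypothetical), obtaining a hardware-accelerated context from an HTML5 canvas and clearing it takes only a few lines:

// Minimal WebGL bootstrap on an HTML5 canvas (illustrative sketch only).
var canvas = document.getElementById("viewer");            // hypothetical element id
var gl = canvas.getContext("webgl") ||
         canvas.getContext("experimental-webgl");          // older 2011/2012 browsers
if (gl) {
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clearColor(0.0, 0.0, 0.0, 1.0);                       // opaque black background
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
} else {
  console.warn("WebGL is not available in this browser or on this graphics card.");
}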

The WebGL specification was released as version 1.0 on March 3, 2011. It is an extension to the JavaScript programming language that allows the generation of interactive 3D graphics within any compatible web browser. Today, in 2012, it is supported by all browsers apart from Internet Explorer, which still holds 42% of the browser market. This plays in favour of Flash, which is available in all browsers once version 11 is installed for GPU rendering. But we believe that WebGL is going to become the standard for 3D visualization on the web, at least for scientific and mapping applications. Moreover, both Flash 3D and WebGL rely on OpenGL ES, which makes them close, so we can imagine making a Flash version of our application for users with an incompatible browser or graphics card. Finally, WebGL is more oriented towards mobile devices than Flash: some mobile phones can already render complex 3D models on WebGL websites using their integrated GPU.

This paper presents our experience creating a GPU-accelerated geo-viewer for the web (Figure 1). We first introduce the data used, then the architecture of the system and the organization of the demonstration, and we finish with improvements to come.

Figure 1: A screenshot of the application while measuring the height of a door.

2. DATA MANIPULATED
All the data (images, lidar, positioning information) is collected using a sophisticated vehicle we created for the acquisition of panoramic images and laser data, which are precisely georeferenced (sub-metric absolute localization). Figure 2 shows the image acquisition. The data is then slightly post-processed and organized on servers using file systems and PostGIS databases. We want to stay as close as possible to the original data to shorten post-processing, minimize server space usage (no duplication) and work on real native values.

Figure 2: Schema of the acquisition showing the ten camera orientations.

With this raw data comes the vector data. We manage OpenData: for Paris, for example, we can show pavements, trees, street lights, etc. This data is not in 3D; it comes with no altitude information, so we had to create an algorithm to place the features at their correct altitude using the laser cloud or a DTM. The application also offers researchers the possibility of displaying automatic extraction results in the images or in 3D using bounding boxes (Collada files are also loadable).
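The paper does not detail this draping algorithm; one minimal variant, sketched below under the assumption of a regular DTM grid (the dtm structure and its fields are hypothetical, and bounds checking is omitted), assigns each 2D vertex the bilinearly interpolated terrain height at its planimetric position:

// Hypothetical sketch: drape 2D vector vertices onto a regular DTM grid
// by bilinear interpolation of the four surrounding cell corners.
// dtm = { originX, originY, cellSize, width, altitudes: Float32Array }
function altitudeAt(dtm, x, y) {
  var gx = (x - dtm.originX) / dtm.cellSize;
  var gy = (y - dtm.originY) / dtm.cellSize;
  var i = Math.floor(gx), j = Math.floor(gy);
  var fx = gx - i, fy = gy - j;
  function z(ii, jj) { return dtm.altitudes[jj * dtm.width + ii]; }
  return (1 - fx) * (1 - fy) * z(i, j) +
         fx * (1 - fy) * z(i + 1, j) +
         (1 - fx) * fy * z(i, j + 1) +
         fx * fy * z(i + 1, j + 1);
}

function drapeFeature(dtm, vertices2d) {      // vertices2d: [{x, y}, ...]
  return vertices2d.map(function (v) {
    return { x: v.x, y: v.y, z: altitudeAt(dtm, v.x, v.y) };
  });
}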

3. ARCHITECTURE
A very special effort has been made to transfer data efficiently from the servers to the client. As we work with large panoramic images (21 Mpixels) and billions of 3D points, we had to find a way to stream images and lidar data fast enough to keep an immersive, dynamic sensation. Figure 3 shows the global architecture.

3.1 Images
We use the IIPImage FastCGI server, allowing dynamic multi-resolution tile streaming from JPEG2000 images (we use the Kakadu library for decompression). This reduces storage to a single source image file per camera photograph on the server and works in a pyramidal way. The client can request a specific part of the original image at a specific resolution with a specific compression. When a client navigates through the city, at every new position he loads the ten camera pictures in order to build a panoramic view. The loading starts with very low-resolution, highly compressed images, hence taking a short time; then new tiles with better quality appear automatically.
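As an illustration of this exchange (the server path, image name and parameter values are hypothetical; FIF, RGN, WID, QLT and CVT are standard IIP protocol parameters), a client-side sketch of such a progressive tile request could look like:

// Hypothetical sketch of an IIP request: ask the IIPImage FCGI server for one
// region of a JPEG2000 panoramic image at a given output width and JPEG quality,
// then use the decoded image as a WebGL texture source.
function iipTileUrl(server, image, region, width, quality) {
  // region = { x, y, w, h } expressed as fractions of the full image (IIP RGN syntax)
  return server + "?FIF=" + image +
         "&RGN=" + [region.x, region.y, region.w, region.h].join(",") +
         "&WID=" + width +
         "&QLT=" + quality +
         "&CVT=jpeg";
}

// Progressive loading: request a small, highly compressed version first,
// then replace it with a larger, better-quality one when it arrives.
var url = iipTileUrl("/fcgi-bin/iipsrv.fcgi", "pano/cam03.jp2",
                     { x: 0.0, y: 0.0, w: 1.0, h: 1.0 }, 256, 30);
var img = new Image();
img.onload = function () { /* upload img to a WebGL texture here */ };
img.src = url;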

Figure 3: Overview of the application exchanges (the WebGL app in the web browser as client; on the server side, laser clouds on a file system, images served by IIPImage FCGI, and databases providing geocoding, OpenData, image information, geonames, extracted features, etc., accessed through PHP scripts).

3.2 Laser clouds
It is possible to display the laser cloud in the viewer, superposed on the images or alone. When activating only the laser cloud, you can move fluidly through the city among 3D points that are streamed dynamically around your position. For that purpose we created a file system on the server to organize the binary lidar data. We divided thousands of square meters of laser data into blocks of one square meter and used a geographic hierarchy on the web server, so the user loads the laser data around his position quite quickly because everything is already preprocessed. We are currently working on another architecture that subdivides the global laser cloud into facade packets, which is more efficient when the measuring functions are enabled. It is also possible to request a specific temporal period of acquisition. The idea behind this temporal representation of the lidar data is that we can ask for 3D points that were acquired at the same time as the panoramic picture. In some streets the vehicle passed several times, so the physical correlation between the panoramic picture and the lidar data can be low if there were many moving objects or if there is a drift in the geo-referencing system.
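A minimal sketch of this block streaming, assuming a hypothetical block naming and URL scheme rather than the application's actual layout, could be:

// Hypothetical sketch: list the one-metre lidar blocks within a given radius of
// the current position and fetch the missing ones as binary files.
function blockKeysAround(x, y, radius) {
  var keys = [];
  for (var bx = Math.floor(x - radius); bx <= Math.floor(x + radius); bx++) {
    for (var by = Math.floor(y - radius); by <= Math.floor(y + radius); by++) {
      keys.push(bx + "_" + by);            // one key per 1 m x 1 m block
    }
  }
  return keys;
}

var loaded = {};                            // cache of blocks already streamed
function streamBlocks(x, y, radius, onPoints) {
  blockKeysAround(x, y, radius).forEach(function (key) {
    if (loaded[key]) return;
    loaded[key] = true;
    var req = new XMLHttpRequest();
    req.open("GET", "/lidar/" + key + ".bin", true);   // hypothetical URL scheme
    req.responseType = "arraybuffer";                  // binary lidar payload
    req.onload = function () { onPoints(key, new Float32Array(req.response)); };
    req.send();
  });
}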

Each laser point can be picked with a mouse click to access its properties (absolute 3D position, GPS time, intensity), allowing absolute and relative measurements: we have created an interface to draw lines between points showing distances, polygons showing surfaces, and even 3D volumes. All this is possible with or without displaying the laser cloud. With the panoramic imagery alone, every click in the image is projected onto the laser cloud to obtain a direct absolute 3D position.
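For illustration, these relative measurements reduce to simple geometry once the 3D points have been picked; a sketch (not the application's actual code) of the distance and surface computations:

// Euclidean distance between two picked laser points.
function distance3d(a, b) {
  var dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Area of a (assumed planar) 3D polygon: half the magnitude of the sum of the
// cross products of the fan triangles built from the first vertex.
function polygonArea3d(pts) {               // pts: [{x, y, z}, ...]
  var sx = 0, sy = 0, sz = 0;
  for (var i = 1; i < pts.length - 1; i++) {
    var ux = pts[i].x - pts[0].x, uy = pts[i].y - pts[0].y, uz = pts[i].z - pts[0].z;
    var vx = pts[i + 1].x - pts[0].x, vy = pts[i + 1].y - pts[0].y, vz = pts[i + 1].z - pts[0].z;
    sx += uy * vz - uz * vy;
    sy += uz * vx - ux * vz;
    sz += ux * vy - uy * vx;
  }
  return 0.5 * Math.sqrt(sx * sx + sy * sy + sz * sz);
}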

3.3 3D points extraction
The application interface is simple and effective for the visualization, creation and edition of polyline-based features. The user can easily create a 3D bounding box around an object: with just three clicks he can shape a box around the object he wants to extract (Figure 4 shows a user selecting a box around a car). The application then automatically colours the 3D points inside the box and assigns them a specific class. It can recognize the object using template (gabarit) features, and the user can adjust the class of the object or add specific information. The application also gives the absolute position of the object's centre of gravity, its length, width, height, volume, etc. The user can then drag the object where he wants and decide to save it to the object extraction database. Besides creating urban maps, this is useful for scientists to build ground-truth databases for specific object classes.
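A simplified sketch of the selection step (the application's boxes may be oriented; an axis-aligned box and a packed point array are assumed here) could be:

// Collect the indices of the laser points that fall inside the box so they can
// be coloured and assigned a class (illustrative, axis-aligned simplification).
function pointsInBox(points, box) {
  // points: Float32Array of packed x,y,z triplets; box: { min:{x,y,z}, max:{x,y,z} }
  var inside = [];
  for (var i = 0; i < points.length; i += 3) {
    var x = points[i], y = points[i + 1], z = points[i + 2];
    if (x >= box.min.x && x <= box.max.x &&
        y >= box.min.y && y <= box.max.y &&
        z >= box.min.z && z <= box.max.z) {
      inside.push(i / 3);                  // point index
    }
  }
  return inside;
}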

Figure 4: 3D Bounding Box and points extraction

3.4 Projective texturing on generated urban meshes

Working in WebGL allows us to use shaders, which are very useful for computing rendering effects on graphics hardware with a high degree of flexibility. We created a projective shader so that we can project any image onto any 3D mesh (city models in our case). The process is the following: we start from a 2D map of building footprints and create a mesh of each building under the a-priori assumption that the facades are perpendicular to the ground. We use the laser cloud to obtain building heights, or we assign a fixed height assumed to be the maximum building height we can encounter, and create a mesh. It is also possible to load more precise meshes if they are available. Then, with multiple shaders, we texture the mesh using photographs taken from different points of view, which in our case allows us to fill holes because our acquisition geometry did not capture a full 360° sphere. Navigation between the different points of view is now fluid, as we evolve in a real textured 3D model (Figure 5).
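For illustration, the core of such a projective shader can be sketched as below (GLSL sources held as JavaScript strings, as is usual in WebGL; the uniform and attribute names are hypothetical). The projector matrix is assumed to combine the georeferenced pose and calibration of the camera that took the photograph, remapped to the [0,1] texture range:

// Illustrative projective-texturing shader pair, not the application's actual shaders.
var projVertexShader = [
  "attribute vec3 aPosition;",
  "uniform mat4 uModelViewProjection;    // current viewer",
  "uniform mat4 uProjectorMatrix;        // georeferenced camera of the photograph",
  "varying vec4 vProjCoord;",
  "void main() {",
  "  vProjCoord  = uProjectorMatrix * vec4(aPosition, 1.0);",
  "  gl_Position = uModelViewProjection * vec4(aPosition, 1.0);",
  "}"
].join("\n");

var projFragmentShader = [
  "precision mediump float;",
  "uniform sampler2D uPhoto;",
  "varying vec4 vProjCoord;",
  "void main() {",
  "  vec2 uv = vProjCoord.xy / vProjCoord.w;   // perspective divide",
  "  if (vProjCoord.w <= 0.0 || uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0)",
  "    discard;                                // outside this photograph: another pass fills the hole",
  "  gl_FragColor = texture2D(uPhoto, uv);",
  "}"
].join("\n");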

4. DEMONSTRATION
The setup for the demonstration is as follows: first we will show how a non-professional public can use the application; second, we will present the possibilities of precise measurement and the integration of future public infrastructures for professional users.

Figure 5: Multi-source projective texturing allows filling holes in real time for realistic visualization.

5. FUTURE WORK
For more than a decade, rich 3D content on the web has only been available via external, often proprietary, browser plugins. A new standard, WebGL, has emerged to change this. Within that decade, local bandwidth and server capacity have exploded, allowing new possibilities for the internet.

We have shown here, through a case study of a street-view navigation web application, how powerful WebGL already is and how well suited it is for modeling and mapping applications. We will now work on optimizing data integration and accessibility to allow an even more collaborative application. Together with the other popular GPU technology (Flash 11), WebGL will bring many powerful 3D applications to the web in the coming years, and we are excited to contribute to sculpting the web.

6. REFERENCES
[1] X. Chen, B. Kohlmeyer, M. Stroila, N. Alwar, R. Wang, and J. Bach. Next generation map making: geo-referenced ground-level lidar point clouds for automatic retro-reflective road feature extraction. GIS '09.
[2] J. Congote, A. Segura, L. Kabongo, A. Moreno, J. Posada, and O. Ruiz. Interactive visualization of volumetric data with WebGL in real-time. Web3D '11.
[3] C. Leung and A. Salga. Enabling WebGL. WWW '10.