
Simulation of Artificial Intelligence Agents using Modelica and the DLR Visualization Library

Alexander Schaub, Matthias Hellerer, Tim Bodenmüller
German Aerospace Center, Robotics and Mechatronics Center

Münchner Straße 20, 82234 Weßling

Abstract

This paper introduces a scheme for testing the artificial intelligence algorithms of autonomous systems using Modelica and the DLR Visualization Library. The simulation concept follows the 'Software-in-the-Loop' principle: no adaptations are made to the tested algorithms. The environment is replaced by an artificial world, and the rest of the autonomous system is modeled in Modelica. The scheme is introduced and explained using the example of the ROboMObil, a robotic electric vehicle developed by the DLR's Robotics and Mechatronics Center.

Keywords: Simulation of Artificial Intelligence Agents; Autonomous Systems; Software-in-the-Loop; DLR Visualization Library; ROboMObil

1 Introduction

Autonomous systems, also known as artificial intelligence agents (AIAs), range from small toys like Lego Mindstorms to full-sized robotic cars like the ROboMObil (ROMO) [1]. In all cases an agent consists of three essential parts: sensors, the core artificial intelligence providing the agent's functionality, and actuators [2]. The agent perceives its current environment through its sensors, interprets it, and plans the next actions to reach its goal before acting upon the environment through its actuators. A sufficient simulation of an autonomous system must therefore consider the bidirectional connection between an agent and its environment.

In the past decade several open source simulation environments for autonomous systems, mostly for robots, have been launched, as increasing computational power and decreasing hardware costs have made the use of autonomous (mobile) robots feasible for education.

Published in 2001, the socket-based device server Player in combination with the multi-robot simulator Stage [3] was widely used in academia and industry. Player provides simple TCP sockets to external devices like sensors and actuators. It is language neutral and uses the UNIX abstraction of devices as files. Stage is a simulation environment for multiple robots with computationally cheap models of merely sufficient fidelity; linear scaling with the population of the simulated world was an important design goal. It is a 2D simulator for indoor scenarios, and the simulated sensors are simple laser range finders or sonar rather than complex sensors like cameras.

In 2003 Gazebo [4] was released to satisfy the need for a 3D simulation environment for Player. It enables the simulation of cameras, uses rigid body models, and, despite the increased complexity, can still simulate several autonomous systems concurrently.

Nowadays the Robot Operating System (ROS) [5] is the most popular environment for connecting the algorithms, sensors, and actuators of robot systems. Many functions and drivers were adopted from Player, and ROS provides interfaces to Stage for 2D and to Gazebo for 3D simulations.

Microsoft's Robotics Developer Studio [6] is a free, but not open source, development suite offering user-friendly techniques for visual programming, easy parallelization, and debugging via web interfaces. It is equipped with a DirectX-based visualization and its own rigid-body physics engine, and provides interfaces to commercial products from FischerTechnik, iRobot, Lego, etc.

Furthermore, there are several commercial robot simulators like the Virtual Robot Experimentation Platform V-Rep [7] or Webots [8]. Proprietary simulation environments were developed for larger projects like Junior, Stanford's robotic car for the DARPA Urban Challenge [9]. Such proprietary software can be adapted to special demands that are not completely fulfilled by generalized tools like the ones named above.


All of the mentioned simulation environments use physics engines like the Bullet Physics Engine [10] or the Open Dynamics Engine [11], which have a strong gaming or computer animation background and provide rigid body modeling and collision detection. They aim for fast computation while providing sufficient physical accuracy. Modelica, in contrast, offers the advantage of modeling complex physical systems containing, e.g., flexible bodies and electrical and hydraulic components. To be usable for the simulation of artificial intelligence agents, Modelica has to be extended by an advanced visualization like the DLR Visualization Library [12]. The combination of Modelica with the DLR Visualization Library creates a powerful tool for the efficient development of complex physical agent and environment models.

Our motivation for the presented scheme is a bidirectional autonomous systems simulation that combines complex Modelica models of the ROMO with the artificial intelligence system used in the real vehicle.

The remaining chapters are organized as follows: the second chapter provides an overview of our simulation concept; chapter three gives a detailed explanation of the tools and interfaces used; afterwards, the results of a simple example demonstrate the functionality of the AIA simulation scheme; finally, we conclude with a brief summary and outlook.

2 Concept of the AIA Simulation

The main target of the proposed scheme is a 'Software-in-the-Loop' simulation, meaning that no changes are made to the algorithm under test. In order to test the artificial intelligence of an autonomous system, the perception, planning, and control algorithms are kept, while the system's hardware and the environment are simulated. The system's hardware is substituted by a Modelica model whose level of detail varies with the purpose of the simulation: it can range from a rigid body model to an overall system model containing electrical, flexible, hydraulic, thermal, and tire (sub-)models.

The second step is the replacement of the environment using the DLR Visualization Library, which extends Modelica by an advanced visualization and interactive simulation. Standard sensors for velocity, torque, etc. are part of the basic Modelica library, but complex perception sensors like cameras require this advanced visualization for a sufficient simulation. The algorithms tested with this scheme, and also their interfaces to the rest of the autonomous system, do not have to be changed. Hence, the algorithms have to run outside the Modelica environment during the simulation, which is made possible by the interactive interfaces provided by the DLR Visualization Library.

The proposed simulation scheme is depicted in Figure 1, using the example of the ROMO. The main distinction is made between the autonomy hardware and the simulation hardware. Both can run on the same PC, but the hardware of the autonomous system usually consists of several connected processing units. The intention is to follow the 'Hardware-in-the-Loop' principle and to connect the AIA system to a simulation PC.

The primary perception sensors of the ROMO are cameras, which are widely used in modern autonomous systems as they provide a great variety of information [13]. A typical cycle of the scheme starts with the virtual cameras taking images of the simulated environment. The images and other sensor data are packed according to the SensorNet format and passed into the shared memory of the autonomy hardware; the interface from the Visualization Library to SensorNet is described later in detail. Different algorithms that process and interpret those data can access the shared memory concurrently. The processed data are passed to the planning module both directly and via a module that updates the environment representation. The planned trajectory and other control data are passed via an interactive interface to the Modelica model: sensors are triggered and the controller gets its reference input. In this example the vehicle dynamics controller is nested in the simulation module, since it does not run on the same hardware as the autonomous driving components in the real vehicle. With the controller commanding the actuators, the ROMO model moves in the virtual world and the loop is closed.

Such a 'Software-in-the-Loop' scheme for autonomous systems has several advantages. It is possible to test an algorithm under reproducible settings, which is usually not the case in reality: camera-based perception is very sensitive to changing light conditions, and it is difficult to keep relative positions and velocities of objects the same in every test. Reproducibility is also desirable for comparing different algorithms and is an essential requirement for reverse engineering.

Moreover, algorithms can be tested under optimal conditions. At the very beginning of an algorithm's development it is helpful to see whether the general concept works while neglecting sensor and actuator noise and other disturbing influences.


Figure 1: ROMO AI-Simulation Concept. (Block diagram: on the simulation hardware, the ROMO multi-physics model with its CAD geometry, camera models C1...Cn, and the vehicle dynamics controller runs in Modelica together with the 3D environment of the DLR Visualization Library; sensor data, including GPS/IMU data, is sent via TCP/IP into the SensorNet shared memory on the autonomy hardware, where vision algorithms such as 3D reconstruction, optical flow, and lane detection feed the environment representation with its local and global maps and the reactive and classical planning modules; control data and camera activation return via TCP/IP through an interactive interface, closing the loop over the simulated motion.)

After the basic functionality has been proven, the noise can be increased step by step to test different levels of robustness.

Another advantage is the adjustable level of abstraction. Autonomous driving software can first be evaluated with a simple model of the system's dynamics: limitations of actuators can be neglected, sensors can be considered all-knowing, and physical constraints can be softened. During the development process the model complexity can be raised to achieve a more realistic behavior of the simulated system. Furthermore, the developer can build a virtual world according to his needs; new types of sensors and systems can be modeled that do not exist yet. The preparation and execution of tests with the simulation scheme is much faster than tests in reality, and the simulation can be run faster than real time, causing lower costs and posing no harm to people and equipment. Nevertheless, there is still the need for tests with the real system, but their frequency can be considerably decreased. Hence, the 'Software-in-the-Loop' principle is very helpful for rapid prototyping.

3 Combining Virtual Reality with Perception

An essential part of the proposed simulation concept is the link between the 3D simulation provided by the DLR Visualization Library and the image processing algorithms, which utilize SensorNet for image data dispatching.

3.1 The DLR Visualization Library

The DLR Visualization Library is an extension to Modelica for the 3D visualization of simulations. It is composed of two parts: a Modelica library and a standalone program.

The library part defines Modelica multi-body elements which do not influence the simulation's physics but are used to configure the simulation's visualization. The visualization is then displayed in a separate application called SimVis. An example of this can be seen in Figure 2: on the left it shows a Modelica model using the DLR Visualization Library, and on the right the corresponding visualization in SimVis. A minimal sketch of this pattern follows Figure 2.


Figure 2: A Modelica model of the ROMO and the corresponding visualization
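The paper shows no source code, so the following minimal sketch only illustrates the pattern just described: a visual element is attached to a multi-body frame like any other component, but contributes nothing to the physics. The Modelica Standard Library's FixedShape visualizer stands in for the DLR library's own visualizer classes, whose actual names are not given here.

```modelica
model MinimalVisualization
  "Pattern sketch: a purely visual element attached to a multi-body frame.
   FixedShape stands in for the DLR Visualization Library's own classes."
  inner Modelica.Mechanics.MultiBody.World world;
  Modelica.Mechanics.MultiBody.Parts.BodyShape body(
    m=1, r={0.5,0,0}, r_CM={0.25,0,0}) "A physical body";
  Modelica.Mechanics.MultiBody.Visualizers.FixedShape shape(
    shapeType="box", length=0.5, width=0.2, height=0.2)
    "Visual element: it influences rendering only, not the equations";
equation
  connect(world.frame_b, body.frame_a);
  connect(body.frame_b, shape.frame_a); // attach the visual to the body frame
end MinimalVisualization;
```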

The DLR Visualization Library provides a wide range of 3D objects: from simple elements like boxes and gearwheels, over complex 3D file geometries, to elements defining the representation itself, like head-up displays showing variables, or cameras that are both positioned in the 3D environment and display their images on the screen.

From a technical perspective this is achieved by utilizing Modelica's C language interface to establish a TCP/IP connection between the simulation and the SimVis application, transmitting information about the configuration of the 3D elements to be displayed [12].

3.2 SensorNet

Modern robotics applications often use cameras and require the real-time analysis of images. The problem is twofold. First, the amount of data is immense: for example, a single VGA camera generates about 640 · 480 · 3 Byte · 30 Hz ≈ 28 MByte/s of raw data, and recent video compression methods, e.g. MPEG-4 or DivX, are computationally expensive, degrade the image quality, and should therefore be avoided in image processing tasks. Second, robots interact with their environment, so real-time restrictions apply to the image processing: the time from image acquisition to a possible reaction has to be minimized.

This requires extremely efficient dispatching of image data, which is achieved by the communication framework SensorNet. It is designed to provide sensor data, e.g. from cameras, with low latency to multiple concurrent applications. To this end, previous concepts for local real-time communication via shared memory [14] and for a unified description of camera and range sensor data [15] are combined and extended in the SensorNet data streaming concept. In detail, a ring buffer in shared memory, in conjunction with a signaling mechanism, is used to distribute data from a server process to multiple client processes with low latency (<100 µs). The interface also comprises data type metadata that allows for type checking. Further, predefined, unified data types are used, e.g. color image or depth image, which act as an abstraction layer; as a result, sensors of the same type can easily be exchanged by just replacing the server process. Data can be distributed across system borders by connecting the shared memories of different systems with UDP- or TCP-based data transfer. Additionally, a separate TCP-based configuration channel allows setting and getting parameters, e.g. camera shutter time, without influencing the real-time data streaming.

3.3 Interface

Acquiring images with real cameras under controlled conditions is not always feasible, as described above. The intention is to reuse the existing solutions for 3D simulation and image data dispatching.

The DLR Visualization Library's cameras are designed for displaying images on screen. Since Modelica is an object-oriented language, the new camera model is derived from this existing solution and extended by additional parameters. By default, the cameras are aligned within the 3D environment using rigid body transformations and displayed either in the SimVis window or full screen; in both cases the camera resolution is determined as a ratio of either the window size or the screen size, for easier portability from one PC to another. In contrast, real cameras have a fixed resolution in pixels, so the simulation requires this resolution as a new parameter. Likewise, the image data is always displayed on screen in RGB format, yet cameras often use different formats; this implementation currently supports two alternatives, YUV and grayscale. Furthermore, SensorNet needs a name for the shared memory object to identify a specific camera, and a role for the camera in a stereo setup. If the camera is part of a stereo setup, both cameras use the same shared memory object: one camera has to be set as master and one as slave, and both images are then acquired simultaneously and put into the same shared memory object whenever the master camera is triggered. A sketch of such a camera model is given below.
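The following sketch collects the additional parameters named in the text into one model. The base class name Camera and all identifiers are assumptions based on the description above, not the library's actual names.

```modelica
model SensorNetCamera
  "Sketch of the extended camera model described above; base class and
   parameter names are assumed, not the library's actual identifiers."
  extends Camera; // existing screen-oriented camera of the DLR Visualization Library (name assumed)
  parameter Integer width = 640 "Fixed horizontal resolution in pixels";
  parameter Integer height = 480 "Fixed vertical resolution in pixels";
  parameter String imageFormat = "YUV" "Output format: YUV or grayscale (RGB is the render format)";
  parameter String sharedMemoryName = "frontStereo" "SensorNet shared memory object";
  parameter Boolean isStereoMaster = true
    "In a stereo pair, the master triggers acquisition of both images";
end SensorNetCamera;
```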

With these additional parameters set up in the DLR Visualization Library, SimVis can reuse the majority of its existing camera implementation. The main difference lies in the render target: normal cameras render to a frame buffer that is then displayed on screen, whereas for the simulated cameras the render target is redirected to a frame buffer object (FBO), which is not displayed but read back to main memory. Thereby the image is rendered in the same way, but off-screen, and is accessible to the application as a data array in RGB format. This image data first has to be converted to the desired image format. Conversion into YUV is carried out per pixel using the following equations:

Y = 0.299 R + 0.587 G + 0.114 B
U = 0.493 (B − Y)
V = 0.877 (R − Y)

These equations also cover the conversion to grayscale by using only the Y component, which describes the pixel's luminance [16] (see the function sketch below). The preprocessed image is then packed into one of SensorNet's default image formats, a time stamp corresponding to the current simulation time is added, and the image is released. Releasing an image in the SensorNet context makes it available to other applications, primarily the image interpretation algorithms that are part of an artificial intelligence agent.
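As a minimal sketch, the per-pixel conversion above can be written directly as a Modelica function; only the function and argument names are our own.

```modelica
function rgbToYuv
  "Per-pixel conversion implementing exactly the equations above;
   for grayscale output only the Y component is kept."
  input Real R "Red channel, in [0, 1]";
  input Real G "Green channel, in [0, 1]";
  input Real B "Blue channel, in [0, 1]";
  output Real Y "Luminance";
  output Real U "Chrominance (blue difference)";
  output Real V "Chrominance (red difference)";
algorithm
  Y := 0.299*R + 0.587*G + 0.114*B;
  U := 0.493*(B - Y);
  V := 0.877*(R - Y);
end rgbToYuv;
```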

4 Experimental Results

The proposed scheme is evaluated by a simple example, in which a testing environment for a vision-based control (VBC) platooning algorithm is created. The basic idea is that the ROMO follows a preceding car while using only a front stereo camera pair for perception. Initially, the ROMO has only an image of the back of the target car, taken from an appropriate following distance.

After the platooning mode is activated, the ROMO tries to find the target car in the current camera image. Analogous to vision-based robot control [17], the goal is to see the target object at the same size and at the same angle as in the reference image. Therefore, the ROMO tries to reach and hold the very relative position in which the initial image was taken. The target position can also be made velocity dependent, to keep the minimum safety clearance.

A simulation model in Dymola is created. First, the ROMO's top front camera pair (refer to Figure 3) is modeled using the DLR Visualization Library's SensorNet camera class, which was developed for the proposed simulation scheme, with the appropriate parameterization; a sketch of such a stereo setup follows Figure 3. The cameras are attached at their respective positions to a 3D geometry model of the ROMO, which is extracted from CAD. An overall model of the ROMO containing all actuators, sensors, the electrical systems, etc. could be used, but the example focuses on the perception part, which is the main interest of this paper.

Figure 3: The DLR’s ROboMObil
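A sketch of the stereo setup, reusing the hypothetical SensorNetCamera from Section 3.3: as described there, both cameras share one shared memory object, with one master and one slave. The mounting offsets are made-up values, not the ROMO's real camera positions, and the cameras' frame connector name is assumed.

```modelica
model FrontStereoPair
  "Sketch of the experiment's stereo setup; offsets and connector names
   are illustrative assumptions."
  Modelica.Mechanics.MultiBody.Interfaces.Frame_a romoFrame
    "Mounting frame on the ROMO's 3D geometry model";
  Modelica.Mechanics.MultiBody.Parts.FixedTranslation leftMount(r={0, 0.1, 0});
  Modelica.Mechanics.MultiBody.Parts.FixedTranslation rightMount(r={0, -0.1, 0});
  SensorNetCamera leftCam(sharedMemoryName="frontStereo", isStereoMaster=true)
    "Master: triggers acquisition of both images";
  SensorNetCamera rightCam(sharedMemoryName="frontStereo", isStereoMaster=false)
    "Slave: written to the same shared memory object";
equation
  connect(romoFrame, leftMount.frame_a);
  connect(romoFrame, rightMount.frame_a);
  connect(leftMount.frame_b, leftCam.frame_a);   // camera frame name assumed
  connect(rightMount.frame_b, rightCam.frame_a);
end FrontStereoPair;
```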

Buildings, streets, and a surrounding landscape are placed in the virtual environment; for this, the DLR Visualization Library provides an integration block for common 3D file formats like '.3ds'. The target car consists of an animated 3D model bound to a trajectory block that moves the object within the virtual world. Now the simulation is started and images are sent to the shared memory via SensorNet. The AIA algorithms receive a notification that new images are available and begin to run. First, a SensorNet implementation of the DLR's 3D reconstruction algorithm, called Semi-Global Matching (SGM) [18], calculates depth information from the two images of the stereo camera. The result can be seen in Figure 4: the left part shows the scene recorded by the stereo camera and the right part a visualization of the depth values.


Figure 4: Semi-Global Matching applied to the virtual images

Figure 5: Matching features for estimating the relative position (the panels show the x-z plane with the target position marked): (a) at the target position; (b) 4 meters deviation in camera direction

The color value of every pixel is set according to the depth value at that point: small values are colored red, and with increasing depth the colors go from orange over green to blue. Black parts of the depth image are either too far away, like the sky, or could not be reconstructed. The latter is often the case in regions with homogeneously textured surfaces, where the reconstruction algorithm cannot find matching points in both images.


After calculating the depths, SGM writes a structure consisting of a rectified actual image, a quality map, and the depth image back into the shared memory. This structure is accessed by the VBC car-following algorithm, which starts by running a feature detection algorithm, e.g. AGAST [19], on the actual image. A descriptor for matching is calculated for every feature point, and the 3D coordinates of every feature point are determined using the depth image from SGM. The keypoints of the target image, with their descriptors and 3D values, are available a priori, so the descriptors are compared to find matches between both images. The results can be seen in Figure 5, where the right image is the target and the left one is the current camera image of the simulation; matching keypoints are connected with a green line. Apart from these markings there is no color in Figure 5, as the feature detection works on grayscale images. At least four matching keypoints are selected at random, and from their respective 3D coordinates the rotation matrix R and translation vector T between the current keypoints and the target keypoints are calculated. The quality of the estimated R, T is measured by applying R, T to all keypoints of the current image that have matches, projecting them back into the 2D image space, and summing the distances to their matching points in the target image over all keypoints (formalized below). The whole procedure is repeated with other sets of features until the quality of R, T is sufficient or a certain number of iterations is exceeded, in which case the best iteration is kept.

The preceding car in the simulation starts at the target relative position and moves four meters in the camera's z direction, while the ROMO remains stationary. The matches at the beginning can be seen in Figure 5a, and the end of the movement is depicted in Figure 5b. The number of found correspondences decreases during the movement. This is partly normal due to the changed perspective, but it is additionally aggravated here by the simple textures, which lead to weak feature points. Incorrectly matched or too few feature points can disturb the results of the algorithm immensely.

Nevertheless, the simulation has shown the general functionality of the algorithm, as can be seen in Figure 6: the z value of the deviation from the goal position changes from 0 to 4000 mm while the target car moves in the z direction. The deviation from the real position can be due to badly chosen feature points, depth measurement errors, or an imprecise calibration of the virtual cameras. Based on the computed rotation and translation, a trajectory can be calculated and fed back into the Modelica simulation via a TCP/IP channel to control the simulated ROMO in its virtual environment. In this early state of the algorithm's development, the AIA simulation scheme is very helpful for identifying weaknesses and increasing robustness before tests with the real ROMO are possible.
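The paper describes the pose-quality measure only in words. Writing it as a formula, under the assumption that π denotes the projection from camera coordinates into the 2D image plane, p_i the 3D coordinates of the matched keypoints in the current image, q_i the 2D positions of their matches in the target image, and M the set of matches:

$$e(R, T) \;=\; \sum_{i \in M} \bigl\| \, \pi(R\,p_i + T) - q_i \, \bigr\|$$

Random four-point subsets are drawn and (R, T) re-estimated until e(R, T) falls below a threshold or the iteration limit is reached, and the estimate with the smallest e is kept.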

Figure 6: The deviation from the relative target position over time (z in camera direction; deviation in mm, time in seconds; plotted curves: z estimation and z simulation)

5 Conclusions and Future Work

This paper presents a scheme for testing the artificial intelligence algorithms of autonomous systems according to the 'Software-in-the-Loop' and 'Hardware-in-the-Loop' principles. Existing multi-physics models are combined with the actual artificial intelligence algorithms, which do not have to be adapted for the simulation. This is achieved by extending the DLR Visualization Library with an interface to the sensor data management tool SensorNet, which is utilized in real autonomous systems at DLR's Robotics and Mechatronics Center. The capability of the concept is proven by a short example, in which the translation and rotation relative to a leading vehicle are determined by a vision-based car-following algorithm.

In further developments, more sensor types typically used in autonomous systems, such as a dGPS-aided Inertial Measurement Unit (IMU), will be modeled. The camera models can be extended with more realistic effects such as lens distortion. Moreover, we plan to use virtual objects with more complex textures to generate more realistic virtual pictures. In order to validate the simulation results, they have to be compared to results obtained with data from vehicle tests.


Acknowledgements

We thank the following colleagues for supporting us with their expert knowledge: Darius Burschka (image processing), Martin Otter (Modelica), Tobias Bellmann (DLR Visualization Library), Heiko Hirschmüller (SGM), Klaus Strobl (camera calibration), and the whole ROboMObil team.

References

[1] Jonathan Brembeck, Lok Man Ho, Alexander Schaub, Clemens Satzger, and Gerhard Hirzinger. ROMO - the robotic electric vehicle. In 22nd International Symposium on Dynamics of Vehicles on Roads and Tracks. IAVSD, 2011.

[2] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, third edition, December 2009.

[3] B. Gerkey, R. Vaughan, and A. Howard. The Player/Stage project: Tools for multi-robot and distributed sensor systems. In 11th International Conference on Advanced Robotics (ICAR 2003), Coimbra, Portugal, June 2003.

[4] N. Koenig and A. Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), volume 3, pages 2149-2154, September-October 2004.

[5] Morgan Quigley, Ken Conley, Brian P. Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.

[6] J. Jackson. Microsoft Robotics Studio: A technical introduction. IEEE Robotics & Automation Magazine, 14(4):82-87, December 2007.

[7] Marc Freese, Surya P. N. Singh, Fumio Ozaki, and Nobuto Matsuhira. Virtual robot experimentation platform V-Rep: A versatile 3D robot simulator. In Noriaki Ando, Stephen Balakirsky, Thomas Hemker, Monica Reggiani, and Oskar von Stryk, editors, SIMPAR, volume 6472 of Lecture Notes in Computer Science, pages 51-62. Springer, 2010.

[8] O. Michel. Webots: Professional mobile robot simulation. Journal of Advanced Robotics Systems, 1(1):39-42, 2004.

[9] M. Montemerlo, S. Thrun, et al. Junior: The Stanford entry in the Urban Challenge. Journal of Field Robotics, 2008.

[10] Bullet Physics Engine. http://bulletphysics.org, accessed 15.05.2012.

[11] Russell Smith. Open Dynamics Engine, 2008. http://www.ode.org/.

[12] Tobias Bellmann. Interactive simulations and advanced visualization with Modelica. In Proceedings of the 7th International Modelica Conference, Como, Italy, 2009.

[13] Alexander Schaub, Jonathan Brembeck, Darius Burschka, and Gerd Hirzinger. Robotic electric vehicle with camera-based autonomy approach. ATZelektronik, 2(2):10-16, April 2011.

[14] G. Hirzinger and B. Bäuml. Agile robot development (aRD): A pragmatic approach to robotic software. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), pages 3741-3748, October 2006.

[15] T. Bodenmüller, W. Sepp, M. Suppa, and G. Hirzinger. Tackling multi-sensory 3D data acquisition and fusion. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), pages 2180-2185, 2007.

[16] Charles Poynton. Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, first edition, 2003.

[17] F. Chaumette and S. Hutchinson. Visual servo control, Part I: Basic approaches. IEEE Robotics & Automation Magazine, 13(4):82-90, 2006.

[18] H. Hirschmüller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):328-341, 2008.

[19] Elmar Mair, Gregory D. Hager, Darius Burschka, Michael Suppa, and Gerhard Hirzinger. Adaptive and generic corner detection based on the accelerated segment test. In Proceedings of the 11th European Conference on Computer Vision (ECCV 2010), Part II, pages 183-196, Berlin, Heidelberg, 2010. Springer-Verlag.
