

Publications of the DLR elib

This is the author’s copy of the publication as archived with the DLR’s electronic library at http://elib.dlr.de. Please consult the original publication for citation.

Mobile manipulation for planetary exploration
P. Lehner; S. Brunner; A. Dömel; H. Gmeiner; S. Riedel; B. Vodermayer; A. Wedler
Keywords: aerospace robotics; manipulators; mobile robots; path planning; planetary rovers; telerobotics; complex manipulation tasks; arbitrary tool handling; moon-analogue demonstration mission; mobile manipulation; planetary exploration; robotic systems; foreign planets; teleoperation; autonomous rover; versatile constraint motion planner; remote control; autonomous task control; Light Weight Rover Unit; Task analysis; Payloads; Manipulators; Software; Computer architecture; Planets

Copyright Notice
© 2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Citation Notice
@INPROCEEDINGS{8396726,

author={P. Lehner and S. Brunner and A. D\"omel and H. Gmeiner and S. Riedel and B. Vodermayer and A. Wedler},

booktitle={2018 IEEE Aerospace Conference},

title={Mobile manipulation for planetary exploration},

year={2018},

volume={},

number={},

pages={1-11},

keywords={aerospace robotics; manipulators; mobile robots; path planning; planetary rovers; telerobotics; complex manipulation tasks; arbitrary tool handling; moon-analogue demonstration mission; mobile manipulation; planetary exploration; robotic systems; foreign planets; teleoperation; autonomous rover; versatile constraint motion planner; remote control; autonomous task control; Light Weight Rover Unit; Task analysis; Payloads; Manipulators; Software; Computer architecture; Planets},

doi={10.1109/AERO.2018.8396726},

ISSN={},

month={March},}

Mobile Manipulation for Planetary Exploration

Peter Lehner∗, Sebastian Brunner∗, Andreas Dömel, Heinrich Gmeiner, Sebastian Riedel, Bernhard Vodermayer, Armin Wedler

German Aerospace Center (DLR)
Institute of Robotics and Mechatronics

Muenchener Str. 20, 82234 Wessling, Germany
{Peter.Lehner, Sebastian.Brunner, Andreas.Doemel, Heinrich.Gmeiner, Sebastian.Riedel, Bernhard.Vodermayer, Armin.Wedler}@dlr.de
∗Both authors contributed equally to this work.

Abstract—Robotic systems map unknown terrain and collect scientifically relevant data on foreign planets. Currently, pilots from Earth steer these rovers on Moon and Mars surfaces via teleoperation. However, remote control suffers from the high delay of long-distance communication, which reduces the time the rover can spend gathering scientific data. We propose a system architecture for an autonomous rover for planetary exploration. The architecture is centered around a flexible, scalable world model to record and represent the environment of the robot. An autonomous task control framework and a versatile constraint motion planner use the live information from the world model to steer the rover through complex manipulation tasks. Furthermore, we present the enhancement of our Light Weight Rover Unit (LRU) with an innovative docking interface for arbitrary tool handling. We showcase the effectiveness of our approach at the moon-analogue demonstration mission of the ROBEX project on Mt. Etna, Sicily. We show in two experiments that the robot is capable of autonomously deploying scientific instruments and collecting soil samples from the volcano’s surface.

I. INTRODUCTION

Planetary exploration aims at discovering scientific insights about other planets. Scientists want to explain a planet’s composition and formation, find new elements, and ultimately discover new life on a remote planet. All these discoveries require data. Scientists want to analyze images of the planet’s surface, measure seismic activity, and take samples of the planet’s soil. One solution is to send a robotic system to the remote planet in order to gather this data. Employing an autonomous rover has one main advantage: data can be gathered faster than by human teleoperation, as it is not hindered by the delay of long-distance communication.

Nevertheless, designing an autonomous mobile manipulation approach for planetary exploration poses many challenges. One issue is the mass of the robotic system: while the rover needs to be relatively lightweight, it needs to manipulate objects which are relatively heavy. Another issue is the communication delay: as live monitoring is limited, the robot must feature a high degree of autonomy and robustness. Finally, many challenges of autonomous manipulation apply: a complex system architecture with many sensors and actuators, partial information about the robot’s surroundings, and localization uncertainty of the robot and the manipulated objects.

Fig. 1. The Light Weight Rover Unit lifts the scientific instrument from the ground and places the seismometer into the payload carrier on the robot’s back.

We present an integrated approach for mobile manipulation which tackles the key challenges of planetary exploration. We feature a fully autonomous rover prototype with stereo cameras, a lightweight manipulator, and an innovative docking interface. The rover is steered by our autonomous task control software with an architecture design centered around a flexible world model. The robot’s environment information is shared in a graph database between the core software components: a constrained motion planner, an autonomous navigation and exploration system, and object detection modules for lander and measurement instrument localization.

We showcase our approach and its robustness in two planetary exploration scenarios at a moon-analogue site on Mt. Etna, Sicily. In the first experiment, the rover delivers a seismographic instrument in a payload box from a landing unit to a chosen location. The rover autonomously deploys the instrument, which includes leveling the ground surface and creating an impulse for a test measurement. In the second experiment, the rover takes a payload box with storage slots for soil samples from a landing unit and deploys it at a target location. The robot then docks to a shovel, acquires a sample, inserts it into the payload box, and recovers the payload box from the ground. Both missions pose significant challenges to the rover’s autonomy but were completed successfully.

II. RELATED WORK

Real space environments like those on the Moon or Mars pose many challenges to both the hardware and software of a rover. The most famous examples of rovers ever built are Sojourner [1], Lunokhod [2], Spirit, Curiosity and Opportunity [3]. All of them were deployed on the Moon or Mars and covered distances from 100 meters to more than 40 kilometers [4]. Unfortunately, they all lack autonomous behavior, as all their actions are remotely controlled [2, 3, 5]. Thus, new systems evolve which attempt to complete missions fully autonomously, like the Light Weight Rover Unit (LRU) rover [6].

Robot autonomy requires flexible system and software architectures to divide the complexity of the overall task. The CLARAty architecture [7] was proposed at the beginning of this century, but was not deployed on a real Moon or Mars mission. Further software architectures proposed by NASA are [8] and [5], which generate action sequences automatically, although these are double-checked by humans each time before they are sent to the Mars rovers. Architectures for fully autonomous systems are proposed and tested by Schuster et al. [6], Eich et al. [9] and Schneider et al. [10].

Autonomy requires frameworks for task programming, behavior definition, and mission control. We designed and implemented our own framework as no existing software fit our needs [11]. Unfortunately, the development and support of many software packages has already been discontinued [12, 13], although the general concepts behind them were quite promising. Other frameworks do not ship a graphical editor for robotic behavior, which is, in our opinion, vital to cover very complex scenarios [14, 15]. Thus, we developed our own solution for creating complex robotic tasks, called RMC Advanced Flow Control (RAFCON) [11]. The best alternative to our framework is FlexBE, which supports many of the features of RAFCON but lacks semantic state annotation, generation of meaningful task metrics during state machine creation or execution, and post-mortem analysis [16].

The robotics open source community lacks software packages for semantic knowledge representation. There are two Robot Operating System (ROS) packages [17, 18], which are both discontinued. Apart from ROS there are few other world model frameworks available, and some of them represent their knowledge in a scene-graph-like structure [19, 18, 20, 21]. In contrast, we integrate a graph database for the storage of highly connected heterogeneous data and designed a flexible interface for distributed remote access to the world model, which is presented in Section III-B.

Planning motions for manipulators is a wide field in the robotics community. Sampling-based planners like the Rapidly Exploring Random Tree (RRT) and the Probabilistic Roadmap Method (PRM) have emerged as the most widely used planning approaches due to their ability to explore the vast solution space of the manipulator’s configuration space efficiently and with probabilistic completeness [22, 23]. Our motion planner is similar in concept to the Constrained Bi-RRT (CBiRRT2), which extends the RRT algorithm with the concept of task constraints [24]. Our approach exceeds the CBiRRT2 through its tight integration in the system architecture and its additional constraints for planetary exploration, for example gravitational constraints. Additionally, our approach has the ability to learn from previous planning queries, which is presented in detail in [25].

Docking and interfacing systems targeting deep space and planetary missions differ greatly in their requirements. They are usually adapted to narrowly defined tasks, lowering risks for the mission and costs, but leaving no options for flexibility. Common functionalities of these systems are that they enable the connection of mechanical loads, the transfer of electrical power and data, as well as establishing thermal distribution by liquid exchange. Despite the synergies between these fields of interest, several approaches by renowned agencies led to the implementation of individual, proprietary designs. The docking of mechanical loads can be solved through exposed guide elements [26, 27]. As a result, the active interface part must be capable of generating forces towards the passive coupling partner which are high enough to accomplish mating within the correct orientation, followed by establishing the connection for the target application. High forces have to be controlled and monitored carefully, as docking actions can bear the risk of collision with other parts of the system. As systems often employ space-grade off-the-shelf components with rectangular shape, the mating and connecting requires high precision to lower the risk of system damage and mission failure.

One way to classify the process of robotic manipulation and the necessary components involved is by the weight of the handled payload. Solutions comprising a gripper that provides functionality similar to that of a human hand are in general more appropriate for the manipulation of lightweight objects. In comparison, the handling of heavier payloads requires an interface comparable to an industrial tool changer. However, for the application of such a system the manipulator platform has to provide precise positioning, and a static model of the working environment has to be established for the time of operation. Dynamic changes of the environment lead to a time-consuming update of the model and even of technical components, possibly rendering the system unable to operate correctly. In consequence, standard industrial tool changers are unsuitable for explorative tasks performed by mobile robots in an unknown environment. In such cases the world model changes dynamically, thus systems involved in docking or undocking maneuvers have to provide a capability for larger tolerances.


III. MOBILE MANIPULATION FOR PLANETARY EXPLORATION

Planetary exploration missions contain key milestones which define mission success or failure. One example of such a milestone in our experiments is when the rover docks to the payload element on the lander. To reach this milestone autonomously, the rover solves a whole sequence of tasks: the rover must locate the landing unit, drive to the correct docking location, fine-position the rover body, execute a manipulator motion to position the docking interface, and dock rigidly to the payload element. Every step must position the rover, the manipulator, or the docking interface within a certain distance of the payload for the next step to work. As this describes a long chain of dependencies, the main difficulty is not each individual step (nevertheless hard by itself) but the robustness of the integrated system.

To achieve a robustness level sufficient to execute a whole planetary exploration mission, we designed an architecture which is focused on integrating all components, as shown in Fig. 2. The centerpiece of the architecture is a world model which holds all the information the rover knows about itself, its environment, as well as the mission status. This information is shared by the main software components: the control flow RAFCON, the motion planner for the manipulator, the navigation and exploration modules, and the object detection and localization components. To control the access to the world model we designed special interfaces, which provide different views onto the world model. For example, the motion planner gathers its required information through an interface which only exposes the rigid body kinematics of the current scene. By having all information in one central representation with controlled access, we can ensure that each component always receives the current data and does not access data which is irrelevant for its domain.

Next to this central world model, the main components for mobile manipulation in our system are the task control framework, the navigation and exploration module, the manipulator motion planner, and the docking interface. As we build on our previous work, we only briefly present the individual components and focus on the novel contributions centered around the world model. As the navigation and exploration component remained similar in concept, please refer to our previous publication for details [6].

A. Autonomous Task Control

To program the autonomous behavior of the LRU rover and to monitor the progress of the mission execution we employed our flow control software RAFCON [11]. Compared to the state machines programmed for our previous experiments (see [6]), the complexity of the autonomous behavior for our scenarios in the project Robotic Exploration of Extreme Environments (ROBEX) rose even further. For the seismic measurements (see Section IV-A) we created state machines of more than 1400 states and 1900 transitions with a maximum hierarchy level of eight.

Fig. 2. The abstract system architecture of the Light Weight Rover Unit, depicting the main components.

Fig. 3. The RAFCON state machine for the seismic measurement experiment described in Section IV-A.

As described in [6], we use a main decision maker for the robot to decide which subtask to execute next based on the knowledge of the robot’s environment (i.e. the world model). Moreover, we employ local decision makers to decide upon the next actions based on different sensor inputs or failure events. Error recovery procedures for many failure types can be added easily, as RAFCON offers powerful support for error handling in its very core design.
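The decision-maker pattern, outcome-based transitions with dedicated error routes, can be sketched as follows. This is our own simplification with made-up state names, not the actual RAFCON API:

```python
comms_ok = {"lander": False}   # simulated outdoor communication failure

def sync_with_lander():
    return "done" if comms_ok["lander"] else "comm_error"

def recover_comms():
    comms_ok["lander"] = True  # e.g. switch to a fallback channel and retry
    return "retry"

def run_state_machine(states, transitions, start):
    """Run states until no transition matches; every state returns an
    outcome string, and (state, outcome) selects the successor state."""
    state, trace = start, []
    while state is not None:
        outcome = states[state]()
        trace.append((state, outcome))
        state = transitions.get((state, outcome))
    return trace

states = {
    "sync_with_lander": sync_with_lander,
    "recover_comms": recover_comms,
    "deploy_unit": lambda: "done",
}
transitions = {
    ("sync_with_lander", "done"): "deploy_unit",
    ("sync_with_lander", "comm_error"): "recover_comms",  # local error recovery
    ("recover_comms", "retry"): "sync_with_lander",
}

trace = run_state_machine(states, transitions, "sync_with_lander")
print([s for s, _ in trace])
# ['sync_with_lander', 'recover_comms', 'sync_with_lander', 'deploy_unit']
```

The error outcome routes back through a recovery state instead of aborting the mission, which is the essence of the local decision makers described here.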

We extended the robot behavior to cover different autonomy levels depending on the current testing situation. In general, the main mission commands are issued by the sequencer on the landing unit, so the rover has to synchronize with the lander. Furthermore, the rover has to communicate with the seismic measurement units (or ”remote units”) to retrieve e.g. the gravity vector of the current deployment pose. As usual for real outdoor tests, both the communication to the lander and to the remote units occasionally fails. In these situations the robot needs to choose the best action path to continue the mission as well as possible.

RAFCON furthermore enables us to test whole action sequences remotely from a control computer in the base station before they are deployed onto the robot for fully autonomous behavior. In practice, this development procedure saves a lot of time, as a frequent deployment procedure over a bandwidth-limited, high-delay communication link would block the robot for long durations.

B. World Model

1) Basic Concept: In recent years the development of robotic systems has benefited immensely from the design concept of a fine-grained modular approach. One major goal is to decouple all components. So why do we come up with a robot world model as a central component? The consideration is that most components have an internal world model. The world models of some components are a complex geometric representation, e.g. for path planning modules; others are only a few parameters identifying the system’s properties for control. However, since almost all modules are related to the real world, the components indirectly depend on each other. For example, if an object is grasped, the load data changes, additional constraints arise for the planner, the field of view of the cameras changes, etc. Instead of keeping all models individually updated, our approach has one model of the world. Changes of the real world are tracked and modeled in that component. All other components can extract their world representations from this world model, ensuring synchronization.

Since computational resources on the system are limited, we use an abstract representation of the world instead of simulating the environment and physical effects in detail. The basic consideration is that the most important relation between objects is the geometric relation. Hence our world model is based on geometric transformations connecting different objects. Since our use cases are usually in quasi-static environments, most of these relations are constant. Instead of simulating physical laws, our assumption is: every transformation is constant until we get new information. We decided to use a tree structure due to the fact that most of the knowledge we have about the environment consists of relative relations. For example, we know the static relation between lander and docking port, but not the pose of the docking port in the map frame. Therefore every object has exactly one parent object with a defined transformation. Changing this relation implicitly changes the poses of all child objects with respect to the map frame. Besides physical objects, our world model has different object types, e.g. frames, markers and grasps, representing additional knowledge in the model (see Figure 4).
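The tree-of-transformations idea can be sketched in a few lines (a minimal illustration of ours with translations only and invented names, not the actual world-model implementation):

```python
class WorldObject:
    """A node in the world-model tree: the pose is stored relative to the
    parent object; the root (the map frame) has no parent."""
    def __init__(self, name, parent=None, t=(0.0, 0.0, 0.0)):
        self.name, self.parent, self.t = name, parent, t

    def world_pose(self):
        """Chain the relative transformations up to the map frame
        (translations only here; rotations are omitted for brevity)."""
        if self.parent is None:
            return self.t
        px, py, pz = self.parent.world_pose()
        x, y, z = self.t
        return (px + x, py + y, pz + z)

# We know the lander relative to the map and the docking port relative to
# the lander. Improving the lander's pose estimate implicitly moves the
# docking port as well -- no child object needs an explicit update.
map_frame = WorldObject("Map")
lander = WorldObject("Lander", parent=map_frame, t=(10.0, 5.0, 0.0))
port = WorldObject("DockingPort", parent=lander, t=(0.5, 0.0, 1.0))

print(port.world_pose())        # (10.5, 5.0, 1.0)
lander.t = (11.0, 5.0, 0.0)     # new measurement of the lander pose
print(port.world_pose())        # (11.5, 5.0, 1.0)
```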

2) Application: For manipulation tasks the robot has to change its environment. These changes of the environment can be modeled explicitly. Since our world model representation of the robot is very simple, consisting of two objects, only few operations and their effects are possible during execution:

• Move manipulator: When the robot moves its manipulator, the transformation between robot base and flange has to be updated.

• Move platform: When the robot’s platform is moved, the transformation between the scene root and robot base has to be updated.

• Pick up object: When an object is grasped, the parent of the object has to be changed to the robot flange in the world model. The transformation to the robot flange is given by the applied grasp.

• Place object: When an object is placed onto an object in the scene, its parent changes from the robot flange to the object on which it was placed.
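These operations amount to small, local edits of the tree. A minimal sketch of the two re-parenting operations, using plain tuples and symbolic transform names of our own (including the hypothetical "PayloadCarrier") rather than the actual world-model API:

```python
# Each entry maps an object name to (parent, transform); the transforms are
# symbolic placeholders here, since only the re-parenting logic matters.
world = {
    "RobotBase":   ("Map", "T_map_base"),
    "RobotFlange": ("RobotBase", "T_base_flange"),
    "Lander":      ("Map", "T_map_lander"),
    "RemoteUnit":  ("Lander", "T_lander_unit"),
}

def pick(world, obj, grasp_transform):
    """Grasping re-parents the object to the robot flange; its new relative
    pose is given by the applied grasp."""
    world[obj] = ("RobotFlange", grasp_transform)

def place(world, obj, support, placement_transform):
    """Placing re-parents the object from the flange to the supporting object."""
    assert world[obj][0] == "RobotFlange", "object is not currently held"
    world[obj] = (support, placement_transform)

pick(world, "RemoteUnit", "T_flange_grasp1")
print(world["RemoteUnit"])   # ('RobotFlange', 'T_flange_grasp1')
place(world, "RemoteUnit", "PayloadCarrier", "T_carrier_unit")
print(world["RemoteUnit"])   # ('PayloadCarrier', 'T_carrier_unit')
```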

To estimate the pose of an object, a local reference frame is needed. Knowing the world pose of the reference frame allows computing the world pose of the object. Often the same reference object or frame is used for many objects. In the case that the measurement of the reference object with respect to the world frame is improved, all referenced objects have to be moved to benefit from that information.

3) Implementation: The proposed world model is implemented using the graph database Neo4j1 as backend. Neo4j represents data using a property-graph model, meaning the database is a graph composed of nodes interconnected by edges, both of which can store arbitrary properties in a key-value fashion. For our world model, we directly represent objects in the world model as nodes in the database graph, and correspondingly the edges in the graph build up the world model tree structure. The relative position and orientation between two objects are stored as properties of the connecting edge. We define and ensure a common type system for objects in the world by defining a type hierarchy with precisely listed required and optional properties for each object type (e.g. every PhysicalObject has to have a mass; every Grasp a width and force). This is done via a so-called object-graph mapping (OGM) using the neomodel library2.
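Such an enforced type hierarchy might look roughly like the following plain-Python stand-in for the neomodel class definitions (the enforcement logic and error handling are our own simplification, not the library's API):

```python
class WorldModelObject:
    """Base type: subclasses list their required properties, which are
    enforced on construction (optional properties pass through)."""
    required = ()

    def __init__(self, name, **props):
        missing = [p for p in self.required if p not in props]
        if missing:
            raise ValueError(f"{type(self).__name__} {name!r} is missing "
                             f"required properties: {missing}")
        self.name, self.props = name, props

class PhysicalObject(WorldModelObject):
    required = ("mass",)             # every PhysicalObject has to have a mass

class Grasp(WorldModelObject):
    required = ("width", "force")    # every Grasp a width and a force

box = PhysicalObject("PayloadBox", mass=2.5)
grasp = Grasp("Grasp1", width=0.06, force=40.0)
try:
    Grasp("BadGrasp", width=0.06)    # no force -> rejected by the type system
except ValueError as err:
    print(err)
```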

A database backend in general provides useful features to keep the world model consistent at any time, e.g. synchronized multi-client read/write access and transaction support for batching multiple modifications into an atomic operation. In particular, we use transactions with pre-/post-transaction sanity checks to allow complex yet safe operations on the world model. In case of a software error or violated sanity checks (e.g. a world model object suddenly has two parents), the world model is automatically rolled back to the consistent state before this modification. As a graph database, Neo4j offers a pattern-based query language called Cypher, which makes it very easy to query the world model for e.g. ”all rigid bodies for which a grasp is defined.”
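The transaction-with-sanity-check pattern can be illustrated with an in-memory stand-in (our own sketch; in the real system Neo4j provides the atomicity, and only the sanity checks are application-specific):

```python
import copy
from contextlib import contextmanager

@contextmanager
def transaction(world, sanity_checks):
    """Batch modifications atomically: on an exception or a violated
    post-transaction sanity check, roll the world model back to the
    consistent state before the modification."""
    snapshot = copy.deepcopy(world)
    try:
        yield world
        for check in sanity_checks:
            assert check(world), f"sanity check failed: {check.__name__}"
    except Exception:
        world.clear()
        world.update(snapshot)
        raise

# world maps each object to its list of parents; the check mirrors the
# "an object suddenly has two parents" example from the text.
def single_parent(world):
    return all(len(parents) == 1 for parents in world.values())

world = {"RemoteUnit": ["Lander"]}
try:
    with transaction(world, [single_parent]) as w:
        w["RemoteUnit"].append("RobotFlange")   # bug: two parents at once
except AssertionError:
    pass

print(world)   # rolled back: {'RemoteUnit': ['Lander']}
```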

We use YAML files as a human-read-/writeable way to specify the initial world model, but also complex object templates (e.g. complete subgraphs of PhysicalObjects together with attached Grasps and Markers, etc.). To add such templates into the world model, we developed a domain-specific module and API for higher-level operations based on the raw Cypher and OGM access to the database. Other, more complex operations include querying the relative transformation between arbitrary objects in the graph or maintaining unique labels for nodes (e.g. exactly one node might hold the label CURRENT_PLANNING_SCENE).

1 https://neo4j.com/
2 https://github.com/robinedwards/neomodel

Fig. 4. Schematic of a part of the world model after the robot referenced itself to the Lander, shortly before picking up the Remote Unit. (Legend: node types Robot, PhysicalObject, Marker, Grasp, Frame; edge types static, quasi-static, referenced.)

Fig. 5. Different solution paths found by the manipulator motion planner, shown as discretized configurations of the arm. a) shows the manipulator motion of moving the payload from the lander onto the rover, b) shows the motion of picking up the payload from the ground, and c) shows the motion of approaching the ground with a shovel.

This API is the basis for several adapter modules which make operations accessible via ROS services. New functionality or composite operations can be added in a modular way by writing additional adapters as necessary. Two examples are 1) the GeometricScene adapter, which provides services to get a reduced object tree with everything relevant for building a geometric scene representation for motion planning, and 2) a ROS-Tf adapter, which takes care of periodically publishing the world as a ROS-Tf tree as well as updating externally published transformations (for example robot_flange → robot_base) in the world model.
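In that spirit, extracting a reduced "planning view" from the full object tree might look like the following sketch (a dictionary-based stand-in of our own, not the actual ROS service; transform composition across skipped nodes is omitted and kept symbolic):

```python
# Each entry: name -> (type, parent, transform, shape-or-None)
world = {
    "Map":        ("Frame",          None,         None,           None),
    "Lander":     ("PhysicalObject", "Map",        "T_map_lander", "lander.stl"),
    "Marker1":    ("Marker",         "Lander",     "T_l_m1",       None),
    "RemoteUnit": ("PhysicalObject", "Lander",     "T_l_ru",       "unit.stl"),
    "Grasp1":     ("Grasp",          "RemoteUnit", "T_ru_g1",      None),
    "RobotBase":  ("PhysicalObject", "Map",        "T_map_rb",     "rover.stl"),
}

def geometric_scene(world):
    """Keep only what a collision checker needs: rigid bodies with a shape,
    re-linked past non-geometric nodes (markers, grasps, frames) to their
    nearest physical ancestor."""
    physical = {n for n, (t, *_) in world.items() if t == "PhysicalObject"}
    scene = {}
    for name in physical:
        _, parent, transform, shape = world[name]
        while parent is not None and parent not in physical:
            parent = world[parent][1]        # skip the non-geometric node
        scene[name] = (parent, transform, shape)
    return scene

scene = geometric_scene(world)
print(sorted(scene))   # ['Lander', 'RemoteUnit', 'RobotBase']
```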

C. Manipulator Motion Planning

For the exploration of a remote planet, scientists want to deploy scientific instruments, take samples of the soil, and perform maintenance on the landing equipment. To equip a rover with a general-purpose device for all of these tasks, a robotic manipulator can be employed. But if the rover is to use the manipulator autonomously, it must plan motions which satisfy the following constraints:

• The motion must be collision free, permitting only desired contacts with the environment.

• The motion must respect the manipulator’s kinematic and dynamic constraints: the manipulator’s overall structure and the joint and torque limits.

• The motion must fulfill the required task: for example, connect the docking interface to the payload box.

Planning these motions is particularly difficult as the search space, the configuration space of the manipulator, is vast. To efficiently search the space of possible solutions we employ a sampling-based planner, as previously presented [6]. By using a derivative of the Rapidly Exploring Random Tree (RRT) we are able to compute a motion for the manipulator which satisfies all the mentioned constraints [22]. Figure 5 shows example solutions for tasks solved during the experiments.
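A minimal configuration-space RRT can be sketched as follows (a generic 2-DOF illustration of ours, not the LRU planner, which additionally handles task and torque constraints):

```python
import math, random

def rrt(start, goal, collision_free, step=0.2, iters=5000, goal_tol=0.3):
    """Minimal RRT sketch: repeatedly extend the tree node nearest to a
    random sample (with 10% goal bias) by one step, keeping only
    collision-free configurations."""
    random.seed(0)                       # deterministic for illustration
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else \
            (random.uniform(0.0, 5.0), random.uniform(0.0, 5.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        new = tuple(n + step * (s - n) / d for n, s in zip(nodes[i], sample))
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:   # goal reached: extract path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# A 2-DOF configuration space with a disc-shaped obstacle in the middle.
collision_free = lambda q: math.dist(q, (2.5, 2.5)) > 1.0
path = rrt((0.5, 0.5), (4.5, 4.5), collision_free)
print(path is not None and all(collision_free(q) for q in path[1:]))
```

The real planner searches the manipulator's joint space instead of this toy 2-D space and validates each extension against the geometric scene from the world model.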

Building on the previous work, we have included two major new concepts in the manipulator motion planning:

• Synchronization of the manipulation planner with the world representation.

• Torque constraints to manage the high payload of the seismic instruments.


Fig. 6. The payload box with passive coupling partner “P” placed on the ground (left side). The active coupling partner “A” comprising the metal spring grasping elements “S” (right side) while opening the capturing zone.

The synchronization of the manipulation motion planner is a central component of the overall manipulation pipeline. To safely plan and execute motions of the manipulator, the motion planner always requires the latest model of the rover’s state and the environment. This information is always present in the world model as described in Section III-B. To extract the geometric information, we designed a specialized interface: the rigid body interface extracts all geometric and shape information from the graph database and offers this information to the motion planner for a specific scene. For example, before the rover manipulates an object close to the lander, it detects the precise location of the lander and the surrounding objects. This information is inserted into the world model by the perception processes of the rover. Once the rover plans a motion for its manipulator, it extracts the shape and geometric relations of the lander and the surrounding objects to generate a collision-free motion.

Respecting the torque limits of the manipulator is particularly important in planetary exploration. On the one hand, the robotic manipulator must be lightweight to limit the cost of sending it to the remote planet. On the other hand, the manipulator must lift relatively heavy payload elements to deploy the scientific instruments. This leads to the fact that the manipulator cannot hold all payload elements in every configuration. For example, if the manipulator is fully stretched, the torques at joint two exceed the joint limits by far. To avoid such configurations, we include the torque limits when planning motions. At each configuration the planner computes the expected gravitational torques at the arm joints and selects a path which satisfies the torque limits of the manipulator.
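The gravitational-torque check can be illustrated for a planar two-link arm (link lengths, masses and torque limits below are made-up values for illustration, not the LRU arm's parameters):

```python
import math

G = 9.81  # m/s^2

def gravity_torques(joint_angles, link_lengths, link_masses, payload_mass):
    """Static gravitational torque about each joint of a planar arm:
    tau_i = G * sum of mass * horizontal lever arm for every link and the
    payload distal to joint i (link masses lumped at the link midpoints)."""
    joints, angle = [(0.0, 0.0)], 0.0
    for q, l in zip(joint_angles, link_lengths):
        angle += q
        x, y = joints[-1]
        joints.append((x + l * math.cos(angle), y + l * math.sin(angle)))
    torques = []
    for i in range(len(joint_angles)):
        xi = joints[i][0]
        tau = 0.0
        for j in range(i, len(link_lengths)):
            com_x = (joints[j][0] + joints[j + 1][0]) / 2.0
            tau += link_masses[j] * G * (com_x - xi)
        tau += payload_mass * G * (joints[-1][0] - xi)  # payload at the flange
        torques.append(tau)
    return torques

LIMITS = [60.0, 15.0]  # N*m, made-up joint torque limits

def within_limits(q):
    taus = gravity_torques(q, [0.5, 0.5], [2.0, 2.0], 2.7)
    return all(abs(t) <= lim for t, lim in zip(taus, LIMITS))

print(within_limits([0.0, 0.0]))    # False: fully stretched overloads joint 2
print(within_limits([0.0, -1.2]))   # True: a bent configuration is feasible
```

A planner using such a predicate simply rejects sampled configurations for which `within_limits` is false, exactly as the text describes for the gravitational constraint.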

D. Envicon Docking Interface

To avoid high-precision positioning of the docking interface at the beginning of the docking process, we developed a concept that uses a rotationally symmetric geometry for the docking core, including the connection of mechanical loads, electrical power, data transfer and fluid transfer. Furthermore, the concept foresees the principle of retraction accomplished by the active part of the docking interface, providing a zone of higher tolerance to initiate the docking process instead of requiring exact prepositioning by the robot or vehicle as described in [28], [29], [30] and [31]. The presented concept is capable of increasing the misalignment tolerance, thus leading to a more robust docking process, especially on mobile robots in rough and undefined environments. Furthermore, the concept can lower the overall system weight, as the forces required during the docking process do not have to be produced by the manipulation platform. In fact, the forces provided by the interface during the docking procedure exceed those of the manipulator platform. Despite the lack of guide rails in the docking core, the traction and friction created by the interface can withstand the maximum torques of the robotic arm. Figure 6 shows an overview of the docking interface in its validation environment.

Fig. 7. The mounting point of the docking interface’s active part is shifted behind the robotic arm’s TCP towards the last joint “J” in order to keep the distance between the payload’s center of gravity and the joint as short as possible.

The docking interface consists of an active and a passivecoupling partner. As proposed by the novel concept, thelatter is a rotationally symmetric cylinder with a defined,partially conic shape. The active coupling partner is basicallya cylindrical structure with an outer diameter of 102mm, aninner diameter of 65mm and a length of 79mm. The cylindricalstructure incorporates the mechanical capturing mechanismand the system controller unit along with redundant sensorsand power supplies. The system controller runs a set ofsoftware processes that serve for inter-system communication,safety mechanisms and the control of the docking process. Thedesign was adapted to the robotic arm used during the mission,leading to the interfaces overall weight of 390g while beingcapable of safely docking and manipulating payloads of more


Fig. 8. The left figure shows the model of the Light Weight Rover Unit and the RODIN lander during the un-docking of a seismic measurement unit. The right figure shows the same scene augmented with all transformations relevant for manipulation or scene registration.

than 5 kg. The maximum payload range for the robotic arm, however, was limited to 2.7 kg. To reduce the dynamic torques and forces generated by the payload during docked manipulation, the design was adapted to push the payload’s center of gravity towards the outermost joint of the robotic arm (Figure 7). The proposed design foresees easy upscaling to larger diameters and heavier payloads.

To enable the system’s docking functionality, the active coupling partner incorporates two motorized, ring-shaped lifting platforms that can be moved along the docking axis within the cylindrical structure. One lifting platform carries nine metal spring elements arranged along the inner circumference which open up a funnel-shaped capturing zone as they move outwards. As initially described, this capturing zone enables a higher degree of misalignment tolerance with respect to other comparable systems. This way the system can increase the probability of a successful docking process as soon as the passive coupling partner has entered the capturing zone.
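The funnel-shaped capturing zone can be thought of as a cone around the docking axis: the allowed lateral misalignment is largest at the funnel opening and shrinks as the passive partner is drawn in. The following sketch illustrates this geometric idea only; the radii and depth are illustrative placeholders, not the real interface dimensions, and `in_capture_zone` is a hypothetical helper, not part of the actual docking software.

```python
import math

def in_capture_zone(offset_xy_mm, offset_z_mm,
                    funnel_radius_mm=20.0, funnel_depth_mm=25.0):
    """Check whether the passive partner's estimated position lies inside
    a funnel-shaped (conical) capture zone of the active interface.

    The funnel is widest (funnel_radius_mm) at its opening and narrows
    linearly towards the docking axis over funnel_depth_mm. All numbers
    here are assumptions for illustration.
    """
    if not (0.0 <= offset_z_mm <= funnel_depth_mm):
        return False  # passive partner not yet within the funnel depth
    # Allowed lateral offset shrinks linearly with insertion depth.
    allowed = funnel_radius_mm * (1.0 - offset_z_mm / funnel_depth_mm)
    lateral = math.hypot(*offset_xy_mm)
    return lateral <= allowed
```

A misaligned approach that still falls inside the cone can be captured by the spring elements, which is why the manipulator does not need exact prepositioning.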

IV. EXPERIMENTS

We proved the applicability of our architecture in several experiments during the moon-analogue demonstration mission of ROBEX on Mt. Etna, Sicily. The first experiment targets the deployment of seismic measurement units in rough terrain; the second one consists of a sample return task.

A. Seismic Measurements

During the seismic measurement experiment, the rover performs the local transportation of the measurement device: The rover picks a payload box containing the instrument from a landing unit and carries the seismograph to a specified deploy location. Once at the deploy location, the robot deploys the seismograph and waits for a test measurement. Once the measurement is complete, the rover picks the seismograph again and transports it back to the landing unit.

The seismic measurement experiment poses three major difficulties:

• The measurement device (seismograph) must be placed in a predefined location with a tolerance of a few meters.

• The seismograph must be aligned with the gravity vector with a tolerance of a few degrees.

• The deployment of the seismograph must be verified with a controlled impact onto the ground.

Overall the rover was able to successfully complete the seismic measurement experiment fully autonomously in about one hour and ten minutes. Figure 9 shows the main scenes from the experiment. At the beginning the rover started at a distance of 5 m in front of the ROBEX Demonstration Lander (RODIN) (a). From the start position it autonomously drove to the pickup location at the back of the RODIN (b). Once at the pickup location the rover computed and executed a manipulator motion to press the envicon docking interface compliantly against the passive adapter on the payload box (see Figure 8) and rigidly connected to the payload box by closing the docking interface (c). After the release of the payload box by the landing unit, the rover pulled the payload box from the holder and placed it onto the rover’s carrier with an online planned, collision-free manipulator motion (d, e). Once the arm was back in the drive position (f), the rover drove to the deploy location (g, h, i) with Mt. Etna in the background. At the deploy location the rover reconnected to the payload box and placed it onto the ground (j, k). As the seismograph orientation differed from the gravity vector by ca. 12 deg, the rover leveled the ground multiple times by flattening the sand with the long edge of the payload box (l). Once the orientation was within a 5 deg tolerance, the rover optimized the contact of the seismograph with the ground by pressing the payload box onto the ground and performing a compliant rotary motion (m). To test the correct measurement of the seismograph with a predefined impulse, the rover hit the ground with its wrist compliantly (n). After the test measurement, the rover picked the payload box again and placed it on its carrier (o, p). Once the manipulator was back in drive position, the rover drove back to the landing unit and successfully completed the seismic measurement mission (q, r).
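The repeated leveling described above (measure tilt, flatten, re-measure, until the 5 deg tolerance is met) is a simple closed loop. The sketch below shows the control flow only; the three callables stand in for the rover's actual behaviors, and their names, the attempt limit, and the tolerance default are illustrative assumptions.

```python
def level_and_deploy(measure_tilt_deg, flatten_ground, place_payload,
                     tolerance_deg=5.0, max_attempts=5):
    """Iteratively flatten the ground until the payload's tilt relative to
    the gravity vector is within tolerance, then place the payload.

    The three callables are hypothetical stand-ins for the rover's real
    behaviors: tilt estimation, compliant flattening with the long edge
    of the payload box, and the final placement.
    """
    for attempt in range(max_attempts):
        tilt = measure_tilt_deg()
        if tilt <= tolerance_deg:
            place_payload()
            return True, attempt  # deployed after `attempt` flattenings
        flatten_ground()  # flatten the sand and try again
    return False, max_attempts  # give up; tolerance never reached
```

With the ca. 12 deg initial tilt reported in the experiment, the loop would flatten the ground a few times before the tolerance check passes and placement proceeds.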



Fig. 9. Main scenes from the seismic measurement mission. The LRU rover picks the seismograph from the lander (a-f), drives to the deploy location (g-i), deploys the instrument (i-n), and returns to the lander (o-r).

B. Sample Return

The target of the second experiment is to collect a soil sample from a target location. The rover approaches the region of interest and places the probe container on the ground. After grasping the shovel from a special holder element on the robot’s side, it shovels a soil probe into the probe container. Subsequently, the rover puts the sample box back onto the storage on the rover’s back and returns the sample to the base station.

Next to planning collision-free movements in a reasonable amount of time, another main challenge is to consider special constraints while manipulating the target objects. Specifically, while placing the soil sample into the sample container, the motion planner must ensure that the manipulator does not spill any of the collected soil. The same consideration has to be taken into account for the probe container after filling it with the soil: arbitrary motions of the box could spill the stored sample. These orientation constraints for the docked objects must be considered by the motion planner.
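An orientation constraint of this kind can be expressed as a predicate on the docked object's up-axis: reject any candidate configuration that tilts the shovel or container too far from world vertical. The sketch below is one minimal way such a check could look inside a sampling-based planner; the function name and the tolerance value are assumptions, not the planner's actual interface.

```python
import math

def upright_constraint(up_axis, max_tilt_deg=10.0):
    """Return True if the docked object's up-axis stays within
    max_tilt_deg of the world vertical, so the contained soil
    cannot spill.

    up_axis is the object's up direction in world coordinates.
    In a sampling-based planner this predicate would be evaluated
    for every sampled configuration (and along interpolated path
    segments), rejecting those that violate it.
    """
    x, y, z = up_axis
    norm = math.sqrt(x * x + y * y + z * z)
    # Angle between the up-axis and world z, clamped for fp safety.
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, z / norm))))
    return tilt <= max_tilt_deg
```

Restricting samples this way is what makes the shovel-insertion motion hard: the upright requirement leaves only a narrow valid region inside the manipulator's large configuration space.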

Overall the rover was able to successfully complete the sample experiment fully autonomously in about ten minutes. Figure 10 shows the main scenes from the experiment. At the beginning the rover drove to the sample location (a) and deployed the probe container by docking to the container and afterwards planning and executing a motion to place it onto the ground (b-d). After subsequently docking to the shovel with the manipulator, the rover used the shovel to collect a probe from the ground (e) and inserted the probe into one of the slots of the probe container (f). Once the manipulator had stored the shovel again, the rover lifted the probe container without spilling the probe (g-i).

V. DISCUSSION

The successful execution of both experiments shows that the LRU rover can complete complex mobile manipulation tasks for planetary exploration. The system’s main feature is that it can complete the tasks autonomously, robustly, and within a relatively short time. Both the autonomy and the robustness stem from the presented system architecture with the central world model. By gathering all information centrally and distributing the data over specialized interfaces to the individual components, each component is informed of the current world state, but only accesses the information relevant to its task.
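The pattern of one central store with narrow, component-specific views can be sketched in a few lines. This is an illustrative toy, not the actual DLR world model API; all names (`WorldModel`, `update`, `view`) are assumptions made for the example.

```python
class WorldModel:
    """Minimal sketch of a central world model: components publish facts
    into one shared store and read them back only through narrow views,
    so each component sees just the slice of world state it needs."""

    def __init__(self):
        self._objects = {}  # object name -> dict of properties

    def update(self, name, **properties):
        """Merge new properties into an object's record."""
        self._objects.setdefault(name, {}).update(properties)

    def view(self, *keys):
        """Return a snapshot exposing only the requested properties."""
        return {name: {k: props[k] for k in keys if k in props}
                for name, props in self._objects.items()}
```

For instance, a motion planner might request only `pose` entries while the docking component queries only `docked` flags, each staying unaware of the other's data.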

Compared to the Mars rovers of NASA [3], our rover is able to act more autonomously. In general, at NASA the autonomous behavior for a robot is generated on Earth, e.g. via MAPGEN [5]. Thereafter a human operator checks the action sequence, which is subsequently sent to the rover on Mars. This cycle is likely to consume a lot of time, as the communication delay between Earth and Mars can be up to 24 minutes. As described in Section III, our rover is capable of deciding upon the next action sequence by evaluating the autonomously collected data of the environment.

The task execution software RAFCON provided the ability to capture the complex mission procedure in human-understandable state machines. The hierarchical concept of the state machines allowed us to encapsulate the mission parts into clear segments and intuitively compose these segments into the overall mission. The intuitive visualization of the control flow during programming as well as testing allowed for swift debugging and monitoring of the rover’s progress in the mission.
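The hierarchical composition described here, where mission parts are self-contained segments nested inside a top-level sequence, can be sketched with two small classes. This is a simplified illustration of the concept, not the RAFCON API; the class names and the `"success"` outcome convention are assumptions for the example.

```python
class State:
    """A leaf state: runs a callable and returns an outcome string."""

    def __init__(self, name, action):
        self.name, self.action = name, action

    def run(self):
        return self.action()


class HierarchyState(State):
    """A container state that executes its child states in sequence,
    mirroring how mission parts are encapsulated into segments and
    composed into the overall mission. The first non-'success' outcome
    stops execution and propagates up, so errors can be handled at a
    higher level of the hierarchy."""

    def __init__(self, name, children):
        self.name, self.children = name, children

    def run(self):
        for child in self.children:
            outcome = child.run()
            if outcome != "success":
                return outcome  # propagate failure to the parent level
        return "success"
```

Because a `HierarchyState` is itself a `State`, a segment such as "pick payload" can be tested in isolation and then dropped into the overall mission unchanged, which is what made the composition intuitive.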

Our integrated manipulator motion planner was able to compute all paths necessary for the individual manipulation



Fig. 10. Main scenes from the sample return mission. The LRU rover autonomously places the probe container (a-d), inserts a sand probe into the container with a shovel (e-f), and lifts the probe container again (g-i).

steps. During the sample experiment, the motion planner was able to plan constrained motions for inserting the extracted soil with the shovel into the probe container. Planning this motion was particularly hard due to the narrow solution space within the vast search space. During the seismic measurement experiment, the motion planner was able to generate paths for picking the payload from the lander, placing the payload on the soil, executing a test impulse, as well as picking the payload from the ground again. Handling the heavy payload required the planner to observe the torque constraints induced by gravity.

The envicon docking interface enabled the rover to rigidly connect to a versatile range of objects. In the probe experiment the rover was able to dock rigidly to the probe container as well as the shovel. Both connections withstood all torques induced during the manipulation steps, especially while shoveling the ground. In the seismic measurement experiment, the docking interface withstood the torques induced by the heavy payload of the seismograph. Additionally, the docking interface did not succumb to the adverse conditions on Mt. Etna and was able to resist the fine lava dust as well as high wind speeds over a test campaign of several weeks.

Although our system can perform the majority of the tasks autonomously and in a robust manner, there are still many problems our rover cannot tackle. In particular, weather conditions complicate object detection and lead to lower object detection success rates or inaccurate pose estimations. Furthermore, heavy wind puts additional constraints on the impedance controller for in-contact motions. In summary, there are many open challenges that have to be tackled to build a truly robust system able to cover a wide field of outdoor scenarios.

An open issue is also the verification of the system autonomy to ensure it does not perform any actions which endanger the mission. Since the system consists of many distributed processes, an exhaustive formal verification of all individual processes and the overall integration would pose a monumental effort. Therefore we propose three aligned strategies to reach the necessary robustness: a verification of the autonomy on the state machine level in RAFCON, which can be automated and is thus feasible; an error recovery concept which leverages the hierarchical composition of the state machines as well as the centralized information of the world model; and extensive testing in mission-analogue test sites, e.g. on Mt. Etna.

VI. CONCLUSION

In this paper we proposed a robust and scalable system architecture for autonomous robots in the context of planetary exploration. The main features of this architecture consist of the online motion planning of highly constrained tasks and a powerful flow control framework closely linked to a world model capable of storing arbitrary information about the rover, its environment and gathered scientific data. We showcased our approach in several experiments in the context of the moon-analogue demonstration mission on Mt. Etna, Sicily. Ultimately, we could prove the robustness of our system in a rough terrain environment in the presence of harsh weather conditions including heavy wind and changing light and temperature conditions.

REFERENCES

[1] R. Washington, K. Golden, J. Bresina, D. Smith, C. Anderson, and T. Smith, “Autonomous Rovers for Mars Exploration,” in IEEE Aerospace Conference, vol. 1, Snowmass at Aspen, Colorado, 1999.

[2] V. Gromov, A. Kemurdjian, A. Bogatchev, V. Koutcherenko, M. Malenkov, S. Matrossov, S. Vladykin, V. Petriga, and Y. Khakhanov, “Lunokhod 2 - A retrospective Glance after 30 Years,” in EGS - AGU - EUG Joint Assembly, Apr. 2003.

[3] J. P. Grotzinger et al., “Mars Science Laboratory Mission and Science Investigation,” Space Sci. Rev., vol. 170, no. 1, pp. 5–56, 2012.

[4] “Out-of-this-world records!” https://www.jpl.nasa.gov/images/mer/2014-07-28//odometry140728.jpg, accessed: 2017-08-07.

[5] M. Ai-Chang, J. Bresina, L. Charest, A. Chase, J.-J. Hsu, A. Jonsson, B. Kanefsky, P. Morris, K. Rajan, J. Yglesias, et al., “MAPGEN: mixed-initiative planning and scheduling for the Mars Exploration Rover mission,” IEEE Intelligent Systems, vol. 19, no. 1, pp. 8–12, 2004.

[6] M. J. Schuster, C. Brand, S. G. Brunner, P. Lehner, J. Reill, S. Riedel, T. Bodenmüller, K. Bussmann, S. Büttner, A. Dömel, W. Friedl, I. Grixa, M. Hellerer, H. Hirschmüller, M. Kassecker, Z.-C. Marton, C. Nissler, F. Ruess, M. Suppa, and A. Wedler, “Towards autonomous planetary exploration: The lightweight rover unit (LRU), its success in the SpaceBot Camp challenge, and beyond,” in ICARSC - IEEE International Conference on Autonomous Robot Systems and Competitions, 2016.

[7] R. Volpe, I. Nesnas, T. Estlin, D. Mutz, R. Petras, and H. Das, “The CLARAty architecture for robotic autonomy,” in Aerospace Conference, 2001, IEEE Proceedings., vol. 1. IEEE, 2001, pp. 1–121.

[8] V. Verma, A. Jonsson, R. Simmons, T. Estlin, and R. Levinson, “Survey of command execution systems for NASA spacecraft and robots,” 2005.

[9] M. Eich, R. Hartanto, S. Kasperski, S. Natarajan, and J. Wollenberg, “Towards Coordinated Multirobot Missions for Lunar Sample Collection in an Unknown Environment,” J. Field Robot., vol. 31, no. 1, 2014.

[10] F. E. Schneider, D. Wildermuth, and H.-L. Wolf, “ELROB and EURATHLON: Improving Search & Rescue Robotics through Real-World Robot Competitions,” in International Workshop on Robot Motion and Control (RoMoCo). Poznan, Poland: IEEE, 2015, pp. 118–123.

[11] S. G. Brunner, F. Steinmetz, R. Belder, and A. Doemel, “RAFCON: A Graphical Tool for Engineering Complex, Robotic Tasks,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 2016. [Online]. Available: http://ieeexplore.ieee.org/document/7759506/

[12] H. Nguyen, M. Ciocarlie, K. Hsiao, and C. C. Kemp, “ROS Commander (ROSCo): Behavior creation for home robots,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2013, pp. 467–474.

[13] M. Loetzsch, M. Risler, and M. Juengel, “XABSL - A pragmatic approach to behavior engineering,” in Proceedings of IEEE/RSJ International Conference of Intelligent Robots and Systems (IROS), Beijing, China, 2006, pp. 5124–5129.

[14] J. Bohren and S. Cousins, “The SMACH high-level executive [ROS news],” IEEE Robotics Automation Magazine, vol. 17, no. 4, pp. 18–20, Dec 2010.

[15] M. Beetz, L. Mosenlechner, and M. Tenorth, “CRAM – A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18-22 2010, pp. 1012–1017.

[16] S. G. Brunner, P. Lehner, M. J. Schuster, S. Riedel, R. Belder, A. Wedler, D. Leidner, F. Stulp, and M. Beetz, “Design, Execution and Post-Mortem Analysis of Prolonged Autonomous Robot Operations,” submitted to IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018.

[17] “An object-based semantic world model,” http://wiki.ros.org/worldmodel, accessed: 2017-08-07.

[18] “Spatial world model for object tracking,” http://wiki.ros.org/spatial_world_model, accessed: 2017-08-07.

[19] S. Blumenthal, H. Bruyninckx, W. Nowak, and E. Prassler, “A scene graph based shared 3D world model for robotic applications,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2013, pp. 453–460.

[20] J. Elfring, S. van den Dries, M. van de Molengraft, and M. Steinbuch, “Semantic world modeling using probabilistic multiple hypothesis anchoring,” Robotics and Autonomous Systems, vol. 61, no. 2, pp. 95–105, 2013.

[21] J. Mason and B. Marthi, “An object-based semantic world model for long-term change detection and semantic querying,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 3851–3858.

[22] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.

[23] L. E. Kavraki, P. Svestka, J. C. Latombe, and M. H. Overmars, “Probabilistic roadmaps for path planning in high-dimensional configuration spaces,” IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566–580, 1996.

[24] D. Berenson, S. S. Srinivasa, and J. Kuffner, “Task space regions: A framework for pose-constrained manipulation planning,” The International Journal of Robotics Research, vol. 30, no. 12, pp. 1435–1460, 2011.

[25] P. M. Lehner and A. Albu-Schäffer, “Repetition sampling for efficiently planning similar constrained manipulation tasks,” in Proc. 2017 IEEE Int. Conf. Intelligent Robots and Systems. IEEE, 2017.

[26] S.-I. Nishida and T. Yoshikawa, “Development of space robot end-effector for on-orbit assembly,” Journal of the Japan Society for Aeronautical and Space Sciences, vol. 53, no. 614, pp. 130–138, 2005.

[27] M. Nilsson, “Heavy-duty connectors for self-reconfiguring robots,” in Robotics and Automation, 2002. Proceedings. ICRA ’02. IEEE International Conference on. IEEE, 2002.

[28] P. Roberts, “A novel passive robotic tool interface,” in 15th European Space Mechanisms & Tribology Symposium - ESMATS, 2013.

[29] A. Dettman, Z. Wang, W. Wenzel, F. Cordes, and F. Kirchner, “Heterogeneous modules with a homogeneous electromechanical interface in multi-module systems for space exploration,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.

[30] W.-M. Shen, R. Kovac, and M. Rubenstein, “SINGO: A single-end-operative and genderless connector for self-reconfiguration, self-assembly and self-healing,” in Robotics and Automation, 2009. ICRA ’09. IEEE International Conference on. IEEE, 2009.

[31] R. Gelmi, A. Rusconi, J. Gonzalez Lodoso, P. Campo, R. Chomicz, and A. Schiele, “Design of a compact tool exchange device for space robotics applications,” in Proceedings of the 9th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ASTRA. ESTEC, 2006.
