


Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics & Automation in Space: i-SAIRAS 2001, Canadian Space Agency, St-Hubert, Quebec, Canada, June 18-22, 2001.

Pose Determination and Tracking for Autonomous Satellite Capture

Piotr Jasiobedzki (1), Michael Greenspan (2) and Gerhard Roth (2)

(1) MD Robotics, 9445 Airport Rd., Brampton ON, Canada, L6S 4J3
(2) National Research Council of Canada, Bldg. M-50, 1200 Montreal Road, Ottawa ON, Canada, K1A 0R6

[email protected]

Abstract

This paper presents a concept and initial results of testing a vision system for satellite proximity operations. The system uses natural features on the satellite surfaces and does not require any artificial markers or targets for its operation. The system computes the motion of unknown objects and, using object models, computes their positions and orientations (pose) and tracks them in image sequences. The system processes images from stereo cameras to compute 3D data. Motion of an unknown object is computed using projective vision. The initial pose of the object is acquired using an approach based on Geometric Probing. The pose is tracked using a fast version of an Iterative Closest Point algorithm. Selected modules of the system have been characterised in a representative environment; the others are undergoing systematic sets of experiments.

Keywords

Vision system, satellite proximity operations, motion estimation, pose estimation, tracking

1 Introduction

Satellite servicing includes conducting repairs, upgrading and refuelling spacecraft on-orbit. It offers the potential for extending the life of satellites and reducing launch and operating costs. Astronauts regularly travel on board the Shuttle to the Hubble Space Telescope, the MIR station and the International Space Station, and perform similar servicing tasks. The high cost of manned missions, however, limits such servicing to the most expensive spacecraft only. Also, many communication and remote sensing satellites reside in higher, geostationary (GEO) orbits that the Shuttle cannot reach. Future space operations will be more economically executed using unmanned space robotic systems.

Satellite servicing concepts have been discussed for over a decade, but only now are the advanced technologies required for such unmanned missions becoming available. The demand for new satellites and the expected lower operating costs provide the motivation. Future satellites will be designed for servicing by autonomous servicer spacecraft and will be equipped with suitable mechanical interfaces for docking, grasping, berthing and fluid transfer, and electrical interfaces (expansion ports) for installing new computer and sensor hardware [12].

The communication time lag, intermittence and limited bandwidth between the ground and on-orbit segments render direct teleoperated ground control infeasible. The servicers (chasers) will have to operate with a high degree of autonomy and be able to rendezvous and dock with the serviced satellites (targets) with minimum or no assistance from the ground. One possibility is to employ advanced semi-autonomous robotic interface concepts. An example is supervisory control [5], wherein human operators on the ground segment issue high-level directives and sensor-guided systems on the space segment guide the execution. The on-board sensor system should allow the chaser spacecraft to approach the target satellite, identify its position and orientation from an arbitrary direction, and manoeuvre the chaser to the appropriate docking interface [8, 9].

In this paper we present a computer vision system that can determine and track the relative pose of an observed satellite during proximity operations. The system processes monocular and stereo images, such as are acquired from standard cameras, to compute satellite pose and motion. Initially, the vision system estimates the motion of the free-floating satellite to determine that it is safe to approach; it does not use any prior knowledge or models in this phase. The vision system then determines an initial estimate of the satellite's pose using a 3D surface model. Once the approximate pose has been estimated, the vision system tracks the satellite motion during subsequent proximity operations up to the point of mating.

2 Satellite proximity operations and environment

The autonomous servicer spacecraft will service satellites in a variety of orbits extending from Low Earth Orbit (LEO), 320 km above the Earth's surface, to geostationary orbits (GEO), over 35,000 km.

2.1 Trajectories

The servicers will approach the targets in several steps by reaching predefined waypoints, typically using minimum-energy transfers. The initial steps involve placing the chaser on an orbit with the same inclination as the target's orbit. The target can then be approached along two trajectories: R-Bar and V-Bar [6] (or their combination). The R-Bar trajectory is oriented along the Earth radius; when a spacecraft is moving along R-Bar, the chaser is slowed down as it moves onto higher orbits closer to the target spacecraft. A V-Bar approach is executed when the chaser follows the target along the same orbit and approaches it towards its negative velocity face [8]. Space Shuttle dockings with MIR and the ISS occur along R-Bar. ETS-VII, a Japanese experimental docking satellite, performed several docking experiments along both V-Bar and R-Bar trajectories [6, 14].

Future satellite servicing operations may require more flexibility, and rendezvous/docking may have to be performed along other approach trajectories. Such trajectories will require more fuel, but the manoeuvres will be completed faster and with no interaction with the ground control station when passing through pre-defined waypoints [17].

Execution of the final contact phase will depend on the servicer design. If the servicer is equipped with a robotic arm, the arm might be used to capture a free-floating satellite and position it on a specially designed berthing interface on the servicer. Alternatively, the servicer may approach the target and dock directly with it. The Shuttle crew uses the first technique to retrieve and service the Hubble telescope, and the second to dock with the International Space Station.

2.2 Visual environment on orbit

The on-orbit environment lacks both an atmosphere and a rich background that might diffuse sunlight, which creates highly contrasting scenes. When the contrast of the scene exceeds the dynamic range of the camera, part of the image data is lost. In LEO the typical orbital period is 90-180 minutes, which includes a complete change of illumination from full sunlight to eclipse. On elliptical and GEO orbits the orbital periods are between 12 and 24 hours, and on sun-synchronous orbits the illumination does not change.

On-orbit space environments are often regarded as structured and well known, as all of the objects are man-made and design documents (CAD models) are available. In order to protect the hardware from the space environment, the structures and spacecraft are often covered with loosely attached reflective materials (foil) or featureless thermal blankets. Such surfaces, when illuminated with directional sunlight and on-board lights, create shadows and specularities that pose difficult problems for imaging cameras and vision systems.

Images obtained during Shuttle proximity operations are shown in Figure 1. The Shuttle Remote Manipulator System, the Canadarm, is preparing to grasp a free-floating experimental satellite. The satellite is equipped with a special interface that can be grasped by the end-effector on the arm (see the bottom image in Figure 1). The interface consists of a circular base with a central shaft and three radial support structures, and can be seen in the centre of the satellite. Future satellite servicing missions may use docking interfaces similar in size and shape to this interface.


Figure 1 Grasping of a free-floating satellite with the Canadarm. The satellite in the bottom image has its grappling interface oriented towards the camera

3 Space vision technologies

Vision systems for satellite proximity operations will be used at ranges from 100 m to contact. This range may be divided into three categories: long, medium and short. In the long range (20-100 m) the vision system should detect and identify the satellite of interest, and determine its bearing and distance. In the medium range (1-20 m) the vision system should identify the satellite motion and distance, and determine its orientation and the location of a docking/robotic interface, see Figure 1. At short range (<2 m) the vision system will track the mechanical interface during the final approach and docking/grasping phase.

3.1 Sensors

Data processed by the vision systems may be obtained from a variety of sensors, such as video cameras and scanning and non-scanning rangefinders. The video cameras commonly used in current space operations are available at relatively low cost and have low mass and energy requirements. Their main disadvantage is their dependence on ambient illumination and sensitivity to direct sunlight. Scanning rangefinders [4, 11] are independent of ambient illumination and less sensitive to sunlight. This comes at the cost/weight/energy requirement of complex scanning mechanisms that may not survive the space environment well. Non-scanning rangefinders [16] offer the potential of low cost/mass/energy and the absence of scanning mechanisms; however, they are still in an early development phase.

3.2 Computer vision approaches

The approaches to computer vision in space can be grouped into the following categories:

• Detection of artificial targets (markers) placed on spacecraft

• Detection of individual features (2D and 3D) and matching them with corresponding 3D model features

• Detection of 3D surface data and matching it with 3D models

The current approaches to space vision rely on installing easy-to-detect, high-contrast visual targets on satellites and payloads. Algorithms specifically tuned to these patterns robustly detect their locations in 2D camera images [13, 4]. The satellite pose is estimated by finding a 3D transformation of the observed markers from a default co-ordinate system to the current unknown location, such that the markers project into the image at their observed locations. The internal camera calibration is assumed to be known. This problem is referred to as single-camera photogrammetry or external camera calibration. The visual targets can be used only in specific tasks (limited by distance and viewing angles), requiring the definition of all operations of interest at the design phase and precluding any unexpected tasks. Absence or mis-detection of a target may lead to a vision system failure, as there will not be enough data to compute a reliable camera pose solution. There is a certain additional installation and maintenance cost, as the targets must survive exposure to harsh space environments. The targets are mostly suitable for predefined and short-range operations, and for cases when the target satellite/docking interface can be modified. A minimal illustration of the underlying pose computation follows.
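For illustration only (this sketch is not from the original system), the external camera calibration step can be expressed with OpenCV's solvePnP; the marker coordinates, detections and camera matrix below are invented placeholders.

# Single-camera photogrammetry sketch: recover the satellite pose from known
# marker positions and their detected 2D image locations (all numeric values
# are illustrative assumptions, not flight data).
import numpy as np
import cv2

# 3D marker locations in the satellite's default co-ordinate frame (metres).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.5, 0.5, 0.0],
], dtype=np.float64)

# Detected 2D marker locations in the camera image (pixels).
image_points = np.array([
    [320.0, 240.0],
    [410.0, 238.0],
    [322.0, 150.0],
    [409.0, 149.0],
], dtype=np.float64)

# Internal camera calibration is assumed known.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # lens distortion assumed already corrected

# External calibration: find the rotation and translation that project the
# markers onto their observed image locations.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    print("pose:", R, tvec.ravel())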

Using natural features and object models will alleviate the limitations of target-based systems. Techniques based on detecting natural features (2D and 3D lines, circles and regions) and matching them directly with model features do not require the installation of any artificial targets, as they rely on existing object geometry and structure. However, reliable detection of the features required for matching may be difficult under some conditions encountered on-orbit (illumination, viewing angles and distances). This may lead to ambiguous matching with the 3D model and breakdown of the computed solution. This approach is suitable for short-range operations, when the features can be reliably detected and tracked and when the initial pose is approximately known.


Vision algorithms that will most likely succeed in space will use arbitrary surface features of the object and will not require establishing and maintaining direct correspondence between small numbers of high-level object and model features. This removes the need to detect specific features under all possible conditions. The presence of shadows, specular reflections and background features is likely to confuse algorithms based on the analysis of single 2D images; therefore, using 3D data computed from image sequences and/or stereo becomes essential. Such techniques can be used in the medium operational range, as any detectable surface peculiarity, including shadows and edges, will provide 3D data that can be used in 3D model matching. As direct correspondence is not required, different 3D data sets may be detected depending on the instantaneous viewpoint and illumination. Our vision system uses this approach for medium-range operations.

Other computer vision methods, such as those based on object appearance, which attempt to match the intensity distributions of the model to images, will have difficulty in the presence of the unexpected shadows and reflections typical of space environments. Despite their shortcomings, it may be necessary to rely upon these methods for long-range operations where 3D data is not available.

4 Vision system for satellite proximity operations

The vision system described in this paper is intended for the medium range of satellite proximity operations, extending approximately between 2 and 20 m. The vision system processes sequences of images acquired from several cameras arranged in a stereo configuration. The sensors are thus passive, contain no moving parts, are of low mass, consume little energy and are relatively inexpensive. The algorithms implemented in this system rely on the presence of natural features and do not require any visual targets. Extraction and processing of redundant data provides robustness to partial data loss due to shadows and reflections. Some of the algorithms may use object models. The vision system operates in three modes: Monitoring, Acquisition and Tracking.

In the monitoring mode the vision system processes sequences of monocular and stereo images to estimate the rotation of, and distance to, the observed satellite. In this phase the system does not require any prior object models. Once the vision system determines that it is safe to approach the observed satellite, the servicer may draw near to obtain better 3D data from the stereo system and continue the operations.

In the acquisition mode the vision system processes 3D data computed by the stereo vision sub-system and estimates an approximate pose of the satellite. During this phase the vision system may also be used to confirm the identity, configuration and state of the satellite.

Once the initial pose of the satellite is determined, the tracking mode is invoked and provided with this estimate. During tracking, the satellite pose and motion are computed with high precision and a high update rate. Subsequent proximity operations such as fly-around, homing and docking are performed in the tracking mode.

The vision system handles handover between the modes, passing, for example, a pose estimate from the acquisition to the tracking mode; it may use the motion estimated in the monitoring mode to correct for the processing delay of the acquisition phase. Failure to determine the initial pose requires repeating the process, with occasional updates of the relative motion using the monitoring mode. Failure of the tracking mode causes the vision system to revert to either the monitoring or the acquisition mode, depending on the distance to the satellite and the current chaser operation. The vision system may be re-configured during operation and may operate simultaneously in multiple modes, ensuring correct data handover and error recovery. A minimal sketch of this supervisory logic is shown below.
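As a rough illustration (not the authors' implementation; the mode names are from the paper, but the predicates and the 20 m threshold are assumptions), the handover logic can be sketched as a simple state machine:

# Illustrative mode-handover sketch: Monitoring -> Acquisition -> Tracking,
# with fallbacks on failure as described in the text above.
from enum import Enum, auto

class Mode(Enum):
    MONITORING = auto()
    ACQUISITION = auto()
    TRACKING = auto()

def next_mode(mode, motion_safe, pose_acquired, track_ok,
              range_m, near_range_m=20.0):
    """Return the vision-system mode for the next processing cycle."""
    if mode is Mode.MONITORING:
        # Move to pose acquisition only once the satellite's motion is
        # deemed safe and the stereo data is usable at this range.
        if motion_safe and range_m < near_range_m:
            return Mode.ACQUISITION
        return Mode.MONITORING
    if mode is Mode.ACQUISITION:
        # Hand the initial pose estimate over to the tracker; on failure,
        # retry after refreshing the relative motion via monitoring.
        return Mode.TRACKING if pose_acquired else Mode.MONITORING
    if mode is Mode.TRACKING:
        if track_ok:
            return Mode.TRACKING
        # On tracking failure, fall back according to the current distance.
        return Mode.ACQUISITION if range_m < near_range_m else Mode.MONITORING
    return Mode.MONITORING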

4.1 Image acquisition and stereo sub-system

The developed vision system uses a two-camera stereo imaging sub-system. The images are synchronously captured and transferred to a host for processing. Geometrical warping corrects the images for lens distortions and rectifies them to simplify the stereo geometry [10]. The images are then processed by one of two stereo algorithms: dense or sparse stereo. The dense stereo algorithm uses normalised correlation and produces densely sampled data of 10k-100k points per scene. The processing time is on the order of seconds and depends on the selected image resolution and disparity range. The sparse stereo algorithm extracts edges from both stereo images and computes a sparse representation of the scene (~1,000 3D points) in a fraction of a second. An example input image of a mock-up used in experiments, a dense disparity map and extracted 3D points are shown in Figure 2. A sketch of a comparable stereo pipeline is given below.


Figure 2 An image of a mockup, dense disparity map and sparse 3D edges

Sequences of stereo images and dense and sparse range data are available to specialised algorithms for further processing. Only the data required by the currently running algorithms is computed.
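The following is an assumption-laden stand-in for the stereo sub-system, not the system's own code: OpenCV's StereoSGBM replaces the normalised-correlation matcher, and the reprojection matrix Q is assumed to come from a prior cv2.stereoRectify calibration.

# Sketch of a dense + sparse stereo pipeline on rectified image pairs.
import numpy as np
import cv2

def dense_stereo_points(left_img, right_img, Q,
                        num_disparities=128, block_size=9):
    """Compute a dense disparity map and reproject it to 3D points."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,  # selected disparity range
        blockSize=block_size,
    )
    # OpenCV returns disparities in fixed point (1/16 pixel units).
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0
    # Reproject valid disparities into the camera frame using the 4x4
    # matrix Q produced by cv2.stereoRectify.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return points_3d[valid]              # N x 3 array of scene points

def sparse_edge_candidates(img):
    """Extract edge pixels as candidates for a sparse stereo matcher."""
    edges = cv2.Canny(img, 50, 150)
    return np.argwhere(edges > 0)        # (row, col) edge locations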

4.2 Monitoring

It is sometimes necessary to compute the motion of a satellite without knowing the object type, that is, without having a model. This is called model-free satellite motion estimation. When dealing with an isolated object, the change in camera position is simply the opposite of the change in object position. Using this principle, we can compute the satellite motion by simply finding the relative camera motion. It has been shown that computing the camera motion is feasible even in an unstructured environment [15]. This is done using only natural features, without any special targets. The combination of random sampling and linear algorithms for structure from motion (SFM) is capable of producing very reliable correspondences in an image sequence. From these correspondences it is possible to compute the camera motion, assuming that we know the camera calibration a priori or have computed it via an auto-calibration process.

There are two different approaches to computing the satellite motion, depending on whether the satellite is in the medium range or the far range. In the far range the stereo cameras will not produce accurate depth, so we must rely on a single camera. We have successfully computed both orientation and distance from a single camera using SFM algorithms [15]. However, the process becomes unstable when there is very little translation of the features between views. In this case we plan to compute only the satellite rotation, and not the distance to individual features. Experiments with this approach are ongoing.

Once the satellite reaches the medium range, we can achieve reasonably accurate depth from the stereo rig. In this situation we combine stereo and SFM. At each position of the stereo cameras we get a set of sparse stereo points. Now, using only the right images of the sequence, we match the 2D pixel locations of these feature points. More precisely, assume we have two stereo camera positions: left1, right1 and left2, right2. We take the 2D pixel locations of the matching stereo features in right1 and attempt to match them to the stereo features in right2. We are essentially performing SFM across the image sequence [15], but using only the 2D stereo features as SFM features. This type of processing is not traditional tracking, because we use rigidity, which is a very strong constraint, to prune false matches [15]. We then use the 2D co-ordinates of these matched features to compute the transformation between the right images of the stereo cameras in the sequence.

This method has a number of advantages. First, it avoids the problem of motion degeneracy that is inherent in SFM. Degeneracy occurs when the motion is pure rotation, or when the translational component of the motion is very small. Second, since the stereo rig is calibrated, we know the true scale of the computed camera motion. This is not the case when using SFM alone, since without extra information about the scene geometry we cannot know the true scale of the reconstructed camera motion. Third, the results for small camera motions are no less accurate than the stereo data itself. When using SFM with small camera motions, the reconstruction of the camera path is rarely accurate. However, with our approach the accuracy does not depend on the SFM accuracy, but on the stereo accuracy. We have implemented the second approach and show the results for a loop sequence in Figure 3.
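The paper does not specify its solver for the final transformation; a minimal sketch, assuming the matched stereo features are already available as two 3D point sets with false matches pruned by the rigidity test, is the standard SVD-based (Horn/Kabsch) absolute-orientation solution:

# Recover the rigid transform (R, t) between two frames from matched
# 3D stereo points, with Q ~ R @ P + t in a least-squares sense.
import numpy as np

def rigid_transform(P, Q):
    """P, Q: N x 3 arrays of matched 3D points from frames 1 and 2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)    # centroids
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Because the stereo rig is calibrated, R and t are in true metric scale;
# the satellite motion is simply the inverse of this camera motion.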

Figure 3 First and last image of a sequence

Figure 4 Different views of the reconstructed camera trajectories and the accepted 3D points from the sequence shown in Figure 3

Here we show the results of processing twenty-five stereo views in a sequence of the grapple fixture. Figure 4 shows the reconstructed camera path and the 3D stereo features. The 3D features displayed in Figure 4 are all supported by the SFM analysis. They represent the reliable data that was used to compute the camera path, since not all the original stereo features were reliable. We are in the process of performing more systematic experiments to evaluate the performance.

4.3 Acquisition

The monitoring mode estimates the motion of the satellite relative to the sensing system, but it does not resolve the pose parameters. The pose parameters determine how the satellite is oriented along its motion trajectory, and are essential for establishing the location of the grapple fixtures and docking interfaces.

In the acquisition mode, the pose of the satellite is approximated. This is more generally known as the pose determination problem. Using model-based methods, where the geometry is known, a solution can be formulated as a search within the pose space of the object. As the satellite can be positioned with 3 translational and 3 rotational degrees of freedom (i.e., 6 dof), the search space is of high dimension and computational efficiency is a major concern. That the satellite is isolated in the scene simplifies the problem somewhat, in that the translational components of the pose can be coarsely resolved.

In the far range the satellite is visible, but the distance is too great for the stereo algorithms to reliably extract sparse 3D points. In this case, appearance-based methods such as those that use silhouettes [1] may be appropriate, particularly since the satellite can be segmented from the background. In the medium to near range, the 3D point data becomes more reliable than the intensity data, which can be corrupted by severe illumination effects. At these ranges, it is preferable to use techniques based upon the extracted sparse or dense 3D point data.

One approach is based on Geometric Probing [7], where pose determination in range image data is formulated as template set matching. The satellite model is represented as a set of voxel templates, one for each possible pose. The set of all templates is composed into a binary decision tree. Each leaf node references a small number of templates. Each internal node references a single voxel and has two branches, "true" and "false". The subtree branching from the "true" branch contains the subset of templates that contain the node voxel. Conversely, the subtree branching from the "false" branch contains the subset of templates that do not contain the node voxel. Traversing the tree at any image location executes a Geometric Probing strategy that efficiently determines a good match with the template set. This technique has been demonstrated to be both efficient and reliable at identifying objects and their 6-dof pose using dense range data. An illustrative sketch of the tree traversal follows.
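The following sketch illustrates the traversal; the data structures and names are our assumptions, not the published implementation.

# Voxel decision tree for Geometric Probing: each internal node probes one
# voxel of the sensed range data; leaves reference a few pose templates.
class ProbeNode:
    def __init__(self, voxel=None, true_branch=None, false_branch=None,
                 templates=None):
        self.voxel = voxel                # (i, j, k) voxel to probe; None at a leaf
        self.true_branch = true_branch    # templates that contain this voxel
        self.false_branch = false_branch  # templates that do not contain it
        self.templates = templates or []  # candidate poses at a leaf

def probe(tree, occupied_voxels):
    """Traverse the tree against voxelised range data.

    occupied_voxels: set of (i, j, k) voxels occupied by the sensed 3D points.
    Returns the small set of pose templates left to verify exhaustively.
    """
    node = tree
    while node.voxel is not None:         # one probe per internal node
        node = (node.true_branch if node.voxel in occupied_voxels
                else node.false_branch)
    return node.templates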

In order to improve efficiency and reliability, it may be possible to reduce the dimensionality of the search space by resolving some of the dofs prior to executing the Geometric Probing method. The Radarsat satellite (see Figure 5), for example, has very distinct major and minor axes, which can be identified in the data and used to resolve 2 of the 3 orientation parameters. For other satellite models, there may be identifiable features in either the range data (such as circular fixtures or planes) or the intensity data (such as logos) that could be reliably extracted and used to resolve some of the positional ambiguity.

Figure 5 Radarsat-2

4.4 Tracking

In the tracking mode the vision system uses sparse data computed by the stereo sub-system and a 3D model of the observed object to compute its pose. The model is represented as a set of geometrical shapes: boxes, cylinders, spheres and surface meshes (a hybrid model) [10]. For accelerated pose estimation the model may be pre-processed and represented as an octree [2].

The tracker estimates the pose of an object using a version of the Iterative Closest Point (ICP) algorithm, which minimises the distance between two data sets [3]. In our case the first data set is obtained from the sparse stereo sub-system and the second is a model of the object of interest. The tracker iteratively minimises the distance between the 3D data and the surface model. The first version of our tracking module used the hybrid model and relied on closed-form solutions for efficient and accurate computation of distances between the model and the data. The current version uses octree models which, at the small expense of lower accuracy, allow tracking the pose at much higher rates than the hybrid models.

The tracker is initialised from an approximate estimate provided by the pose acquisition module based on Geometric Probing (see Section 4.3). The tracker matches the currently computed points with the surface model; it does not require establishing and maintaining correspondences between detected 3D points and model features. This significantly reduces its sensitivity to partial shadows, occlusion and local loss of data caused by reflections and image saturation. An adaptive gating mechanism eliminates outliers that are not part of the model. The tracker operates at a rate of 1 Hz on a PC-class computer using ~1,000 3D points. A minimal sketch of one such ICP loop is shown below.
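This is a point-to-model ICP sketch under stated assumptions: a KD-tree over points sampled from the surface model stands in for the paper's octree distance queries, and the gating threshold is illustrative.

# Refine pose (R, t) so that R @ points + t fits the satellite model.
import numpy as np
from scipy.spatial import cKDTree

def icp_track(points, model_points, R, t, iterations=10, gate=0.1):
    """points: N x 3 sparse stereo data; model_points: M x 3 model samples.

    R, t: initial pose, e.g. from the acquisition module; gate: reject
    correspondences farther than this distance (outlier gating).
    """
    tree = cKDTree(model_points)
    for _ in range(iterations):
        moved = points @ R.T + t
        dist, idx = tree.query(moved)       # closest model point per datum
        keep = dist < gate                  # gating eliminates outliers
        src, dst = moved[keep], model_points[idx[keep]]
        # One least-squares alignment step (SVD, as in rigid_transform above).
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = cd - dR @ cs
        R, t = dR @ R, dR @ t + dt          # compose the incremental update
    return R, t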

The vision system was tested in the tracking mode using various calibrated image sequences. An image from a loop sequence (approach along the Z-axis with rotation about the yaw axis), together with a visualisation of the computed pose, is shown in Figure 6. Additional information about these algorithms and the conducted experiments can be found in a companion paper in this volume [2].

Figure 6 One of the stereo images obtained during a loop sequence (right) and a virtual image showing the computed model pose (left)

5 Concluding remarks

All algorithms for pose determination and tracking have been developed, and are currently being integrated into a test platform. This paper has presented the system concept, design, algorithmic details and some preliminary results of the integration and testing. Other approaches to satellite pose determination and tracking exist, such as systems that employ co-operative targets and active range sensors.


It is believed that the approach presented here offers a number of benefits over the alternatives. The system depends only upon standard video sensors, which are currently available and space qualified, and which will likely prove to be more reliable and less expensive than more exotic sensors. The system also does not require any targets to be placed on the satellites, which can be expensive. Further, there currently exist ~9,000 satellites in orbit which do not have suitable targets. Another benefit over the use of targets is that harsh lighting conditions can reduce the visibility of targets even while clear images of other parts of the satellite body can still be acquired.

6 References

[1] Abbasi, S., Mokhtarian, F., "Affine-Similar Shape Retrieval: Application to Multiview 3-D Object Recognition", IEEE Transactions on Image Processing, vol. 10, no. 1, Jan. 2001, pp. 131-139.

[2] Abraham, M., Jasiobedzki, P., Umasuthan, M., "Robust 3D Vision for Autonomous Robotic Space Operations", in Proceedings of i-SAIRAS 2001.

[3] Besl, P., McKay, N., "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, February 1992, pp. 239-256.

[4] Blais, F., Beraldin, J., Cournoyer, L., "Integration of a Tracking Laser Range Camera with the Photogrammetry-based Space Vision System", SPIE Aerosense, vol. 4025, April 2000.

[5] Burtnyk, N., Basran, J., "Supervisory Control of Telerobotics in Unstructured Environments", Proc. 5th International Conference on Advanced Robotics, Pisa, Italy, June 1991, pp. 1420-1424.

[6] http://oss1.tksc.nasda.go.jp/ets-7/ets7_e/rvd/rvd_index_e.html

[7] Greenspan, M.A., Boulanger, P., "Efficient and Reliable Template Set Matching for 3D Object Recognition", 3DIM99: Proceedings of the 2nd International Conference on 3-D Digital Imaging and Modeling, Ottawa, Canada, Oct. 4-8, 1999, pp. 230-239.

[8] Hollander, S., "Autonomous Space Robotics: Enabling Technologies for Advanced Space Platforms", Space 2000 Conference, AIAA 2000-5079.

[9] Jasiobedzki, P., Anders, C., "Computer Vision for Space Applications: Applications, Role and Performance", Space Technology, v. 2005, 2000.

[10] Jasiobedzki, P., Talbot, J., Abraham, M., "Fast 3D Pose Estimation for On-orbit Robotics", International Symposium on Robotics, ISR 2000, Montreal, Canada, May 14-17, 2000.

[11] Laurin, D.G., Beraldin, J.-A., Blais, F., Rioux, M., Cournoyer, L., "A Three-Dimensional Tracking and Imaging Laser Scanner for Space Operations", SPIE vol. 3707, pp. 278-289.

[12] Madison, R., Denoyer, K., "Concept for Autonomous Remote Satellite Servicing", IAF Symposium on Novel Concepts for Smaller, Faster & Better Space Missions, Redondo Beach, CA, April 1999.

[13] McCarthy, J., "Space Vision System", ISR 1998, Birmingham, UK.

[14] Oda, M., Kine, K., Yamagata, F., "ETS-VII: a Rendezvous Docking and Space Robot Technology Experiment Satellite", 46th International Astronautical Congress, Oct. 1995, Norway, pp. 1-9.

[15] Roth, G., Whitehead, A., "Using Projective Vision to Find Camera Positions in an Image Sequence", Vision Interface 2000, Montreal, Canada, May 2000, pp. 87-94.

[16] Sackos, J., Bradley, B., Nellums, B., Diegert, C., "The Emerging Versatility of a Scannerless Range Imager", Laser Radar Technology and Applications, SPIE vol. 2748, pp. 47-60.

[17] Whelan, D., "Future Space Concepts", http://www.darpa.mil/tto/
