
SUBMITTED TO IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION - SPECIAL ISSUE ON MEDICAL ROBOTICS

Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing

A. Krupa†, J. Gangloff†, C. Doignon†, M. de Mathelin†, G. Morel∗, J. Leroy‡, L. Soler‡, J. Marescaux‡

Abstract— This paper presents a robotic vision system that automatically retrieves and positions surgical instruments during robotized laparoscopic surgical operations. The instrument is mounted on the end-effector of a surgical robot which is controlled by visual servoing. The goal of the automated task is to safely bring the instrument to a desired 3-D location from an unknown or hidden position. Light-emitting diodes are attached to the tip of the instrument, and a specific instrument-holder fitted with optical fibers is used to project laser dots onto the surface of the organs. These optical markers are detected in the endoscopic image and make it possible to localize the instrument with respect to the scene. The instrument is recovered and centered in the image plane by means of a visual servoing algorithm using feature errors in the image. With this system, the surgeon can specify a desired relative position between the instrument and the pointed organ. The relationship between the velocity screw of the surgical instrument and the velocity of the markers in the image is estimated online and, for safety reasons, a multi-stage servoing scheme is proposed. Our approach has been successfully validated in a real surgical environment by performing experiments on living tissues in the surgical training room of IRCAD.

Index Terms— Medical robotics, minimally invasive surgery, visual servoing.

I. INTRODUCTION

In laparoscopic surgery, small incisions are made in the human abdomen. Various surgical instruments and an endoscopic optical lens are inserted through trocars at each incision point. Looking at the monitor, the surgeon moves the instruments in order to perform the desired surgical task. One drawback of this surgical technique is the posture imposed on the surgeon, which can be very tiring. Teleoperated robotic laparoscopic systems have recently appeared. There exist several commercial systems, e.g., ZEUS (Computer Motion, Inc.) or DaVinci (Intuitive Surgical, Inc.). With these systems, robot arms are used to manipulate the surgical instruments as well as the endoscope. The surgeon teleoperates the robot through master arms using the visual feedback from the laparoscopic image.

† Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT - UMR CNRS 7005), Strasbourg I University, France. E-mail: krupa,gangloff,doignon,[email protected]

∗ G. Morel is now with the LRP (CNRS FRE 2705), Paris VI University, France.

‡ IRCAD (Institut de Recherche sur les Cancers de l'Appareil Digestif), Strasbourg, France. The financial support of the French Ministry of Research ("ACI jeunes chercheurs" program) is gratefully acknowledged. The experimental part of this work was made possible thanks to the collaboration of Computer Motion Inc., which graciously provided the AESOP medical robot. In particular, we would like to thank Dr. Moji Ghodoussi for his technical support.

This reduces the surgeon's tiredness and potentially increases motion accuracy through the use of a high master/slave motion ratio. We focus our research in this field on expanding the potential of such systems by providing "automatic modes" using visual servoing (see [7], [8] for earlier work in that direction). For this purpose, the robot controller uses visual information from the laparoscopic images to move the instruments, through a visual servo loop, towards their desired location.

Note that prior research on visual servoing techniques in laparoscopic surgery was conducted to automatically guide the camera towards the region of interest (see, e.g., [15], [1] and [16]). However, in a typical surgical procedure, it is usually the other way around: the surgeon first drives the laparoscope into a region of interest (for example by voice, with the AESOP system of Computer Motion Inc.), and then he or she drives the surgical instruments to the operating position.

A practical difficulty lies in the fact that the instruments are usually not in the field of view at the start of the procedure. Therefore, the surgeon must either move the instruments blindly or zoom out with the endoscope in order to get a larger field of view. Similarly, when the surgeon zooms in or moves the endoscope during surgery, the instruments may leave the endoscope's field of view. Consequently, instruments may have to be moved blindly, with a risk of undesirable contact between instruments and organs.

Therefore, in order to assist the surgeon, we propose a visual servoing system that automatically brings the instruments to the center of the endoscopic image in a safe manner. This system can also be used to move the instruments to a position specified by the surgeon in the image (with, e.g., a touch-screen or a mouse-type device). It does away with the need to move the endoscope in order to visualize the instrument every time it is introduced into the patient. It includes a special device designed to hold the surgical instruments together with tiny laser pointers. This laser-pointing instrument-holder is used to project laser spots into the laparoscopic image even if the surgical instrument is not in the field of view. The image of the projected laser spots is used to guide the instrument. Visibility of the laser spots in the image is sufficient to guarantee that the instrument is not blocked by unseen tissue. Because of the poorly structured scene and the difficult lighting conditions, several laser pointers are used to guarantee the robustness of the instrument recovery system.


A difficulty in designing this automatic instrument recovery system lies in the unknown relative position between the camera and the robot arm holding the instrument, and in the monocular vision, which induces a lack of depth information. This problem is also tackled in [3], where an intraoperative 3-D geometric registration system is presented. The authors add a second endoscope with an optical galvano-scanner. A 955 fps high-speed camera is then used with the first endoscopic lens to estimate the 3-D surface of the scanned organ. Furthermore, external cameras watching the whole surgical scene (the Optotrak™ system) are added to measure the relative position between the laser-pointing endoscope and the camera.

In our approach, only one monocular endoscopic vision system is needed, for both the surgeon and the autonomous 3-D positioning. The camera has two functions: to give the surgeon visual feedback and to provide measurements of the position of the optical markers. The relative position from the instrument to the organ is estimated by using images of blinking optical markers mounted on the tip of the instrument and images of blinking laser spots projected by the same instrument.

Note that many commercially available tracking systems also make use of passive or active blinking optical markers synchronized with the image acquisition [17]. The best known among these systems is the Optotrak™ from Northern Digital Inc., which uses synchronized infrared LED markers tracked by three IR-sensitive cameras. However, in all these systems, the imaging system is dedicated to the marker detection task, since the markers are the only features seen by the camera(s). This greatly simplifies the image processing: there is no need to segment the whole image to extract the marker locations. In our system, only one standard, commercially available endoscopic camera is used for both 3-D measurement and the surgeon's visual feedback. To do so, we propose a novel method to extract markers efficiently, in real time and with a high signal-to-noise ratio, in a scene as complex as an inner human body environment. Furthermore, with our method, it is easy to remove the images of the markers from the endoscopic image in software and give the surgeon a quasi-unmodified visual feedback.

The paper is organized as follows. The next section describes the system configuration with the endoscopic laser-pointing instrument-holder. Robust image processing for laser spot and LED detection is explained in Section III. The control scheme used to position the instrument by automatic visual feedback is described in Section IV. The method for estimating the distance from the instrument to the organ is also presented. In the last section, we show experimental results obtained in real surgical conditions in the operating room of IRCAD.

II. SYSTEM DESCRIPTION

A. System configuration

The system configuration used to perform the autonomous positioning of the surgical instrument is shown in Fig. 1.


Fig. 1. System configuration.

The system includes a laparoscopic surgical robot, an endoscopic optical lens and an endoscopic laser-pointing instrument-holder. The robotic arm moves the instrument through a trocar placed at a first incision point. The surgical instrument is mounted inside the laser-pointing instrument-holder. This instrument-holder projects laser patterns onto the organ surface in order to provide information about the relative orientation of the instrument with respect to the organ, even if the surgical instrument is not in the camera's field of view. Another incision point is made in order to insert an endoscopic optical lens which provides the visual feedback and whose location relative to the robot base frame is generally unknown.

B. Endoscopic laser pointing instrument-holder

The prototype of the endoscopic laser-pointing instrument-holder is shown in Fig. 2. This instrument-holder, with the surgical instrument inside, is held by the end-effector of the robot. It is a 30 cm long metallic pipe with a 10 mm external diameter, to be inserted through a 12 mm standard trocar. Its internal diameter is 5 mm, so that a standard laparoscopic surgical instrument can fit inside. The head of the instrument-holder contains miniature laser collimators connected to optical fibers which are linked to external controlled laser sources. This arrangement allows the use of remote laser sources, which cannot be integrated into the head of the instrument because of their size. Optical markers are also added on the tip of the surgical instrument. These markers (made up of three LEDs) are directly seen in the image. They are used in conjunction with the image of the projected laser pattern in order to measure the distance between the pointed organ and the instrument.

III. ROBUST DETECTION OF LASER SPOTS AND OPTICAL MARKERS

Robust detection of markers in endoscopic images is quite a challenging issue. In our experiments, we encountered three types of problems that make this task very difficult:

• i) Lighting conditions: the light source is on the tip of the endoscope. In this configuration, the reflection is maximal in the center of the image, yielding highly saturated areas of pixels.


Fig. 2. The endoscopic laser pointing instrument-holder.


• ii) Viscosity of the organs: this accentuates the reflections of the endoscopic light, producing speckles in the image. Furthermore, the projected laser spots are diffused, yielding large spots of light with fuzzy contours.

• iii) Breathing motion: due to the high magnification factor of the endoscope, the motion in the endoscopic image due to breathing is of high magnitude. This may lead to a failure of the tracking algorithm.

To cope with these difficulties, we have developed a new method for real-time robust detection of markers in a highly noisy scene such as an endoscopic view. This technique is based on luminous markers that blink at the same frequency as the image acquisition. By switching a marker on while acquiring one field of an interlaced image and turning it off while acquiring the other field, it is possible to obtain very robust features in the image. Fig. 3 explains how the feature detection works. In this example, we use two blinking disk-shaped markers: the left marker is switched on during the even field acquisition, whereas the right marker is switched on during the odd field. To simplify the explanation, only two levels for the pixels (0 for dark and 1 for bright) are used in Fig. 3.

The result of convolving the image with a 5 by 5 vertical high-pass filter mask shows that the two markers can easily be detected with a simple thresholding procedure. Furthermore, it is easy to separate the two markers by thresholding the even and the odd fields of the image separately. The filtering of the whole image can be performed in real time thanks to the symmetry of the convolution mask (for a 768 × 572 image, it takes 5 ms on a Pentium IV at 1.7 GHz).
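As an illustration, the field-based detection can be sketched as follows. This is a minimal Python/NumPy sketch under our own assumptions (an interlaced grayscale frame `img` as a 2-D array; the kernel normalization and the threshold are illustrative values, not the authors' exact implementation):

```python
import numpy as np
from scipy.ndimage import convolve

def detect_blinking_markers(img, threshold=50.0):
    """Detect markers blinking at the field rate of an interlaced image.

    A marker lit during only one field produces a one-line-on / one-line-off
    pattern, i.e. energy at the vertical Nyquist frequency, which a vertical
    high-pass filter isolates.
    """
    # 5x5 vertical high-pass mask: rows with alternating signs (illustrative)
    kernel = np.outer(np.array([1.0, -1.0, 1.0, -1.0, 1.0]),
                      np.ones(5)) / 5.0

    response = np.abs(convolve(img.astype(float), kernel, mode="nearest"))
    detected = response > threshold

    # Separate the two markers by field parity (even vs. odd lines)
    even_field = detected.copy()
    even_field[1::2, :] = False
    odd_field = detected.copy()
    odd_field[0::2, :] = False
    return even_field, odd_field
```

The per-field thresholding at the end is what lets two markers blinking on opposite fields be told apart, as in the example of Fig. 3.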

This detection is very robust to image noise. Indeed, blinking markers yield patterns in the image whose vertical frequency is the spatial Nyquist frequency of the visual sensor. Usually, in order to avoid aliasing, the lens is designed so that the higher frequencies in the image are cut. Thus, objects in the scene cannot produce the same image as the blinking markers (one line bright, the next dark, and so on).

Fig. 3. Robust detection of optical markers.

The only other source of vertical high-frequency components in the image is motion, as shown in Fig. 4.

Fig. 4. Suppression of artifacts due to motion.

In this example, the left pattern in the original image is produced by a blinking marker and the right pattern is produced by the image of an edge moving from left to right in the image. After high-pass filtering and thresholding, the blinking marker is detected as expected, but so is the moving edge. The artifacts due to the moving edge are removed by a matching algorithm: the horizontal pattern around each detected pixel is compared with the horizontal patterns in the neighboring lines; if they match, the pixel is removed. This matching is very fast since it is limited to the detected pixels.
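A possible reading of this matching test is sketched below; the window half-width, the tolerance and the helper name are illustrative choices of ours, and `detected` is the binary mask produced by the high-pass filtering step above:

```python
import numpy as np

def remove_motion_artifacts(img, detected, half_width=3, tol=10.0):
    """Discard detected pixels whose horizontal neighborhood also appears on
    an adjacent line: a moving edge looks similar on neighboring lines,
    whereas a field-synchronized blinking marker does not."""
    img = img.astype(float)
    rows, cols = img.shape
    cleaned = detected.copy()
    for r, c in zip(*np.nonzero(detected)):
        if r < 1 or r > rows - 2 or c < half_width or c >= cols - half_width:
            continue
        window = img[r, c - half_width:c + half_width + 1]
        above = img[r - 1, c - half_width:c + half_width + 1]
        below = img[r + 1, c - half_width:c + half_width + 1]
        # A pattern that matches an adjacent line is a moving edge, not a marker
        if (np.mean(np.abs(window - above)) < tol or
                np.mean(np.abs(window - below)) < tol):
            cleaned[r, c] = False
    return cleaned
```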

Our setup uses two kinds of optical markers, blinking alternately: lasers that are projected onto the organs, and CMS (Component Mounted on Surface) LEDs that are attached to the tip of the tool. A robust detection of the geometric center of the projected laser spots in the image plane is needed in our system. Due to the complexity of the organ surface, laser spots may be occluded. Therefore, a high redundancy factor is achieved by using four laser pointers. We have found in our in vivo experiments with four laser sources that the computation of the geometric center is always possible with a limited bias, even if three spots are occluded. Fig. 5 shows images resulting from the different steps of the image processing applied to the laser spots.



Fig. 5. Detection of laser spots. a) Original interlaced image. b) High-pass filtering and thresholding on the even frame. c) Matching. d) Localization of the center of mass (square).

The CMS LED markers are turned on during the odd field and turned off during the even field. Edge detection is applied to the result of the high-pass filtering and matching in the odd field. The edge detector always yields contours several pixels thick. Thinning operations are performed on the extracted set of pixels, based on the comparison of the gradient magnitude and direction of each pixel with those of its neighbors (non-maxima suppression), producing a one-pixel-wide edge: this thinning is required in order to apply hysteresis thresholding and an edge-tracking algorithm. Then, contours are merged using a method called mutual favorite pairing [6], which merges neighboring contour chains into a single chain. Finally, the contours are fitted with ellipses (see Fig. 6).

Fig. 6. (left) Detection of optical markers and laser spots (+). (right) Contour detection of optical markers (odd frame).

For safety reasons, we added the following simple test: the signal-to-noise ratio is monitored by setting a threshold on the minimum number of pixels for each detected marker. If the test fails, the visual servoing is immediately stopped. Furthermore, to reduce the effect of noise, a low-pass filter is applied to the time-varying image feature coordinates. Regions of interest around the detected markers are also used in order to reduce the processing time.

Since the markers appear only on one out of two image lines, and since the areas of the laser and LED markers do not overlap, it is possible to remove these markers from the image in software. For each marker, each detected pixel can be replaced by the nearest pixel that is unaffected by the light of the marker. Therefore, the surgeon does not see the blinking markers in the image, which is more comfortable. This method was validated with two standard endoscopic imaging systems: the Stryker™ 888 and the Stryker™ 988.
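The software removal of the markers from the surgeon's image can be sketched in the same spirit; here we assume, as one plausible choice of the "nearest unaffected pixel", the same column on the neighboring line (which belongs to the other field and is therefore not lit by that marker):

```python
import numpy as np

def hide_markers(img, detected):
    """Replace each marker pixel by the pixel on the neighboring line of the
    other field, so the surgeon sees a quasi-unmodified image."""
    out = img.copy()
    rows = img.shape[0]
    for r, c in zip(*np.nonzero(detected)):
        src = r + 1 if r + 1 < rows else r - 1   # line of the other field
        out[r, c] = img[src, c]
    return out
```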

IV. INSTRUMENT POSITIONING WITH VISUAL SERVOING

The objective of the proposed visual servoing is to guide and position the instrument mounted on the end-effector of the medical robot. In laparoscopic surgery, displacements are reduced to four degrees of freedom, since translational displacements perpendicular to the incision point axis are not allowed by the trocar (see Fig. 7). In the case of a symmetrical instrument, e.g., the cleaning-suction instrument, it is not necessary to turn the instrument around its own axis to perform the desired task. For practical convenience, rotation around the instrument axis is constrained so as to keep the optical markers visible. In our system, a slow visual servoing is performed, based on the minor/major semi-axis ratio of the ellipses fitting the image projections of the optical markers. Since this motion does not contribute to positioning the tip of the instrument, it is not considered further.

A. Depth estimation

To perform the 3-D positioning, we need to estimate the distance between the organ and the instrument (depth d0 in Fig. 7). Three optical markers P1, P2 and P3 are placed along the tool axis and are assumed to be collinear with the center of mass, P, of the laser spots (see Fig. 7). Under this assumption, a cross-ratio, τ, can be computed using these four geometric points [12]. This cross-ratio can also be computed in the image using their respective projections p1, p2, p3 and p, assuming the optical markers are in the camera field of view (see Fig. 7 a). Since a 1-D projective basis can be defined either with P1, P2, P3 or with their respective images p1, p2, p3, the cross-ratio built with the fourth point (P or p) is a projective invariant that can be used to estimate the depth d0. Indeed, a 1-D homography H exists between these two projective bases, so that the straight line ∆ corresponding to the instrument axis is transformed, in the image, into a line δ = H(∆).

$$\tau = \frac{\overline{pp_2}\,/\,\overline{p_1p_2}}{\overline{pp_3}\,/\,\overline{p_1p_3}} = \frac{\overline{PP_2}\,/\,\overline{P_1P_2}}{\overline{PP_3}\,/\,\overline{P_1P_3}} \qquad (1)$$

$$d_0 = \overline{PP_1} = \frac{(1-\tau)\,\overline{P_1P_3}}{\tau - \overline{P_1P_3}/\overline{P_1P_2}} = \alpha\,\frac{1-\tau}{\tau-\beta} \qquad (2)$$

where α and β depend only on the known relative positions of P1, P2 and P3. A similar computation leads to the same kind of relationship between d2 and another cross-ratio µ, defined with the points P1, P2, P3, OQ and their respective projections, provided that oq, the perspective projection of the incision point OQ, can be recovered.


Fig. 7. The basic geometry involved.

Since OQ is generally not in the camera field of view, oq can be recovered by considering a displacement of the surgical instrument between two configurations, yielding two straight lines δ and δ′ in the image. Then, oq is the intersection of these lines, since OQ is motionless. Finally:

$$\mu = \frac{\overline{p_1p_3}\,/\,\overline{p_2p_3}}{\overline{p_1o_q}\,/\,\overline{p_2o_q}} = \frac{\overline{P_1P_3}\,/\,\overline{P_2P_3}}{\overline{P_1O_Q}\,/\,\overline{P_2O_Q}} \qquad (3)$$

$$d_2 = \overline{P_1O_Q} = \frac{\alpha/(1-\beta)}{\mu + \beta/(1-\beta)} \qquad (4)$$
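As a worked illustration of equations (1) and (2), a minimal sketch of the d0 computation is given below; the image points and the metric marker spacings P1P2 and P1P3 are assumed given, and the function and variable names are ours, not from the paper:

```python
import numpy as np

def cross_ratio_depth(p, p1, p2, p3, P1P2, P1P3):
    """Estimate d0 = PP1 from the cross-ratio of four collinear points.

    p, p1, p2, p3: 2-D image points (pixels) of the laser-spot center and the
    three LED markers, assumed collinear in the image and ordered P, P1, P2, P3
    along the instrument axis.
    P1P2, P1P3: known metric distances between the markers on the tool axis.
    """
    def dist(a, b):
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    # Cross-ratio computed in the image, Eq. (1)
    tau = (dist(p, p2) / dist(p1, p2)) / (dist(p, p3) / dist(p1, p3))
    # Depth from the projective invariant, Eq. (2)
    alpha = P1P3
    beta = P1P3 / P1P2
    return alpha * (1.0 - tau) / (tau - beta)
```

The same pattern, with oq in place of p, would give d2 from equations (3) and (4).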

Fig. 8 shows the results of a sensitivity analysis of the depth estimation. The standard deviation σd0 of the estimated depth is plotted as a function of the standard deviation σp of the marker coordinates (in pixels) in the image plane, for several geometrical configurations of the camera and surgical instrument. These configurations are defined by the angle α between the camera's optical axis and the instrument axis, the depth d0 and the distance dC between the camera and the laser spot. It can be seen that, for standard configurations, the sensitivity of the depth measurement with respect to noise (that is, sσ = dσd0/dσp in Fig. 8) is proportional to the distances dC and d0. The sensitivity sσ varies in the interval [0.4, 3], corresponding to [0.4, 3] mm for σd0 if σp = 1 pixel. Experimentally, σp is typically 0.5 pixel, resulting in σd0 = 1 mm. In practice, this noise does not affect the precision of the positioning, owing to the low-pass filtering effect of the visual servoing.

Fig. 8. Standard deviation σd0 of the estimated depth d0 (mm) as a function of the standard deviation σp of the marker image coordinates (pixel) for several geometrical configurations.

B. Visual servoing

In our approach, we combine image feature coordinates and depth information to position the instrument with respect to the pointed organ. There exists previous work on this type of combination (see, e.g., [10], [11]); however, the depth d0 of concern here is independent of the position of the camera and can be estimated with an uncalibrated camera. A feature vector S is built with the image coordinates of the perspective projection of the laser spot center, Sp = (up, vp)^T, and the depth d0 between the pointed organ and the instrument: S = [Sp^T d0]^T. In our visual servoing scheme, the robot arm is velocity controlled. Therefore, the key issue is to express the interaction matrix relating the derivative of S to the velocity screw of the surgical instrument, reduced to three degrees of freedom, W_op = (ωx, ωy, vz = ḋ2)^T (see the appendix for more details):

$$\begin{bmatrix} \dot u_p \\ \dot v_p \end{bmatrix} = \underbrace{\begin{bmatrix} J_{11} & J_{12} & J_{13} \\ J_{21} & J_{22} & J_{23} \end{bmatrix}}_{J_S} \begin{bmatrix} \omega_x \\ \omega_y \\ \dot d_0 + v_z \end{bmatrix} \qquad (5)$$

Even though all the components of JS could be recovered from the images of the optical markers and the camera parameters, JS is not invertible. Therefore, the velocity screw applied to the robot, W*op = (ω*x, ω*y, v*z)^T, cannot be directly computed without additional assumptions (e.g., that the surface of the organ in the neighborhood of the pointed direction is planar). Furthermore, when the instrument is not in the camera field of view, d0 cannot be measured. Therefore, we propose to decompose the visual servoing into two control loops that partly decouple the control of the pointed direction, given by (up, vp)^T, and the control of the depth d0. The instrument recovery algorithm is split into three stages:


Fig. 9. The full visual servoing scheme.

Instrument recovery and positioning procedure:

• Stage 1: Positioning of the laser spot projection, p, at the center of the image by visual servoing of Sp = (up, vp)^T only. This means that only ωx and ωy are controlled. For safety reasons, v*z = 0 during this stage. Thus, from (5), it follows that:

$$\begin{bmatrix} \dot u_p \\ \dot v_p \end{bmatrix} - \begin{bmatrix} J_{13} \\ J_{23} \end{bmatrix}\dot d = \underbrace{\begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}}_{J_{2\times 2}} \begin{bmatrix} \omega_x \\ \omega_y \end{bmatrix} \qquad (6)$$

Assuming a classical proportional visual feedback [5], the control signal applied to the robot is W*op = (ω*x, ω*y, 0)^T, with:

$$\begin{bmatrix} \omega_x^* \\ \omega_y^* \end{bmatrix} = J_{2\times 2}^{-1} \left( K \begin{bmatrix} u_p^* - u_p \\ v_p^* - v_p \end{bmatrix} - \begin{bmatrix} J_{13} \\ J_{23} \end{bmatrix}\dot d \right) \qquad (7)$$

where K is a positive constant gain matrix.

• Stage 2: Bringing the instrument down along its axis until the optical markers are in the field of view. This is done by an open-loop motion at constant speed v*z, with W*op = (0, 0, v*z)^T.

• Stage 3: Full visual servoing. Since strong deformations may be induced by breathing, a complete decoupling (i.e., ω*x = 0, ω*y = 0) is not suitable. The first-stage control, as in equation (7), must continue in order to reject disturbances. Since vz = ḋ − ḋ0, a proportional visual feedback law based on the measurement of d0 with the cross-ratio is given by:

$$v_z^* = \dot d - k\,\bigl(d_0^* - d_0(\tau, \alpha, \beta)\bigr) \qquad (8)$$

where k is a positive scalar and d0 is a function of the cross-ratio τ and of α and β. The full servoing scheme is shown in Fig. 9; a compact code sketch of laws (7) and (8) is given after this list.
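The following minimal sketch illustrates control laws (7) and (8); the Jacobian estimate, the gains and the ḋ term are assumed given, and the names and default values are illustrative rather than the authors' implementation:

```python
import numpy as np

def stage1_centering(J2x2, sp, sp_des, J3=None, d_dot=0.0, K=0.5):
    """Proportional centering law, Eq. (7): only (wx, wy) are controlled and
    the insertion velocity vz* is kept at zero for safety."""
    J3 = np.zeros(2) if J3 is None else np.asarray(J3)   # column (J13, J23)
    error = np.asarray(sp_des) - np.asarray(sp)          # (up* - up, vp* - vp)
    w = np.linalg.solve(np.asarray(J2x2), K * error - J3 * d_dot)
    return np.array([w[0], w[1], 0.0])                   # (wx*, wy*, vz*)

def stage3_depth(d0, d0_des, d_dot=0.0, k=0.5):
    """Depth control law, Eq. (8): vz* = d_dot - k (d0* - d0)."""
    return d_dot - k * (d0_des - d0)
```

In stage 3 both laws run together, the first rejecting the breathing disturbance on the spot position while the second regulates the instrument-to-organ distance.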

C. Implementation issues

The signal ḋ in equations (7) and (8) can be obtained by differentiating equations (2) and (4). However, since d varies slowly during stage 1 and since Sp is generally constant during stages 2 and 3, the approximation ḋ ≈ 0 is made in the practical experiments, resulting in an approximately decoupled behavior.

For practical convenience, the upper (2 × 2) sub-matrix J2×2 of JS must be computed even if the optical markers are not visible. When the instrument is out of the field of view, this sub-matrix is identified in an initial procedure. This identification consists of applying a constant rotational velocity reference W*op = (ω*x, ω*y, 0)^T during a short time interval ∆T (see Fig. 11). The small variations of the laser spot image coordinates are measured, and the estimate, Ĵω, of the interaction matrix is given by:

$$\hat J_\omega = \begin{bmatrix} \dfrac{\Delta u_p}{\omega_x^*\,\Delta T} & \dfrac{\Delta u_p}{\omega_y^*\,\Delta T} \\[2mm] \dfrac{\Delta v_p}{\omega_x^*\,\Delta T} & \dfrac{\Delta v_p}{\omega_y^*\,\Delta T} \end{bmatrix} \qquad (9)$$
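The identification of equation (9) can be sketched as follows, assuming two hypothetical helpers, `send_velocity()` to command the reduced velocity screw and `measure_laser_spot()` to return the current spot image coordinates; the probing magnitude and duration are illustrative:

```python
import numpy as np
import time

def identify_interaction_matrix(send_velocity, measure_laser_spot,
                                omega=0.05, dT=1.0):
    """Estimate the 2x2 interaction matrix of Eq. (9) by applying a small
    constant rotational velocity about each axis and measuring the resulting
    displacement of the laser-spot image coordinates."""
    J = np.zeros((2, 2))
    for col, w_ref in enumerate([(omega, 0.0), (0.0, omega)]):
        sp0 = np.asarray(measure_laser_spot())
        send_velocity(w_ref[0], w_ref[1], 0.0)   # (wx*, wy*, vz* = 0)
        time.sleep(dT)
        send_velocity(0.0, 0.0, 0.0)             # stop the probing motion
        sp1 = np.asarray(measure_laser_spot())
        J[:, col] = (sp1 - sp0) / (omega * dT)   # (dup, dvp) / (w* dT)
    return J
```

As noted below for the experiments, several such small displacements can be averaged to reduce the effect of breathing on the identification.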

It is not suitable to try to compensate for induced depth motions during the centering stage, since the instrument is usually not in the field of view at that stage. Furthermore, when the instrument is going up or down (v*z ≠ 0), no bias appears on the laser spot centering. Therefore, it is recommended in practice to choose the interaction matrix MI, mapping W*op into Ṡ, with the following structure:

$$M_I = \begin{bmatrix} \hat J_\omega & 0_{2\times 1} \\ 0_{1\times 2} & -1 \end{bmatrix} \qquad (10)$$

This leads to the experimental control scheme shown in Fig. 10, with K > 0. The bandwidth of the visual control loop is directly proportional to K.

For the stability analysis, we consider an ellipsoid as a geometric model of the abdominal cavity, so that d is related to ωx and ωy. In this case, the interaction matrix in equation (5) reduces to a 2 × 2 matrix Jω:

$$\begin{bmatrix} \dot u_p \\ \dot v_p \end{bmatrix} = J_\omega \begin{bmatrix} \omega_x \\ \omega_y \end{bmatrix} \qquad (11)$$

and the stability of the visual feedback loop is guaranteed as long as Jω[Ĵω]^{-1} remains positive definite [2]. In our application, if the camera and the incision point are motionless, stability is ensured in a workspace much larger than the region covered during the experiments. To quantify the stability properties, we have modelled the organ as an ellipsoid. The estimated Jacobian Ĵω is constant and corresponds to a nominal configuration. We have then computed Jω as the laser spot is moved across the organ surface, and computed the eigenvalues of Jω[Ĵω]^{-1} in the different configurations.

Fig. 10. Visual servoing scheme used for experiments.


Fig. 11. Initial on-line identification of the interaction matrix (displacements around the image center) and image-based visual servoing along a square using this identification. (1 mm ≈ 25 pixels)

Fig. 12. Stability analysis: laser spot surface area delimiting the stability of the vision control loop for a constant identified Jacobian.

Unsafe configurations, corresponding to a damping factor ζ < 0.5, are clearly out of the camera field of view, which is represented by the black contours (see Fig. 12). This indicates good robustness over the whole image, which was verified experimentally. Furthermore, an accidental motion of the endoscope could also affect Jω, and thus the stability. However, in practice, experiments have demonstrated that a rotation of the endoscope as large as 60 degrees still preserves the stability of the system. Should the convergence properties be degraded by an exceptional change of Jω, the tracking performance can easily be monitored and a re-identification of Ĵω can be triggered ([13], [14]). The validity of the Jacobian matrix can be assessed in two ways: by monitoring the rotational motions of the endoscope, or by monitoring the trajectory error signal in the image (the optimal trajectory should be a straight line for image-based servoing).
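The positive-definiteness test used in this analysis can be written as a short check (a sketch; `J_true` would come from the ellipsoid model of the organ and `J_hat` from the identification procedure, both assumed given here):

```python
import numpy as np

def is_stable(J_true, J_hat):
    """Check that J_omega * inv(J_hat) is positive definite, i.e. that the
    symmetric part of the product has strictly positive eigenvalues [2]."""
    M = np.asarray(J_true) @ np.linalg.inv(np.asarray(J_hat))
    sym = 0.5 * (M + M.T)
    return bool(np.all(np.linalg.eigvalsh(sym) > 0.0))
```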

V. EXPERIMENTS

Experiments in real surgical conditions were conducted on living tissues in the operating room of IRCAD (see Fig. 1). The experimental surgical robotic task was the autonomous recovery of an instrument not seen in the initial image and its subsequent positioning at a desired 3-D position.

A. Experimental setup

We use a bi-processor PC (1.7 GHz) running Linux for the image processing and for controlling, via a serial link, the Computer Motion surgical robot. A standard 50 fps PAL endoscopic camera held by a second robot (at standstill) is linked to a PCI image capture board that grabs images of the observed scene (see Fig. 13). We have modified the driver of the acquisition board in order to use the vertical blank interrupt as a means to synchronize the blinking markers. The TTL synchronization signals that control the state of the lasers and the LEDs are provided by the PC's parallel port. For each image, the center of mass of the laser spots and the centers of the three LEDs are detected in about 20 ms.

B. Experimental task

The successive steps of the autonomous recovery and positioning are as follows:

Step 1: Change the orientation of the instrument by applying rotational velocity trajectories (ω*x and ω*y) in open loop, in order to scan the organ surface with the laser spots until they appear in the endoscopic view.
Step 2: Automatic identification of the components of the interaction matrix Ĵω (cf. Eq. (9)).
Step 3: Centering of the laser spots in the image by 2-D visual servoing.
Step 4: Descent of the instrument by applying a velocity reference signal v*z in open loop until it appears in the image, while the orientation servoing is running with a fixed desired set point.
Step 5: Real-time estimation of the distance d0 and depth servoing to reach the desired distance, while the orientation servoing is running with a fixed desired set point.
Step 6: New positioning of the instrument towards a desired 3-D location by automatic visual servoing under the surgeon's control. The surgeon indicates on the screen the new laser spot image coordinates, S*p = (u*p, v*p)^T, and specifies the new desired distance d*0 to be reached. Then, the visual servoing algorithm performs the 3-D positioning.

Fig. 13. The experimental setup.



C. Experimental measurements

Fig. 14 shows experimental measurements of the laser image coordinates up and vp during the identification stage of Ĵω. For the identification procedure, four positions were considered in order to relate the variations of the laser image positions to the angular variations (see also Fig. 11). One can notice a significant perturbation due to breathing during the visual servoing. For robust identification, we average several measurements of small displacements. This reduces the effect of breathing, which acts as a disturbance.

Fig. 15 (top and bottom left) shows the 2-D trajectory obtained in the image during the centering step by visual servoing. The oscillating motions around the initial and desired positions are also due to the effect of breathing, which acts as a periodic perturbation. Fig. 15 (bottom right) shows the measured distance d0 during the depth servoing at step 5.

Fig. 16(a)(b) displays the laser spot image coordinates when the surgeon specifies new positions to be reached in the image, at step 6. These results (on living tissues) should be compared with those obtained with an endo-trainer in Fig. 16(c)(d).

Fig. 14. Experimental measurements for the identification procedure of the interaction matrix. (top) Slow identification procedure (averaging the effect of breathing) and (bottom) fast identification procedure (short time interval between two regular breathings). (1 mm ≈ 25 pixels)

TABLE I. Time performances of the recovering and positioning tasks for a set of 10 experiments.

Note that the fact that the instrument seems to move briefly in the wrong direction at times t = 10 s and t = 20 s is due to an imperfect decoupling between up and vp by the identified Jacobian matrix. With our experimental setup, the maximum achieved bandwidth is about 1 rad/s. Table I shows the time performances of the system. A set of 10 experiments was performed on the instrument recovery task. It typically takes 10 s to bring the instrument to the image center (5 s in the best case and 20 s in the worst). For the autonomous 3-D positioning, the time is typically 4 s (2 s in the best case and 8 s in the worst). This should be compared, for a teleoperated system, with the time it takes for a surgeon to vocally command an AESOP robot holding the endoscope in order to bring the instrument and the camera back to the operating field.

VI. CONCLUSION

The robot vision system presented in this paper automatically positions a laparoscopic surgical instrument by means of laser pointers and optical markers.

Fig. 15. Experimental results of the 3-D positioning. (top) Image centering, (bottom left) 2-D trajectory, (bottom right) depth d0 control by visual servoing. (1 mm ≈ 25 pixels)


Fig. 16. New desired respective positions (of the surgical instrument with respect to the pointed organ) specified in the image by the surgeon (step 6). Responses (a)-(b) are obtained on living tissue. Responses (c)-(d) are obtained with an endo-trainer, with no disturbance due to breathing. (1 mm ≈ 25 pixels)

To add structured light to the scene, we designed a laser-pointing instrument-holder which can be used with any standard laparoscopic surgical instrument. To position the surgical instrument, we propose a visual servoing algorithm that combines the pixel coordinates of the laser spots and the estimated distance between the organ and the instrument. Successful experiments were conducted with a surgical robot on living pigs in a surgical room. In these experiments, the surgeon was able to automatically retrieve a surgical instrument that was out of the field of view and then position it at a desired 3-D location. Our method is essentially based on visual servoing techniques and on-line identification of the interaction matrix. It does not require knowledge of the initial relative position of the endoscope and the surgical instrument.

REFERENCES

[1] A. Casals, J. Amat, D. Prats, and E. Laporte, "Vision guided robotic system for laparoscopic surgery", in Proc. of the IFAC International Congress on Advanced Robotics, pp. 33–36, Barcelona, Spain, 1995.

[2] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics", IEEE Trans. on Robotics and Automation, 8(3), pp. 313–326, June 1992.

[3] M. Hayashibe and Y. Nakamura, "Laser-pointing endoscope system for intra-operative geometric registration", in Proc. of the 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea, May 2001.

[4] K. Hosoda and M. Asada, "Versatile visual servoing without knowledge of true Jacobian", in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 186–191, Munich, Germany, September 1994.

[5] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control", IEEE Trans. on Robotics and Automation, 12(5), pp. 651–670, October 1996.

[6] D. P. Huttenlocher and S. Ullman, "Recognizing solid objects by alignment with an image", The International Journal of Computer Vision, 5(2), pp. 195–212, 1990.

[7] A. Krupa, C. Doignon, J. Gangloff, M. de Mathelin, L. Soler, and G. Morel, "Towards semi-autonomy in laparoscopic surgery through vision and force feedback control", in Proc. of the Seventh International Symposium on Experimental Robotics (ISER), Hawaii, December 2000.

[8] A. Krupa, M. de Mathelin, C. Doignon, J. Gangloff, G. Morel, L. Soler, and J. Marescaux, "Development of semi-autonomous control modes in laparoscopic surgery using visual servoing", in Proc. of the Fourth International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 1306–1307, Utrecht, The Netherlands, October 2001.

[9] A. Krupa, J. Gangloff, M. de Mathelin, C. Doignon, G. Morel, L. Soler, J. Leroy, and J. Marescaux, "Autonomous retrieval and positioning of surgical instruments in robotized laparoscopic surgery using visual servoing and laser pointers", in Proc. of the IEEE International Conference on Robotics and Automation, pp. 3769–3774, Washington D.C., May 2002.

[10] E. Malis, F. Chaumette, and S. Boudet, "2D 1/2 visual servoing", IEEE Trans. on Robotics and Automation, 15(2), pp. 238–250, April 1999.

[11] P. Martinet and E. Cervera, "Combining pixel and depth information in image-based visual servoing", in Proc. of the 9th International Conference on Advanced Robotics, pp. 445–450, Tokyo, Japan, October 1999.

[12] S. J. Maybank, "The cross-ratio and the j-invariant", in Geometric Invariance in Computer Vision, J. L. Mundy and A. Zisserman (Eds.), MIT Press, pp. 107–109, 1992.

[13] M. Jagersand, O. Fuentes, and R. Nelson, "Experimental evaluation of uncalibrated visual servoing for precision manipulation", in Proc. of the IEEE International Conference on Robotics and Automation, pp. 2874–2880, Albuquerque, NM, April 1997.

[14] J. A. Piepmeier, G. V. McMurray, and H. Lipkin, "A dynamic quasi-Newton method for uncalibrated visual servoing", in Proc. of the IEEE International Conference on Robotics and Automation, pp. 1595–1600, Detroit, Michigan, May 1999.

[15] G.-Q. Wei, K. Arbter, and G. Hirzinger, "Real-time visual servoing for laparoscopic surgery", IEEE Engineering in Medicine and Biology, 16(1), pp. 40–45, 1997.

[16] Y. F. Wang, D. R. Uecker, and Y. Wang, "A new framework for vision-enabled and robotically assisted minimally invasive surgery", Computerized Medical Imaging and Graphics, vol. 22, pp. 429–437, 1998.

[17] M. Ribo, "State of the Art Report on Optical Tracking", Technical Report 2001-25, VRVis, TU Wien, September 2001.

APPENDIX - DERIVATION OF THE INTERACTION MATRIX

We derive here the relationship between the velocity screw W of the instrument and the time derivative of the feature vector S. Let RL be a reference frame at the tip of the instrument, RQ the incision point reference frame, and RC the camera reference frame (see Fig. 7). We derive this interaction matrix in the case where the incision point frame RQ and the camera frame RC are motionless. The degrees of freedom that can be controlled are the insertion velocity ḋ2 and the rotational velocity of the instrument with respect to the incision point frame RQ, (Ω_{L,Q})_{/RL} = (ωx ωy ωz)^T. Let (V_{OL}^{RQ})_{/RL} be the velocity of the tip of the instrument OL with respect to the frame RQ, expressed in RL. We have:

$$\left(V_{O_L}^{R_Q}\right)_{/R_L} = \begin{bmatrix} d_2\,\omega_y & -d_2\,\omega_x & \dot d_2 \end{bmatrix}^T \qquad (12)$$

On the other hand, the velocity (V_P^{RC})_{/RL} of the laser spot center, P = (xp, yp, zp)^T_{/RC}, with respect to the camera frame RC can be expressed in the instrument frame RL as follows:

$$\left(V_P^{R_C}\right)_{/R_L} = \left(V_{O_L}^{R_C}\right)_{/R_L} + \left(V_P^{R_L}\right)_{/R_L} + \left(\Omega_{L,C}\right)_{/R_L} \times O_LP = R_{CL}^T\,(\dot x_p,\ \dot y_p,\ \dot z_p)^T_{/R_C} \qquad (13)$$

where R_CL = (r1, r2, r3)^T is the rotation matrix between the camera frame and the instrument frame.


In the previous equation, (V_P^{RL})_{/RL} = ḋ0 z_L. Since there is no relative motion between RQ and RC, (V_{OL}^{RC})_{/RL} = (V_{OL}^{RQ})_{/RL} and (Ω_{L,C})_{/RL} = (Ω_{L,Q})_{/RL}. From equations (12) and (13), it follows that:

$$\begin{bmatrix} \dot x_p & \dot y_p & \dot z_p \end{bmatrix}^T = R_{CL} \begin{bmatrix} d\,\omega_y & -d\,\omega_x & \dot d \end{bmatrix}^T \qquad (14)$$

where d = d0 + d2 is the distance between the incision point and the organ.

Considering a pin-hole camera model, P and its perspective projection p = (up, vp)^T are related by:

$$z_p \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = A \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix} \qquad (15)$$

where A = (a1, a2, a3)^T is the (3 × 3) upper triangular real matrix of the camera parameters. It follows that:

$$z_p \begin{bmatrix} \dot u_p \\ \dot v_p \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \begin{bmatrix} \dot x_p \\ \dot y_p \\ \dot z_p \end{bmatrix} - \dot z_p \begin{bmatrix} u_p \\ v_p \end{bmatrix} \qquad (16)$$

Substituting the expression (14) of the velocity of P into (16), one obtains the (2 × 3) interaction matrix JS relating the time derivative of Sp = (up, vp)^T to the velocity screw (ωx, ωy, ḋ)^T:

$$J_S = \frac{1}{z_p} \left[ \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} R_{CL} - \begin{bmatrix} u_p \\ v_p \end{bmatrix} r_3^T \right] \begin{bmatrix} 0 & d & 0 \\ -d & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (17)$$
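For completeness, equation (17) translates directly into a short routine; the intrinsic matrix A, the rotation R_CL and the remaining scalars are assumed given, and the function name is ours:

```python
import numpy as np

def interaction_matrix(A, R_CL, up, vp, zp, d):
    """Build the 2x3 interaction matrix J_S of Eq. (17), relating
    (dup/dt, dvp/dt) to the reduced velocity screw (wx, wy, d_dot)."""
    A = np.asarray(A)
    R_CL = np.asarray(R_CL)
    a12 = A[:2, :]                              # first two rows a1, a2 of A
    r3 = R_CL[2, :]                             # third row of R_CL
    left = a12 @ R_CL - np.outer([up, vp], r3)  # 2x3 term in brackets
    M = np.array([[0.0,   d, 0.0],
                  [ -d, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    return (left @ M) / zp
```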

Alexandre Krupa was born on April 20, 1976 in Strasbourg, France. He received the M.S. degree (DEA) in Control Science and Signal Processing from the National Polytechnic Institute of Lorraine, Nancy, France, in 1999. He is currently working toward the Ph.D. degree in robotics in the Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT) at the University of Strasbourg. His research interests include medical robotics, visual servoing of robotic manipulators, computer vision and micro-robotics.

Jacques Gangloff graduated from the Ecole Nationale Supérieure de Cachan in 1995. He received the M.S. and Ph.D. degrees in robotics from the University Louis Pasteur, Strasbourg, France, in 1996 and 1999, respectively. Since 1999, he has been Maître de Conférences at the University of Strasbourg. He is a member of the EAVR team at the Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT). His research interests mainly include visual servoing of robotic manipulators, predictive control, and medical robotics.

Christophe Doignon received the B.S. degree in Physics in 1987 and the Engineer diploma in 1989, both from the Ecole Nationale Supérieure de Physique de Strasbourg, France. He received the Ph.D. degree in Computer Vision and Robotics from Louis Pasteur University, Strasbourg, France, in 1994. In 1995 and 1996, he worked with the Department of Electronics and Computer Science at the University of Padua, Italy, for the European Community under the HCM program "Model Based Analysis of Video Information". Since 1996, he has been an associate professor in computer engineering at Louis Pasteur University, Strasbourg, France. His major research interests include computer vision, signal and image processing, visual servoing and robotics.

Michel de Mathelin received the electrical engineering degree from Louvain University, Louvain-La-Neuve, Belgium, in 1987. He completed his M.S. and Ph.D. degrees in electrical and computer engineering at Carnegie Mellon University, Pittsburgh, PA, USA, in 1988 and 1993, respectively. He was a research scientist in the Electrical Engineering Department of the Polytechnic School of the Royal Military Academy, Brussels, Belgium, during the 1991-1992 academic year. In 1993, he became Maître de Conférences at the Université Louis Pasteur (Strasbourg I University), France. Since 1999, he has been Professor at the Ecole Nationale Supérieure de Physique de Strasbourg (ENSPS). He is a fellow of the Belgian American Educational Foundation. His research interests include adaptive and robust control, visual servoing and medical robotics.

Guillaume Morel obtained his Ph.D. from the University of Paris 6 in 1994. He was successively a postdoctoral research assistant at the Department of Mechanical Engineering of the Massachusetts Institute of Technology ('95-'96) and an associate professor at the University of Strasbourg, France ('97-'00). After a year in industry as a research scientist for the French electricity company (EDF, '00-'01), he returned to academia in September 2001, at the Laboratoire de Robotique de Paris, France. Throughout these years, his research has concerned the sensor-based control of robots, with a particular focus on force feedback control and visual servoing. The application of these techniques to medical robotics is now the main area of his research.


Joël Leroy was born on December 18, 1949. A graduate of the Medical University of Lille, France, Professor Leroy completed his residency in digestive surgery at the University Hospital of Lille. In 1979, he was appointed Chief of the Department of General, Visceral and Digestive Surgery in a private surgical hospital in Bully les Mines, France. During the end of the 1980s, he participated in the development of gynecologic laparoscopic surgery. In 1991, he created and developed the first successful surgical division of minimally invasive surgery (MIS) specialising in colorectal surgery in France. In 1997, he was appointed Associate Professor of Digestive Surgery at the University of Lille, France. Professor Leroy is recognized worldwide as an innovative pioneer in the field of laparoscopic digestive surgery. He has been an expert contributor to the development of IRCAD/EITS (Institut de Recherche contre les Cancers de l'Appareil Digestif, Institute for Research into Cancer of the Digestive System) since its creation in 1994. In 1998, he joined Professor Jacques Marescaux's pioneering team full time as the Co-Director of IRCAD/EITS, and he was a key participant in the development of the Lindbergh Telesurgery Project, led by Professor Jacques Marescaux. Professor Leroy's pivotal contribution was the standardisation of the laparoscopic cholecystectomy assisted by the ZEUS Robotic Surgical System. This important standardisation helped enable Professor Marescaux's performance of the world's first telesurgery, a laparoscopic cholecystectomy assisted by the ZEUS Robotic Telesurgery Surgical System, on September 7, 2001.

Luc Soler was born on October 6, 1969. In 1994, he was valedictorian of the magister at the Computer Science High School of Paris. He obtained his Ph.D. degree in computer science in 1998. Since 1999, he has been the research project manager in computer science at the Digestive Cancer Research Institute (IRCAD, Strasbourg). The same year, he was co-laureate of a Computer World Smithsonian Award for his work on virtual reality applied to surgery. His principal areas of interest are computerized medical imaging and especially automated segmentation methods, discrete topology, automatic atlas definition, organ modeling, liver anatomy and hepatic anatomical segmentation.

Jacques Marescaux was born on August 4, 1948. He set his sights on a medical career early, and his passion grew as he advanced in his medical studies. This commitment was apparent in his ranking at the top of his class each year, including the top score on the internship exam of the Medical University of Strasbourg, France, in 1971. He received a Ph.D. surgery award in 1977 and joined a team of researchers at INSERM, the French Institute of Health and Medical Research. Thanks to this exceptional scientific collaboration, he was able to apply for a position as university professor at the age of 32, in the digestive surgery department. In 1992, he decided to create a truly innovative structure for teaching and research. He was firmly convinced that surgeons could not only play a role in the information revolution, but also act as high-profile ambassadors, so he founded the Institute for Research into Cancer of the Digestive System (IRCAD) and the European Institute of Telesurgery (EITS) in 1994. Professor Jacques Marescaux has been Head of the Digestive and Endocrine Surgery Department at Strasbourg University Hospitals since 1989. In June 2003, he received a ComputerWorld Honors award for the Lindbergh transatlantic telesurgery project (September 7, 2001) in partnership with Computer Motion Incorporated and France Telecom. He is a member of numerous scholarly societies.