

Designing a Wearable Navigation System for Image-Guided Cancer Resection Surgery

PENGFEI SHAO,1 HOUZHU DING,1 JINKUN WANG,1 PENG LIU,1,2 QIANG LING,3 JIAYU CHEN,3 JUNBIN XU,1 SHIWU ZHANG,1 and RONALD XU1,2

1School of Engineering Science, University of Science and Technology of China, Hefei, China; 2Department of Biomedical Engineering, The Ohio State University, 270 Bevis Hall, 1080 Carmack Rd., Columbus, OH 43210, USA; and 3School of Information Science and Technology, University of Science and Technology of China, Hefei, China

(Received 27 February 2014; accepted 25 June 2014; published online 1 July 2014)

Associate Editor Tingrui Pan oversaw the review of this article.

Abstract—A wearable surgical navigation system is developed for intraoperative imaging of the surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are coincident with those acquired by a standard small animal imaging system, indicating the technical feasibility of intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system with the mobility of a wearable goggle. It can potentially be used by a surgeon to identify residual tumor foci and reduce the risk of recurrent disease without interfering with the regular resection procedure.

Keywords—Surgical resection margin, Fluorescence imaging, Google glass, Surgical navigation, Head mounted display, Augmented reality.

INTRODUCTION

Despite recent therapeutic advances, surgery remains a primary treatment option for many diseases such as cancer. Accurate assessment of surgical resection margins and early recognition of occult disease in a cancer resection surgery may reduce recurrence rates and improve long-term outcomes. However, many existing medical imaging modalities focus primarily on preoperative imaging of tissue malignancies but fail to provide surgeons with the real-time and disease-specific information critical for decision-making in an operating room. Therefore, it is important to develop surgical navigation tools for real-time detection of surgical resection margins and accurate assessment of occult disease. The clinical significance of intraoperative surgical navigation can be exemplified by the inherent problems in breast-conserving surgery (BCS). BCS is a common method for treating early stage breast cancer.21 However, up to 30–60% of patients after BCS require a re-excision procedure that delays postoperative adjuvant therapies, increases patient stress, and yields suboptimal cosmetic outcomes.20 The high re-excision rate is partially due to the failure to obtain clean margins, which can subsequently lead to increased local recurrence rates.6 The current clinical gold standard for surgical margin assessment is histopathology analysis of biopsy specimens. However, the conventional histopathology analysis procedure is time-consuming, costly, and ill-suited to providing real-time imaging feedback in the operating room. In addition, this procedure evaluates only a small fraction of the resected surgical specimen and therefore may miss residual cancer cells owing to specimen sampling errors.

Address correspondence to Ronald Xu, Department of Biomedical Engineering, The Ohio State University, 270 Bevis Hall, 1080 Carmack Rd., Columbus, OH 43210, USA. Electronic mail: [email protected]

Annals of Biomedical Engineering, Vol. 42, No. 11, November 2014 (© 2014) pp. 2228–2237
DOI: 10.1007/s10439-014-1062-0
0090-6964/14/1100-2228/0 © 2014 Biomedical Engineering Society


In some instances, residual tumor foci unrecognized during BCS may further develop into recurrent disease.5 In this regard, an intraoperative surgical navigation system may help the surgeon identify residual tumor foci and reduce the risk of recurrent disease without interfering with the regular surgical procedures.

In the past, optical imaging has been explored for surgical margin assessment in various medical conditions such as tumor delineation,2,4,8 lymph node mapping,18,19 organ viability evaluation,7 tissue flap vascular perfusion assessment during plastic surgery,15 necrotic boundary identification during wound debridement,1,17 breast tumor characterization,22 chronic wound imaging,23 and detection of iatrogenic bile duct injuries during cholecystectomy.14 However, these imaging systems are not wearable devices. Therefore, they cannot provide a natural environment to facilitate intraoperative visual perception and navigation by the surgeon. Recently, fluorescence goggle systems have been developed for intraoperative surgical margin assessment.10,12 These goggle systems integrate an excitation light source, a fluorescence filter, a camera, and a display in a wearable headset, allowing for intraoperative evaluation of surgical margins in a natural mode of visual perception. However, the clinical usability of these goggle systems is suboptimal, owing to several design limitations. First, these goggle navigation systems integrate multiple opto-electrical components in a bulky headset that is inconvenient to wear or take off. The headset significantly reduces the surgeon's manipulability, blocks the surgeon's vision, and may even cause iatrogenic injury owing to visual fatigue or distraction. Second, current efforts to reduce the headset weight, such as replacing a CCD camera with a miniature CMOS camera, may sacrifice the imaging sensitivity and dynamic range required for fluorescence detection of cancerous margins. Finally, using a head-mounted light source to provide fluorescence excitation may result in poor illumination conditions vulnerable to many imaging defects, such as insufficient light intensity for fluorescence imaging in the specific region of interest, non-uniform illumination, motion artifact, and specular reflection.

In order to overcome these limitations, we propose a wearable surgical navigation system that combines the high sensitivity of a stationary fluorescence imaging system with the high mobility of a wearable goggle device. The concept of this wearable surgical navigation system is illustrated in Fig. 1. As the surgical site is illuminated by a handheld excitation light source, fluorescence emission from the disease is collected by a stationary CCD camera through a long pass filter. The resultant fluorescence image of the surgical margin is transmitted to a host computer. In the meantime, a miniature CMOS camera installed on the wearable headset unit monitors the surgical site and transmits the surgical scene images to the host computer. The surgical scene images are fused with the surgical margin images and sent back to the wearable headset unit for display. The wearable surgical navigation system is implemented in two operation modes. In the first mode, a Google glass device captures the surgical scene images and communicates with the host computer wirelessly. In the second mode, a miniature camera installed on a head-mounted display (HMD) device captures the surgical scene and transmits the image data to the host computer through a USB port. Co-registration between the surgical margin and the surgical scene images is achieved through four fiducial markers placed within the surgical scene.

MATERIALS AND METHODS

Hardware Design for the Surgical Navigation System

The major hardware components of the proposed surgical navigation system include an MV-VEM033SM monochromatic CCD camera (Microvision, Xi'an, China) for fluorescence imaging, a FU780AD5-GC12 laser diode (FuZhe Corp., Shenzhen, China) for excitation illumination at 780 nm, and a wearable headset unit comprising either an HMD device or a Google glass. In the HMD mode, an ANC HD1080P CMOS camera (Aoni, Shenzhen, China) is installed on a PCS personal cinema system (Headplay, Inc., Los Angeles, CA) to capture the surgical scene images. In the Google glass mode, the built-in CMOS camera in the Google glass (Google Labs, Mountain View, CA) captures the surgical scene images. The excitation light of 780 nm is delivered to the surgical site by an optical fiber. A beam expander further expands the illumination area up to 25 cm². To leave adequate space for surgical operation and to ensure a sufficient level of fluorescence emission without irreversible photobleaching or tissue thermal damage, the excitation light source is placed 7 cm away from the surgical site, with its fluence rate controlled between 8 and 50 mW/cm². The CCD camera and the CMOS camera acquire the fluorescence and the surgical scene images, respectively, at a resolution of 640 × 480 pixels and a frame rate of 30 frames per second (fps). The CCD camera uses a one-fourth inch monochrome CCD sensor with a dynamic range of 10 bits. An FBH800-10 800 nm long pass filter (Thorlabs Inc., Newton, NJ) is mounted in front of the CCD camera to collect fluorescence emission in the near infrared wavelength range. The CCD camera is installed 40 cm above the surgical site. A desktop computer is used to synchronize all the image acquisition, processing, transfer, and display tasks.
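For illustration, a minimal acquisition loop of the kind this architecture implies could look like the following Python/OpenCV sketch. It is not the authors' code: the device indices, the assumption that both cameras expose a standard UVC interface usable by cv2.VideoCapture, and the property settings are illustrative only.

```python
# Minimal acquisition sketch (not the published implementation): grab frames
# from the stationary fluorescence camera and the headset scene camera.
# Device indices 0 and 1 are placeholders for the actual hardware.
import cv2

def open_camera(index, width=640, height=480, fps=30):
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    return cap

fluo_cam = open_camera(0)   # CCD camera behind the 800 nm long pass filter
scene_cam = open_camera(1)  # CMOS camera on the wearable headset

while True:
    ok1, fluo = fluo_cam.read()    # near-infrared fluorescence frame
    ok2, scene = scene_cam.read()  # color surgical-scene frame
    if not (ok1 and ok2):
        break
    # ... co-registration, fusion, and display would happen here ...
    cv2.imshow("scene", scene)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

fluo_cam.release()
scene_cam.release()
cv2.destroyAllWindows()
```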


Navigation Strategy for the Surgical Navigation System

A multi-step co-registration strategy is implemented for seamless fusion of the tumor margin images with the surgical scene. To facilitate adequate co-registration between the fluorescence images acquired by the stationary CCD camera and the surgical scene images acquired by the wearable headset unit, four black fiducial markers are placed around the designated surgical site. Before navigation, the surgical site is imaged by the CCD camera with and without the fluorescence filter so that the coordinates for fluorescence imaging can be calibrated with respect to these fiducial markers. During navigation, the fluorescence and the surgical scene images are concurrently transmitted to the host computer for further processing.

For both the HMD and the Google glass navigation systems, the image acquisition and processing tasks are carried out by a custom program implemented in the Python programming language that calls OpenCV functions. A find and locate specific blob method (FLSBM), as illustrated in Fig. 2, determines the coordinates of the tumor margin and the surgical scene by the following steps: (1) find the block connected areas in the image; (2) encircle each block by a rectangular box with an internal ellipse; (3) generate a normalized circle for each elliptical area through rotation and resizing; (4) segment each block area into a 50 × 50 normalized picture; (5) subtract a standard circular picture from the normalized picture in (4) to get the different pixel numbers (DPNs); and (6) define a DPN threshold to classify each block into marker and non-marker areas. The above steps are repeated for each of the block connected areas until all four fiducial markers are identified and sorted to form a quadrangle. The four vertices of the quadrangle define the specific perspective, whose coordinates can be obtained by geometric calculation. After that, an OpenCV function, getPerspectiveTransform, is used to transform the perspective of the stationary fluorescence camera to that of the wearable headset unit (i.e., the surgeon's perspective). The transformed fluorescence image is then co-registered and fused with the surgical scene image. The image fusion is then transmitted back to the wearable headset unit for display. For surgical navigation performed in the Google glass mode, augmented reality is achieved by seamless registration of the Google glass display with the surrounding surgical scene perceived by the surgeon in reality.
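The following Python/OpenCV sketch illustrates this pipeline under stated assumptions. It is not the published FLSBM implementation: the marker test is reduced to "the four largest dark blobs", the threshold value and the point-ordering helper are illustrative, and the fluorescence frame is assumed to be a single-channel image from the monochrome CCD, with the markers located in a companion CCD image taken without the fluorescence filter.

```python
# Sketch of fiducial detection, perspective transform, and image fusion
# (OpenCV 4.x API). Simplified stand-in for the FLSBM circularity test.
import cv2
import numpy as np

def sort_quadrangle(pts):
    """Order four (x, y) points as top-left, top-right, bottom-right, bottom-left."""
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()      # y - x
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def find_fiducials(gray, expected=4):
    """Return the centers of the four largest dark blobs, sorted into a quadrangle."""
    _, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:expected]
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if len(centers) != expected:
        raise ValueError("could not locate all fiducial markers")
    return sort_quadrangle(np.float32(centers))

def fuse(fluor_img, ccd_nofilter_gray, scene_img, scene_gray):
    src = find_fiducials(ccd_nofilter_gray)   # markers in the CCD camera's view
    dst = find_fiducials(scene_gray)          # markers in the headset camera's view
    M = cv2.getPerspectiveTransform(src, dst)
    h, w = scene_img.shape[:2]
    warped = cv2.warpPerspective(fluor_img, M, (w, h))
    # Overlay the warped fluorescence margin (green channel) on the scene image.
    overlay = scene_img.copy()
    overlay[:, :, 1] = np.maximum(overlay[:, :, 1], warped)
    return cv2.addWeighted(scene_img, 0.5, overlay, 0.5, 0)
```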

Real-time display of the image fusion on the HMD device is relatively easy, since the HMD device is connected to the host computer via a USB port and serves as an external monitor. In comparison, data transport between the Google glass and the host computer requires more programming effort. In general, a Google glass communicates with a host computer in one of three ways: Bluetooth, Wi-Fi, or USB. Considering that USB communication requires a physical cable and Bluetooth communication has speed and distance limitations, we choose a Wi-Fi connection for data transport between the Google glass and the host computer. Figure 3 shows the flow chart of the Wi-Fi communication protocol on both the Google glass side and the host computer side. First, a wireless network connection is established between the Google glass and the host following the transmission control protocol (TCP).9 The host keeps broadcasting synchronization (SYN) packets that contain its IP address and are aimed at a special port (Port A) agreed upon in advance with the Google glass. The Google glass monitors the incoming packets at Port A.

FIGURE 1. (a) Schematic diagram of the surgical navigation system in two operation modes: Google glass mode and HMD mode. (b) Surgical navigation in the Google glass mode. (c) Surgical navigation in the HMD mode.


Upon receiving a SYN packet, the Google glass obtains the IP address of the host and immediately sends an acknowledgment (ACK) packet, which includes the IP address of the Google glass, to Port B of the host. When the host receives the ACK packet and extracts the IP address, it sends an ACK packet back to the Google glass to confirm the successful establishment of the network connection between the host and the Google glass. It is worth noting that this network connection will not be interfered with by neighboring wireless devices, since those devices do not respond to the SYN packets in the same way as the glass and the host. Once the network connection is established, the TCP/IP protocol is used for communication between the glass and the host. The communication functionality of the Google glass is realized by two concurrent threads: a receiving thread and a sending thread. In the sending thread, a camera entity responsible for capturing live images is declared. Each captured frame preview is first compressed by the JPEG image compression algorithm and then transmitted to the host through the sending socket. The JPEG format is chosen because of its high compression ratio (up to 15:1) and good image quality after reconstruction. In the receiving thread, the Google glass keeps receiving packets from the host through a receiving socket. After the Google glass receives all the data for a whole JPEG-compressed image, it decompresses and displays the image immediately. The above data transport functions are implemented in Android.13
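As a concrete illustration of the host side of this handshake and frame transport, a minimal Python sketch is given below. It is an assumption-laden stand-in, not the authors' implementation: the port numbers, the plain-text "SYN"/"ACK" message format, the use of UDP broadcast for discovery, the direction of the TCP connection, and the 4-byte length prefix on each JPEG frame are all invented for the example.

```python
# Host-side sketch of device discovery and JPEG frame reception (illustrative).
import socket
import struct
import time

PORT_A, PORT_B, DATA_PORT = 50001, 50002, 50003   # placeholder port numbers

def discover_glass(timeout=30.0):
    """Broadcast SYN packets with the host IP until the glass answers on Port B."""
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", PORT_B))
    rx.settimeout(1.0)
    host_ip = socket.gethostbyname(socket.gethostname())  # may need adjustment per network
    deadline = time.time() + timeout
    while time.time() < deadline:
        tx.sendto(b"SYN " + host_ip.encode(), ("<broadcast>", PORT_A))
        try:
            msg, addr = rx.recvfrom(1024)          # expected: b"ACK <glass ip>"
        except socket.timeout:
            continue
        if msg.startswith(b"ACK"):
            rx.sendto(b"ACK", addr)                # confirm the connection
            return addr[0]
    raise RuntimeError("Google glass not found")

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def receive_jpeg_frames(glass_ip):
    """Yield length-prefixed JPEG frames sent by the glass over TCP."""
    sock = socket.create_connection((glass_ip, DATA_PORT))
    try:
        while True:
            (length,) = struct.unpack(">I", _recv_exact(sock, 4))
            yield _recv_exact(sock, length)        # one JPEG-compressed scene frame
    finally:
        sock.close()
```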

Performance Validation and Optimization for Surgical Navigation

Multiple tumor-simulating phantoms are developed for validation and optimization of the wearable surgical navigation technique. Indocyanine green (ICG) is mixed with the phantom materials to simulate fluorescence emission from the tumor simulators. ICG is a cyanine dye that has been approved by the US Food and Drug Administration (FDA) for fluorescence imaging in many medical diagnostic applications such as ophthalmic angiography and hepatic photometry.3 Depending on its solvent environment and concentration, ICG varies its absorption peak between 600 and 900 nm and its emission peak between 750 and 950 nm. In aqueous solution at low concentration, ICG has an absorption peak at around 780 nm and an emission peak at around 820 nm. Accordingly, a 780 nm laser diode and an 800 nm long pass filter are used in the surgical navigation setup.

To prepare the tumor simulators, 2.1 g of agar–agar, 0.28 g of titanium dioxide (TiO2), and 4.9 mL of glycerol are mixed with distilled water to make a 70 mL volume. The mixture is heated to 95 °C for approximately 30 min on a magnetic stirrer at a high power setting and then cooled down. As the mixture temperature drops to 65 °C, 0.008 mg of ICG is dissolved in distilled water and added to the mixture to make a total volume of 70 mL. The mixture is then maintained at 65 °C with continuous stirring to prevent sedimentation of the TiO2.
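For context, 0.008 mg of ICG in a 70 mL phantom corresponds to roughly 0.11 µg/mL; assuming a molecular weight of about 775 g/mol for ICG, this is approximately 0.15 µM, well within the low-concentration aqueous regime for which the 780 nm absorption and 820 nm emission peaks quoted above apply.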

FIGURE 2. Flow chart for navigation system calibration and image co-registration. (a) Image of the surgical scene acquired by the stationary CCD camera. (b) Image of the tumor-simulating tissue acquired by the wearable headset. (c, d) Fiducial markers are identified by the FLSBM algorithm. (e, f) Four vertices of the fiducial marker quadrangle are calculated for (c, d), respectively. (g) A transform matrix is generated from (e, f). (h) Fluorescence image acquired by the stationary CCD camera. (i) The transformed fluorescence image after the transform matrix (g) is applied to (h). (j) Image fusion between the surgical scene image (f) and the transformed fluorescence image (i).


By injecting this tumor-simulating mixture into agar–agar gel and chicken tissue, respectively, we have developed both a solid phantom model and an ex vivo tissue model for design optimization of the surgical navigation system and feasibility testing of the image-guided tumor resection technique.

A cylindrical agar–agar gel phantom 1 cm thick is used to study the co-registration accuracy between the fluorescence image and the surgical scene image, as shown in Fig. 4a. The tumor-simulating material is injected into four cavities on one side of the agar–agar gel phantom, simulating a set of four solid tumors. The fluorescence image of the tumor margin and the surgical scene image are co-registered and fused following the algorithm described above. As shown in Fig. 4b, a large positional offset exists for the fused fluorescence image, indicating a major co-registration error between the projected center of the tumor set and the actual center. This co-registration error is primarily caused by the mismatch between the surgical plane and the reference plane. As illustrated in Fig. 4d, the offset of the projected tumor image from its actual position, ΔL, is a function of the surgical plane height Δh, the camera perspective angle θ, the camera height H, and the camera horizontal distance L:

ΔL = Δh · tan θ = Δh · L/H

The resultant deviation of the tumor set center, Δx, can be approximated as:

Δx = Δh · L / √(L² + H²)

The corresponding co-registration error in the imaging plane of the camera is therefore:

Δp = Δx · f / √((L − ΔL)² + (H − Δh)²) = Δh · L · f · H / [(L² + H²)(H − Δh)]

where f is the focal length of the camera. Since L, H, and f are known experimental parameters, the above co-registration error can be corrected as long as the height of the surgical plane is given. In the case of surgical navigation in a three-dimensional (3D) scene, surface profiles and geometric features of the surgical cavity can be reconstructed by optical topographic methods such as multiview imaging and structured illumination.11 The reconstructed surface profiles will define the fiducial marker coordinates with respect to other biologic features and correct the co-registration error. Figure 4c shows the accurately co-registered fluorescence image of the tumor simulator set after the compensation factor Δp is calculated and applied.
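A minimal sketch of this correction step is given below, assuming the plane-height mismatch Δh has been measured. The numeric values for L and f in the usage comment are illustrative; only the 40 cm camera height is taken from the setup described earlier.

```python
# Sketch of the height-correction step derived above (not the authors' code).
# Inputs: camera height H, horizontal distance L, focal length f, and the
# surgical-plane elevation dh, all in consistent units (e.g., mm).
import math

def coregistration_correction(H, L, f, dh):
    """Return (dL, dx, dp) for a surgical plane elevated by dh above the reference plane."""
    dL = dh * L / H                              # offset on the reference plane
    dx = dh * L / math.sqrt(L**2 + H**2)         # deviation of the tumor-set center
    dp = dx * f / math.hypot(L - dL, H - dh)     # offset in the camera image plane
    return dL, dx, dp

# Illustrative call with H = 400 mm (camera ~40 cm above the site) and
# assumed values L = 200 mm, f = 8 mm, dh = 10 mm:
# dL, dx, dp = coregistration_correction(400.0, 200.0, 8.0, 10.0)
```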

FIGURE 3. Program flow chart for image acquisition, processing, transport, and display on the Google glass side and the host computer side of the surgical navigation system.


RESULTS

The achievable spatial resolutions of the surgical navigation system in the Google glass mode and the HMD mode are compared using a test pattern, as shown in Fig. 5a. The test pattern measures 71 mm × 36 mm and consists of horizontal and vertical stripes with thicknesses from 0.5 to 4 mm, separated by 0.5–4 mm. Quantitative evaluation of the spatial resolution achievable by the human eye through the navigation system is difficult. Therefore, we use a DMC-GF5XGK camera (Panasonic, Japan) to record the test patterns displayed on the Google glass and the HMD device so that their relative spatial resolutions can be semi-quantitatively evaluated. According to Fig. 5b, the Google glass is able to differentiate features smaller than 0.5 mm, with a vertical separation greater than 1.5 mm and a horizontal separation greater than 1 mm. According to Fig. 5c, the HMD device is able to differentiate features smaller than 0.5 mm, with a vertical separation greater than 2 mm and a horizontal separation greater than 1.5 mm. Therefore, under the same illumination condition and the same imaging system configuration, the Google glass is able to differentiate more detail than the HMD device.

FIGURE 4. Correction of the co-registration error between the fluorescence image of the tumor margin and the surgical scene image. (a) Photo of the agar–agar gel phantom with four embedded tumor simulators. (b) Without correction, the co-registration error between the fluorescence image (green) and the surgical scene image is significant. (c) With height correction, the center of the projected tumor simulator set matches the actual position. (d) Schematic of the projection geometry, showing the camera, the reference and surgical planes, and the offsets Δh and ΔL.

FIGURE 5. Relative comparison of the achievable spatial resolutions in the Google glass and the HMD navigation modes. (a) The test pattern is 71 mm × 36 mm, consisting of horizontal and vertical stripes with thicknesses from 0.5 to 4 mm and separation distances from 0.5 to 4 mm. (b) Test pattern image acquired by the Google glass. (c) Test pattern image acquired by the HMD device.


An ex vivo tumor-simulating tissue model, as shown in Fig. 6a, is used to evaluate the navigation system performance and demonstrate the image-guided tumor resection procedure. The tissue model is made by subcutaneous injection of the tumor-simulating material into fresh chicken breast tissue obtained from a local grocery store. Within a few minutes after injection, the liquid mixture cools down to form a solid tumor with the designated geometric features. Augmented reality of the Google glass navigation system is demonstrated by changing the field of view at different positions of the surgical scene and capturing the Google glass display as well as the surrounding surgical scene with the DMC-GF5XGK camera. According to Figs. 6b, 6c, and 6d, the Google glass display seamlessly matches the surrounding surgical scene during navigation. In particular, in Fig. 6c, a portion of the surgical margin is highlighted in the Google glass display but invisible in the surrounding surgical scene, indicating that the Google glass navigation system is able to identify the surgical margin in a natural mode of visual perception and with minimal distraction to the routine surgical procedure.

Considering that a surgeon will make frequent motions during a surgical procedure, it is important to optimize the navigation system configuration in order to obtain high-quality images of the resection margin without a significant "lagging" effect. The motion-induced image lagging is evaluated quantitatively in an experimental setup where the motion of a circular tumor simulator placed on a translational linear stage is tracked by the Google glass navigation system. The navigation system is operated at an overall imaging speed of 6 fps. The velocity of the linear stage is controlled from 0 to 6 meters per minute (m/min). An additional camera is placed behind the Google glass to capture the glass display and the background scene at an imaging speed of 30 fps. The position of the linear stage is adjusted carefully so that half of the circular tumor simulator is shown in the glass display. Figure 7 shows the images of the Google glass display and the background scene at different translational velocities. According to the figure, the navigation system is able to track the surgical scene without significant lagging at a translational velocity of less than 1 m/min. However, a significant lagging effect occurs when the motion velocity is greater than 1 m/min.
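To put these velocities in perspective, at the 6 fps overall imaging speed a relative velocity of 1 m/min (about 17 mm/s) corresponds to a scene displacement of roughly 3 mm between consecutive displayed frames, whereas 6 m/min corresponds to nearly 17 mm per frame, consistent with the visible misalignment in Fig. 7d.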

Image-guided tumor resection surgeries are carried out in the ex vivo tissue model using the wearable surgical navigation system in both the Google glass mode and the HMD mode. Before resection, the surgical scene is imaged by the stationary CCD camera with and without the fluorescence filter. The acquired surgical margin image is transformed, fused with the surgical scene image, and projected to the wearable headset unit for resection guidance.

FIGURE 6. Augmented reality display of the surgical margin image. (a) The ex vivo tumor-simulating model, with tumor-simulating material injected at the location of the white arrow. (b–d) Google glass display and the surrounding surgical scene as the glass moves to different locations.


After resection, the surgical cavity is re-examined by the surgical navigation system to identify any residual lesion and guide the re-resection process. At the end of each experiment, the image fusion of the resected tissue specimen and the surgical cavity is compared with that acquired by an IVIS Lumina III small animal imaging system (Perkin Elmer, Waltham, MA). Figure 8 shows the surgical scenes without fluorescence imaging (a), the surgical margins with fluorescence image fusion (b), the surgical cavity and resected tissue specimen images after resection (c), and the IVIS images of the same tissue specimens (d). The images are acquired by the wearable surgical navigation system in the Google glass and the HMD modes, respectively. These experimental results imply that our wearable surgical navigation system is able to detect the tumor margin with sensitivity and specificity comparable to a commercial fluorescence imaging system.

DISCUSSION

We have developed a wearable surgical navigation system that combines the accuracy of a standard fluorescence imaging system with the mobility of a wearable goggle. The surgical navigation system can be operated in both a Google glass mode and an HMD mode. The performance characteristics of these two operation modes are compared in Table 1. According to the table, Google glass navigation has several advantages over HMD navigation and is suitable for image-guided surgery in clinical settings.

With the current hardware and software configurations, the Google glass surgical navigation system is able to achieve an overall imaging speed of up to 10 fps.


FIGURE 7. Experimental results simulating the effect of the surgeon's motion on the imaging lag of the surgical navigation system. (a) No motion exists between the navigation system and the surgical scene. (b–d) The navigation system moves with respect to the surgical scene at a speed of 1, 2, and 6 m/min, respectively.

FIGURE 8. Image-guided tumor resection surgery on an ex vivo tumor-simulating model. The top row shows images for Google glass guided surgery; the bottom row shows images for HMD guided surgery. (a) Photographic images of the surgical field without the fluorescence filter. (b) Fluorescence images of the surgical margins before resection. (c) Fluorescence images of the surgical margins and the resected tissue samples after resection. (d) IVIS images of the resected tissue sample and the surgical cavity.


Considering that the human visual system processes 10–12 separate images per second, this imaging speed can generate nearly smooth images of the surgical scene. In order to capture the field of view of the surgeon in real time, we also have to limit the surgeon's locomotion velocity. According to the simulated experimental results, our surgical navigation system is able to track the surgical scene without a significant lagging effect if the relative motion between the glass and the surgical scene is kept slower than 1 m/min. Further improvements in hardware design, communication protocol, and image processing algorithms are required to improve intraoperative guidance in clinical surgery.

The wearable surgical navigation system described in this paper can potentially be used to improve the clinical outcome of cancer resection surgery. Possible applications include: (1) identifying otherwise missed tumor foci; (2) determining the resection margin status for both resected tissue specimens and the surgical cavity; (3) guiding the re-resection of additional tumor tissue if positive surgical margins or additional tumor foci are identified; and (4) directing the sampling of suspicious tissues for histopathologic analysis. In addition to cancer surgery, the wearable navigation technique can be used in many other clinical procedures such as wound debridement, organ transplantation, and plastic surgery. The achievable imaging depth of the wearable navigation system is affected by multiple parameters such as tissue type, chromophore concentration, and working wavelength. In general, fluorescence imaging in the near infrared wavelength range may achieve deep penetration in biologic tissue of low absorption. Therefore, ideal clinical applications of the system include surgical navigation tasks in superficial tissue, such as sentinel lymph node biopsy. The system is not suitable for surgical guidance in thick tissue or tissue of high absorbance, such as liver tumor resection. We have previously demonstrated that fluorescence emission from a simulated bile duct structure embedded 7 mm deep in a tissue-simulating phantom can be imaged using ICG-loaded microballoons in combination with near infrared light.14 Further increase of the imaging depth for the wearable navigation system requires spatial guidance provided by other imaging modalities.16,24

Further engineering optimization and validation work is required for successful translation of the proposed navigation system from the benchtop to the bedside. For example, the navigation system needs to be integrated with a topographic imaging module so that 3D profiles of the surgical cavity can be obtained in real time for improved co-registration accuracy. In the current experimental setup, four fiducial markers are placed within the field of view of the wearable navigation system for appropriate image co-registration. For clinical implementation in the future, the fiducial markers will be placed on the patient and the surgeon, and a set of stereo cameras will be used to track the relative locations of the surgical scene with respect to the wearable goggle. Furthermore, our current surgical navigation software grabs only the images within the designated window for further processing and display tasks. Therefore, it is technically flexible to integrate the wearable surgical navigation system with other modalities, such as magnetic resonance imaging, positron emission tomography, computed tomography, and ultrasound, for intraoperative surgical guidance in a clinical setting.

ACKNOWLEDGMENTS

This project was partially supported by the National Cancer Institute (R21CA15977) and the Fundamental Research Funds for the Central Universities. The authors are grateful to Ms. Chuangsheng Yin at the University of Science and Technology of China for helping with the manuscript preparation and to Drs. Edward Martin, Stephen Povoski, Michael Tweedle, and Alper Yilmaz at The Ohio State University for their technical and clinical help and suggestions.

TABLE 1. Performance comparison between HMD guided surgical navigation and Google glass guided surgical navigation.

HMD surgical navigation. Headset: heavy (>250 g), hard to wear. Clinical mobility: poor, requires a USB cable connection. Interference to surgery: blocks regular vision, causes distraction and fatigue. Accuracy: comparable to standard fluorescence imaging. Speed: fast (~30 fps), direct image projection.

Google glass surgical navigation. Headset: light (~36 g), easy to wear. Clinical mobility: good, wireless connection without cable. Interference to surgery: minimal. Accuracy: comparable to standard fluorescence imaging. Speed: slow (6–10 fps), requires wireless data transport.


REFERENCES

1. Brem, H., O. Stojadinovic, R. F. Diegelmann, H. Entero, B. Lee, et al. Molecular markers in patients with chronic wounds to guide surgical debridement. Mol. Med. 13:30–39, 2007.

2. De Grand, A. M., and J. V. Frangioni. An operational near-infrared fluorescence imaging system prototype for large animal surgery. Technol. Cancer Res. Treat. 2:553–562, 2003.

3. Desmettre, T., J. M. Devoisselle, and S. Mordon. Fluorescence properties and metabolic features of indocyanine green (ICG) as related to angiography. Surv. Ophthalmol. 45:15–27, 2000.

4. Haglund, M. M., D. W. Hochman, A. M. Spence, and M. S. Berger. Enhanced optical imaging of rat gliomas and tumor margins. Neurosurgery 35:930–940; discussion 940–941, 1994.

5. Holland, R., S. H. Veling, M. Mravunac, and J. H. Hendriks. Histologic multifocality of Tis, T1-2 breast carcinomas. Implications for clinical trials of breast-conserving surgery. Cancer 56:979–990, 1985.

6. Horst, K. C., M. C. Smitt, D. R. Goffinet, and R. W. Carlson. Predictors of local recurrence after breast-conservation therapy. Clin. Breast Cancer 5:425–438, 2005.

7. Kubota, K., J. Kita, M. Shimoda, K. Rokkaku, M. Kato, et al. Intraoperative assessment of reconstructed vessels in living-donor liver transplantation, using a novel fluorescence imaging technique. J. Hepatobiliary Pancreat. Surg. 13:100–104, 2006.

8. Kuroiwa, T., Y. Kajimoto, and T. Ohta. Development of a fluorescein operative microscope for use during malignant glioma surgery: a technical note and preliminary report. Surg. Neurol. 50:41–48; discussion 48–49, 1998.

9. Kurose, J. F., and K. W. Ross. Computer Networking: A Top-Down Approach. London: Pearson Education, Inc., 2007.

10. Liu, Y., R. Njuguna, T. Matthews, W. J. Akers, G. P. Sudlow, et al. Near-infrared fluorescence goggle system with complementary metal-oxide-semiconductor imaging sensor and see-through display. J. Biomed. Opt. 18:101303, 2013.

11. Liu, P., S. Zhang, and R. X. Xu. 3D topography of biologic tissue by multiview imaging and structured light illumination. Proc. SPIE 8935-0H:1–8, 2014.

12. Martin, E. W., R. Xu, D. Sun, S. P. Povoski, J. P. Heremans, et al. Fluorescence Detection System. US Patent 09/30763, 2009.

13. Mednieks, Z., L. Dornin, G. B. Meike, and M. Nakamura. Programming Android. Sebastopol, CA: O'Reilly Media, 2011.

14. Mitra, K., J. Melvin, S. Chang, K. Park, A. Yilmaz, et al. Indocyanine-green-loaded microballoons for biliary imaging in cholecystectomy. J. Biomed. Opt. 17:116025, 2012.

15. Mothes, H., T. Donicke, R. Friedel, M. Simon, E. Markgraf, and O. Bach. Indocyanine-green fluorescence video angiography used clinically to evaluate tissue perfusion in microsurgery. J. Trauma 57:1018–1024, 2004.

16. Ntziachristos, V., A. G. Yodh, M. D. Schnall, and B. Chance. MRI-guided diffuse optical spectroscopy of malignant and benign breast lesions. Neoplasia 4:347–354, 2002.

17. Shah, S. A., N. Bachrach, S. J. Spear, D. S. Letbetter, R. A. Stone, et al. Cutaneous wound analysis using hyperspectral imaging. Biotechniques 34:408–413, 2003.

18. Soltesz, E. G., S. Kim, R. G. Laurence, A. M. DeGrand, C. P. Parungo, et al. Intraoperative sentinel lymph node mapping of the lung using near-infrared fluorescent quantum dots. Ann. Thorac. Surg. 79:269–277; discussion 277, 2005.

19. Tanaka, E., H. S. Choi, H. Fujii, M. G. Bawendi, and J. V. Frangioni. Image-guided oncologic surgery using invisible light: completed pre-clinical development for sentinel lymph node mapping. Ann. Surg. Oncol. 13:1671–1681, 2006.

20. Waljee, J. F., E. S. Hu, L. A. Newman, and A. K. Alderman. Predictors of re-excision among women undergoing breast-conserving surgery for cancer. Ann. Surg. Oncol. 15:1297–1303, 2008.

21. Weber, W. P., S. Engelberger, C. T. Viehl, R. Zanetti-Dallenbach, S. Kuster, et al. Accuracy of frozen section analysis vs. specimen radiography during breast-conserving surgery for nonpalpable lesions. World J. Surg. 32:2599–2606, 2008.

22. Xu, R. X., J. Ewing, H. El-Dahdah, B. Wang, and S. P. Povoski. Design and benchtop validation of a handheld integrated dynamic breast imaging system for noninvasive characterization of suspicious breast lesions. Technol. Cancer Res. Treat. 7:471–482, 2008.

23. Xu, R. X., K. Huang, R. Qin, J. Huang, J. S. Xu, et al. Dual-mode imaging of cutaneous tissue oxygenation and vascular thermal reactivity. J. Vis. Exp. 2010. doi:10.3791/2095.

24. Zhu, Q., S. H. Kurtzman, P. Hegde, S. Tannenbaum, M. Kane, et al. Utilizing optical tomography with ultrasound localization to image heterogeneous hemoglobin distribution in large breast cancers. Neoplasia 7:263–270, 2005.
