PHYSICAL SIMULATION OF IMAGE-DIRECTED RADIATION
THERAPY OF LUNG TARGETS
by
KAPIL SHARMA
Submitted in partial fulfillment of the requirements
for the degree of Master of Science
Thesis Advisor: Wyatt Newman
Department of Electrical Engineering and Applied Physics
CASE WESTERN RESERVE UNIVERSITY
August 1999
Table of Contents

Chapter 1 Introduction
1.1 Overview of Cancer
1.2 Radiotherapy
1.3 Stereotactic Radiotherapy and Radiosurgery
1.4 Image-Directed Radiation Therapy
1.5 IDRT of Lung Tumors
1.6 Goal and Organization of this Thesis
Chapter 2 Tool-Frame Calibration
2.1 Relation Between Tool Frame and World Frame
2.2 Solving for $P_{6/7}$
2.3 Results
2.4 Conclusions
Chapter 3 Robot-Camera Calibration
3.1 Camera Models
3.1.1 A Distortion-Free Camera Model
3.1.2 Lens Distortion Model
3.2 RAC-Based Camera Calibration
3.3 Computation of 3-D Coordinates from a Calibrated Camera
3.4 Automated Robot/Camera Calibration
3.4.1 Generation of 3-D Points in World Frame for Calibration
3.4.2 Generation of 2-D Image Coordinates
3.4.3 Calibration Computation
3.4.4 Calibration User-Interface Software
3.5 Results
Chapter 4 Treatment Simulation: Physical Components
4.1 Phantom and Film
4.2 Emulation of Target Motion
4.3 Proxy for Target Location
Chapter 5 Treatment Simulation: Software Components
5.1 Hardware Platform
5.2 Real-Time Computation of Target Coordinates
5.3 Graphical Interface
5.3.1 Display
5.3.2 Controls
5.3.3 Menu
5.4 Beam Control
5.5 Node Generation
5.6 Summary of Treatment Simulation Protocol
5.7 Beam Size Selection
Chapter 6 Results and Conclusions
6.1 Results
6.2 Conclusions
6.3 Future Work
Appendix 1
Bibliography
List of Tables

Table 2.1: Identification of $P_{6/7}$ in the Presence of Noise
Table 2.2: Computed Coordinates at a Test Reference Point 1 from 3 Approaches Using Identified $P_{6/7}$
Table 2.3: Computed Coordinates at a Second Test Reference Point from 3 Approaches Using the Same Identified $P_{6/7}$
Table 3.1: Accuracy Test 1
Table 3.2: Accuracy Test 2
Table 5.1: Statistical Results for First Set of Values for Tumor Size and Distance Threshold
Table 5.2: Statistical Results for Second Set of Values for Tumor Size and Distance Threshold
Table 5.3: Resulting Values for Non-Target Coverage and Beam Time Utilization with Change in Distance Threshold
List of Figures

Figure 1.1: Probability of Tumor Tissue and Normal Tissue Morbidity versus Dose
Figure 1.2: Linear Accelerator
Figure 1.3: Cleveland Clinic Cancer Center Cyberknife Treatment System
Figure 1.4: Male Cancer Risks
Figure 1.5: Female Cancer Risks
Figure 1.6: Translational Motion of Lung Tumor during Respiration
Figure 2.1: Tool Used for Calibration
Figure 2.2: Coordinate Frames Defined on the Robot
Figure 2.3: Robot's Tool Tip Touches a Reference Point
Figure 3.1: Camera Coordinate System Assignment
Figure 3.2: Effects of Radial Distortion
Figure 3.3: Effects of Tangential Distortion
Figure 3.4: Communication between Different Hardware Components
Figure 3.5: Calibration Software Interface
Figure 4.1: X-Y Table and Phantom under Treatment Beam Source
Figure 4.2: Parabolic Velocity Curve for PVT Moves
Figure 4.3: Describing a Contour in Segments of PVT Moves
Figure 4.4: Generated Trajectory, Position vs. Time
Figure 4.5: Surface-Mounted LED Used as Proxy
Figure 5.1: Treatment Software Interface
Figure 5.2: Node Generation Software Interface
Figure 5.3: Block Diagram Description of Treatment Simulation
Figure 5.4: Coverage Area Histogram
Figure 6.1: Exposed Film with No Gating
Figure 6.2a: Exposed Film for Manual Gating, 1-D Target Motion
Figure 6.2b: Isodose Lines, Manual Gating, 1-D Motion
Figure 6.2c: Dose Area Histogram, Manual Gating, 1-D Motion
Figure 6.3a: Exposed Film for Automated Gating, 1-D Motion
Figure 6.3b: Isodose Lines, Automated Gating, 1-D Motion
Figure 6.3c: Dose Area Histogram, Automated Gating, 1-D Motion
Figure 6.4a: Exposed Film for Manual Gating, 2-D Motion
Figure 6.4b: Isodose Lines, Manual Gating, 2-D Motion
Figure 6.4c: Dose Area Histogram, Manual Gating, 2-D Motion
Figure 6.5a: Exposed Film for Automated Gating, 2-D Motion
Figure 6.5b: Isodose Lines, Automated Gating, 2-D Motion
Figure 6.5c: Dose Area Histogram, Automated Gating, 2-D Motion
Figure 6.6a: 9 Stacked Films for 3-D Tumor, 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.6b: Isodose Lines for 9 Stacked Films (1 to 9), 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.6c: Dose Volume Histogram for Stacked Films, 2-D Motion, Automated Gating from 5 Beam Approaches
Figure 6.7a: Exposed Film for Traditional Non-Gated Treatment, 2-D Motion and 1 Approach Direction
Figure 6.7b: Isodose Lines for Traditional Treatment Example
Figure 6.7c: Dose Area Histogram for the Traditional Treatment Example
Acknowledgments
This work was supported and motivated by the Cleveland Clinic Foundation,
Department of Radiation Oncology. The above support is gratefully acknowledged.
I would like to thank my advisor, Dr. Wyatt Newman, for his ideas and technical
guidance. I would like to thank the rest of my committee: Dr. Martin Weinhous and Dr.
Michael Branicky. I also appreciate those who helped me at the Clinic, specifically Dr.
Roger Macklis, Mr. Greg Glosser, Dr. Ray Rodebaugh and Dr. Qin Sheng Chen.
I would like to extend my deepest appreciation to my family: without their love
and support, none of this would have been possible.
Physical Simulation of Image-Directed Radiation Therapy of Lung Targets
Abstract
By
KAPIL SHARMA
Traditional radiation therapy systems operate in an open-loop fashion with no
real-time feedback on patient or target position. They are often constrained by the
volume of normal tissue that must be irradiated when treating a moving target such as a
lung tumor (moving with respiration). In this study, a novel means of cancer treatment, image-directed radiation therapy (IDRT), has been explored experimentally. This
treatment method offers the potential for more highly targeted radiation dose delivery to
tumors, reducing the collateral damage to surrounding, healthy tissue. It is shown that
smaller, more conformal fields, irradiating only when the target is within the portal
(known as gating), can provide an increased therapeutic ratio.
1. INTRODUCTION
At the Cleveland Clinic, a novel means of cancer treatment, image-directed
radiation therapy (IDRT), is being explored experimentally. This treatment method
offers the potential for more highly targeted radiation dose delivery to tumors,
reducing the collateral damage to surrounding, healthy tissue. This thesis presents the
motivation for IDRT, identification of the challenges in accomplishing IDRT, and
simulated and experimental results for evaluating the potential benefits of IDRT.
1.1 Overview of Cancer
Cancer is a group of diseases characterized by uncontrolled growth and spread
of abnormal cells. If this process gets out of control, the cells continue to divide,
developing into a mass called a tumor. If a tumor is left untreated, it may invade and
destroy surrounding tissue, leading to the formation of new tumors in new locations, a
process referred to as metastasis.
The National Cancer Institute estimates that approximately 8.2 million
Americans alive today have a history of cancer [1]. About 1,221,800 new cancer
cases are expected to be diagnosed in 1999 [1]. Since 1990, approximately 12 million
new cancer cases have been diagnosed. Lifetime risk refers to the probability that an
individual, over the course of a lifetime, will develop cancer or die from it. In the
US, men have a 1 in 2 lifetime risk of developing cancer; for women the risk is 1 in 3.
Treatment choices for a person with cancer depend on the type and stage of
the tumor, that is, whether it has spread and how far. Treatment options may include
surgery, radiation, chemotherapy, hormone therapy, and immunotherapy. Often
several forms of treatment are combined to increase efficacy. For example,
surgery can be followed by chemotherapy or radiation therapy to ensure the
elimination of cancerous cells. Selecting the appropriate form of treatment from
among these choices requires clinical experience.
Surgery is the oldest form of treatment for cancer and remains one of the most
important treatment components for solid tumors. Before the discovery of anesthesia
and antisepsis (methods such as sterilization of instruments to prevent infection),
surgery was performed with great discomfort and risk to the patient. Today surgery
offers the greatest chance for cure for many types of cancer. About 60% of people
with cancer will have some type of surgery [2]. The aim of surgery is to remove
malignant growth as completely and rapidly as possible. Surgery alone can be
curative in patients with localized disease, but because many patients (~70%) have
evidence of micro-metastases at diagnosis, combining surgery with other treatment
modalities is usually necessary to achieve higher response rates [2]. Also, reducing
the tumor mass in certain cancers can increase the effectiveness of subsequent
radiation therapy or chemotherapy, both of which are most effective against small
numbers of cancer cells.
Chemotherapy is one of the most recent cancer treatment modalities: the
use of medicines (drugs) to treat cancer. Systemic
chemotherapy uses anticancer (cytotoxic) drugs that are usually given intravenously
or orally. These drugs enter the bloodstream and reach all areas of the body, making
this treatment potentially useful for cancer that has spread. It can include one drug or
several drugs, taken from a choice of different available drugs.
Chemotherapy drugs work by interfering with the ability of a cancer cell to
divide and reproduce itself. The affected cells become damaged and eventually die.
As the drugs are carried in the blood, they can reach cancer cells all over the body.
Unfortunately, chemotherapy drugs can also affect normal cells, sometimes causing
side effects that range from unpleasant to toxic. Chemotherapy is particularly valuable
as the primary form of treatment for cancers that do not form a discrete mass, such as
leukemia and lymphoma.
Radiation therapy is one of the major treatment modalities for cancer.
Approximately 60% of all people with cancer will be treated with radiation therapy
sometime during the course of their disease [2]. With advances in radiobiology and
equipment technology, radiation therapy can now be delivered with maximum
therapeutic benefits, minimizing toxicity and sparing healthy tissues. In addition to
its therapeutic benefits, radiotherapy is a non-invasive or minimally invasive
procedure.
Radiotherapy, or radiation therapy, is the treatment of cancer and other
diseases with ionizing radiation. Ionizing radiation deposits energy that injures or
destroys cells in the area being treated (the target tissue) by damaging their DNA
structure, making it impossible for these cells to continue to grow (mitotic death).
Although normal cells can also be affected by ionizing radiation, they are usually
better able to repair their DNA damage. Radiation therapy may be used to treat
localized solid tumors, such as cancers of the skin, brain, breast and lung. It can also
be used to treat leukemia and lymphoma.
1.2 Radiotherapy
A novel approach to radiation therapy, image-directed radiation therapy, is the
focus of this thesis. Soon after the discovery of X-rays by Roentgen in 1895, radiation's
dramatic effects on normal tissues were discovered [3]. The higher the energy of the
X-rays, the deeper the X-rays can penetrate into target tissue. Linear accelerators are
machines that produce X-rays of successively greater energies. The use of these
machines to focus radiation (such as X-rays) on a cancer site is called external beam
radiotherapy.
Gamma rays are another form of photons used in radiotherapy. Gamma
rays are produced spontaneously as certain elements (such as radium, uranium and
cobalt 60) release radiation as they decay. X-rays and gamma rays have the same
effect on cancer cells.
Another technique for delivering radiation to cancer cells is to place
radioactive implants directly on or in a tumor or body cavity. This is called internal
radiotherapy. Brachytherapy, interstitial irradiation, and intracavitary irradiation are
the types of internal radiotherapy [2]. In this treatment, the radiation dose is
concentrated in a small area, and the patient usually stays in the hospital for a few days.
Internal radiotherapy is frequently used for cancers of the tongue, uterus, cervix,
prostate and others.
An investigational approach is particle beam radiation therapy, in which fast-moving
subatomic particles (such as neutrons, pions, and heavy ions) are used instead of
photons.
Figure 1.1: Probability of Tumor Tissue and Normal Tissue Morbidity versus Dose (reprinted from [4])
Radiation's effect on individual cells is a probabilistic process [4]. However,
the effects of radiation on a large set of cells are more deterministic. As shown in
figure 1.1, there is a minimum dose threshold to achieve a clinical effect and a
maximum dose above which all cells will demonstrate the effect. The primary aim of
radiotherapy is to deliver a high dose to maximize the probability of tumor control
while keeping the risk to normal tissue below an intolerable level. In certain areas, the
radiosensitivity of surrounding normal tissue becomes the dominant factor (e.g., the optic
chiasm in brain tumors, the spine in lung tumors), thus limiting the maximum dose
that can be delivered. Some tissues, such as in the lung, have a low dose
threshold for permanent radiation effects. Doses as low as 25 Gray (joules/kg) can
lead to permanent damage, resulting in the loss of lung functionality.
Figure 1.2: Linear Accelerator
In traditional radiotherapy a medical linear accelerator (figure 1.2) is used to
deliver a dose to target tissue from one or more angles, typically 2-4 angles.
Fractionation (dividing the treatment over time into multiple smaller doses or
fractions of radiation) is used to improve the radiation effect on the tumor while
minimizing the effect on normal cells. The rationale behind fractionation is that
normal tissue tolerates small, daily doses of radiation relatively well, while the tumor
does not, so the accumulated daily doses result in control of the tumor.
1.3 Stereotactic Radiotherapy and Radiosurgery
Stereotactic technology has been applied to neurosurgery since the early
nineties [5,6]. Recently, it has been applied to radiation treatment of tumors,
particularly brain tumors [7,8].
Stereotactic radiotherapy involves varying the angle of a radiation treatment
beam in 3-D together with varying beam intensities to achieve very precise delivery of
radiation to target tissue. Radiation beams are aimed at a focal point. The dose
distributions achieved by these techniques assure large doses to the target volume and
much lower doses to the surrounding normal tissues. Most of the time spent during
the procedure is in precisely planning the delivery of radiation beams to focus on the
tumor and minimize damage to surrounding, normal tissue. This is known as
conformal treatment planning. Stereotactic radiotherapy is primarily used for
treatment of brain tumors. A head frame is attached to the patient's skull; with the
assistance of a CT or MRI scanner providing a three-dimensional image, the frame
helps pinpoint the tumor location without opening the skull. Further, stereotactic
radiosurgery is typically given as a single treatment (single fraction) whereas
stereotactic radiotherapy is given as a course of treatments (multiple fractions). The
Cleveland Clinic has four kinds of external beam treatment systems: standard medical
linear accelerators, the Leksell Gamma Knife [9], a Peacock intensity-modulated
radiation therapy system [10], and a Cyberknife image-directed therapy system [11, 12].
The Gamma Knife provides non-fractionated stereotactic radiosurgery. The
others are capable of both stereotactic radiosurgery and fractionated stereotactic
radiotherapy. The Gamma Knife functions by delivering beams from 201 Cobalt-60
sources to a focal point. The standard medical accelerators deliver radiation using
beam arcs. The Peacock uses a fan beam with intensity modulated X-rays within
the fan to achieve a conformal dose distribution. The Cyberknife delivers radiation
from a miniature accelerator mounted on a robotic manipulator under real-time
image-directed, computer control to provide a confromal dose distrbution.
1.4 Image-Directed Radiation Therapy (IDRT)
Interactive image-guided surgery has been used in the field of neurosurgery
[13], but its use in the field of radiation treatment is very new [12,14]. Conventional
stereotactic radiation therapy involves the use of a frame rigidly attached to the patient's
skull to provide a reference for both targeting and treatment. The idea is that after
positioning the patient with the help of a frame, if a beam is constrained to pass
through a particular point in the frame coordinate system, it will also pass through the
intended target within a patient. But this assumes that the patient does not move after
alignment is done. It is an open-loop treatment system in the sense that once the
alignment is done, there is no adjustment for subsequent motion of patient or tumor.
This assumption is reasonable for targets within the skull when a frame is bolted to
the skull and also rigidly fixtured to ground. Image-directed radiation therapy uses
real-time images of the target (or fiducial markers in or around the target) in place of
the frame to alter the aim of the radiation source so that the intended target is always
in the beam's path, hence providing a closed-loop system.
Currently the Cyberknife (see figure 1.3) is the only radiation treatment
system using this technology. It uses a pair of orthogonal, ceiling-mounted,
diagnostic-quality X-ray sources to provide near real-time feedback of patient position. The
treatment source is a miniature X-band linear accelerator manipulated by a six
degree-of-freedom Fanuc robot. The system has a set of predefined treatment
"nodes," or directions, from which a portion of a treatment can be delivered.
Selection of particular nodes and the dose delivered from each node is done by
computerized treatment planning. During treatment, the robot sequentially moves the
accelerator to each of the selected nodes, waits while the real-time diagnostic
imager acquires a pair of target/anatomy images, and compares and registers the
diagnostic images with synthetic images reconstructed from previously acquired CT
data. This comparison enables the system to detect whether any patient motion has occurred;
if so, the robot moves the accelerator to correct for that motion. As long as the patient
motion is less than 1 centimeter, the system will automatically correct for the motion.
Figure 1.3: Cleveland Clinic Cancer Center Cyberknife Treatment System
1.5 IDRT of Lung Tumors
Lung cancer is the most common cancer-related cause of death among men
and women. It is the most commonly occurring cancer (figures 1.4 and 1.5) among
men and women. An estimated 171,600 new lung cancer cases will be diagnosed in 1999
[15], accounting for 14% of cancer diagnoses. An estimated 158,900 deaths due to
lung cancer will occur in 1999, accounting for 28% of all cancer deaths [15].
Figure 1.4: Male Cancer Risks [15] Figure 1.5: Female Cancer Risks [15]
One of the difficulties of radiation treatment of lung tumors is that, of all
tumors, lung tumors demonstrate the greatest motion and deformation, due to both
breathing and heartbeat (figure 1.6). During treatment, however, there is no
adjustment for this motion in real time. Instead, a wider treatment beam is used to
conservatively guarantee that the target remains inside the beam [17].
Figure 1.6: Translational Motion of Lung Tumor during Respiration [16]
Tumor identification is done using computerized tomography. Physicians
draw outlines of tumors and critical structures using these images. Using a prescribed
minimum dose to the tumor and maximal dose to critical structures, a dosimetrist
uses a computer treatment planning system to calculate the optimal treatment. At
present, the area of the beam is made larger than the tumor area to ensure coverage of
all cancerous tissue and to account for motion. This margin is usually ~2 cm [17].
Finally, the length of time that the beam is on is at least several seconds, which is
longer than the breathing cycle. Conventional treatment planning and delivery cannot
fully account for the fundamental inaccuracy of using static images and no feedback
to treat a moving tumor. This provides a motivation for the use of Image-Directed
Radiation Therapy to provide a closed-loop treatment system adjusting the beam with
tumor motion.
The Cyberknife currently is being used for treatment of tumor sites within the
skull and near the spinal cord. The imaging system in Cyberknife keys on rigid skull
features to perform image correlation with CT images. One of the greatest
advantages of the Cyberknife system is that it has six degrees of freedom. This
flexibility allows the system to be used for the treatment of extracranial tumor sites.
But the system, in its present form, cannot be used for treatment of extracranial
tumors, specifically lung tumors, due to the following constraints.
1. The image quality of the X-ray images is poor, so the system can only use rigid structures
or bones to perform image correlation. In the case of lung tumors this is particularly
problematic, as the number of obstructions and occlusions in the torso makes
automatic detection of tumors nearly impossible in real time.
2. Assuming tumors could be identified within the images, the current image
correlation takes around 6 seconds, which would render the system useless for
treatment of moving lung tumors. A typical tumor has a motion period of 1-3
seconds, during which it can move anywhere from 0 to 2 centimeters.
3. The Cyberknife is a "point-and-shoot" system. It is not designed to track tumors.
4. In addition to the technical challenges of adapting the system for treatment of
other tumor sites, there are legal and regulatory challenges. The Food and Drug
Administration (FDA) must approve all experimental devices and treatments. While
the Cyberknife is presently approved under an Investigational Device Exemption for
treatment of intracranial tumors, treatment outside the skull requires additional FDA
approval. These constraints can be overcome to a certain extent with the use of image
proxies and human interaction.
A proxy is an indirect, external, visible marker, which can be used to infer the
position of a tumor inside the body. The proxy position can be determined with the
help of calibrated video cameras. If a coordinate transform between a proxy and a
tumor is known, the position of the tumor can be computed from the proxy location.
Use of a proxy can thus avoid dependence on unreliable, poor-quality diagnostic
imaging for computing the 3-dimensional tumor positions. Given a reliable transform
between a proxy (or proxies) and a target tumor, it would be possible to identify
tumor coordinates reliably and accurately using conventional video cameras.
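As a concrete illustration (the frame-composition notation follows chapter 2; the proxy-to-tumor offset $P_{\mathrm{proxy}/\mathrm{tumor}}$ is an assumed, illustrative quantity, not a value from this thesis), the tumor position in world coordinates would follow from the measured proxy pose as:

$$P_{0/\mathrm{tumor}} = P_{0/\mathrm{proxy}} + R_{0/\mathrm{proxy}}\,P_{\mathrm{proxy}/\mathrm{tumor}}$$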
Assuming fast, accurate, and reliable identification of tumor coordinates, one
could exploit control over treatment beam power or aim to achieve more precise
radiation dose delivery. In this scenario, a physician would see a real-time display of
tumor and beam coordinates on a screen and could gate or track the beam using a
mouse or keypad or joystick. Here, gating means turning the beam on whenever the
tumor is in position, as opposed to tracking, which means following the target with
the beam turned on. Previous work done in computer simulation has shown that
using real-time feedback of images, a trained physician can treat a tumor with
increased dose while reducing the dose to healthy tissue [18].
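A minimal sketch of the gating rule just described; the function name, coordinate inputs, and the 2 mm threshold are illustrative assumptions, not the thesis software:

```python
import numpy as np

def beam_should_fire(tumor_xyz, beam_aim_xyz, threshold_mm=2.0):
    """Gating: the beam is switched on only while the (proxy-inferred)
    tumor center lies within threshold_mm of the beam's aim point."""
    distance = np.linalg.norm(np.asarray(tumor_xyz) - np.asarray(beam_aim_xyz))
    return distance <= threshold_mm
```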
1.6 Goal and Organization of this Thesis
The purpose of this study was to evaluate the feasibility of image-directed
gated treatment of lung tumors using the Cyberknife. A treatment environment was
simulated using both hardware as well as software. The experimental testbed
consisted of the following main components:
The Cyberknife system with a robotically manipulated linear accelerator.
Experimental target and means for measuring the results.
Generation of motion emulating the trajectory of a lung tumor due to respiration.
Choice of a proxy to infer the position of a tumor inside the phantom.
A calibrated video camera.
Means to compute real-time 3-D tumor coordinates from video images of moving
proxies.
Real-time graphical display of computed tumor and beam coordinates.
Means to manually or automatically modulate (gate) the radiation beam.
This thesis is organized as follows:
Calibration of the robot's tool frame with respect to the robot's base frame is
discussed in chapter 2. Chapter 3 discusses the camera calibration technique
employed. Chapter 4 describes the physical components of the experimental testbed.
Chapter 5 describes the software components of the testbed. Finally the results and
conclusions are presented in chapter 6.
2. TOOL-FRAME CALIBRATION
Success of image-directed radiation therapy depends critically on accurate
calibration between computed beam coordinates and computed target coordinates.
Achieving this calibration requires identification of multiple coordinate
transformations. Coordinate transforms include: robot joint angles to tool-flange
position and orientation (with respect to the robot base frame coordinates); tool frame
(e.g. radiation beam) coordinates to robot tool-flange coordinates; camera-frame
coordinates to robot base-frame coordinates; and proxy coordinates to target
coordinates. Identification of the first coordinate transform, i.e., robot joint angles to
tool-flange position and orientation, has already been done by the robot's
manufacturer. Identification of all other transformations was a part of this thesis. In
this respect, the first step was identification of the tool frame to tool-flange coordinate
transformation.
To reconcile treatment-beam coordinates with camera coordinates, an
intermediate step was used, involving a tool which was easy to align with the beam
and easily recognized by the camera. The tool was a modified calibration pointer,
which fit precisely within a mount aligned collinear with the beam axis. The pointer
was retrofit with a light-emitting diode (LED) at its tip, which was easily recognized
in camera scenes by simple thresholding. The mounted tool is shown in figure 2.1.
Calibration was performed in two steps. First, the tool frame transform (from robot
flange coordinates to pointer tip) was identified using a fixed reference point, then the
camera was calibrated using the tool. This chapter describes the tool-frame
calibration; chapter 3 presents the camera calibration.
2.1 Relation between Tool Frame and World Frame
The Cyberknife robot has a default tool frame defined on its tool flange.
Whenever the robot is jogged in space, the 3-D coordinates corresponding to the
robot's forward kinematics from base frame to tool-flange frame are computed and
displayed. Figure 2.2 shows the world frame, the default tool-flange frame, and the
new tool frame defined parallel to the tool-flange frame.
Figure 2.1: Tool Used for Calibration
Figure 2.2: Coordinate Frames Defined on the Robot
In figure 2.2, subscript 0 refers to the world frame coordinates, subscript 6
refers to the default tool-flange coordinate frame, and subscript 7 refers to the defined
tool-frame at the pointer tool tip.
We can express the following relation among the different frames [19]:

$$P_{0/7} = P_{0/6} + R_{0/6}\,P_{6/7} \qquad (2.1)$$

where $P_{0/7}$ is the position of the origin (the LED center) of tool frame 7 with respect to
the world frame 0, $P_{0/6}$ is the position of the origin of the default tool-flange frame 6
with respect to the world frame 0, $P_{6/7}$ is the position of the origin of the tool frame 7
with respect to the default tool-flange frame 6, and $R_{0/6}$ is the rotation matrix of the default
tool-flange frame 6 with respect to world frame 0.
Let w be the yaw angle, which is the angle of rotation between frame 6 and
frame 0 about the x axis, p be the pitch angle, which is the corresponding angle about
the y axis, and r be the roll angle, which is the corresponding angle about the z axis.
Then the rotation matrix $R_{0/6}$ can be written as:

$$R_{0/6} = R^z_{0/6}\,R^y_{0/6}\,R^x_{0/6}$$

where the superscripts $x, y, z$ represent the rotation matrices for yaw, pitch, and roll
respectively. Notice that the order of rotation is yaw, then pitch, then roll. The order
of rotation is important because matrix multiplication is not commutative. Also note
that the defined tool frame is parallel to the default tool frame, so the $w, p, r$ rotations for
the defined tool frame are the same as for the default tool frame. The matrices for
yaw, pitch, and roll can be written as [19]:

$$R^x_{0/6} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(w) & -\sin(w) \\ 0 & \sin(w) & \cos(w) \end{bmatrix}, \quad
R^y_{0/6} = \begin{bmatrix} \cos(p) & 0 & \sin(p) \\ 0 & 1 & 0 \\ -\sin(p) & 0 & \cos(p) \end{bmatrix}, \quad
R^z_{0/6} = \begin{bmatrix} \cos(r) & -\sin(r) & 0 \\ \sin(r) & \cos(r) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
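A minimal numpy sketch of this composition (the function name is illustrative; angles are assumed to be in radians):

```python
import numpy as np

def rotation_0_6(w, p, r):
    """Build R_{0/6} = R^z R^y R^x from yaw w, pitch p, roll r (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w),  np.cos(w)]])   # yaw, about the x axis
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [ 0,         1, 0        ],
                   [-np.sin(p), 0, np.cos(p)]])   # pitch, about the y axis
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0,          0,         1]])   # roll, about the z axis
    return Rz @ Ry @ Rx  # yaw applied first, then pitch, then roll
```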
Rearranging equation 2.1 we have:
$$P_{0/7} - R_{0/6}\,P_{6/7} = P_{0/6} \qquad (2.2)$$

Equivalently, we can write equation 2.2 as

$$\begin{bmatrix} I & -R_{0/6} \end{bmatrix}_{3\times 6} \begin{bmatrix} P_{0/7} \\ P_{6/7} \end{bmatrix}_{6\times 1} = \begin{bmatrix} P_{0/6} \end{bmatrix}_{3\times 1} \qquad (2.3)$$

where $I$ is the $3\times 3$ identity matrix. This is of the form

$$A_{3\times 6}\,X_{6\times 1} = B_{3\times 1}$$

Here we have 6 unknowns, given by the vector $X$, and only 3 equations, which cannot be solved
for a unique solution. We need at least three more equations to obtain a
solution, which we obtain as follows.
2.2 Solving for $P_{6/7}$
A reference point is used for generating more equations to solve for the
unknowns. The tool tip, i.e. the LED, is touched to the reference point from different
directions (see figure 2.3).
Figure 2.3: Robot's Tool Tip Touches a Reference Point
Since the reference point is unchanged, the tip position $P_{0/7}$ is constant. If $n$ denotes
the number of different directions from which the tool tip touches the reference point,
we have the following equation:

$$\begin{bmatrix} I & -R^1_{0/6} \\ I & -R^2_{0/6} \\ \vdots & \vdots \\ I & -R^n_{0/6} \end{bmatrix} \begin{bmatrix} P_{0/7} \\ P_{6/7} \end{bmatrix} = \begin{bmatrix} P^1_{0/6} \\ P^2_{0/6} \\ \vdots \\ P^n_{0/6} \end{bmatrix} \qquad (2.4)$$

where the superscript $1, 2, \ldots, n$ corresponds to the approach angle of each of the $n$
measurements.
Note that $P^i_{0/6}$ and $R^i_{0/6}$ are known for each case $i$ from the robot controller's
display of forward kinematics to the tool flange.

For $n = 2$ we have 6 unknowns and 6 simultaneous equations, which can be
easily solved to compute the solution. For $n > 2$ we can compute the least-squares
solution using the following method.
Equation 2.4 is equivalent to the following form:

$$A_{3n\times 6}\,X_{6\times 1} = B_{3n\times 1} \qquad (2.5)$$

where

$$A = \begin{bmatrix} I & -R^1_{0/6} \\ I & -R^2_{0/6} \\ \vdots & \vdots \\ I & -R^n_{0/6} \end{bmatrix}, \qquad
X = \begin{bmatrix} P_{0/7} \\ P_{6/7} \end{bmatrix}, \qquad
B = \begin{bmatrix} P^1_{0/6} \\ P^2_{0/6} \\ \vdots \\ P^n_{0/6} \end{bmatrix}$$

Computing the pseudo-inverse as:

$$A^{+} = (A^T A)^{-1} A^T \qquad (2.6)$$

the least-squares solution follows as:

$$\hat{X} = A^{+} B \qquad (2.7)$$

Further, we can also compute the least-squares error as:

$$Error = \frac{1}{3n}\,[A\hat{X} - B]^T [A\hat{X} - B] \qquad (2.8)$$
The following are the steps used for tool-frame calibration (a numerical sketch follows the list):
Use the default tool frame as the robot's tool frame.
Jog the robot and touch the tool tip to a reference point from multiple different
directions.
Record the roll angle, yaw angle, pitch angle, and tool position $P_{0/6}$ for each such
pose.
Compute the solution for the tool frame per equations 2.6 and 2.7, solving for
$P_{6/7}$ and $P_{0/7}$.
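A minimal numpy sketch of the least-squares solve of equations 2.5-2.8 (the names are illustrative; the poses would come from the recorded teach-pendant readouts):

```python
import numpy as np

def solve_tool_frame(P_06_list, R_06_list):
    """Stack equation 2.4 and solve A X = B for X = [P_{0/7}; P_{6/7}]."""
    I = np.eye(3)
    A = np.vstack([np.hstack([I, -R]) for R in R_06_list])          # (3n x 6)
    B = np.concatenate([np.asarray(p, float) for p in P_06_list])   # (3n,)
    X, *_ = np.linalg.lstsq(A, B, rcond=None)   # equivalent to (A^T A)^-1 A^T B
    P_07, P_67 = X[:3], X[3:]
    resid = A @ X - B
    error = resid @ resid / (3 * len(P_06_list))                    # equation 2.8
    return P_07, P_67, error
```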
2.3 Results
To test the solution, first a set of synthetic data was generated. The data
included values of $P_{0/7}$, $P_{0/6}$, $R_{0/6}$, and $P_{6/7}$ which solved equation 2.1. In the first
experiment, no error value was introduced, allowing for a perfect solution. In
subsequent analysis, uniform random noise of 1 mm, 2 mm, 4 mm, 6 mm, and 8 mm
peak value was added to the values of $P_{0/6}$ and $R_{0/6}$. Fifteen different sets of
$P_{0/6}$ and $R_{0/6}$ were generated. Equations were solved by the method described in
section 2.2, and the resulting values for $P_{0/7}$ and $P_{6/7}$ were recorded. The results
obtained are summarized in table 2.1:
Synthetic random error (peak)   Computed X (mm)   Computed Y (mm)   Computed Z (mm)   Calculated error (mm)
1 mm                            -830.309          0.529             109.254           0.499
2 mm                            -830.136          0.855             108.962           0.999
4 mm                            -829.790          1.505             108.378           1.998
6 mm                            -829.443          2.15              107.795           2.998
8 mm                            -829.0976         2.80758           107.211           3.99

Table 2.1: Identification of $P_{6/7}$ in the Presence of Noise. Actual $P_{6/7}$ = {-830.48, 0.204, 109.54}
The X, Y, and Z coordinates in table 2.1 are the coordinates of $P_{6/7}$, and the error is
computed by equation 2.8.
For the purpose of tool-frame computation, the robot touched the reference
point from 15 different directions. The resulting error calculated by equation 2.8 was
2.1 mm. To test the accuracy of the tool-frame coordinate identification, the tool
frame used by the robot's controller was changed to the computed tool frame, and the
LED tip was touched to the reference point from different directions. The location of
the reference point was different from the location used for calibration of the tool
frame. The values for the X, Y, and Z world coordinates were recorded from the robot's
teach pendant display. Tables 2.2 and 2.3 summarize the results for two different test
point locations.
X (mm)     Y (mm)    Z (mm)    Euclidean distance from centroid (mm)
2185.00    654.654   89.417    3.01
2182.093   655.068   89.337    0.36
2179.344   654.582   88.598    2.8

Table 2.2: Computed Coordinates at a Test Reference Point 1 from 3 Approaches Using Identified $P_{6/7}$
X (mm)     Y (mm)    Z (mm)    Euclidean distance from centroid (mm)
2155.488   536.00    133.207   3.06
2157.594   538.065   133.716   0.64
2160.84    538.67    134.715   3.37

Table 2.3: Computed Coordinates at a Second Test Reference Point from 3 Approaches Using the Same Identified $P_{6/7}$
2.4 Conclusions
Use of the identified coordinate transform in the robot kinematic computations
resulted in positioning errors in excess of 3 mm. For treatment, beam positioning
accuracy should be better than 2 mm. However, such precision cannot be obtained
through improved tool-frame identification alone. The sources of the error may include the
robot "mastering" (joint-angle calibration), transmission wind-up or backlash, gravity
droop, or other effects not included in a rigid-link kinematic model. Section 5.5 will
discuss a method to further improve the precision by adding pre-computed
offsets for each required pose.
3. ROBOT-CAMERA CALIBRATION
The most important step in our treatment testbed is obtaining the
3-dimensional coordinates of a proxy, which can later be used to compute the
3-dimensional location of a tumor. A video camera is used to obtain the positional
information of a proxy in the robot's base frame. The first and foremost requirement
in this process is robot-camera calibration. Robot-camera calibration means
obtaining the transformation parameters between a camera's image frame and a
robot's base frame. We first discuss different camera models, then present our
calibration procedure, and conclude with our calibration results.
3.1 Camera Models
3.1.1 A Distortion-Free Camera Model
The purpose of a model is to relate the coordinates of a point in a camera's
image frame to the coordinates of the corresponding point in space, expressed in a
reference coordinate system. Let $\{X_w, Y_w, Z_w, O_w\}$ denote the world coordinate
system centered on the world frame origin $O_w$; let $\{X_c, Y_c, Z_c, O_c\}$ denote the camera
coordinate system, whose origin is at the optical center point $O_c$ and whose $Z_c$ axis
coincides with the optical axis; and let $\{X_i, Y_i, O_i\}$ denote the image coordinate
system centered at $O_i$ (at the intersection of the optical axis $Z_c$ and the image plane, as
illustrated in figure 3.1). The image frame axes $\{X_i, Y_i\}$ lie on a plane parallel to the
$X_c$ and $Y_c$ axes. Let $(x_w, y_w, z_w)$, $(x_c, y_c, z_c)$, and $(x_i, y_i)$ be the coordinates of a point
in the world, camera, and image frames respectively. The transformation of the point $P$
from the world coordinates $p_w$ to the camera coordinates $p_c$ is given by:
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + t \qquad (3.1)$$

where the rotation matrix $R$ and translation vector $t$ are written as:

$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}
\qquad \text{and} \qquad
t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$
Figure 3.1: Camera Coordinate System Assignment
We invoke the standard distortion-free "pin-hole" model assumption that
every real object point is connected to its corresponding image point through a
straight line that passes through the focal point of the camera lens [23]. The
following perspective equations result, relating the coordinates of point $p$ expressed in the
camera frame to coordinates in the image plane:

$$u = f\,\frac{x_c}{z_c} \qquad (3.2)$$

$$v = f\,\frac{y_c}{z_c} \qquad (3.3)$$

In the above, $f$ is the (effective) focal length of the camera and $(u, v)$ are the analog
coordinates of the object point in the image plane. The image coordinates $(x_i, y_i)$ are
related to $(u, v)$ by the following equations:
$$x_i = s_u\,u \qquad (3.4)$$

$$y_i = s_v\,v \qquad (3.5)$$

The scale factors $s_u$ and $s_v$ not only account for TV scanning and timing
effects, but also perform unit conversion from the camera coordinates $(u, v)$, whose units
are meters, to the image coordinates $(x_i, y_i)$ measured in pixels.
The camera calibration parameters are divided into extrinsic parameters (the
elements of $R$ and $t$), which convey information about the camera position and
orientation with respect to the world coordinate system, and intrinsic parameters
(such as $s_u$, $s_v$, $f$, and the distortion coefficients that will be discussed later), which
convey the internal information about the camera components and about the interface
of the camera to the vision system (frame grabber).
Since there are only two independent parameters in the set of intrinsic
parameters $s_u$, $s_v$, and $f$, it is convenient to define:

$$f_x = f\,s_u \qquad (3.6)$$

$$f_y = f\,s_v \qquad (3.7)$$

Combining the above equations with equation 3.1 yields the undistorted
camera model that relates coordinates in the world frame $\{X_w, Y_w, Z_w\}$ to the image
coordinate system $\{X_i, Y_i\}$:

$$x_i = f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.8)$$
$$y_i = f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.9)$$
Note that the image (pixel) coordinates stored in the computer memory of the
vision system are generally not equal to the image coordinates $(x_i, y_i)$ computed by
equations 3.8 and 3.9. Let $(x_f, y_f)$ be the image (pixel) coordinates stored in the
computer's memory for an arbitrary point, and let $(c_x, c_y)$ be the computed image
coordinates for the center $O_i$ in the image plane. $(x_i, y_i)$ is then related to
$(x_f, y_f)$ by the relation

$$x_i = x_f - c_x, \qquad y_i = y_f - c_y$$

The ideal values of $c_x$ and $c_y$ are at the center of the pixel array, but in reality
there is usually an uncertainty of about 10-20 pixels [25, 26].
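A minimal sketch of the distortion-free model of equations 3.1, 3.8, and 3.9, including the pixel-offset relation above (the function and parameter names are illustrative, not from the thesis software):

```python
import numpy as np

def project_point(p_w, R, t, fx, fy, cx, cy):
    """Project a world point to pixel coordinates with the pinhole model."""
    x_c, y_c, z_c = R @ np.asarray(p_w, float) + t   # equation 3.1
    x_i = fx * x_c / z_c                             # equation 3.8
    y_i = fy * y_c / z_c                             # equation 3.9
    return x_i + cx, y_i + cy                        # pixel coordinates (x_f, y_f)
```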
3.1.2 Lens Distortion Model
Actual cameras and lenses include a variety of aberrations and thus do not
obey the above ideal model. The main sources of error are:
a) Image spatial resolution, defined by spatial digitization, is relatively low (e.g.,
512x480).
b) Lenses introduce distortion.
c) Camera assembly involves a considerable amount of internal misalignment; e.g.,
the center of the CCD sensing array may not be coincident with the optical
principal point (the intersection of the optical axis with the image plane).
d) Hardware timing introduces mismatches between the image acquisition hardware
and the camera scanning hardware.
As a result of several types of imperfections in the design and assembly of
lenses, the distortion-free pinhole model may not be sufficiently accurate. Accuracy can be
improved by models that take into account positional errors due to distortion:

$$\hat{u} = u + D_u(u, v) \qquad (3.10)$$

$$\hat{v} = v + D_v(u, v) \qquad (3.11)$$

where $u$ and $v$ are the unobservable distortion-free image coordinates, and $\hat{u}$ and $\hat{v}$
are the corresponding coordinates taking distortion into account.
Figure 3.2: Effects of Radial Distortion [22]

Figure 3.3: Effects of Tangential Distortion [22]
Two types of lens distortion are radial and tangential distortion, as shown in
figures 3.2 and 3.3. Radial distortion causes an inward or outward displacement
of a given image point from its ideal location. This type of distortion is mainly
caused by flawed radial curvature of the lens elements. Camera calibration researchers
have argued and experimentally verified that radial distortion is the dominant distortion
effect [24]. We can approximate the radial component of the distortion as:

$$D_u(u, v) = k\,u\,(u^2 + v^2) + O([u, v]^5) \qquad (3.12)$$

$$D_v(u, v) = k\,v\,(u^2 + v^2) + O([u, v]^5) \qquad (3.13)$$

The higher-order terms can, for all practical purposes, be dropped. Substituting the
above into equations 3.10 and 3.11 yields

$$\hat{u} = u\,(1 + k r^2), \qquad \hat{v} = v\,(1 + k r^2)$$

where

$$r^2 = u^2 + v^2$$
Because the undistorted image coordinates $u$ and $v$ are unknown, it is
desirable to replace them with the measurable image coordinates $x_i$ and $y_i$. Thus,

$$r^2 = (x_i / s_u)^2 + (y_i / s_v)^2$$

Define the radial distortion coefficient $\bar{k} \equiv k / s_v^2$, and the ratio of scale factors
$\mu$ as:

$$\mu = \frac{f_y}{f_x} = \frac{s_v}{s_u} \qquad (3.14)$$
Further, define

$$r_i^2 \equiv \mu^2 x_i^2 + y_i^2 \qquad (3.15)$$

With the above substitutions, one obtains the following camera model, which takes into
account small radial-distortion effects:

$$x_i\,(1 + \bar{k}\,r_i^2) = f_x\,\frac{r_1 x_w + r_2 y_w + r_3 z_w + t_x}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.16)$$

$$y_i\,(1 + \bar{k}\,r_i^2) = f_y\,\frac{r_4 x_w + r_5 y_w + r_6 z_w + t_y}{r_7 x_w + r_8 y_w + r_9 z_w + t_z} \qquad (3.17)$$

Under the approximation that $\bar{k}\,r_i^2 \ll 1$, the distortion enters only as a small
correction to the distortion-free model of equations 3.8 and 3.9.
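A minimal sketch of this correction (names illustrative): given measured image coordinates, the left-hand sides of equations 3.16 and 3.17 recover the coordinates the distortion-free model would have produced:

```python
def undistort(x_i, y_i, k_bar, mu):
    """Evaluate x_i*(1 + k_bar*r_i^2) and y_i*(1 + k_bar*r_i^2), i.e. the
    distortion-corrected image coordinates of equations 3.15-3.17."""
    r2 = (mu * x_i) ** 2 + y_i ** 2   # equation 3.15
    scale = 1.0 + k_bar * r2
    return x_i * scale, y_i * scale
```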
3.2 RAC-Based Camera Calibration
The camera calibration problem is to identify the set of extrinsic parameters
(camera location and orientation in world coordinates) and intrinsic parameters (such
as focal length, scale factors, distortion coefficients, etc.) of the camera using a set of
points known both in world coordinates and image coordinates. The camera
calibration methods can be divided into two categories: iterative and non-iterative.
The non-iterative methods provide a closed-form solution for the calibration
parameters, and hence are faster [20, 21], but they suffer a fundamental inaccuracy
because they neglect lens distortion. The iterative methods, which take
lens distortion into account, usually proceed in two steps involving both iterative and
non-iterative stages [23, 24, 27]. In this project we used an iterative
calibration method known as the radial alignment constraint (RAC)-based camera
calibration method, as proposed by Tsai [23, 24]. The mathematical details of the
calibration procedure are described in appendix 1. It is initially assumed that the image
center coordinates $(c_x, c_y)$ and the ratio of scale factors $\mu$ are known. Methods for
estimating $c_x$ and $c_y$ are described in references [25, 26]. The results of
calibration are the estimated values of the intrinsic and extrinsic parameters.
3.3 Computation of 3-D Coordinates from a Calibrated Camera
a. USING ONE CAMERA AND KNOWN Z WORLD COORDINATE
After performing camera calibration we have the intrinsic and extrinsic
parameters of the camera, which can be used to compute the 3-D position of a point
whose coordinates are known in the image plane and whose world coordinate $z_w$ is
known. Rearranging equations 3.16 and 3.17 we obtain:

$$\left[r_1 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_7\right] x_w + \left[r_2 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_8\right] y_w + \left[r_3 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_9\right] z_w = \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,t_z - t_x$$

$$\left[r_4 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_7\right] x_w + \left[r_5 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_8\right] y_w + \left[r_6 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_9\right] z_w = \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,t_z - t_y$$

These are simultaneous equations of the type:

$$a_{11} x_w + a_{12} y_w + a_{13} z_w = b_1$$
$$a_{21} x_w + a_{22} y_w + a_{23} z_w = b_2$$

Now, if we know the value of $z_w$, these reduce to two simultaneous
equations in two unknowns, which can be easily solved to obtain $x_w$, $y_w$.
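A minimal numpy sketch of this two-equation solve (the coefficient names follow the generic form above; the function name is illustrative):

```python
import numpy as np

def solve_xy_given_z(a, b, z_w):
    """Solve a11*x+a12*y+a13*z = b1, a21*x+a22*y+a23*z = b2 for (x_w, y_w)
    with z_w known: move the z terms to the right and do a 2x2 solve."""
    a = np.asarray(a, float)                    # shape (2, 3)
    rhs = np.asarray(b, float) - a[:, 2] * z_w  # known z_w moved to the RHS
    x_w, y_w = np.linalg.solve(a[:, :2], rhs)
    return x_w, y_w
```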
b. USING STEREO VISION
Two calibrated cameras can be used to compute the complete 3-D coordinates
of a point whose image coordinates in both the camera frames are known. For two
cameras we have the following equations.
$$\left[r_1 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_7\right] x_w + \left[r_2 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_8\right] y_w + \left[r_3 - \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,r_9\right] z_w = \frac{x_i(1+\bar{k}r_i^2)}{f_x}\,t_z - t_x$$

$$\left[r_4 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_7\right] x_w + \left[r_5 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_8\right] y_w + \left[r_6 - \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,r_9\right] z_w = \frac{y_i(1+\bar{k}r_i^2)}{f_y}\,t_z - t_y$$

$$\left[r_1' - \frac{x_i'(1+\bar{k}'r_i'^2)}{f_x'}\,r_7'\right] x_w + \left[r_2' - \frac{x_i'(1+\bar{k}'r_i'^2)}{f_x'}\,r_8'\right] y_w + \left[r_3' - \frac{x_i'(1+\bar{k}'r_i'^2)}{f_x'}\,r_9'\right] z_w = \frac{x_i'(1+\bar{k}'r_i'^2)}{f_x'}\,t_z' - t_x'$$

$$\left[r_4' - \frac{y_i'(1+\bar{k}'r_i'^2)}{f_y'}\,r_7'\right] x_w + \left[r_5' - \frac{y_i'(1+\bar{k}'r_i'^2)}{f_y'}\,r_8'\right] y_w + \left[r_6' - \frac{y_i'(1+\bar{k}'r_i'^2)}{f_y'}\,r_9'\right] z_w = \frac{y_i'(1+\bar{k}'r_i'^2)}{f_y'}\,t_z' - t_y'$$

where primed parameters are for camera 2 and unprimed parameters are for camera 1.
These are simultaneous equations of the type

$$a_{11} x_w + a_{12} y_w + a_{13} z_w = b_1$$
$$a_{21} x_w + a_{22} y_w + a_{23} z_w = b_2$$
$$a_{31} x_w + a_{32} y_w + a_{33} z_w = b_3$$
$$a_{41} x_w + a_{42} y_w + a_{43} z_w = b_4$$

These are four simultaneous linear equations in three unknowns. They can
be solved by the linear least-squares method, using the pseudo-inverse to compute the
solution with least mean-square error.

Note that in both methods we have assumed that the image coordinates are
the same as the computer representation of the image coordinates. In reality they
are related by the following relation:

$$x_i = x_f - c_x, \qquad y_i = y_f - c_y$$

where $(x_f, y_f)$ are the computer representation of the image coordinates.
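A minimal numpy sketch of the stereo least-squares solve (the 4x3 system A and right-hand side b would be assembled from the four equations above; the names are illustrative):

```python
import numpy as np

def triangulate(A, b):
    """Least-squares solution of the 4-equation, 3-unknown stereo system,
    equivalent to the pseudo-inverse x = (A^T A)^-1 A^T b."""
    x, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return x  # [x_w, y_w, z_w]
```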
3.4 Automated Robot/Camera Calibration
So far we have discussed the mathematical aspects of camera calibration.
Now we describe the actual method used to calibrate our camera
with respect to the robot's base frame. The base frame of the robot was used for camera
calibration because, ultimately, we want the 3-D coordinates of points in the
robot's base frame.
3.4.1 Generation of 3D Points in World Frame for Calibration
The robot was used to generate random 3-D points for calibration poses. Our
tool with a light-emitting diode (LED) was used as the end-effector of the robot (see
figure 2.1). The 3-D position of the LED was computed using the robot's kinematics (see
chapter 2). For generation of sample points, a program generated
random points within the camera's view frame. These positions were recorded and
stored in a file. While performing the calibration, the robot was sequenced through
these positions automatically.
3.4.2 Generation of 2D Image Coordinates
A live video stream from the video camera was captured. For each position of
the robot, a snapshot of the illuminated LED was taken in a darkened room. Images
were thresholded so that the LED produced the only non-zero pixels.
Centroids of the LED images were computed, which served as our 2-D image
coordinates. This step was performed by an automated calibration routine. The
following standard algorithm was used for centroid calculation [28]:

1. Threshold the image using a threshold T.
2. Compute the centroid of the white pixels using the formulas

$$\bar{x} = \frac{\sum_{i=1}^{N} x_i P_i}{\sum_{i=1}^{N} P_i}, \qquad \bar{y} = \frac{\sum_{i=1}^{N} y_i P_i}{\sum_{i=1}^{N} P_i}$$

where $N$ is the number of white pixels and $P_i$ is the intensity value of the $i$th pixel.
Here, white pixels are the pixels with value greater than the threshold.
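A minimal numpy sketch of this centroid computation (the names are illustrative; the image is assumed to be a 2-D grayscale array):

```python
import numpy as np

def led_centroid(image, threshold):
    """Intensity-weighted centroid of the above-threshold ('white') pixels."""
    img = np.asarray(image, float)
    ys, xs = np.nonzero(img > threshold)   # white pixels
    P = img[ys, xs]                        # P_i, the pixel intensities
    return (xs * P).sum() / P.sum(), (ys * P).sum() / P.sum()
```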
3.4.3 Calibration Computation

After computing the corresponding 2-D image coordinates for each 3-D
position of the LED, the algorithm discussed in section 3.3 was used for computing
the calibration parameters. The calibration parameters were saved for the subsequent
3-D coordinate computations. The simplified algorithm is:

1) Read the next position from the file and command the robot to move to that
position.
2) Capture a snapshot of the LED from a video camera.
3) Process the image to obtain the 2-D coordinates of the LED.
4) Store the 2-D image coordinates. If there are more positions, go to step 1;
otherwise go to step 5.
5) Compute the calibration parameters using the recorded 2-D image coordinates and
the corresponding stored 3-D world coordinates. Store the parameters in a file.
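A sketch of the data-gathering loop (steps 1-4); the robot and camera interfaces are hypothetical stand-ins for the testbed software, and led_centroid is the routine sketched in section 3.4.2:

```python
def gather_calibration_data(robot, camera, positions, threshold):
    """Visit each stored 3-D position and record the LED's 2-D centroid."""
    image_points = []
    for p in positions:
        robot.move_to(p)                  # step 1: command the robot
        frame = camera.snapshot()         # step 2: capture the LED image
        image_points.append(led_centroid(frame, threshold))  # step 3
    return image_points                   # step 4: saved for step 5
```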
The calibration program also performed the synchronization of a user-interface
workstation, which also captured video from the video camera, and the
robot-controller workstation, which controlled the robot's positioning. The two
workstations communicated through TCP/IP sockets. The robot-controller
workstation communicated with the robot's servo controller through a serial port.
Figure 3.4: Communication between Different Hardware Components
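A minimal sketch of such a socket exchange (the host name, port, and message format are hypothetical, not the thesis protocol):

```python
import socket

def request_robot_move(position, host="robot-controller", port=5000):
    """Send a target position to the robot-controller workstation over
    TCP/IP and block until it acknowledges completion of the move."""
    with socket.create_connection((host, port)) as sock:
        msg = ",".join(f"{c:.3f}" for c in position) + "\n"
        sock.sendall(msg.encode())
        return sock.recv(64)  # acknowledgment from the controller
```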
3.4.4 Calibration User-Interface Software
The automated calibration includes a graphical user interface, as shown in
figure 3.5. The display portion of the interface displays the thresholded image from
live video stream (see figure 3.4). The Connect button sets up network
communications with the client. The Gather Data button commands the robot to
visit all the positions in sequence and invokes computation of the centroid
for each position. The Calibrate button performs the calibration process
discussed in section 3.2 on the recorded data and then stores the calibration data in a
file. The Exit button exits the calibration program.
Figure 3.5: Calibration Software Interface; Only the Target LED Survives Thresholding
3.5 Results
A set of 80 data points was used for camera calibration. The image size was
720 x 486 pixels, and the camera view-port was about 20 cm x 15 cm. The calibration
parameters identified from the data were recorded. With these parameter
values, the calibration data was consistent with the identified model to the
following extent:
Image-plane error (pixels): mean = 0.95, standard deviation = 0.59, maximum = 2.48.
Object-space error (millimeters): mean = 0.252, standard deviation = 0.158,
maximum = 0.672.
Two more accuracy tests were done. In the first test, world x, y, z coordinates
were given as input, and image coordinates were computed from the identified
camera model. In the second test, x, y image coordinates and the z world coordinate
were given as input, and the x, y world coordinates were computed from the
calibration parameters. The results are summarized in tables 3.1 and 3.2.
Computed   Computed   Actual     Actual     Error
Image X    Image Y    Image X    Image Y
539.83     324.16     539.92     324.27     0.14
431.70      92.22     432.45      92.11     0.76
522.63     234.25     521.61     235.69     1.74

Table 3.1: Accuracy Test 1. (All dimensions are in pixels.)
Computed   Computed   Actual     Actual     Error
World X    World Y    World X    World Y
2327.43    739.60     2327.27    740.44     0.86
2311.39    698.91     2311.11    699.27     0.45
2362.07    745.73     2361.92    745.85     0.19

Table 3.2: Accuracy Test 2. (All dimensions are in millimeters.)
The desired accuracy in locating a tumor for radiation therapy is 2 mm. From
the results in table 3.2 it is clear that the camera calibration precision is within the
desired limits.
4. TREATMENT SIMULATION: PHYSICAL COMPONENTS
A major intent of the experimental testbed is to mimic actual conditions for lung
tumor treatment. The main elements can be summarized as follows:
- Choice of phantom.
- A way to record the results of gating.
- Generation of a trajectory for the phantom to mimic tumor motion due to respiration.
- Choice of a proxy for indirect measurement of tumor location inside the phantom.
The following sections discuss these elements in detail.
4.1 Phantom and Film
A phantom is an experimental target made of a material, such as plastic, that is
transparent to the radiation beam. A small cubical phantom was used as the
target (see figure 4.1). The phantom consists of alternating polystyrene and radio-
sensitive-film slabs. The plastic and films are stacked together using a set of four
screws and bolts. The films are Kodak X-Omat V type. This film is sensitive to both
radiation and visible light, so when it is developed there is darkening in the
portions where the film was exposed or irradiated. The amount of darkening is
determined by the amount of exposure to radiation or light, so it can be used as an
indirect measure of target coverage. These films can be analyzed by film-scanning
hardware/software to obtain the iso-coverage lines. In this process, the films were
optically scanned to obtain transmissivity as a function of x and y. The transmissivity
was converted to an equivalent radiation dose as a function of x and y. This dose
distribution was analyzed to find contours of equal dose (isodose lines). Before the
films can be used, they have to be cut to a size that fits inside the phantom. Since
these films are sensitive to visible light, the cutting and stacking of the films has to
be done inside a dark room. A jig was made to ease the process of cutting the films
in the dark room.
For 2-dimensional experiments, only one film was used inside the phantom;
for the 3-dimensional tests, multiple films were alternated with plastic within the
phantom.
Figure 4.1: X-Y Table and Phantom under Treatment Beam Source.
4.2 Emulation of Target Motion
To create a realistic scenario for the gated treatment of lung tumors, the
phantom had to be moved in space, imitating the actual movement of a tumor with
respiration. A computer-controlled X-Y table was used to generate the motion
(see figure 4.1). A two-axis motion-controller board interfaced the X-Y
table with a PC through the ISA bus. An actual tumor motion plot (figure 1.6) was
used to design the target trajectory.
The X-Y table was controlled by a programmable motion controller, which
accepts quadrature input from the encoders on the x and y axes, performs the servo
feedback calculations, and outputs corresponding analog voltage signals to the DC
motors. For this purpose we used a mini-PMAC, which is a two-axis, ISA-bus
motion-controller board for PCs running Windows 95 or 3.1. The mini-PMAC
comes with software with a user-friendly graphical user interface, which can be used
to:
- Configure the PMAC board for an application, including setting PID gains, the DC
output voltage range, and maximum velocity and acceleration bounds for
motion programs and jogging.
- Edit, download, upload, and run motion programs.
- Perform simple jogging and homing operations on the X-Y table.
Tumor trajectories were programmed using the controller's PVT (position,
velocity, time) trajectory specification format. In a PVT move, the user specifies the
destination position, the destination velocity, and the time to be taken to reach that
position. From the parameters specified for each such move piece, together with the
beginning position and velocity (from the end of the previous piece), the PMAC
computes a third-order position trajectory that meets the constraints. This results in a
linearly changing acceleration, a parabolic velocity profile, and a cubic position
profile for each trajectory segment (see figure 4.2). The PVT mode is useful for
creating arbitrary trajectory profiles: it provides a building-block approach for
putting together parabolic velocity segments to create whatever overall profile is
desired (see figure 4.3).
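As a cross-check, the cubic implied by each PVT piece can be written out
explicitly. This is a standard Hermite-style derivation consistent with the
constraints just described, not a formula quoted from the PMAC documentation. For
a piece of duration T that starts at position p_0 with velocity v_0 and must end at
p_1 with velocity v_1:

p(t) = p_0 + v_0 t + a_2 t^2 + a_3 t^3,   0 <= t <= T

a_2 = [ 3(p_1 - p_0) - (2 v_0 + v_1) T ] / T^2
a_3 = [ (v_0 + v_1) T - 2(p_1 - p_0) ] / T^3

The acceleration p''(t) = 2 a_2 + 6 a_3 t is then linear in t, and the velocity
profile is the parabola shown in figure 4.2.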
Figure 4.2: Parabolic Velocity Curve for PVT Moves
Figure 4.3: Describing a Contour in Segments of PVT Moves
The PMAC controller is put into PVT mode with the program statement PVT
{data}, where {data} is a constant, variable, or expression representing the piece time
in milliseconds. A PVT-mode move is specified for each axis to be moved with a
statement of the form {axis}{data}:{data}, where {axis} is a letter specifying the axis,
the first {data} is a value specifying the end position or the piece distance, and the
second {data} is a value representing the ending velocity. For example, the
commands:

PVT200
X9000:150

specify that the X-Y table should move its X axis 9000 units with an ending velocity
of 150 units/sec in a time of 200 ms.
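To illustrate the building-block approach, the following C++ sketch samples one
period of a sinusoidal "breathing" motion and prints the corresponding PVT
program; the amplitude, period, and segment count are invented for the example:

#include <cmath>
#include <cstdio>

// Emit a PMAC-style PVT program for one period of a sinusoidal X motion.
// Position is in encoder counts (4000 counts = 1 cm); piece time is in ms.
int main() {
    const double kPi       = 3.14159265358979323846;
    const double amplitude = 4000.0;   // 1 cm peak amplitude (invented value)
    const double periodMs  = 4000.0;   // 4 s breathing cycle (invented value)
    const int    segments  = 20;       // PVT pieces per cycle (invented value)
    const double dt        = periodMs / segments;
    const double w         = 2.0 * kPi / periodMs;   // rad per ms

    std::printf("PVT%.0f\n", dt);      // put the controller into PVT mode
    for (int k = 1; k <= segments; ++k) {
        double t   = k * dt;
        double pos = amplitude * std::sin(w * t);                // counts
        double vel = amplitude * w * std::cos(w * t) * 1000.0;   // counts/sec
        std::printf("X%.0f:%.0f\n", pos, vel);  // end position : end velocity
    }
    return 0;
}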
Two different trajectories were used to generate motion for the X-Y table. One
was a simple to-and-fro motion in one dimension; the other was a two-dimensional
motion imitating figure 1.6. Figure 4.4 shows a position-time plot of the generated
motion. The position dimension in the plot is in encoder counts, where 4000 encoder
counts = 1 cm.
Figure 4.4: Generated Trajectory, Position vs. Time
4.3 Proxy for Target Location
A surface-mounted light-emitting diode (LED) was used as a proxy for an
indirect measure of the target location inside the phantom. The LED was mounted on
the top surface of the phantom, as shown in figure 4.5.
Figure 4.5: Surface-Mounted LED Used as Proxy
A 9-volt battery was used to power the LED. A CCD video camera was used
to capture real-time images of the LED in dark surroundings, where the illuminated
LED acted as a high-contrast proxy whose images could be easily thresholded in
real time. The location of a hypothetical spherical target was defined with respect to
the LED center. The 3-dimensional location of the LED proxy was deduced using a
calibrated video camera, as discussed in chapter 3.
5. TREATMENT SIMULATION: SOFTWARE COMPONENTS
The final elements of the experimental testbed are a graphical display of
computed target and beam coordinates and real-time gating of the treatment beam.
The aim is to graphically indicate the relative coordinates of the tumor and beam in a
consistent, easily interpreted display, and to permit interactive gating control over the
beam. Two gating methods were implemented: human-in-the-loop gating, where a
human performs the gating via a key press, using live graphical images of the target
and beam as feedback; and automated gating, where the computer performs the
gating using a simple control algorithm. A graphical user interface was developed to
let the user select various options, including beam portal size and color, tumor size,
and automatic or manual gating. Further, the cumulative target coverage (exposure)
was shown on the screen in real time, which assisted the operator in selectively
gating the exposures toward under-covered portions of the target. Software solutions
addressing these needs are described in the following sections.
5.1 Hardware Platform
For the project, two workstations were used: a Silicon Graphics Incorporated
(SGI) O2 desktop workstation and a Silicon Graphics 1440 desktop workstation. The
O2 was used for grabbing the video input and integrating all the software units in
one user interface. The 1440 workstation was used to control the
robot through a serial link with the robot controller. In addition, the two workstations
communicated with each other through a network link. In this thesis we refer to the
O2 workstation as the user-interface workstation and the 1440 workstation as the
robot-controller workstation.
The O2 has an add-on video-digitizing unit, which was used to capture the
video stream. SGI has developed a standard application-programming interface
(API) called the Digital Media Library [29] for dealing with multimedia tasks, such as
video grabbing. All SGI workstations are optimized for OpenGL, an open 2D and 3D
graphics standard [30, 31]. For this project only the 2D elements of
OpenGL were used. For user-interface design, the O2 supports OSF/Motif as well as
the lower-level X Window System [32]. In addition, SGI has developed a C++ version
of Motif called ViewKit [33, 34]. For this project we used RapidApp [35], a GUI
builder that supports the creation of both Motif and ViewKit user interfaces.
5.2 Real-time Computation of Target Coordinates
The phantom was placed on the X-Y table, and motion emulating lung tumor
motion was produced using the motion program described in chapter 4. The
surface-mounted light-emitting diode (LED) on the phantom was driven by a 9-volt
battery. A black-and-white video camera with a zoom lens was used to capture live
video images of this proxy (LED) against a dark background. The video stream
(RS-170) was digitized using a frame-grabber board inside the Silicon Graphics O2
workstation. The video images were captured and buffered using software library
functions provided by Silicon Graphics for the O2 [29].
Once an image is stored in a buffer (array), it can be processed using standard
image-processing techniques. To enable real-time image processing, images of the
LED were taken in dark surroundings, which produced high-contrast images. These
high-contrast video images were thresholded, and the centroid of the LED was
computed for each captured frame. The algorithm used is described in the
following:
1. Choose a small threshold, scan every 5th vertical and every 5th horizontal line in
the image, and find the first pixel whose value is greater than the threshold. Mark
the pixel location as (x, y).
2. Threshold a small window of 50x50 pixels around (x, y) with a larger threshold.
3. Compute the centroid using the formula

x̄ = (1/N) Σ_{i=1}^{N} x(i)
ȳ = (1/N) Σ_{i=1}^{N} y(i)

where x̄ and ȳ are the x and y coordinates of the centroid, N is the number of white
pixels (pixels whose gray-scale value is greater than the threshold), and x(i), y(i) are
the x and y coordinates of the ith white pixel.
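A minimal C++ sketch of this two-pass search, using the scan stride, window size,
and two-threshold scheme described above; the buffer layout and test values are
assumptions for illustration:

#include <cstdint>
#include <cstdio>
#include <vector>

struct Centroid { double x, y; int n; };

// Two-pass LED localization: coarse scan of every 5th line, then an
// unweighted centroid over a 50x50 window with a larger threshold.
Centroid findLed(const std::vector<uint8_t>& img, int w, int h,
                 uint8_t coarseT, uint8_t fineT) {
    int sx = -1, sy = -1;
    // Pass 1: scan every 5th horizontal line (all x), then every 5th
    // vertical line (all y), for the first pixel above coarseT.
    for (int y = 0; y < h && sx < 0; y += 5)
        for (int x = 0; x < w; ++x)
            if (img[y * w + x] > coarseT) { sx = x; sy = y; break; }
    for (int x = 0; x < w && sx < 0; x += 5)
        for (int y = 0; y < h; ++y)
            if (img[y * w + x] > coarseT) { sx = x; sy = y; break; }
    if (sx < 0) return {0, 0, 0};             // LED not found

    // Pass 2: centroid of pixels above fineT in a 50x50 window at (sx, sy).
    double sumX = 0, sumY = 0; int n = 0;
    for (int y = sy - 25; y < sy + 25; ++y) {
        if (y < 0 || y >= h) continue;
        for (int x = sx - 25; x < sx + 25; ++x) {
            if (x < 0 || x >= w) continue;
            if (img[y * w + x] > fineT) { sumX += x; sumY += y; ++n; }
        }
    }
    if (n == 0) return {0, 0, 0};
    return {sumX / n, sumY / n, n};
}

int main() {
    std::vector<uint8_t> img(720 * 486, 0);
    for (int y = 240; y < 244; ++y)           // synthetic 4x4 LED blob
        for (int x = 360; x < 364; ++x) img[y * 720 + x] = 220;
    Centroid c = findLed(img, 720, 486, 40, 120);
    std::printf("LED centroid: (%.2f, %.2f), %d pixels\n", c.x, c.y, c.n);
    return 0;
}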
The next step performed by the software is the computation of the 3-D coordinates
of the LED in the robot's base frame. The camera is calibrated with respect to the
base frame of the robot using the method discussed in chapter 3. The calibration
parameters are used to compute the 3-D coordinates of the LED using the method
described in section 3.3. However, these equations require a known world Z
coordinate of the LED. To obtain it, the robot calibration tool (used in the calibration;
see figure 2.1) was jogged so that the tool tip touched the surface-mounted LED on
the phantom. The program then used the known world Z coordinate along with the
image X and Y coordinates to compute the 3-D world coordinates of the LED.
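A hedged C++ sketch of that computation under a plain pinhole model, ignoring
lens distortion for clarity (the model identified in chapter 3 also includes radial
distortion). With intrinsics f, c_x, c_y, extrinsics R, t, an image point (u, v), and a
known world Z, the two projection equations become linear in X and Y:

#include <cstdio>

// Pinhole back-projection with known world Z (distortion ignored here).
// R (row-major 3x3) and t map world coordinates into the camera frame;
// f, cx, cy are the effective focal length and image center (assumed known).
bool backProject(const double R[9], const double t[3],
                 double f, double cx, double cy,
                 double u, double v, double Z,
                 double* X, double* Y) {
    double xn = (u - cx) / f;   // normalized image coordinates
    double yn = (v - cy) / f;
    // xn = (r0 X + r1 Y + r2 Z + tx) / (r6 X + r7 Y + r8 Z + tz), same for yn;
    // rearranged into a 2x2 linear system A [X Y]^T = b.
    double a11 = R[0] - xn * R[6], a12 = R[1] - xn * R[7];
    double a21 = R[3] - yn * R[6], a22 = R[4] - yn * R[7];
    double b1  = xn * (R[8] * Z + t[2]) - (R[2] * Z + t[0]);
    double b2  = yn * (R[8] * Z + t[2]) - (R[5] * Z + t[1]);
    double det = a11 * a22 - a12 * a21;
    if (det == 0.0) return false;
    *X = (b1 * a22 - b2 * a12) / det;
    *Y = (a11 * b2 - a21 * b1) / det;
    return true;
}

int main() {
    // Identity rotation, camera 1 m above the Z = 0 plane; f/cx/cy invented.
    double R[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
    double t[3] = {0, 0, 1000.0};
    double X, Y;
    if (backProject(R, t, 1500.0, 360.0, 243.0, 510.0, 393.0, 0.0, &X, &Y))
        std::printf("world X = %.2f mm, Y = %.2f mm\n", X, Y);
    return 0;
}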
Finally, the computed target coordinates must be displayed in a frame
permitting easy visualization of registration with respect to the treatment beam.
We chose a beam's eye view, in which the beam axis is normal to the display. The
tool frame used for camera calibration (see chapter 2) can be used to locate the
target in the beam's eye view, because the x-axis of this frame is
coincident with the beam axis. Equation 2.1 can be used to solve for P_{7/6}, which is
the position of a point in the tool frame:

P_{7/0} = P_{6/0} + R_{6/0} P_{7/6}

The only difference here is that subscript 6 refers to the tool frame used for
camera calibration, not the robot's default tool frame. Since the x-axis of this frame
is parallel to the beam, the y and z coordinates in this frame give the location of the
target in the beam's eye view.
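Rearranged for P_{7/6}, this gives P_{7/6} = R_{6/0}^T (P_{7/0} - P_{6/0}). A
minimal C++ sketch of that transform (the matrix and point values here are
illustrative only):

#include <cstdio>

struct Vec3 { double x, y, z; };

// Position of world point p (P_{7/0}) expressed in the tool frame:
// P_{7/6} = R^T (P_{7/0} - P_{6/0}), where R (row-major) and 'origin'
// describe the beam-aligned tool frame; its x-axis lies along the beam.
Vec3 toToolFrame(const double R[9], const Vec3& origin, const Vec3& p) {
    Vec3 d = {p.x - origin.x, p.y - origin.y, p.z - origin.z};
    return {R[0]*d.x + R[3]*d.y + R[6]*d.z,   // rows of R^T are columns of R
            R[1]*d.x + R[4]*d.y + R[7]*d.z,
            R[2]*d.x + R[5]*d.y + R[8]*d.z};
}

int main() {
    double R[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};   // example orientation
    Vec3 target = toToolFrame(R, {2300, 700, 0}, {2327.4, 739.6, 0});
    // Since the tool-frame x-axis is the beam axis, (y, z) is the target
    // location in the beam's eye view.
    std::printf("beam's-eye-view offset: (%.1f, %.1f) mm\n", target.y, target.z);
    return 0;
}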
5.3 Graphical Interface
The software components were integrated using a graphical interface, which
consisted of three parts: controls, menu, and display (figure 5.1).
Figure 5.1: Treatment Software Interface
5.3.1 Display
This portion of the window was used to graphically display the tumor and beam
coordinates in real time; the operator used this display as feedback to gate the beam.
OpenGL was used for the graphics operations. The display consisted of two
overlapping regions: the main drawing area and the overlay.
The main drawing area was used to draw a moving circle representing the
cross-section of a spherical tumor. The intensity values of the pixels inside the circle
represented the target coverage: the higher the intensity of a pixel, the higher the
cumulative irradiation of that part of the target. This display of the distribution
inside the tumor assisted the operator in adjusting the gating in real time to perform
preferential gating of under-dosed parts of the target.
The overlay was used to draw a circle representing the cross-section of the
beam portal. OpenGL's overlay feature was used, which prevented unnecessary
redrawing of the beam whenever the tumor circle was redrawn. An overlay has the
property that only its non-black pixel values are drawn to the frame buffer; the
main drawing's pixel values are drawn wherever the overlay has black pixels (see
figure 5.1).
5.3.2 Controls
This part of the interface had buttons and menus for controlling various
functions, as described in the following:
1. Mode radio button
This button was used to select between manual and automated gating. If
manual gating was selected, the gating could be performed by pressing a designated
key. If automated gating was selected, the computer performed the gating using a
simple criterion: fire the beam when the distance between the beam center and the
target center is less than a threshold. In our experiment we chose a distance
threshold of 2.5 mm (a minimal sketch of this criterion appears after this list).
2. Display buttons
The Display Tumor and Display Beam buttons were used to display the target on
the main drawing area and the beam on the overlay, respectively.
3. Node selection
A stereotactic feature was added to the treatment simulation. A node file was
used to store node direction values. The user could select a node number from the
selection box and press the Go button to command the robot to that node location.
Before using this feature, the network connection between the server (O2) and the
client (IRIS 4D) must be established, which is done with the Connect button.
4. Connect button
This button was used to start the server on the user-interface workstation.
This step is followed by starting a client program on the robot-controller
workstation. This socket-based connection was necessary in order to command
the treatment beam to gate or to command the robot to go to a new node
location. The robot was not directly controlled by the O2; it was controlled by the
CyberKnife robot-controller workstation. The network connection between the two
workstations gave the O2 indirect control over the robot (see figure 3.4).
5. Exit button
This button stopped all the processes and exited the program.
6. DCH button
This button allowed the user to draw the coverage area histogram (CAH) for
the target and non-target areas (see figure 5.4). The Y-axis of the coverage area
histogram is the percentage of the target covered, and the X-axis is the exposure time.
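The sketch promised under the Mode radio button above: in C++, the automated
gating criterion reduces to a distance test in the beam's eye view (the 2.5 mm
threshold is the value stated above; the structure names are hypothetical):

#include <cmath>
#include <cstdio>

// Automated gating criterion: fire only while the beam-to-target center
// distance, measured in the beam's eye view, is below a threshold.
struct Bev { double y, z; };   // beam's-eye-view coordinates, mm

bool shouldGate(const Bev& beam, const Bev& target, double thresholdMm) {
    double dy = beam.y - target.y;
    double dz = beam.z - target.z;
    return std::sqrt(dy * dy + dz * dz) < thresholdMm;
}

int main() {
    Bev beam = {0.0, 0.0}, target = {1.2, 1.6};   // example values
    std::printf("gate: %s\n", shouldGate(beam, target, 2.5) ? "on" : "off");
    return 0;
}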
5.3.3 Menu
There are two menu items, Tumor and Beam, which were used to select size
and color options for the target and beam. Selecting the Options submenu item from
the Tumor or Beam menu brought up a dialog box. These dialog boxes
can be used to select different options, including size, color, and target offset
(distance of the target from the LED) for the target and beam.
5.4 Beam Control
A child process was created to provide the ability to control the beam. This
process waited for input from the keyboard; upon a press of the designated key in
manual mode, the beam was turned on (in automatic mode, the gating is performed
by the computer). A separate process was needed because the parent process, which
creates the graphical user interface and captures the live video stream, is blocked
while the video stream is present. The child process performed two functions:
- Gating the beam when the operator presses the designated key (in manual gating
mode).
- Terminating the video capture when the user enters a particular key from the
keyboard.
Pressing the gating key sends a character message through the network to the
client, which the client recognizes as a command to gate the beam. The
client then sends a message to the robot's controller through the serial port,
commanding the accelerator to turn the beam on for a short duration. Obviously, the
network connection must be established before the operator can use this feature.
With each press of the gating key, the beam turns on for a pre-specified amount of
time (50 ms in our case).
Terminating the video capture gives control back to the user interface, which can
then be used to perform other functions, such as selecting a different node or a
different gating mode.
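A hedged, POSIX-style C++ sketch of this parent/child split; the socket setup is
omitted, and the specific gate and quit characters are assumptions for illustration:

#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch: the parent would run the GUI / video-capture loop; the child
// watches the keyboard and forwards gate commands over a socket.
// 'sockFd' is assumed to be an already-connected TCP socket to the
// robot-controller workstation (setup omitted for brevity).
void childLoop(int sockFd) {
    int ch;
    while ((ch = std::getchar()) != EOF) {
        if (ch == 'g')                 // hypothetical gate key
            write(sockFd, "G", 1);     // one-character gate message
        else if (ch == 'q')            // hypothetical quit key
            break;                     // terminate capture / exit child
    }
}

int main() {
    int sockFd = STDOUT_FILENO;        // stand-in for a real connected socket
    pid_t pid = fork();
    if (pid == 0) {                    // child: keyboard -> network
        childLoop(sockFd);
        _exit(0);
    }
    // Parent: GUI and video capture would run here (blocking on video).
    waitpid(pid, nullptr, 0);
    return 0;
}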
5.5 Node Generation
A separate program was developed solely for the purpose of selecting the nodes
for treatment. Here, a node means a direction from which the beam irradiates the
target. From the results in chapter 2, it is clear that there is an inherent tool-frame
calibration error associated with each tool direction: the actual tool
tip coordinates do not exactly match the coordinates displayed by the robot controller.
To use the tool frame for computing the beam direction, and hence the
tumor location in the beam's eye view, we have to compensate for that error, as well
as for any error due to misalignment between the tool frame and the beam axis. For
this purpose we utilize an alignment laser present in the accelerator. The laser beam
is collinear with the treatment beam, so it can be used for beam-direction adjustments.
To select a node, the robot is jogged to a direction, with the laser turned on, so
that the laser beam falls exactly on the surface-mounted LED proxy on the phantom.
The direction coordinates computed by the robot controller are recorded.
Additionally, the location of the LED in the beam's eye view is computed from an
image taken by the calibrated camera. Ideally, if there were no error, the computed
location of the center of the LED in the beam's eye view would be exactly at the
center of the beam. This is not the case, due to calibration imprecision.
The differences between the computed and ideal values are stored as offsets in
a file, along with the actual robot coordinates. When the treatment program
subsequently uses this node, it adds these offsets to the computed coordinates to
compensate for the error.
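In sketch form, this record-then-apply bookkeeping might look as follows (C++);
the structures and the beam's-eye-view y-z layout follow the discussion above, but
the actual file format and record layout of the thesis software are not specified here:

#include <cstdio>

struct Vec2 { double y, z; };   // beam's-eye-view coordinates, mm

// Node record: the stored correction offsets (the recorded robot
// direction coordinates would be kept alongside in the node file).
struct Node { Vec2 offset; };

// At node-generation time: offset = ideal beam center minus computed LED center.
Vec2 makeOffset(const Vec2& idealCenter, const Vec2& computedLed) {
    return {idealCenter.y - computedLed.y, idealCenter.z - computedLed.z};
}

// At treatment time: add the node's offsets to each computed target location.
Vec2 correct(const Vec2& computed, const Node& node) {
    return {computed.y + node.offset.y, computed.z + node.offset.z};
}

int main() {
    Node n{ makeOffset({0.0, 0.0}, {0.8, -0.5}) };  // LED seen 0.8 mm off in y
    Vec2 fixed = correct({10.0, 4.0}, n);
    std::printf("corrected target: (%.1f, %.1f) mm\n", fixed.y, fixed.z);
    return 0;
}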
Figure 5.2: Node Generation Software Interface
To ease the process of node calibration correction, a graphical interface tool was
developed (see figure 5.2). The steps involved in generating nodes are:
1. Jog the robot so that the laser beam falls exactly on the LED.
2. Press the Add Node button on the software interface, which performs the
following operations:
a. Gets the 3-D coordinates of the robot tool using the network connection;
b. Captures the image of the LED from the calibrated camera, thresholds it, and
computes the LED centroid, as explained in section 5.2;
c. Computes the location of the LED center in the beam's eye view and
stores the resulting value in a file along with the 3-D coordinates reported
by the robot.
3. To add more nodes, repeat steps 1 and 2.