Clemson University
TigerPrints
All Theses
August 2021
Image Segmentation Applied to Electrical Impedance Tomography
Scott Randall Scruggs, Clemson University, srscrug@g.clemson.edu
Follow this and additional works at: https://tigerprints.clemson.edu/all_theses
Recommended Citation: Scruggs, Scott Randall, "Image Segmentation Applied to Electrical Impedance Tomography" (2021). All Theses. 3592. https://tigerprints.clemson.edu/all_theses/3592
This Thesis is brought to you for free and open access by the Theses at TigerPrints. It has been accepted for inclusion in All Theses by an authorized administrator of TigerPrints. For more information, please contact kokeefe@clemson.edu.
Image Segmentation Applied to Electrical Impedance Tomography
A Thesis
Presented to
the Graduate School of
Clemson University
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
Mathematical Sciences
by
Scott Randall Scruggs
August 2021
Accepted by:
Dr. Taufiquar Khan, Committee Chair
Dr. Shitao Liu
Dr. D. Andrew Brown
Abstract
Electrical impedance tomography (EIT) has many significant applications and has gained popularity due to its ease of use and its non-invasiveness. Despite these notable applications, EIT is severely ill-posed, and various techniques have been considered for solving its inverse problem. While the inverse problem has been well studied, a common question is how we can improve on existing methods for solving it.
In this thesis, we consider using image segmentation alongside the iteratively regularized Gauss-Newton method (IRGN) to solve the inverse problem of EIT in various geometries. We present a comparison between image reconstructions obtained with IRGN alone and with IRGN combined with image segmentation. Alongside this comparison, we analyze the parameter and residual errors of the two approaches to show the efficiency of image segmentation in solving the inverse problem of EIT. Finally, we discuss future work that could extend the results of this thesis.
Acknowledgments
First, I would like to thank Dr. Khan for being my advisor. His comments, time, patience, and dedication to making me a better researcher have been paramount to my successes, and I am more than grateful to him. I also want to thank my committee members, Drs. Andrew Brown and Shitao Liu, for their comments and help in making this thesis better. I also want to thank the people who have been in my research group these past few years: Dr. Sanwar Ahmed, Thowhida Akther, Matthew Brinkerhoff, Shyla Kupis, Lee Redfern, and Dr. Thanh To. I especially want to thank Shyla Kupis for her incredible help with the coding part of this thesis. Without her help, I would not have been able to understand the existing code or contribute my own code to the existing project.
During my time as a graduate student, I have made many friends both within and outside
the math department. Because the list would fill the entire page, I want to collectively thank you
all. You all have made the difficult endeavor known as grad school more enjoyable and bearable,
and you have all made significant impacts on my life. I am forever grateful for your friendships.
Table of Contents
Title Page
Abstract
Acknowledgments
List of Tables
List of Figures
List of Symbols
1 Introduction
2 Multiphase Image Segmentation
  2.1 Level Set Approaches in Solving Multiphase Image Segmentation
  2.2 Main Idea of Multiphase Image Segmentation
  2.3 Gradient Descent Algorithm
3 Forward Problem in Electrical Impedance Tomography
  3.1 Complete Electrode Model
  3.2 FEM of EIT
4 Iteratively Regularized Gauss-Newton Method
  4.1 Formulation
  4.2 IRGN for EIT
5 Level Sets for EIT
  5.1 Decompose into Level-Sets
6 Numerical Simulations
7 Conclusions and Future Work
Appendices
  A Codes for Thesis
References
List of Tables
6.1 Parameter Errors of IRGN vs image segmentation in Geometry 1
6.2 Parameter Errors of IRGN vs image segmentation in Geometry 2
6.3 Parameter Errors of IRGN vs image segmentation in Geometry 3
6.4 Parameter Errors of IRGN vs image segmentation in Geometry 4
6.5 Residual Errors of IRGN vs image segmentation in Geometry 1
6.6 Residual Errors of IRGN vs image segmentation in Geometry 2
6.7 Residual Errors of IRGN vs image segmentation in Geometry 3
6.8 Residual Errors of IRGN vs image segmentation in Geometry 4
List of Figures
2.1 The domain D being covered by the regions {Ω_l}_{l=0}^{n} specified by the curves Γ = ⋃_{l=0}^{n} Γ_l. Image is given in [13].
6.1 Inverse Problem EIT without and with image segmentation
6.2 Residual error of IRGN vs image segmentation
List of Symbols
Ω  Medium or object
∂Ω  Boundary of Ω
Ω_l  Region l in the image domain
K  Edge set
I(x)  Image intensity function
c_l  Average image intensity
Γ_l  Optimal curve to fit region l
Γ  Union of the Γ_l
µ  Length penalty term
∇_Γ  Tangential gradient
∆_Γ  Tangential Laplacian
H(·)  Hessian matrix of a function
κ  Scalar curvature
𝛋  Curvature vector
V  Velocity field
dJ  Shape derivative
c_l^in  Average image intensity in Ω_l ∩ D
c_l^out  Average image intensity over the outer region Ω_k
∇·  Divergence operator
∇×  Curl operator
E  Electric field
H  Magnetic field
B  Magnetic induction
D  Electric displacement
ρ_c  Charge density
J  Current density
ε  Permittivity
σ  Conductivity of the medium
J_s  Current source
U_l  Voltage of the lth electrode
I_l  Total current of the lth electrode
z_l  Contact impedance of the lth electrode
e_l  lth electrode
F  Forward operator
g  Data
δ  Accuracy of the data
λ_k  Regularization parameter for iteration k
p_k  Step size
Chapter 1
Introduction
Electrical Impedance Tomography (EIT) is a noninvasive imaging technique used to give real-time information about the regional distribution of changes in the electrical resistivity of an object. This is done by attaching electrodes around the object in question and injecting currents into it. From the resulting voltage readings, we construct an image of the interior of the object based on the resistivity of the structures inside it. EIT has various applications, including medical [5, 15, 28], geothermal [2, 5, 28], and civil engineering [4, 5].
Both the forward problem and the inverse problem of EIT have been heavily studied; [28] provides a thorough summary for interested readers. Current research topics for EIT are either applications or novel techniques, and their analysis, for improving solutions of the inverse problem. The most common technique for solving the inverse problem is the iteratively regularized Gauss-Newton method (IRGN), which is discussed further in Chapter 4 of this thesis. However, IRGN has some drawbacks. The main one is that it requires inverting a matrix built from the Jacobian of the forward operator, which is computationally expensive. Thus, one question that comes up is: how can we improve on IRGN?
Multiphase image segmentation is an imaging technique based on the level-set methods seen in optimization. It segments an image domain into subdomains and detects curves within each subdomain. Like IRGN, it is a gradient descent method. Since both methods are gradient descent methods, is it possible to combine the two to improve the solution of the inverse problem of EIT?
In this thesis, we attempt to answer this question. Chapter 2 outlines the theory behind multiphase image segmentation. Chapter 3 outlines the forward model of EIT. Chapter 4 explains IRGN and how it is used to solve the inverse problem of EIT. Chapter 5 summarizes the current theory behind combining level sets and EIT. Chapter 6 contains the major results of this thesis and presents numerical results on combining EIT with image segmentation. Chapter 7 provides conclusions and future directions for this work.
Chapter 2
Multiphase Image Segmentation
In this chapter we introduce the theory behind multiphase image segmentation and discuss modern techniques for solving these problems. The majority of the background comes from [10] and [13], but other resources are used to aid in describing the background information.
2.1 Level Set Approaches in Solving Multiphase Image Segmentation
Most works on image segmentation and image smoothing ([8], [14]) are based on the following minimization problem proposed in [21]:

min_{u,K} (1/2) ∫_D (u − I)^2 dx + (µ/2) ∫_{D∩K^c} |∇u|^2 dx + γ length(K),    (2.1)

where the goal of (2.1) is to find the edge set K and a piecewise smooth approximation u of a given image intensity function I(x). However, as noted in [14], (2.1) is difficult to solve because the edge set K may exhibit singularities. Thus, a direct method to approximate (2.1) has not been discovered.

To circumvent this issue, many papers have substituted (2.1) with similar minimization problems that are more suitable for computation. One approach, proposed by Chan and Vese in [29],
minimizes the following energy with respect to closed curves Γ:

J(Γ) = ∑_{i=1}^{2} (1/2) ∫_{Ω_i} ((u_i − I)^2 + µ|∇u_i|^2) dx + γ ∫_Γ dS,    (2.2)

where u_i is obtained from the following PDE:

−µ∆u_i + u_i = I in Ω_i,    ∂u_i/∂ν_i = 0 on ∂Ω_i,    (2.3)

for i = 1, 2. Chan and Vese represent Γ as a level set function and convert the problem (2.2) into a curve evolution problem.
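As a concrete (and entirely hypothetical) illustration of the level-set idea, the sketch below evaluates the piecewise-constant variant of the Chan-Vese energy on a small synthetic image, with Γ represented implicitly as the zero level set of a function φ. The grid size, region shapes, and penalty weight are my own choices, not values from the thesis.

```python
import numpy as np

def chan_vese_energy(I, phi, gamma=0.1):
    """Piecewise-constant Chan-Vese energy; Gamma is the zero level set of phi.
    Regions: inside = {phi > 0}, outside = {phi <= 0}."""
    inside = phi > 0
    c1 = I[inside].mean()           # average intensity inside Gamma
    c2 = I[~inside].mean()          # average intensity outside Gamma
    fidelity = 0.5 * (((I - c1)**2)[inside].sum() + ((I - c2)**2)[~inside].sum())
    # Approximate curve length as the total variation of the Heaviside of phi.
    H = inside.astype(float)
    gy, gx = np.gradient(H)
    length = np.hypot(gx, gy).sum()
    return fidelity + gamma * length, c1, c2

# Synthetic image: a bright disk (intensity 1) on a dark background (intensity 0).
n = 64
y, x = np.mgrid[0:n, 0:n]
I = (np.hypot(x - n/2, y - n/2) < 12).astype(float)

# A circular level set with the correct radius fits better than a wrong one.
phi_good = 12 - np.hypot(x - n/2, y - n/2)
phi_bad = 20 - np.hypot(x - n/2, y - n/2)
E_good, c1, c2 = chan_vese_energy(I, phi_good)
E_bad, _, _ = chan_vese_energy(I, phi_bad)
assert E_good < E_bad   # the correct curve has lower energy
```

Minimizing this energy by evolving φ is exactly the curve evolution problem mentioned above; here we only evaluate the energy to show that the correct curve is preferred.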
2.2 Main Idea of Multiphase Image Segmentation
Let D ⊂ R^2 be the image domain, and let I : D → R be the given image. In this thesis, we assume that D is an open, bounded and connected domain with Lipschitz boundary. In most application settings, I will be a noisy and degraded image modeled in the form

I(x) = ∑_l c_l χ_{Ω_l} + noise,    (2.4)

where {Ω_l}_{l=0}^{n} are distinct regions in our image domain and {c_l}_{l=0}^{n} denotes the average value of the image intensity in each region, i.e.

c_l = (1/|Ω_l|) ∫_{Ω_l} I(x) dx.    (2.5)

We assume homogeneous image intensity within each Ω_l, which means that we have the following:

I|_{Ω_l} ≈ c_l.
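The model (2.4)-(2.5) is easy to check numerically. The snippet below builds a three-region piecewise-constant image, adds noise, and recovers each c_l as the mean intensity over its region; the region layout, intensities, and noise level are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y, x = np.mgrid[0:n, 0:n]

# Three disjoint regions Omega_0, Omega_1, Omega_2 covering the image domain D.
omega1 = np.hypot(x - 30, y - 30) < 15          # a disk
omega2 = (x > 60) & (y > 60)                    # a square corner
omega0 = ~(omega1 | omega2)                     # background
true_c = [0.2, 0.8, 0.5]

# I(x) = sum_l c_l * chi_{Omega_l} + noise, as in (2.4).
I = 0.2 * omega0 + 0.8 * omega1 + 0.5 * omega2 + 0.05 * rng.standard_normal((n, n))

# Recover each c_l as the region average (2.5); the noise averages out.
for region, c in zip([omega0, omega1, omega2], true_c):
    assert abs(I[region].mean() - c) < 0.01
```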
The main goal of the problem is to extract the boundaries of all of the regions in the image and the average image intensity of each region. This can be expressed as a minimal partition problem, where the unknowns are the c_l and the curve Γ = ⋃_{l=1}^{n} Γ_l, a set of simple closed curves. From this, we seek to minimize the following variation of (2.2):

J(Γ, {c_l}) = (1/2) ∑_{l=0}^{n} ∫_{Ω_l} χ_D (I(x) − c_l)^2 dx + µ ∫_Γ dΓ,    (2.6)

where χ_D is the characteristic function of D. The first term in (2.6) is called the data fidelity term, whose goal is to match the boundaries of the homogeneous regions in the given image with the optimal curves. The second term of (2.6) is the length penalty term, which favors shorter and smoother curves; it ensures that the optimal curves do not fit insignificant variations or noise in the image, which would negatively affect the overall minimization.
We now explain how we define Γ_l for l = 0, …, n. We partition D into disjoint domains {Ω_l}_{l=0}^{n} that cover it, i.e., D ⊂ ⋃_{l=0}^{n} Ω_l. The boundaries {∂Ω_l}_{l=0}^{n} of the regions make up the curve Γ, which is the free variable in solving (2.6). Thus, we can define Γ as a union of closed curves {Γ_l}_{l=0}^{n}, where each Γ_l gives the outer boundary of the domain Ω_l. To define the boundary, we split into two cases:

1. If the domain Ω_l does not have an interior boundary due to other domains being subsets of Ω_l, then ∂Ω_l = Γ_l.

2. If other domains Ω_k, k ≠ l, are enclosed in Ω_l, then ∂Ω_l = Γ_l ∪ ⋃_k Γ_k.
A diagram of this is given in the figure below:
Figure 2.1: The domain D being covered by the regions {Ω_l}_{l=0}^{n} specified by the curves Γ = ⋃_{l=0}^{n} Γ_l. Image is given in [13].
Notice in Figure 2.1 that some of the domains may extend beyond the domain D, which is taken
care of by the characteristic function in (2.6). Also, while the number of regions n acts as a constant
in (2.6), in practice we do not need to know this constant beforehand, and this constant changes
during the minimization process due to merging and splitting.
To solve the problem analytically, we need to define some concepts from differential geometry. Denote the outer unit normal, the scalar curvature, and the curvature vector of a curve Γ ∈ C^2 by n, κ, and 𝛋, respectively, where κ = 𝛋 · n. For a function f ∈ C^2(D), define the tangential gradient ∇_Γ f and the tangential Laplacian ∆_Γ f as follows:

∇_Γ f = (∇f − (∂f/∂n) n)|_Γ,    (2.7)

∆_Γ f = (∆f − n · H(f) · n − κ ∂f/∂n)|_Γ.    (2.8)

The next concept that we define is the shape derivative. Shape derivatives are used to understand the change of energy induced by a velocity field V. This is particularly useful because we can then select the velocity that decreases the energy functional (2.6) for a given Γ.
Thus, the shape derivative of an energy J(Γ) at Γ with respect to a velocity field V is defined as

dJ(Γ; V) = lim_{t→0} (1/t)(J(Γ_t) − J(Γ)),    (2.9)

where Γ_t = {x(t, X) : X ∈ Γ} represents the deformation of Γ by V through the ordinary differential equation

dx/dt = V(x(t)),    x(0) = X.    (2.10)

Note that we can define shape derivatives of energies J(Ω) depending on domains Ω in a similar manner. More detail about shape derivatives can be found in [11, 12]. With this information, we can now state the following lemmas.
Lemma 2.2.1. ([11, 13]) The shape derivative of the curve length J(Γ) = |Γ| = ∫_Γ dΓ with respect to the velocity field V is

dJ(Γ; V) = ∫_Γ κV dΓ,

where V = V · n.

Proof. From [11, 12], the general shape derivative of J(Γ) = ∫_Γ ϕ(x, Γ) dΓ is

dJ(Γ; V) = ∫_Γ ϕ′(Γ; V) dΓ + ∫_Γ ϕκV dΓ.    (2.11)

Applying (2.11) to the curve length (ϕ ≡ 1, so ϕ′ = 0) yields the desired result.
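To make Lemma 2.2.1 concrete, here is a quick sanity check (my own example, not from the thesis) on a circle of radius r expanding with uniform normal speed v:

```latex
% Circle of radius r: |\Gamma_t| = 2\pi(r + vt), \kappa = 1/r, V = v.
\frac{d}{dt}\,\bigl|\Gamma_t\bigr|\Big|_{t=0}
  = \frac{d}{dt}\,2\pi(r + vt)\Big|_{t=0} = 2\pi v,
\qquad
dJ(\Gamma;\mathbf{V}) = \int_\Gamma \kappa V \, d\Gamma
  = \frac{1}{r}\cdot v \cdot 2\pi r = 2\pi v.
```

Both computations agree, as the lemma predicts.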
Lemma 2.2.2. ([8, 11, 13]) The shape derivative of the data fidelity term from (2.6) for domain Ω_l with respect to velocity field V is

dJ(Ω_l; V) = (1/2) ∫_{∂Ω_l} χ_D(x)(I(x) − c_l)^2 V dΓ,    (2.12)

where V = V · n.

Proof. Again from [11, 12], the general shape derivative of J(Ω) = ∫_Ω φ dx is

dJ(Ω; V) = ∫_Ω φ′(Ω; V) dx + ∫_{∂Ω} φV dΓ,    (2.13)

where φ′(Ω; V) denotes the shape derivative of φ. Applying (2.13) to our data fidelity term with φ = χ_D(x)(I(x) − c_l)^2 yields

dJ(Ω_l; V) = (1/2) ∫_{Ω_l} φ′ dx + (1/2) ∫_{∂Ω_l} χ_D(x)(I(x) − c_l)^2 V dΓ.    (2.14)

Note that the shape derivative of φ is 0 since φ is constant with respect to V. Thus, (2.14) simplifies to (2.12).
Using Lemmas 2.2.1 and 2.2.2, we obtain the shape derivative of (2.6):

Theorem 2.2.3. ([13]) The shape derivative of (2.6) for Γ = ⋃_{l=1}^{n} Γ_l ∈ C^2 with respect to velocity field V is

dJ(Γ; V) = ∫_Γ GV dΓ,    (2.15)

where G = µκ + f(Γ) is the shape gradient, V = V · n, and the image force term f is defined by

f|_{Γ_l} = (c_l^out − c_l^in) χ_D(x) (I(x) − (c_l^in + c_l^out)/2),    (2.16)

where

c_l^in = (1/|Ω_l ∩ D|) ∫_{Ω_l∩D} I(x) dx

is the average image intensity in the region Ω_l ∩ D enclosed by Γ_l, and

c_l^out = (1/|Ω_k ∩ D|) ∫_{Ω_k∩D} I(x) dx

is the average over the outer region Ω_k enclosing both Ω_l and Γ_l. The shape derivative (2.15) can be explicitly written as

dJ(Γ; V) = µ ∫_Γ κV dΓ + ∑_{l=1}^{n} (c_l^out − c_l^in) ∫_{Γ_l} χ_D(x) (I(x) − (c_l^in + c_l^out)/2) V dΓ.    (2.17)

Proof. Using Lemmas 2.2.1 and 2.2.2, we can write the shape derivative of (2.6) as

dJ(Γ; V) = µ ∫_Γ κV dΓ + (1/2) ∑_{l=0}^{n} ∫_{∂Ω_l} χ_D(x)(I(x) − c_l)^2 V dΓ.    (2.18)

Based on our definition of ∂Ω_l, we can partition ∂Ω_l into the curve Γ_l and the sets {Γ_k : k ∈ IN(l)}, where IN(l) is the set of indices of the curves immediately inside Γ_l. The domain Ω_0 does not have an outer boundary, only the interior boundaries given by the curves {Γ_k : k ∈ IN(0)}. Thus, we rewrite (2.18) as follows:

dJ(Γ; V) = µ ∫_Γ κV dΓ + (1/2) ∑_{k∈IN(0)} ∫_{Γ_k} χ_D(x)(I(x) − c_0)^2 V · (−n_k) dΓ
    + (1/2) ∑_{l=1}^{n} ( ∫_{Γ_l} χ_D(x)(I(x) − c_l)^2 V · n_l dΓ + ∑_{k∈IN(l)} ∫_{Γ_k} χ_D(x)(I(x) − c_l)^2 V · (−n_k) dΓ ).    (2.19)

Note that each curve Γ_l appears exactly twice in (2.19), because Γ_l is the outer boundary of the domain Ω_l and an inner boundary of the enclosing domain Ω_m (which means l ∈ IN(m)). Thus, collecting the two integrals over each Γ_l and using the identity (1/2)[(I − c_l)^2 − (I − c_m)^2] = (c_m − c_l)(I − (c_l + c_m)/2), we can reorder (2.19) as

dJ(Γ; V) = µ ∫_Γ κV dΓ + ∑_{l=1}^{n} (c_l^out − c_l^in) ∫_{Γ_l} χ_D(x) (I(x) − (c_l^in + c_l^out)/2) V dΓ,

where c_l^in = c_l is the constant value for the domain Ω_l with outer boundary Γ_l, and c_l^out = c_m is the constant value for the enclosing domain Ω_m, for which Γ_l is an inner boundary. Hence, we have obtained (2.17).
2.3 Gradient Descent Algorithm

Now that we have Theorem 2.2.3, we can obtain a minimum of the energy (2.6). To accomplish this, we set V = −(µκ + f(Γ))n, following the shape gradient in (2.15) and (2.16). Using this choice in (2.15) yields

dJ(Γ; V) = −∫_Γ (µκ + f)^2 dΓ ≤ 0.

Thus, V is a gradient descent velocity, so we can obtain a minimizer by adopting the following curve evolution scheme: begin with an initial curve Γ^0 and update the curve iteratively by

V^k = −µ𝛋^k − f(Γ^k)n^k,    X^{k+1} = X^k + τ_k V^k, ∀X^k ∈ Γ^k.    (2.20)

Here we use Γ^k at the current step to compute n, 𝛋, and f(Γ). Then we compute V and update Γ^k with respect to V to obtain Γ^{k+1}.

There is one issue with the scheme (2.20). Since the scheme is explicit and contains the curvature term, (2.20) has stability issues, which result in oscillations of the curve during the iterations. The instability can be prevented by taking a small step size τ. However, a small step size in gradient descent methods increases the number of iterations needed to reach the threshold, which in turn increases the computation time. To circumvent these issues, Dogan in [13] proposed the following semi-implicit update scheme:

V^{k+1} + µ𝛋^{k+1} = −f(Γ^k)n^k,    X^{k+1} = X^k + τ_k V^{k+1}, ∀X^k ∈ Γ^k,    (2.21)

where we now evaluate 𝛋 at the next iteration rather than the current one. Recalling that 𝛋 = −∆_Γ X, we get

𝛋^{k+1} = −∆_Γ X^{k+1} = −τ_k ∆_Γ V^{k+1} − ∆_Γ X^k.

Thus, we obtain the following form for (2.21):

(Id − µτ_k ∆_Γ)V^{k+1} = µ∆_Γ X^k − f(Γ^k)n^k,    X^{k+1} = X^k + τ_k V^{k+1}, ∀X^k ∈ Γ^k.    (2.22)

Thus, from (2.22) we can compute the velocity at each iteration by

V^{k+1} = (Id − µτ_k ∆_Γ)^{−1}(µ∆_Γ X^k − f(Γ^k)n^k).    (2.23)
We now make a comment about the step size τ. In implementations of the Chan-Vese algorithms [6, 8, 10, 29], users fix τ and take a fixed number of iterations. However, this is an unreliable approach: in practice the number of iterations might not be enough to reach the minimum, or it might be too large, wasting iterations after the curve has already reached the minimum. The solution for the step size, as proposed in [14, 16], is to select a step size τ_k at each iteration that ensures energy decrease.
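A standard way to implement such a step-size test is backtracking: shrink τ until the proposed update actually decreases the energy. The sketch below is a generic illustration of this idea (not the thesis code), demonstrated on a toy one-dimensional energy:

```python
def backtracking_step(X, V, energy, tau0=1.0, shrink=0.5, max_tries=30):
    """Shrink the step size tau until energy(X + tau*V) < energy(X)."""
    J0, tau = energy(X), tau0
    for _ in range(max_tries):
        if energy(X + tau * V) < J0:
            return X + tau * V, tau       # accepted step
        tau *= shrink                     # energy did not decrease: smaller step
    return X, 0.0                         # no decrease found; keep the curve

# Toy check on a 1-D quadratic energy with descent direction V = -J'(X).
energy = lambda X: (X - 3.0) ** 2
X, V = 0.0, 6.0                           # V = -2*(X - 3) = 6 at X = 0
X_new, tau = backtracking_step(X, V, energy)
assert energy(X_new) < energy(X)
```

In the curve evolution setting, X would be the array of curve points and V the velocity from (2.23); the same accept/shrink loop applies unchanged.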
Hence, we now summarize everything in this section with Algorithm 1, which was originally proposed in [13]:
Algorithm 1: Gradient Descent Algorithm
Set initial curve Γ^0 and k = 0;
repeat
    mark the domains Ω_l on the pixels;
    sum the pixel intensities in each Ω_l, label the sum SUM_l, and compute c_l = SUM_l / |Ω_l ∩ D|;
    compute J_k = J(Γ^k);
    solve the linear system (2.22) for V^{k+1};
    test the step size τ_k, modifying it to ensure energy decrease;
    update X^{k+1} = X^k + τ_k V^{k+1}, ∀X^k ∈ Γ^k;
until ‖G^{k+1}‖_{L^2} < tol(Γ^k, c_l^k);
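The solve-and-update steps of Algorithm 1 can be sketched on a polygonal curve. In this illustrative sketch (not the thesis implementation), ∆_Γ is approximated by the periodic second-difference matrix of the point list and the image force f is frozen at zero, so the update reduces to semi-implicit curve-shortening flow; all sizes and parameters are invented.

```python
import numpy as np

def semi_implicit_step(X, f_vals, normals, mu=1.0, tau=0.1):
    """One update: solve (Id - mu*tau*Lap) V = mu*Lap@X - f*n, then X += tau*V.
    X is an (m, 2) array of points on a closed curve."""
    m = len(X)
    # Periodic second-difference approximation of the tangential Laplacian.
    lap = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    lap[0, -1] = lap[-1, 0] = 1
    A = np.eye(m) - mu * tau * lap
    rhs = mu * lap @ X - f_vals[:, None] * normals
    V = np.linalg.solve(A, rhs)           # semi-implicit velocity solve
    return X + tau * V

# Closed curve: a circle of radius 2. With f = 0 this is curve-shortening
# flow, so one step should shrink the radius slightly.
m = 200
theta = np.linspace(0, 2 * np.pi, m, endpoint=False)
X = 2.0 * np.column_stack([np.cos(theta), np.sin(theta)])
normals = X / np.linalg.norm(X, axis=1, keepdims=True)   # outward unit normals
X1 = semi_implicit_step(X, np.zeros(m), normals)
assert np.linalg.norm(X1, axis=1).mean() < 2.0            # curve shrank
```

The semi-implicit solve keeps the curvature term stable even for step sizes that would make the explicit scheme (2.20) oscillate.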
The tolerance level mentioned in Algorithm 1 is the cutoff tolerance derived in [13]:

|I(x) − (c^in + c^out)/2| < (ε/2)|c^in − c^out|,    (2.24)

where the term on the right-hand side of (2.24) measures the contrast between neighboring regions. Since

G|_{Γ_l} = (c_l^in − c_l^out)(I(x) − (c_l^in + c_l^out)/2),

multiplying (2.24) by (c_l^in − c_l^out) and integrating over Γ yields the stopping criterion in Algorithm 1:

‖G‖_{L^2} = (∫_Γ |G|^2 dΓ)^{1/2} < tol(Γ^k, c_l^k) = (ε/2) min_l (c_l^in − c_l^out)^2 |Γ|^{1/2}.    (2.25)
Chapter 3
Forward Problem in Electrical Impedance Tomography
3.1 Complete Electrode Model
Electrical Impedance Tomography (EIT) is an imaging technique where weak currents are
injected into the object of interest via electrodes attached to the boundary of the object. With the
measurements of the voltage through the object, an estimate for the internal resistivity distribution
of the object can be obtained. In this section, we discuss the complete electrode model for EIT.
More detail on the derivation of EIT can be found in [28].
We can model the electromagnetic field inside the object using Maxwell's equations:

∇ · D = ρ_c,    ∇ · B = 0,    ∇ × E = −∂B/∂t,    ∇ × H = J + ∂D/∂t.    (3.1)

In (3.1), ∇· represents the divergence operator, ∇× represents the curl operator, E represents the electric field, H represents the magnetic field, B represents the magnetic induction, D represents the electric displacement, ρ_c represents the charge density, and J represents the current density, which is the sum of the source current J_s and the ohmic current J_o. In a linear isotropic medium,

D = εE,    B = µH,    J = σE,    (3.2)

where ε represents the permittivity, µ represents the permeability of the medium, and σ represents the conductivity of the medium. Note that σ is the inverse of the resistivity ρ.
In most applications of EIT [5, 28], the quasi-static approximation is used, which means that time dependence is neglected. Under this assumption, the partial derivatives with respect to time in the third and fourth equations of (3.1) are zero. Also, the current source J_s is assumed to be zero inside the object. Because ∇ × E = 0, there exists an electrical potential u such that

E = −∇u.    (3.3)

Note that under the quasi-static approximation, from (3.1) we obtain ∇ · J = 0, since

∇ · J = ∇ · (∇ × H) = 0.

Substituting the third equation of (3.2) for J and using (3.3), we obtain

∇ · (σ∇u) = 0.    (3.4)

Equation (3.4) is the model of the interior of the object being analyzed.
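For constant σ, (3.4) reduces to Laplace's equation ∆u = 0. A quick finite-difference check (purely illustrative, with my own grid and test function) verifies that a harmonic function such as u = x^2 − y^2 satisfies the interior model:

```python
import numpy as np

# Sample u(x, y) = x^2 - y^2 on a uniform grid over [0, 1]^2.
n = 50
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 - Y**2

# Five-point stencil for div(sigma grad u) with sigma = 1 (the Laplacian);
# the stencil is exact for quadratics, so the result is zero up to roundoff.
lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4 * u[1:-1, 1:-1]) / h**2
assert np.max(np.abs(lap)) < 1e-8   # u is harmonic: div(sigma grad u) = 0
```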
Now we discuss the boundary conditions of EIT. One thing to note is that the current source is not zero on the boundary of the object [28]. Thus, (3.4) becomes

∇ · (σE) = −∇ · J_s.    (3.5)

Let a sensor be placed on the boundary of the object so that the top and bottom of the domain of the sensor are parallel with the boundary. If we assume that J_s is a continuous function, then integrating both sides of (3.5) over the volume of the sensor and applying the Divergence Theorem yields

∫_S σE · n dS = −∫_S J_s · n dS,    (3.6)

where S represents the boundary of the sensor. Let τ represent the volume of the sensor. As τ → 0, (3.6) takes the form

−σE · n|_inside = −J_s · n|_outside.

Combining this with (3.3) gives us the next boundary condition:

σ ∂u/∂n = −J_s · n.    (3.7)
For convenience, let j denote the right-hand side of (3.7). In most of the literature, (3.4) combined with the Neumann boundary condition (3.7) is called the continuum model.

The current density j under the electrodes is not known during testing, which makes the continuum model inadequate for experiments. But the total current

I_l = ∫_{e_l} j dS,    (3.8)

where e_l is the surface of the lth electrode, is known. Thus, we can rewrite (3.7) as follows:

∫_{e_l} σ ∂u/∂n dS = I_l, for l = 1, 2, …, L.    (3.9)

Note that on the boundary of the object between electrodes we have j = 0, since we are not passing a current there. This imposes another boundary condition:

σ ∂u/∂n = 0, x ∈ ∂Ω \ ⋃_{l=1}^{L} e_l.    (3.10)

The shunting effect on the electrodes can be accounted for by the following condition:

u = U_l, x ∈ e_l, l = 1, 2, …, L,    (3.11)

where U_l represents the voltage of the lth electrode. In most of the literature, (3.4) combined with equations (3.9)-(3.11) is referred to as the shunt model.
The shunt model is the basis of the complete electrode model (CEM), which is given below:

∇ · (σ∇u) = 0, x ∈ Ω,
u + z_l σ ∂u/∂n = U_l, x ∈ e_l, l = 1, 2, …, L,
∫_{e_l} σ ∂u/∂n dS = I_l, l = 1, 2, …, L,
σ ∂u/∂n = 0, x ∈ ∂Ω \ ⋃_{l=1}^{L} e_l,    (3.12)

where the second equation comes from the combination of the shunting effect of the electrodes and the contact impedance between the electrode and the object; here z_l is the contact impedance between the lth electrode and the object. To make sure the charge conservation law is fulfilled, we impose

∑_{l=1}^{L} I_l = 0.    (3.13)

Even with the complete electrode model, the solution of (3.12) is not unique. To obtain uniqueness of the solution, we impose the following condition:

∑_{l=1}^{L} U_l = 0.    (3.14)

The proof of existence and uniqueness of the CEM for EIT is given in [26].
3.2 FEM of EIT

3.2.1 Weak Formulation of EIT

To obtain a FEM solution for the forward problem in EIT, we must first derive the weak formulation of EIT. In this thesis, we only give an outline of the derivation; more detail is provided in [28].

For EIT, let the test functions be v ∈ H^1(Ω) and V ∈ R̃^L, where R̃^L = {V ∈ R^L : ∑_{l=1}^{L} V_l = 0}. The first test function is for the potential u in the complete electrode model for EIT, while the second is for the voltages U_l on the electrodes. For convenience, define H = H^1(Ω) × R̃^L.
Multiplying the first equation of (3.12) by v and integrating over Ω yields

∫_Ω v ∇ · (σ∇u) dx = 0.    (3.15)

Using Green's formula on the left-hand side of (3.15) gives us the following:

∫_Ω v ∇ · (σ∇u) dx = ∫_{∂Ω} σ (∂u/∂n) v dS − ∫_Ω σ∇u · ∇v dx
    = ∫_{∂Ω\⋃_{l=1}^{L} e_l} σ (∂u/∂n) v dS + ∑_{l=1}^{L} ∫_{e_l} σ (∂u/∂n) v dS − ∫_Ω σ∇v · ∇u dx
    = ∑_{l=1}^{L} ∫_{e_l} σ (∂u/∂n) v dS − ∫_Ω σ∇v · ∇u dx    (by the last boundary condition of (3.12)).

Using the first boundary condition of (3.12), we can easily see that

σ ∂u/∂n = −(u − U_l)/z_l on e_l.

Substituting this into our current work, we obtain

∑_{l=1}^{L} (1/z_l) ∫_{e_l} (u − U_l) v dS + ∫_Ω σ∇v · ∇u dx = 0.    (3.16)
Now we repeat this process with the test function V_l. Multiplying the first boundary condition of (3.12) by V_l and integrating over each electrode yields

∫_{e_l} u V_l dS + ∫_{e_l} z_l σ (∂u/∂n) V_l dS = ∫_{e_l} U_l V_l dS.    (3.17)

Rearranging variables, we can transform (3.17) into the following:

(1/z_l) ∫_{e_l} (u − U_l) V_l dS + V_l I_l = 0.    (3.18)

Note that the second term of (3.18) comes from the second boundary condition of (3.12). Adding this for all electrodes yields

∑_{l=1}^{L} (1/z_l) ∫_{e_l} (u − U_l) V_l dS + ∑_{l=1}^{L} V_l I_l = 0.    (3.19)
Combining (3.16) and (3.19) gives us the bilinear form

b : H × H → R,
b((u, U), (v, V)) = ∑_{l=1}^{L} (1/z_l) ∫_{e_l} (u − U_l)(v − V_l) dS + ∫_Ω σ∇v · ∇u dx.    (3.20)

Thus we have the following weak formulation of (3.12): find (u, U) ∈ H such that

b((u, U), (v, V)) = f(v, V) for all (v, V) ∈ H,    (3.21)

where

f(v, V) = ∑_{l=1}^{L} I_l V_l

is a functional.
3.2.2 FEM Discretization of CEM

Now that we have the weak formulation of the CEM, we can form the FEM discretization of the CEM. Let T = {T_1, …, T_{|T|}} be a triangulation of Ω with N mesh points, and let H^N be the corresponding finite dimensional subspace of H^1(Ω). From this, we approximate the potential distribution by the sum

u(x) ≈ u^N(x) = ∑_{i=1}^{N} α_i φ_i(x), α_i ∈ R,    (3.22)

where the φ_i(x) are the basis functions of H^N defined by

φ_i(x_j) = 1 if i = j, and 0 if i ≠ j.

Similarly, the approximation of the voltage is given by

U^N = ∑_{k=1}^{L−1} β_k ν_k,    (3.23)

where the ν_k are basis vectors of R̃^L chosen so that ν_1 = (1, −1, 0, …, 0)^T, ν_2 = (1, 0, −1, 0, …, 0)^T, and so on. We make this choice for the ν_k so that (3.14) is satisfied. The goal of the FEM is to determine the coefficients α_i and β_k. We now choose v = φ_i and V = ν_k from the set of test functions {(φ_1, 0), …, (φ_N, 0), (0, ν_1), …, (0, ν_{L−1})}. Using this set in (3.21) yields a system of linear equations in matrix form:

Aθ = f,    (3.24)

where θ = (α, β)^T ∈ R^{N+L−1} and the matrix A is of the block form

A = [ B   C
      C^T D ],    (3.25)

where

B(i, j) = b((φ_i, 0), (φ_j, 0)) = ∫_Ω σ∇φ_i · ∇φ_j dx + ∑_{l=1}^{L} (1/z_l) ∫_{e_l} φ_i φ_j dS,  i, j = 1, …, N,    (3.26)

C(i, j) = b((φ_i, 0), (0, ν_j)) = −∑_{l=1}^{L} (1/z_l) ∫_{e_l} φ_i (ν_j)_l dS,  i = 1, …, N; j = 1, …, L−1,    (3.27)

D(i, j) = b((0, ν_i), (0, ν_j)) = ∑_{l=1}^{L} (1/z_l) ∫_{e_l} (ν_i)_l (ν_j)_l dS
        = |e_1|/z_1 for i ≠ j, and |e_1|/z_1 + |e_{j+1}|/z_{j+1} for i = j,  i, j = 1, …, L−1,    (3.28)

where |e_l| is the measure of electrode l. The vector f in (3.24) is defined as follows:

f(i) = 0 for i = 1, …, N, and f(i) = I^T ν_{i−N} for i = N+1, …, N+L−1,    (3.29)

where I is a fixed current vector and (z_k)_{k=1}^{L} are positive contact impedances.
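The voltage-block entries (3.28) and load entries (3.29) are simple enough to assemble directly. The sketch below builds the basis vectors ν_k, the matrix D, and the tail of f for a hypothetical setup with L = 4 electrodes; the electrode measures and contact impedances are made-up values, not from the thesis.

```python
import numpy as np

L = 4
e = np.full(L, 0.5)                   # electrode measures |e_l| (assumed equal)
z = np.array([1.0, 2.0, 2.0, 4.0])    # contact impedances z_l > 0 (assumed)

# Basis nu_1 = (1,-1,0,...), nu_2 = (1,0,-1,...), ...; each row sums to zero,
# which enforces the grounding condition (3.14).
nu = np.zeros((L - 1, L))
nu[:, 0] = 1.0
for k in range(L - 1):
    nu[k, k + 1] = -1.0
assert np.allclose(nu.sum(axis=1), 0.0)

# D(i, j) = sum_l (1/z_l) |e_l| (nu_i)_l (nu_j)_l, as in (3.28).
D = nu @ np.diag(e / z) @ nu.T
assert np.isclose(D[0, 1], e[0] / z[0])              # off-diagonal: |e_1|/z_1
assert np.isclose(D[1, 1], e[0] / z[0] + e[2] / z[2])  # diagonal adds |e_{i+1}|/z_{i+1}

# Load entries f(N + i) = I^T nu_i, per (3.29), for a zero-sum current pattern.
I = np.array([1.0, -1.0, 0.0, 0.0])
f_tail = nu @ I
```

Assembling B and C requires mesh integrals, but the pattern of building them from the bilinear form (3.20) is the same.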
Define the class of admissible conductivities as

Q = {σ ∈ L^∞(Ω) : σ(x) ≥ σ_0 > 0 for x ∈ Ω, σ_0 ∈ R},    (3.30)

and let

F : Q → R^L, σ ↦ U,    (3.31)

be the forward operator which maps the conductivity σ to the solution of the forward problem measured at the boundary electrodes. The forward problem uses the known conductivity distribution σ to find the boundary data associated with the given source. The inverse problem is to use all pairs of current vectors and resulting voltage vectors I, U ∈ R^L to estimate the conductivity σ. This is done by minimizing the following cost functional:

J(σ) = (1/2) ‖F(σ) − U_δ‖_2^2,    (3.32)

where U_δ approximates the exact data U with accuracy δ:

‖U − U_δ‖ < δ.

The issue with the inverse problem of EIT is that it is ill-posed and nonlinear with respect to σ. Because of this, we need to use regularization on our inverse problem. Thus, we reformulate (3.32) as follows:

J_λ(σ) = (1/2) ‖F(σ) − U_δ‖_2^2 + λR(σ − σ*),    (3.33)

where λ is the regularization parameter, R(·) is the regularization operator, and σ* is the known background. A common choice for the regularization operator is the l_p regularization, defined below:

R_{l_p}(σ − σ*) = ∑_k |σ_k − σ*_k|^p,    (3.34)

where 0 < p ≤ 2 is a constant and the coefficients σ_k and σ*_k come from an expansion with respect to a parameter basis for Q.
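The behavior of (3.34) for different p is easy to probe numerically. In this illustrative sketch (the coefficient vectors are arbitrary choices of mine), two vectors with the same l_1 mass are compared:

```python
import numpy as np

def R_lp(sigma, sigma_star, p):
    """l_p regularization R(sigma - sigma*) = sum_k |sigma_k - sigma*_k|^p."""
    return np.sum(np.abs(sigma - sigma_star) ** p)

sigma_star = np.zeros(4)
sparse = np.array([2.0, 0.0, 0.0, 0.0])   # one large coefficient
spread = np.array([0.5, 0.5, 0.5, 0.5])   # the same l_1 mass, spread out

# p = 1 cannot distinguish the two (both cost 2); p = 2 penalizes the
# concentrated vector more, while p < 1 penalizes the spread-out vector
# more -- which is why 0 < p <= 1 promotes sparsity.
assert R_lp(sparse, sigma_star, 1) == R_lp(spread, sigma_star, 1) == 2.0
assert R_lp(sparse, sigma_star, 2) > R_lp(spread, sigma_star, 2)
assert R_lp(sparse, sigma_star, 0.5) < R_lp(spread, sigma_star, 0.5)
```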
One benefit of the l_p regularization is that it enforces sparsity for 0 < p ≤ 1 and smoothness for p = 2. Another regularization operator used is the total variation regularization, defined as follows:

R_TV(σ) = ∫_Ω |∇σ| dx.    (3.35)

In the next chapter, we use (3.35) in (3.33) to solve the inverse problem. This transforms (3.33) into the following:

J_λ(σ) = (1/2) ‖F(σ) − U_δ‖_2^2 + (λ/2) ‖W(σ − σ*)‖_2^2,    (3.36)

where W is the weight matrix. In this thesis, we use the iteratively regularized Gauss-Newton method (IRGN) to find the minimizer of (3.36).
Chapter 4
Iteratively Regularized Gauss-Newton Method
IRGN is an algorithm used to solve the inverse problem of EIT and other ill-posed problems. In this chapter, we present the formulation of IRGN, show the convergence of IRGN, and rewrite it within the context of EIT. We present IRGN in the general context of Hilbert spaces, as done in [1], [24], and [25].
4.1 Formulation
Let F : H → W be a nonlinear operator mapping one Hilbert space H into another Hilbert space W, and let F be Fréchet differentiable in H. Suppose we have the following conditions:

‖F′(q1)‖ ≤ M1 for any q1 ∈ H,
‖F′(q1) − F′(q2)‖ ≤ M2 ‖q1 − q2‖ for any q1, q2 ∈ H.    (4.1)
Denote G(M1,M2) to be the class of operators that satisfies (4.1).
Consider minimizing the functional

J(σ) := ‖F(σ) − gδ‖²W,    (4.2)
where gδ approximates the exact data g with accuracy δ:
‖gδ − g‖ ≤ δ.
Let {λk}∞k=0 be a sequence of regularization parameters satisfying the following conditions:

λk ≥ λk+1 > 0,
λk/λk+1 = d < ∞,
limk→∞ λk = 0.    (4.3)
Let σ̂ denote the unique global minimum of (4.2), and assume that it satisfies the invertibility and source conditions on F from [25]. Denote by σk the current point of the iterative process. From this, we construct a Tikhonov functional of the form

Jλk(σ) = (1/2) ‖F(σ) − gδ‖²W + (λk/2) ‖L(σ − σ̄)‖²X,    (4.4)

where L is a linear operator from the Hilbert space H to a Hilbert space X, and σ̄ ∈ H is chosen as a vector of constraining values for σ. Now, replacing (4.2) with its linearized approximation
J(σk, σ) = (1/2) ‖F(σk) − gδ + F′(σk)(σ − σk)‖²W,
provides the following functional:
Jλk(σk, σ) = (1/2) ‖F(σk) − gδ + F′(σk)(σ − σk)‖²W + (λk/2) ‖L(σ − σ̄)‖²X,  σ ∈ H.    (4.5)
Thus, assuming the invertibility conditions hold, its unique global minimum σ̂k is given by

σ̂k = σk − [F′∗(σk)F′(σk) + λkL∗L]⁻¹ [F′∗(σk)(F(σk) − gδ) + λkL∗L(σk − σ̄)].    (4.6)
To generalize the algorithm, as done in [22], we use a line search procedure via the introduction of
a variable step size pk such that
0 < pk ≤ 1. (4.7)
Thus, the modified IRGN iteration is

σk+1 = σk − pk [F′∗(σk)F′(σk) + λkL∗L]⁻¹ [F′∗(σk)(F(σk) − gδ) + λkL∗L(σk − σ̄)].    (4.8)
Due to the inexact nature of gδ, the stopping rule from [3] is adopted: we terminate the iterations of (4.8) at the first index K = K(δ) at which the squared residual ‖F(σK) − gδ‖² drops to ρδ or below, with ρ > 1:

‖F(σK) − gδ‖² ≤ ρδ < ‖F(σk) − gδ‖²,  0 ≤ k < K,  ρ > 1.    (4.9)
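The update (4.8) together with the stopping rule (4.9) can be sketched on a toy scalar problem; the forward map F below is a hypothetical stand-in, not the EIT operator, and all parameter values are illustrative:

```python
# IRGN update (4.8) with the discrepancy-principle stopping rule (4.9),
# scalar case with L = 1 and step size p.
def irgn(F, dF, g_delta, delta, s0, lam0=1.0, d=2.0, rho=1.5, p=1.0,
         s_bar=0.0, max_iter=50):
    s, lam = s0, lam0
    for _ in range(max_iter):
        if (F(s) - g_delta) ** 2 <= rho * delta:   # stopping rule (4.9)
            break
        Jp = dF(s)
        # scalar version of the IRGN update (4.8)
        s = s - p * (Jp * (F(s) - g_delta) + lam * (s - s_bar)) / (Jp * Jp + lam)
        lam /= d                                    # decreasing lambda_k, cf. (4.3)
    return s

F = lambda s: s ** 3 + s          # toy nonlinear forward map
dF = lambda s: 3 * s ** 2 + 1
g_exact, delta = 2.0, 0.01        # exact data for the solution s = 1
s_rec = irgn(F, dF, g_exact + delta, delta, s0=0.5)
```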
It has been shown in [25] that (4.8) converges via the following theorem:
Theorem 4.1.1. ([25]) Assume that
1. F is in the class of G(M1,M2).
2. The regularization sequence {λk} and the step sizes {pk} are chosen according to (4.3) and (4.7), respectively.
3. The following source condition, proposed in [18], is satisfied:

L∗L(σ̂ − σ̄) ∈ F′∗(σ̂)S,  S = {v ∈ H : ‖v‖ ≤ ε}.    (4.10)
4. The linear operator L∗L is surjective and there is a constant m > 0 such that
(L∗Lh, h) ≥ m ‖h‖².    (4.11)
5. The constants defining F and the iteration are constrained by

a = M2ε/m + (d − 1)/(dα) + √[(ε/m)(M2/2 + M1²/(√ρ − 1)²)] ≤ 1,

‖σ0 − σ̂‖/√λ0 ≤ ε/(√m (1 − a)) = l.    (4.12)
Then
1. For the iterations (4.8),

‖σk − σ̂‖/√λk ≤ l,  k = 0, 1, . . . , K,    (4.13)
where K is from (4.9).
2. The sequence K(δ) satisfies the following:
limδ→0 ‖σK(δ) − z‖ = 0,    (4.14)

where z = arg minσ∈H ‖F(σ) − g‖W.
4.2 IRGN for EIT
Now that we have the general framework for IRGN, we specialize it to EIT. Let σ̂ denote the unique global minimum of (3.36). Assume, as before, that σ̂ satisfies the invertibility conditions of (4.6) and the source conditions for F described in [25]. From this, we can transform (4.8) into the following:
σk+1 = σk − sk (F′(σk)ᵀF′(σk) + λkW2)⁻¹ (F′(σk)ᵀ(F(σk) − Uδ) + λkW2(σk − σ∗)).    (4.15)
Here, Uδ takes the place of gδ from the previous section, interpreted as an approximation of U. Also in this context, F′(σk) is the Jacobian matrix at the kth iteration, and W2 = WᵀW takes the place of L∗L in (4.8).
We now transform the stopping rule (4.9) with respect to (4.15):
‖F(σK) − Uδ‖² ≤ ρδ < ‖F(σk) − Uδ‖²,  0 ≤ k < K,  ρ > 1.    (4.16)
We use the line search parameter to minimize the scalar objective function

Φ(s) = J(σk + s pk),

where pk is the search direction, which solves

(F′(σk)ᵀF′(σk) + λkW2) pk = −(F′(σk)ᵀ(F(σk) − Uδ) + λkW2(σk − σ∗)).    (4.17)
The step length s is chosen by a backtracking strategy, which continues until either the strong Wolfe conditions

J(σk + s pk) ≤ J(σk) + c1 s ∇J(σk)ᵀpk,
|∇J(σk + s pk)ᵀpk| ≤ c2 |∇J(σk)ᵀpk|    (4.18)

from [22], with appropriate constants c1 and c2, are satisfied, or the maximum number of backtracking steps has been reached. In [22], the recommended values are c1 = 0.0001 and c2 = 0.9.
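A minimal sketch of such a backtracking step-length selection, enforcing the sufficient-decrease (first) condition of (4.18) with c1 = 0.0001; the objective, gradient, and starting point are a toy quadratic, not the thesis code:

```python
def backtrack(J, grad_J, x, p, s=1.0, c1=1e-4, shrink=0.5, max_steps=30):
    # sufficient-decrease condition: J(x + s p) <= J(x) + c1 s grad_J(x)^T p
    slope = sum(g * d for g, d in zip(grad_J(x), p))
    for _ in range(max_steps):
        x_new = [xi + s * di for xi, di in zip(x, p)]
        if J(x_new) <= J(x) + c1 * s * slope:   # first condition in (4.18)
            return s
        s *= shrink                             # backtrack: shrink the step
    return s

J = lambda x: sum(xi * xi for xi in x)          # toy objective
grad_J = lambda x: [2 * xi for xi in x]
x = [1.0, -2.0]
p = [-g for g in grad_J(x)]                     # steepest-descent direction
step = backtrack(J, grad_J, x, p)
```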
Since we compute the Jacobian for EIT, we now discuss it. Note that, in general, the operator F is Fréchet differentiable and maps σ ∈ Int(Q) to the solution (u, U) ∈ H of the forward problem (Q was defined in (3.30)). From [1, 25], if η ∈ L∞(Ω) is such that σ + η ∈ Q, then the derivative (w, W) ∈ H satisfies the variational problem

b((w, W), (v, V)) = −∫Ω η ∇u · ∇v dx    (4.19)
for all (v, V ) ∈ H and u is the solution to the forward problem of EIT with values on the boundary
electrodes U = F(σ). More detail on the derivation of the Jacobian can be found in [20] and [26].
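As a sketch of the perturbation idea behind computing the Jacobian numerically (in the spirit of perturbing each parameter by dsig, as the appendix routine pem_jac does), a generic forward-difference approximation looks as follows; the forward map here is a hypothetical stand-in:

```python
def fd_jacobian(F, sigma, dsig=1e-6):
    # forward-difference Jacobian: column k from perturbing the k-th parameter
    f0 = F(sigma)
    J = [[0.0] * len(sigma) for _ in range(len(f0))]
    for k in range(len(sigma)):
        pert = list(sigma)
        pert[k] += dsig
        fk = F(pert)
        for i in range(len(f0)):
            J[i][k] = (fk[i] - f0[i]) / dsig
    return J

F = lambda s: [s[0] * s[1], s[0] + 2.0 * s[1]]   # toy forward map
J = fd_jacobian(F, [1.0, 3.0])
# the analytic Jacobian at (1, 3) is [[3, 1], [1, 2]]
```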
Chapter 5
Level Sets for EIT
The purpose of this chapter is to combine what we discussed in Chapters 2 and 3 and show how we
can use level sets to solve the forward and inverse problems for EIT. This chapter will help lay the
foundations for the major results of this thesis. More details can be found in [7], [9], and [17].
5.1 Decompose into Level-Sets
Suppose that our domain Ω contains two different materials with conductivities σi, i = 1, 2. Partition
Ω into Ωi, i = 1, 2. Chung in [9] decomposed the electrical conductivity σ in EIT as follows:
σ = σ1 H(φ(x)) + σ2 (1 − H(φ(x))) in Ω,    (5.1)

where H is the Heaviside function defined by

H(x) = { 1, x ≥ 0;  0, x < 0 }    (5.2)
and the level set function φ(x) satisfies the following:
Ω1 = {x ∈ Ω : φ(x) ≥ 0},
Ω2 = {x ∈ Ω : φ(x) < 0},
Γ = {x ∈ Ω : φ(x) = 0}.    (5.3)
26
The set Γ defined in (5.3) serves as an interface between the regions Ω1 and Ω2. Note that there are
many level set functions that will satisfy the requirements of (5.1) and (5.3). Chung and Chan in
[9] employed a distance function defined as follows:
φ(x) = { d(x, Γ),  x ∈ Ω1;  −d(x, Γ),  x ∈ Ω2 }.    (5.4)
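A small sketch of the decomposition (5.1)-(5.4) for a circular inclusion, using a signed-distance-type level set φ(x) = r − |x|. The conductivity values mirror those used in the appendix scripts (0.01 inside, 0.007 outside); the radius and sample points are illustrative:

```python
import math

def heaviside(x):
    # Heaviside function H from (5.2)
    return 1.0 if x >= 0.0 else 0.0

def conductivity(x, y, sigma1=0.01, sigma2=0.007, r=0.5):
    # level set phi = r - |x|: positive inside the disk (Omega_1),
    # negative outside (Omega_2), zero on the interface Gamma
    phi = r - math.hypot(x, y)
    # decomposition (5.1): sigma = sigma1*H(phi) + sigma2*(1 - H(phi))
    return sigma1 * heaviside(phi) + sigma2 * (1.0 - heaviside(phi))

inside = conductivity(0.0, 0.0)    # center of the inclusion -> sigma1
outside = conductivity(0.9, 0.9)   # background point -> sigma2
```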
While it is straightforward to verify that the function in (5.4) solves the eikonal equation

|∇φ| = 1 in Ω,    (5.5)

it is not the only solution of (5.5). To single out the signed distance function, let φ be any level set function that is positive inside Ω1 and negative outside Ω1. Then, as formulated by Osher and Fedkiw in [23], the signed distance function is obtained as the steady-state solution of the following reinitialization equations:

∂d/∂t + sgn(d)(|∇d| − 1) = 0,
d(x, 0) = φ.    (5.6)
Let N denote the number of measurements made on our domain. Then for 1 ≤ i ≤ N ,
let fi(x) be a function that represents a known applied current density on ∂Ω and mi be the
corresponding measurement of the electrical potential on ∂Ω. As noted in previous chapters, we
next find σ1, σ2, but now we will find the location of Γ as well. This means we minimize the
following functional:
F(φ, σ1, σ2) = (1/2) Σ_{i=1}^{N} ∫∂Ω |ui(s, σ) − mi(s)|² ds + β ∫Ω |∇σ| dx.    (5.7)
To compute the gradient of F defined in (5.7), we need the gradient of F with respect to σ as well as the differentials of σ with respect to φ, σ1, and σ2. This is necessary because (5.7) is a functional of σ, but the partition of the conductivity σ earlier in this section forces us to also take differentials with respect to the individual partitions. Taking the gradient, we obtain
dF/dσ = − Σ_{l=1}^{N} ∇ul · ∇zl − β ∇·(∇σ/|∇σ|)    (5.8)
where zl in (5.8) is the solution to the equation
−∇·(σ∇zl) = 0 in Ω,
σ ∂zl/∂n = ul − ml on ∂Ω    (5.9)

with the constraint

∫∂Ω zl(x) ds = 0.
Chung and Chan in [9] showed that a minimizer of (5.7) satisfies the natural boundary condition

β |∇σ|⁻¹ ∂σ/∂n = 0 on ∂Ω.
Using the chain rule, we obtain the following system of equations:

dF/dφ = (dF/dσ)(σ1 − σ2) δ(φ),
dF/dσ1 = ∫Ω (dF/dσ) H(φ) dx,
dF/dσ2 = ∫Ω (dF/dσ)(1 − H(φ)) dx,    (5.10)

where δ is the Dirac delta function.
As before in Chapter 2, we will now use gradient descent to minimize our functional. In particular, the following iterative scheme is used:

φk+1 = φk − αk (dF/dφ)(φk, σ1k, σ2k),
σjk+1 = σjk − γjk (dF/dσj)(φk+1, σ1k, σ2k),  j = 1, 2,    (5.11)

where αk > 0 and γjk are step sizes defined in previous sections. During this process, the delta and Heaviside functions are replaced with the following approximations:
δε(ϕ) = ε / (π(ϕ² + ε²)),
Hε(ϕ) = (1/π) tan⁻¹(ϕ/ε) + 1/2,    (5.12)

with ε > 0 chosen to be on the order of the mesh size.
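These smoothed approximations can be sketched directly; ε = 0.01 below is an illustrative mesh-scale choice:

```python
import math

def delta_eps(phi, eps):
    # smoothed delta function from (5.12)
    return eps / (math.pi * (phi * phi + eps * eps))

def heaviside_eps(phi, eps):
    # smoothed Heaviside function from (5.12)
    return math.atan(phi / eps) / math.pi + 0.5

eps = 0.01                          # illustrative mesh-scale parameter
h_in = heaviside_eps(1.0, eps)      # far inside the region: close to 1
h_out = heaviside_eps(-1.0, eps)    # far outside the region: close to 0
```

Far from the interface (|ϕ| ≫ ε), Hε approaches the sharp Heaviside values, while δε concentrates its mass near ϕ = 0.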
As specified in [9], during the iteration process in (5.11), the new iterate for ϕ may evolve so that it is no longer a signed distance function. In the reconstruction algorithm only the sign of ϕ matters, but an iterate that drifts far from a signed distance function can cause future iterates to lose the correct sign. To circumvent this, we replace each new iterate by a signed distance function by solving (5.6).
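The reinitialization idea can be sketched in one dimension, where the steady state of (5.6) is exactly the signed distance to the zero-level set, so a direct distance computation stands in for the PDE evolution (grid and values are illustrative):

```python
def reinitialize(xs, phi):
    # locate zero crossings of phi by linear interpolation
    zeros = []
    for i in range(len(xs) - 1):
        if phi[i] == 0.0:
            zeros.append(xs[i])
        elif phi[i] * phi[i + 1] < 0.0:
            t = phi[i] / (phi[i] - phi[i + 1])
            zeros.append(xs[i] + t * (xs[i + 1] - xs[i]))
    # keep only the sign of phi and rebuild the signed distance
    sign = lambda v: 1.0 if v >= 0.0 else -1.0
    return [sign(p) * min(abs(x - z) for z in zeros) for x, p in zip(xs, phi)]

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
phi = [-3.0, -1.0, 1.0, 3.0, 5.0]   # distorted level set, zero at x = 0.375
d = reinitialize(xs, phi)           # [-0.375, -0.125, 0.125, 0.375, 0.625]
```

The output satisfies |∇d| = 1 on the grid while preserving the sign (and hence the interface) of the input.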
Chapter 6
Numerical Simulations
In this chapter we run numerical experiments applying image segmentation within IRGN to demonstrate its effectiveness. All code is written in Matlab and can be found in the appendix of this thesis. Below are reconstructions of the inverse problem for different geometries. The first column shows the true image of the electrical conductivity. The second column shows the reconstruction with only IRGN implemented. The third column shows the reconstruction with image segmentation within IRGN for each iteration. All reconstructions were done with 1% noise. The number of iterations differed depending on the geometry.
Figure 6.1: Inverse Problem EIT without and with image segmentation
Note that for the third geometry, both reconstructions of the inverse problem of EIT, with and without image segmentation, failed to capture that there are six smaller regions. For the image segmentation, this is because the regions are close to each other; due to this close proximity, the regions merged during the reconstruction to form the two regions shown above. Another observation from Figure 6.1 is that the reconstruction with image segmentation is sparser, meaning it requires less information to form the reconstruction when solving the inverse problem of EIT.
Below are the parameter errors for estimating σ with IRGN and with image segmentation for Geometries 1-4. The first table of each pair was calculated using the ℓ1-norm and the second using the ℓ2-norm.
ℓ1-norm:
Iteration   IRGN      Image
1           0.936     0.088679
2           0.92508   0.094952
3           0.9118    0.095781
4           0.89985   0.098029
5           0.88951   0.10062

ℓ2-norm:
Iteration   IRGN      Image
1           0.2498    0.012103
2           0.24604   0.012567
3           0.24214   0.011799
4           0.23922   0.011281
5           0.2371    0.010862
Table 6.1: Parameter Errors of IRGN vs image segmentation in Geometry 1
ℓ1-norm:
Iteration   IRGN      Image
1           0         0
2           0.19058   0.19042
3           0.19082   0.19046
4           0.1911    0.1905
5           0.30418   0.19056

ℓ2-norm:
Iteration   IRGN      Image
1           0         0
2           0.010668  0.01681
3           0.010646  0.010679
4           0.01621   0.010676
5           0.009647  0.010674
Table 6.2: Parameter Errors of IRGN vs image segmentation in Geometry 2
ℓ1-norm:
Iteration   IRGN      Image
1           0         0
2           0.12001   0.12001
3           0.13939   0.14169
4           0.12773   0.14428
5           0.1335    0.15003
6           0.12918   0.15201
7           0.13159   0.15351
8           0.12924   0.15494
9           0.13011   0.15764
10          0.12908   0.1574

ℓ2-norm:
Iteration   IRGN      Image
1           0         0
2           0.005286  0.005286
3           0.005313  0.005261
4           0.005138  0.005293
5           0.005208  0.005399
6           0.005135  0.005421
7           0.005156  0.005466
8           0.005105  0.005501
9           0.005105  0.005534
10          0.005073  0.005549
Table 6.3: Parameter Errors of IRGN vs image segmentation in Geometry 3
ℓ1-norm:
Iteration   IRGN      Image
1           0         0
2           0.25227   0.25227
3           0.18636   0.16904
4           0.17752   0.19094
5           0.16358   0.18552

ℓ2-norm:
Iteration   IRGN      Image
1           0         0
2           0.008452  0.008452
3           0.006477  0.005896
4           0.006219  0.007066
5           0.005835  0.006749
Table 6.4: Parameter Errors of IRGN vs image segmentation in Geometry 4
Based on Tables 6.1-6.4, we notice that IRGN and image segmentation estimate σ similarly, apart from a few minor differences. One case to note is Geometry 1. As Table 6.1 shows, image segmentation performed much better than IRGN in estimating σ with respect to both the ℓ1 and ℓ2 norms. A possible explanation is that Geometry 1 was formed in a low-conductivity region, unlike the other three geometries. This suggests that image segmentation does exceptionally well compared to IRGN when solving the inverse problem for EIT in this setting. For the other geometries, image segmentation shows promise in estimating σ similarly to IRGN.
Below are the residual errors of each of the reconstructions from Figure 6.1. The first column shows the residual error of IRGN alone and the second column shows the residual error with image segmentation.
Figure 6.2: Residual error of IRGN vs image segmentation
The corresponding tables for each residual error are given below. We report the errors with respect to both the ℓ1 and ℓ2 norms. As with the parameter errors, the first table of each pair is calculated with respect to the ℓ1-norm and the second with respect to the ℓ2-norm.
ℓ1-norm:
Iteration   IRGN       Image
1           0.096604   0.58353
2           0.086617   0.61713
3           0.074841   0.55984
4           0.064494   0.519
5           0.055634   0.48791

ℓ2-norm:
Iteration   IRGN       Image
1           0.09599    0.57599
2           0.08607    0.61484
3           0.074356   0.55524
4           0.064046   0.51552
5           0.055208   0.48603
Table 6.5: Residual Errors of IRGN vs image segmentation in Geometry 1
ℓ1-norm:
Iteration   IRGN     Image
1           180.36   180.36
2           180.35   180.35
3           180.33   180.33
4           180.31   180.31
5           180.28   180.29

ℓ2-norm:
Iteration   IRGN     Image
1           180.36   180.36
2           15.308   15.308
3           15.307   15.307
4           15.306   15.306
5           15.304   15.304
Table 6.6: Residual Errors of IRGN vs image segmentation in Geometry 2
ℓ1-norm:
Iteration   IRGN     Image
1           22.562   22.562
2           10.352   10.532
3           7.1306   6.6263
4           4.4591   5.7221
5           3.5561   7.6463
6           3.1567   6.5568
7           2.8439   8.3248
8           2.6422   7.8973
9           2.5872   7.5551
10          2.4888   7.3635

ℓ2-norm:
Iteration   IRGN     Image
1           22.562   22.562
2           2.2223   2.2223
3           2.222    2.222
4           2.2216   2.2217
5           2.2212   2.2214
6           2.2208   2.221
7           2.2203   2.2206
8           2.2197   2.2201
9           2.2191   2.2196
10          2.2185   2.2191
Table 6.7: Residual Errors of IRGN vs image segmentation in Geometry 3
ℓ1-norm:
Iteration   IRGN      Image
1           57.821    57.821
2           23.579    23.579
3           13.233    12.766
4           11.56     14.293
5           7.5398    11.489

ℓ2-norm:
Iteration   IRGN      Image
1           57.821    57.821
2           2.1702    2.1702
3           1.2116    1.0482
4           1.0463    1.4538
5           0.75049   1.0578
Table 6.8: Residual Errors of IRGN vs image segmentation in Geometry 4
One observation is that the residual errors differ depending on the norm used to compute them. For example, for Geometry 2 the residual errors of both IRGN and image segmentation are significantly better with respect to the ℓ2 norm than with respect to the ℓ1 norm. The same pattern is evident for Geometry 3.
Another observation from the numerical results is how the residual errors of IRGN and image segmentation differ depending on the geometry. For example, in Geometry 1 the residual error of IRGN is significantly lower than that of image segmentation. For the other geometries, however, the residual errors of IRGN and image segmentation do not differ significantly. The only exception among Geometries 2-4 is Geometry 3 under the ℓ1 norm. With respect to the ℓ2 norm, the residual errors are similar for both IRGN and image segmentation, apart from minor differences; with respect to the ℓ1 norm, however, the two methods begin to diverge after iteration 2, at first by a small factor and then drastically after the fourth iteration. A possible explanation is that, since the one-norm is the sum of the magnitudes of the entries of the vector, the residual of the image segmentation is not normalized in the same way when σk is updated, which is why the ℓ1 errors in Geometry 3 differ from the IRGN residual errors.
Overall, the numerical experiments show that we can solve the inverse problem of EIT with image segmentation combined with IRGN. Based on the parameter errors and the residual errors we obtained, IRGN and image segmentation perform similarly. However, as Figure 6.1 shows, the actual reconstruction of the image with image segmentation is better than the reconstruction with IRGN alone. We can infer that image segmentation applied within IRGN produces a better reconstruction of the image without losing much accuracy in the parameters or the residuals.
Chapter 7
Conclusions and Future Work
In this thesis, we discussed how image segmentation can aid in solving the inverse problem for EIT. Our Matlab implementation demonstrated that multiphase image segmentation works in solving the inverse problem of EIT. We also showed that, in terms of parameter and residual errors, the reconstruction with image segmentation is similar to the reconstruction with IRGN.
Due to time constraints, other analyses were not conducted in this thesis; in future work, we would like to extend our results. One extension of particular interest is to prove the convergence rate of solving the inverse problem of EIT with image segmentation implemented in IRGN. The convergence rate of IRGN, along with the convergence rate of solving the inverse problem of EIT with IRGN, is well established in [1, 22, 25]. However, it is unknown at the time of this thesis whether image segmentation improves the convergence rate of either IRGN itself or of solving the inverse problem of EIT. Based on the results in the previous chapter, this seems a viable hypothesis, but for now it remains a conjecture.
Another extension of interest is an analysis of solving the inverse problem with multiphase image segmentation, to show stability and to provide results similar to those of Chapter 4. Proving such results would not only aid the first extension mentioned in the previous paragraph, but would also show that our method is stable and valid for generic geometries, not just the geometries shown in this thesis. We believe the proof will follow a structure similar to the study of well-posedness in [28], but using Theorem 2.2.3 when defining the functional.
Another extension is to see whether we obtain similar results if we implement image segmentation to solve the inverse problem of EIT on domains that are not rectangular. In this thesis, we chose a rectangular domain for convenience. However, in modern applications of EIT, the domain of the image will not necessarily be rectangular and will more likely be an atypical shape. It would be of interest to replicate the results of this thesis on a circular domain first, and then to solve the inverse problem on an irregularly shaped domain.
When performing image segmentation within IRGN, we only considered the length penalty term used in (2.2); we did not consider other penalty terms for the image segmentation. Other literature, for example [27], has used different metrics for image segmentation with varying degrees of success. Changing the functional (2.2) could make solving the inverse problem of EIT with image segmentation more accurate and could avoid the issues observed with the results in Geometry 3.
Appendices
Appendix A Codes for Thesis
The following appendix contains the main codes used for this thesis. All codes used are
written in Matlab. There were other code used to run the numerical simulations, but not included
due to space. All other codes used in this thesis were provided by Shyla Kupis in her thesis [19].
function [sig_all,Vpred,l2err_sig,l1err_sig,st,obj,obj_1,tikfn,iter] = GNfuncPEM_imseg(p,e,t,del,d,in_st,stmin,ERdat)
%%%%% Gauss-Newton Function for EIT Inverse Problem
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% June 2021
%%%%% Scott Scruggs, srscrug@g.clemson.edu, Clemson University
%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% INPUTS
% Vmeas - experimental or simulated data on voltages
% sig_guess - initial model for the electrical conductivity structure
% sigTrue - true E.C. model
% dsigma - perturbation of model parameter for E.C.
% step - step size
% lambda - model weighting parameter
% iter - number of iterations
% p,e,t - Delaunay triangulation for mesh
% OUTPUTS
% sig_all - All predicted electrical conductivity model
% Vpred - final predicted voltages
% l2err_sig - Reconstruction error
% st - set of step lengths obtained by backtracking
% obj - Residual error values
% tikfn - Tikhonov functional values
% iter - number of iterations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% GN-Method
% Inputs
% for data
nelec = ERdat.nelec;
Quarters = ERdat.Quarters;
Vmeas = ERdat.data;
% for model
sig_guess = ERdat.sig_guess;
sigTrue = ERdat.sigTrue;
sigTrue = transposecheck(sigTrue);
lambda = ERdat.lambda;
dsig = ERdat.dsig;
% for IRGN
numbit = ERdat.iter;
Nt = size(t,2); % Nt: number of elements
sig_all = [sig_guess zeros(Nt,numbit)]; % saving all runs of model
obj = zeros(numbit,1); % obj = objective values at each iteration
obj_1=obj;
sig_prior = sig_guess;
W2 = speye(Nt,Nt); % model regularization matrix
Vmeas = reshape(Vmeas,nelec*nelec,1); % experimental voltages
% % initializing some variables
iter = 1;
l2err_sig(iter) = norm(sig_all(:,iter)-sigTrue,2);% Reconstruction error
l1err_sig(iter) = norm(sig_all(:,iter)-sigTrue,1);
%
% if isempty(del)==0
% Tol = d*del; % Stopping criterion by discrepancy principle
% end
%
% % Predicted voltages F(sigma_k)
Vpred = feval(@VfuncT_pem,sig_all(:,iter),Quarters,p,e,t,nelec);
obj(iter) = norm(Vpred-Vmeas,1);
W1 = speye(Nt,Nt); % Weight matrix
lambda1 = lambda; c = 4; lamb(iter) = lambda1;
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+lambda^2*norm(W2*(sig_guess-sig_prior),1)^2; %changed to 1 norm because of sparseness
st = [];
[iter, obj(iter)]
while iter < numbit
gcf=figure('units','normalized','outerposition',[0 0 1 1]);
g=2000;
x2=linspace(-1,1,g);
y2=x2;
tmp=pdeprtni(p,t,sig_all(:,iter)'); %doing this on sig_guess; change later (11/12/20)
I=tri2grid(p,t,tmp,x2,y2);
%figure;surf(x2,y2,I);
% view(2);
% pause;
BW=edge(I,'log'); % Laplacian of Gaussian edge detection (12/30/2020)
% BW = zeros(size(I));
% BW(10:end-10,15:end-5) = 1;
% imshow(BW);
% title('Initial Contour Location')
%I=sig_all(:,end-1);
bw = activecontour(I,BW,300);
figure;imshow(bw);
% BW1 = zeros(size(BW));
% ind = find(BW==0);
% BW1(ind) = BW(ind) + 1;
%bmap = zeros(size(bw));
%=find(bw==1); bmap(ind)=bw(ind)*255;
%bw=mean(mean(I.*bw)).*bmap; %To hopefully invert the colormap (7/24/2020)
%bw=mean(I.*bw).*bmap; %To hopefully invert the colormap (7/24/2020)
%Id=ones(size(I));
bw1=double(bw)*sum(sum(I.*double(bw)))/(sum(sum(bw)))+double(1-bw)*sum(sum(I.*double(1-bw)))/(sum(sum(1-bw)));
tmp2=interp2(x2,y2,bw1,p(1,:),p(2,:));
% f=size(bw,1);
% tmp2=zeros(f,f);
% for g=1:f
% tmp2(g,:)=interp2(x2,y2,bw(f,:),p(1,:),p(2,:));
% end
hi=pdeintrp(p,t,tmp2');
%figure;pdeplot(p,e,t,'xydata',hi,'mesh','off'); colormap(jet);
%while obj(iter) > Tol && iter < numbit
J = pem_jac(hi,dsig,Quarters,p,e,t,nelec); % calculating jacobian from complete electrode model
%J = pem_jac(sig_all(:,iter),dsig,Quarters,p,e,t,nelec); % calculating jacobian from complete electrode model
%% Select step length by back tracking
[step,pk] = alpha_pem(Vmeas,Vpred,J,hi,sig_prior,W2,lambda,p,e,t,obj(iter),dsig,in_st,stmin,Quarters);
%[step,pk] = alpha_pem(Vmeas,Vpred,J,sig_all(:,iter),sig_prior,W2,lambda,p,e,t,obj(iter),dsig,in_st,stmin,Quarters);
st = [st step];
figure;pdeplot(p,e,t,'xydata',pk); colormap('jet');
iter = iter + 1; % updating iteration count
% Checking for better sigma
%sig = sig_all(:,iter-1) + step.*pk;
hi = transposecheck(hi);
sig = hi + step.*pk;
sig_all(:,iter) = sig;
figure;pdeplot(p,e,t,'xydata',sig); colormap('jet');
% Predicted voltages at next iteration
Vpred = feval(@VfuncT_pem,sig,Quarters,p,e,t,nelec);
% Updating model regularization parameter
lambda = lambda1*c/(c+iter);
lamb(iter) = lambda;
%
%%%%% COMMENT FOR SANWAR
% if obj(iter-1) > norm(Vpred-Vmeas,2)
% else
% if lambda == lambda1
% break;
% end
% lambda = lamb(1); % go back to initial lambda
% end
% Reconstruction error
l2err_sig(iter) = norm(sig_all(:,iter)-sigTrue,2)/norm(sigTrue,2);
l1err_sig(iter) = norm(sig_all(:,iter)-sigTrue,1)/norm(sigTrue,1);
%l2err_sig(iter) = 1/l2err_sig(iter); %Added to see if this works (2/23/21)
% Residual error
%obj(iter) = norm(Vpred-Vmeas,1);
obj(iter)=norm(Vpred-Vmeas,2)/norm(Vpred,2);
obj_1(iter)=norm(Vpred-Vmeas,1)/norm(Vpred,1);
%obj(iter) = 1/obj(iter); %Added to see if this works (2/23/21)
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+.5*lambda^2*norm(W2*(sig_all(:,iter)-...
sig_prior),1)^2;
end
%
end
function [sig_all,Vpred,l2err_sig,l1err_sig,st,obj,obj_1,tikfn,iter] = GNfuncPEM(p,e,t,del,d,in_st,stmin,ERdat)
%%%%% Gauss-Newton Function for EIT Inverse Problem
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% June 2021
%%%%% Scott Scruggs, srscrug@g.clemson.edu, Clemson University
%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% INPUTS
% Vmeas - experimental or simulated data on voltages
% sig_guess - initial model for the electrical conductivity structure
% sigTrue - true E.C. model
% dsigma - perturbation of model parameter for E.C.
% step - step size
% lambda - model weighting parameter
% iter - number of iterations
% p,e,t - Delaunay triangulation for mesh
% OUTPUTS
% sig_all - All predicted electrical conductivity model
% Vpred - final predicted voltages
% l2err_sig - Reconstruction error
% st - set of step lengths obtained by backtracking
% obj - Residual error values
% tikfn - Tikhonov functional values
% iter - number of iterations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% GN-Method
% Inputs
% for data
nelec = ERdat.nelec;
Quarters = ERdat.Quarters;
Vmeas = ERdat.data;
% for model
sig_guess=ERdat.sig_g;
sig_guess = transposecheck(sig_guess);
sigTrue = ERdat.sigTrue; sigTrue = sigTrue';
lambda = ERdat.lambda;
dsig = ERdat.dsig;
% for IRGN
numbit = ERdat.iter;
Nt = size(t,2); % Nt: number of elements
sig_guess=transposecheck(sig_guess);
sig_all = [sig_guess zeros(Nt,numbit)]; % saving all runs of model
obj = zeros(numbit,1);% obj = objective values at each iteration
obj_1 = obj;
sig_prior = sig_guess;
W2 = speye(Nt,Nt); % model regularization matrix
Vmeas = reshape(Vmeas,nelec*nelec,1); % experimental voltages
% initializing some variables
iter = 1;
l2err_sig(iter) = norm(sig_all(:,iter)-sigTrue,2); % Reconstruction error
l1err_sig(iter) = norm(sig_all(:,iter)-sigTrue,1);
if isempty(del)==0
Tol = d*del; % Stopping criterion by discrepancy principle
end
% Predicted voltages F(sigma_k)
Vpred = feval(@VfuncT_pem,sig_all(:,iter),Quarters,p,e,t,nelec);
obj(iter) = norm(Vpred-Vmeas,2); % Residual error
W1 = speye(Nt,Nt); % Weight matrix
lambda1 = lambda; c = 4; lamb(iter) = lambda1;
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+lambda^2*norm(W2*(sig_guess-sig_prior),2)^2;
st = [];
[iter, obj(iter)]
while iter < numbit
% while obj(iter) > Tol && iter < numbit
J = pem_jac(sig_all(:,iter),dsig,Quarters,p,e,t,nelec); % calculating jacobian from complete electrode model
%% Select step length by back tracking
[step,pk] = alpha_pem(Vmeas,Vpred,J,sig_all(:,iter),sig_prior,W2,lambda,p,e,t,obj(iter),dsig,in_st,stmin,Quarters);
st = [st step];
iter = iter + 1; % updating iteration count
% Checking for better sigma
sig = sig_all(:,iter-1) + step.*pk;
sig_all(:,iter) = sig;
% Predicted voltages at next iteration
Vpred = feval(@VfuncT_pem,sig,Quarters,p,e,t,nelec);
% Updating model regularization parameter
lambda = lambda1*c/(c+iter);
lamb(iter) = lambda;
%%%%% COMMENT FOR SANWAR
% if obj(iter-1) > norm(Vpred-Vmeas,2)
% else
% if lambda == lambda1
% break;
% end
% lambda = lamb(1); % go back to initial lambda
% end
% Reconstruction error
l2err_sig(iter) = norm(sig_all(:,iter)-sigTrue,2)/norm(sigTrue,2);
l1err_sig(iter) = norm(sig_all(:,iter)-sigTrue,1)/norm(sigTrue,1);
% Residual error
obj(iter) = norm(Vpred-Vmeas,2)/norm(Vpred,2);
obj_1(iter)=norm(Vpred-Vmeas,1)/norm(Vpred,1);
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+.5*lambda^2*norm(W2*(sig_all(:,iter)-...
sig_prior),2)^2;
% Displaying current results from iterate
[iter,obj(iter),step,lambda]
end
%
end
%%%% Solving EIT %%%%
%%%% Comparison Between Image Segmentation and IRGN%%%%
%%%% Scott Scruggs %%%%
%%%% srscrug@g.clemson.edu %%%%
clear all; clc; close all;
%%% Example 1 %%%
ex = 1;
% Create geometry of square
rect1 = [3 4 -1 1 1 -1 1 1 -1 -1]’;
rect2 = [3 4 0.6 0.9 0.9 0.6 0.7 0.7 -0.7 -0.7]’;
% rect2 = [3 4 -0.9 0.9 0.9 -0.9 -0.7 -0.7 -0.9 -0.9]’;
% perimeter of main square geometry, rect1
ERdat.SA = 2*4;
gd = rect1; % including two geometries together
dl=decsg(gd);
hval1 = 0.2; % CHANGE AS NEEDED BETWEEN 0.1-0.2 for refinement size
[p,e,t]=initmesh(dl,'hmax',hval1); % using value of 0.1 so shape is closer
% to shape of square
% finding boundary nodes
boundInd = e(1,:);
pbnd = p(:,boundInd);
nodes = length(p);
% Display mesh
% gcf = figure('units','normalized','outerposition',[0 0 1 1]);
% pdemesh(p,e,t), axis on
% fname = sprintf('./Case1/rect_coarse_mesh_h%2.2d_ex%d.png',[hval1,ex]);
% saveas(gcf,fname);
hval2 = 0.05; % CHANGE AS NEEDED BETWEEN 0.025-0.05 for refinement size
[p1,e1,t1]=initmesh(dl,'hmax',hval2); % using value of 0.1 so shape is closer
% to shape of square
% [p1,e1,t1]=refinemesh(dl,p,e,t);
boundInd1 = e1(1,:);
p1bnd = p1(:,boundInd1);
nodes1 = length(p1);
% Display mesh
% gcf = figure('units','normalized','outerposition',[0 0 1 1]);
% pdemesh(p1,e1,t1), axis on
% fname = sprintf('./Case1/rect_refined_mesh_h%2.2d_ex%d.png',[hval2,ex]);
% saveas(gcf,fname);
% add another rectangle to be impedance object
sigINarea = 0.01; % electrical conductivity can be any value (Changed values of sigInarea &sigOUTarea (11/12/20))
sigOUTarea = 0.007; % distinguishing from rect2
sigTrue=sigOUTarea.*ones(nodes,1);
sigTrue1=sigOUTarea.*ones(nodes1,1);
% finding the nodes within area of rect2 for refined mesh
ind = find((p1(1,:)>=rect2(3))&(p1(1,:)<=rect2(4))&...
(p1(2,:)>=rect2(9))&(p1(2,:)<=rect2(7)));
sigTrue1(ind) = sigINarea;
% finding the nodes within area of rect2 for the mesh
ind = find((p(1,:)>=rect2(3))&(p(1,:)<=rect2(4))&...
(p(2,:)>=rect2(9))&(p(2,:)<=rect2(7)));
sigTrue(ind) = sigINarea;
% gcf = figure('units','normalized','outerposition',[0 0 1 1]);
% pdeplot(p,e,t,'xydata',sigTrue,'mesh','off')
% colormap(jet)
% title('\sigma_true','Interpreter','tex','FontSize',20);
% fname = sprintf('./Case1/truesigma_coarse_h%2.2d_ex%d.png',[hval1,ex]);
% saveas(gcf,fname);
%
% gcf = figure('units','normalized','outerposition',[0 0 1 1]);
% pdeplot(p1,e1,t1,'xydata',sigTrue1,'mesh','on')
% colormap(jet)
% title('\sigma_true','Interpreter','tex','FontSize',20);
% fname = sprintf('./Case1/truesigma_refined_h%2.2d_ex%d.png',[hval2,ex]);
% saveas(gcf,fname);
% close all;
%% Setting location of electrodes
nelec = 4*4; % number of electrodes
% Side 1
nside = nelec/4; % number of electrodes on 1 side
min_x = min(rect1(3:6));
max_x = max(rect1(3:6));
min_y = min(rect1(7:10));
max_y = max(rect1(7:10));
% Computing equidistant electrode positions
% Sides 1-4: working clockwise from top side
% Remove first and last position
xside = linspace(min_x,max_x,nside+2);
xside = xside(2:end-1);
yside = linspace(min_y,max_y,nside+2);
yside = yside(2:end-1);
elec_loc = [xside ones(1,nside) flip(xside) -ones(1,nside);
ones(1,nside) flip(yside) -ones(1,nside) yside];
% edges of geometry
% finer mesh
ie11 = e1(1,:);
ie22 = e1(2,:);
% coarser mesh
ie1 = e(1,:);
ie2 = e(2,:);
Quarters1 = zeros(nelec,1);
Quarters = zeros(nelec,1);
for i = 1:size(elec_loc,2)
% use a vectorized Euclidean distance: an exact coordinate match via find can
% return an empty matrix even when the x,y position is present in p1
euclidist = sqrt(sum((p1-elec_loc(:,i)).^2,1));
[~,ind1] = min(euclidist);
Quarters1(i,1) = ind1;
euclidist = sqrt(sum((p-elec_loc(:,i)).^2,1));
[~,ind] = min(euclidist);
Quarters(i,1) = ind;
end
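The nearest-node lookup in the loop above reduces to an argmin over Euclidean distances. A minimal Python/NumPy sketch of the same idea (illustrative only; `nearest_node` is not part of the thesis code):

```python
import numpy as np

def nearest_node(p, loc):
    """Index of the mesh node in p (2 x N) closest to loc (length 2),
    by Euclidean distance -- robust where an exact coordinate match fails."""
    d2 = np.sum((p - np.asarray(loc)[:, None]) ** 2, axis=0)  # squared distances
    return int(np.argmin(d2))

p = np.array([[0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])        # three toy node coordinates
idx = nearest_node(p, [0.9, 0.1])      # closest node is (1, 0), i.e. index 1
```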
%plot(p(1,Quarters),p(2,Quarters),'.r'); % plotting electrodes on geometry (11/12/20)
ERdat.Quarters=Quarters;
ERdat.nelec=nelec;
%% Simulate Forward Problem for PEM
%% For finer mesh
% Computing electric potential and then displaying results
sig_T1 = pdeintrp(p1,t1,sigTrue1);
[uh1,Uh1,~] = ufunc_pem(sig_T1,Quarters1,p1,e1,t1,nelec);
% for i = 1:nelec
% % Display internal potential for all current pattern configurations
% gcf = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
% pdeplot(p1,e1,t1,'xydata',uh1(:,i),'mesh','on');
% title(['Internal Potential at Electrode ' num2str(i)])
% filename = sprintf('./Case1/uN_pem%d_h%2.2d_ex%d.png',[i,hval2,ex]);
% saveas(gcf, filename);
% end
% close all;
% converting predicted potential on boundary to measured voltages for all
% current patterns
[Vtrue1] = potent_to_volt(Uh1);
% % Display all voltage patterns
% gcf = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
% imagesc(Vtrue1); colorbar;
% filename = sprintf('./Case1/VoltTruePEM_h%2.2d_ex%d.png',[hval2,ex]);
% saveas(gcf,filename);
% close all;
% IF NOISE IS TO BE ADDED TO SYNTHETIC DATA:
Vtrue1 = reshape(Vtrue1,nelec*nelec,1);
%% DROP NOISE LEVEL AS NEEDED
Vstd = 0.01; % standard deviation for 1% noise
% Vstd = 0.03; % standard deviation for 3% noise
noise = randn(size(Vtrue1)); %Gaussian noise
Vsyn1 = Vtrue1.*(1 + Vstd.*noise);
ERdat.data=Vsyn1;
for i = 1:nelec
delta(i) = norm(Vtrue1(1+nelec*(i-1):nelec*i,1)-Vsyn1(1+nelec*(i-1):nelec*i,1),2);
end
dl1 = norm(Vsyn1-Vtrue1,2); % del: level of noise
del1 = max(delta);
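The noise model above is multiplicative: each synthetic voltage is perturbed by zero-mean Gaussian noise proportional to its magnitude, and the noise level `del1` is the worst case over the 16 current patterns. A Python/NumPy sketch of the same computation (the `Vtrue` values here are a stand-in, not thesis data):

```python
import numpy as np

rng = np.random.default_rng(0)
nelec = 16
Vtrue = np.linspace(0.5, 2.0, nelec * nelec)    # stand-in for the true voltage vector
Vstd = 0.01                                     # 1% relative noise
Vsyn = Vtrue * (1.0 + Vstd * rng.standard_normal(Vtrue.size))
# per-current-pattern noise level; del1 is the worst case over patterns
delta = [np.linalg.norm(Vtrue[nelec*i:nelec*(i+1)] - Vsyn[nelec*i:nelec*(i+1)])
         for i in range(nelec)]
del1 = max(delta)
```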
%% Solve Inverse Problem
%% IRGN Method and One Step Inverse Problem
% Initializing LVM
f = @(x0) EITresfunc_pem(x0,Vsyn1,Quarters,p,e,t,nelec);
options = optimset('Algorithm','levenberg-marquardt','Diagnostics','on',...
'display','iter','TolFun',1e-5,'MaxIter',5);
% Initializing parameters
sig_guess = 0.007*ones(nodes,1); % prior for sigma
sig_g = pdeintrp(p,t,sig_guess);
ERdat.sig_g=sig_g;
sig_T = pdeintrp(p,t,sigTrue);
ERdat.sigTrue=transposecheck(sig_T);
% number of iterations
numbit = 3;
ERdat.iter=numbit;
d = 1.0;
kk = 1.25;
% initial step size
in_st = 0.0001;%changed from .1 (2/17/21)
% minimum step length
stmin = 0.2.*1e-6;
lambda = [5e-7 6e-7 7e-7 8e-7 9e-7 1e-6 2e-6 3e-6 4e-6 5e-6];
% lambda = [5e-7 5e-6 5e-5 5e-4];
% lambda = [2e-7 2e-6 2e-5 2e-4];
% lambda=2e-6; %% CHANGE AS NEEDED, TRY GOING UP OR DOWN AN ORDER OF MAGNITUDE
dsig = [1e-6 2e-6 3e-6 4e-6 5e-6];
% dsig = [1e-7 2e-7 3e-7 4e-7 5e-7];
% dsig=1e-4;
ii = 0;
for i=1:numbit
for j=1:numbit-1
ii = ii + 1; % run counter used in output filenames
% Running LVM
% sigSol = lsqnonlin(f,sig_guess,[],[],options);
% sigact = pdeintrp(p,t,sigSol);
ERdat.lambda=lambda(i);
ERdat.dsig=dsig(j);
sigact_all=GNfuncPEM(p,e,t,del1,d,in_st,stmin,ERdat);
sigact=sigact_all(:,end-1);
sigact = transposecheck(sigact); %Transpose sigact (12/17/2020)
ERdat.sig_guess=sigact;
gcf = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
pdeplot(p,e,t,'xydata',sigact,'mesh','off'); colormap(jet);
filename = sprintf('./Case1/pem_lvm_h%2.2d_ex%d_run%d.png',[hval1,ex,ii]);
saveas(gcf,filename);
%%
% sig_g = transposecheck(sig_g);
% ERdat.sig_guess = sig_g;
% Running GN method
numbit2=5;
ERdat.iter=numbit2;
[sig_all,Vpred,l2err,l1err,st,obj,obj_1,~,kt] = ...
GNfuncPEM_imseg(p,e,t,del1,d,in_st,stmin,ERdat);
[sig_all_irgn,Vpred_irgn,l2err_irgn,l1err_irgn,st_irgn,obj_irgn,obj_1_irgn,~,kt_irgn] = ...
GNfuncPEM(p,e,t,del1,d,in_st,stmin,ERdat);
for k = 1:numbit2-1 % loop variable k so the outer index i is not clobbered
pdeplot(p,e,t,'xydata',sig_all(:,k+1),'mesh','off'); colormap('jet');
end
for k = 1:numbit2-1
pdeplot(p,e,t,'xydata',sig_all_irgn(:,k+1),'mesh','off'); colormap('jet');
end
sig = sig_all(:,end-1);
sig_irgn = sig_all_irgn(:,end-1);
ERdat.iter=numbit;
% Displaying last sigma model and predicted voltage in comparison to experimental data
h = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
% image segmentation results
subplot(2,2,1); pdeplot(p,e,t,'xydata',sig_T,'mesh','off');
title('True E.C.'), colormap(jet);
subplot(2,2,2); pdeplot(p,e,t,'xydata',sig,'mesh','off');
title(['P.E.C. dsig=' num2str(dsig(j)) ', \lambda=' num2str(lambda(i)) ', iter=' num2str(kt-1) ', \alpha_k=' num2str(st(end-1))]), colormap(jet);
subplot(2,2,3); plot(obj(1:kt));
title('Residual error, E');
subplot(2,2,4); plot(l2err); title('Reconstruction error, e');
filename = sprintf('./Case1/pem_image_h%2.2d_ex%d_run%d.png',[hval1,ex,ii]);
saveas(h,filename);
close all;
filename = sprintf('./Case1/l2err_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'l2err');
filename = sprintf('./Case1/obj_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'obj');
filename = sprintf('./Case1/sigall_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'sig_all');
% plotting the IRGN results
h = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
subplot(2,2,1); pdeplot(p,e,t,'xydata',sig_T,'mesh','off');
title('True E.C.'), colormap(jet);
subplot(2,2,2); pdeplot(p,e,t,'xydata',sig_irgn,'mesh','off');
title(['P.E.C. dsig=' num2str(dsig(j)) ', \lambda=' num2str(lambda(i)) ', iter=' num2str(kt_irgn-1) ', \alpha_k=' num2str(st_irgn(end-1))]), colormap(jet);
subplot(2,2,3); plot(obj_irgn(1:kt_irgn));
title('Residual error, E');
subplot(2,2,4); plot(l2err_irgn); title('Reconstruction error, e');
filename = sprintf('./Case1/pem_h%2.2d_ex%d_run%d.png',[hval1,ex,ii]);
saveas(h,filename);
close all;
% save the IRGN results, not the image-segmentation ones
filename = sprintf('./Case1/l2err_pem_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'l2err_irgn');
filename = sprintf('./Case1/obj_pem_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'obj_irgn');
filename = sprintf('./Case1/sigall_pem_h%2.2d_ex%d_run%d.mat',[hval1,ex,ii]);
save(filename,'sig_all_irgn');
end
end
%%%% Comparison Between Image Segmentation and IRGN for other geometries%%%%
%%%% Scott Scruggs %%%%
%%%% srscrug@g.clemson.edu %%%%
% Inputs
clear; clc; close all;
load('p.mat'); load('e.mat'); load('t.mat');
load('p1.mat'); load('e1.mat'); load('t1.mat');
nelec = 16;
ERdat.nelec=nelec;
body.rect = [3,4,-1,1,1,-1,1,1,-1,-1]’;
% [Quarters,Quarters1] = pem_square_node_assembly(p,p1,body);
load('Quarters.mat');
%load('Quarters1.mat');
ERdat.Quarters = Quarters;
%ERdat.Quarters1 = Quarters1;
% Vsyn1 = load('Vsyn1_1noise.mat');
% Vmeas = load('V1_ex13_nc1_1noise_hl1.txt');
% Vmeas = load('V1_ex13_nc1_5noise_hl1.txt');
% Vmeas = load('V1_ex20_nc2_1noise_hl1.txt');
%Vmeas = load('Vsyn1_ex4_1noise.mat');
Vmeas = load('Vsyn1_ex7_1noise.mat');
% Vmeas = load('Vsyn1_ex8_1noise.mat');
% Vmeas = load('V1_ex3_nc3_5noise_hl1.txt');
% Vmeas = load('V1_ex6_nc3_1noise_hl1.txt');
% Vmeas = load('V1_ex6_nc3_5noise_hl1.txt');
% for model
nodes = size(p,2);
%nodes = size(p1,2);
sig_g = 7e-4*ones(nodes,1); % using background conductivities
sig_guess = pdeintrp(p,t,sig_g); % interpolate to triangulation mesh
%sig_guess = pdeintrp(p1,t1,sig_g);
sig_guess = sig_guess’;
% load('SIGstructure_ex13_nc1_hl1.mat'); % called "SIG"
% load('SIGstructure_ex20_nc2_hl1.mat'); % called "SIG"
% load('SIGstructure_ex3_nc3_hl1.mat'); % called "SIG"
% load('SIGstructure_ex6_nc3_hl1.mat'); % called "SIG"
%%
% load('sigTrue1_ex4.mat');
%load('sigTrue_ex4.mat');
load('sigTrue_ex7.mat');
% load('sigTrue_ex8.mat');
sigTrue = pdeintrp(p,t,sigTrue);
%sigTrue = pdeintrp(p1,t1,sigTrue1);
sigTrue = transposecheck(sigTrue);
ERdat.sigTrue=sigTrue;
%load('Vsyn1_ex3_1noise.mat');
%load('Vsyn1noise_ex4.mat');
load('Vsyn1noise_ex7.mat');
% load('Vsyn1noise_ex8.mat');
ERdat.data=Vsyn1;
ERdat.nelec=nelec;
% for IRGN
%lambda = 1e-6;
lambda = 5e-6; % 1e-6 5e-6 1e-5 5e-5 1e-4 5e-4
dsig = 1e-6; % 5e-8
numbit=5;
in_st=1e-4;
stmin = 0.2.*1e-6;
%% Everything below comes from GNfuncPEM_imseg.m
Nt = size(t,2); % Nt: number of elements
%Nt = size(t1,2);
sig_all = [sig_guess zeros(Nt,numbit)]; % saving all runs of model
obj = zeros(numbit,1); % obj = objective values at each iteration
obj_1=obj;
sig_prior = sig_guess;
W2 = speye(Nt,Nt); % model regularization matrix
Vmeas = reshape(Vsyn1,nelec*nelec,1); % experimental voltages
iter = 1;
l2err(iter) = norm(sig_all(:,iter)-sigTrue,2)/norm(sigTrue,2); % Reconstruction error
l1err(iter) = norm(sig_all(:,iter)-sigTrue,1)/norm(sigTrue,1);
Vpred = feval(@VfuncT_pem,sig_all(:,iter),Quarters,p,e,t,nelec);
%Vpred = feval(@VfuncT_pem,sig_all(:,iter),Quarters1,p1,e1,t1,nelec);
obj(iter) = norm(Vpred-Vmeas,1); % Residual error(Changed to 1 norm because of sparseness (2/18/21))
W1 = speye(Nt,Nt); % Weight matrix
lambda1 = lambda; c = 4; lamb(iter) = lambda1;
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+.5*lambda^2*norm(W2*(sig_guess-sig_prior),1)^2; % 1-norm used because of sparseness
st = [];
[iter, obj(iter)]
while iter < numbit
%
% gcf=figure('units','normalized','outerposition',[0 0 1 1]);
% g=1500;
% x2=linspace(-1,1,g);
% y2=x2;
% tmp=pdeprtni(p,t,sig_all(:,iter)'); % doing this on sig_guess; change later (11/12/20)
% I=tri2grid(p,t,tmp,x2,y2);
% BW=edge(I,'log'); % Laplacian-of-Gaussian edge detector (12/30/2020)
% % % % BW = zeros(size(I));
% % % % ixx = find(I>I(1,1)); % find all values greater than background
% % % % BW(ixx) = 1; % set to 1 and create shape of anomaly
% % % % BW = zeros(size(I));
% % % % BW(10:end-10,15:end-5) = 1;
% bw = activecontour(I,BW,300);
% figure;imshow(bw);
% bw1=double(bw)*sum(sum(I.*double(bw)))/(sum(sum(bw)))+double(1-bw)*sum(sum(I.*double(1-bw)))/(sum(sum(1-bw)));
% tmp2=interp2(x2,y2,bw1,p(1,:),p(2,:));
% hi=pdeintrp(p,t,tmp2');
% J = pem_jac(hi,dsig,Quarters,p,e,t,nelec); % calculating Jacobian from the electrode model
J = pem_jac(sig_all(:,iter),dsig,Quarters,p,e,t,nelec);
%% Select step length by back tracking
% [step,pk] = alpha_pem(Vmeas,Vpred,J,hi,sig_prior,W2,lambda,p,e,t,obj(iter),dsig,in_st,stmin,Quarters);
%[step,pk] = alpha_pem(Vmeas,Vpred,J,hi,sig_prior,W2,lambda,p1,e1,t1,obj(iter),dsig,in_st,stmin,Quarters1);
[step,pk] = alpha_pem(Vmeas,Vpred,J,sig_all(:,iter),sig_prior,W2,lambda,p,e,t,obj(iter),dsig,in_st,stmin,Quarters);
st = [st step];
figure;pdeplot(p,e,t,'xydata',pk); colormap('jet');
iter = iter + 1; % updating iteration count
% Checking for better sigma
sig = sig_all(:,iter-1) + step.*pk;
% hi = transposecheck(hi);
% sig = hi + step.*pk;
sig_all(:,iter) = sig;
figure;pdeplot(p,e,t,'xydata',sig); colormap('jet');
% Predicted voltages at next iteration
Vpred = feval(@VfuncT_pem,sig,Quarters,p,e,t,nelec);
% Updating model regularization parameter
lambda = lambda1*c/(c+iter);
lamb(iter) = lambda;
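The update above decays the regularization weight as the iteration count grows: lambda_k = lambda_1 * c / (c + k) with c = 4. A minimal Python sketch of this schedule (illustrative; `lambda_schedule` is not a thesis function):

```python
# Decaying Tikhonov weight used in the iteratively regularized Gauss-Newton loop:
# larger regularization early on, decaying toward zero as iterations proceed.
def lambda_schedule(lambda1, k, c=4.0):
    return lambda1 * c / (c + k)

vals = [lambda_schedule(5e-6, k) for k in range(5)]  # strictly decreasing
```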
%
%%%%% COMMENT FOR SANWAR
% if obj(iter-1) > norm(Vpred-Vmeas,2)
% else
% if lambda == lambda1
% break;
% end
% lambda = lamb(1); % go back to initial lambda
% end
% Reconstruction error
l2err(iter) = norm(sig_all(:,iter)-sigTrue,2)/norm(sigTrue,2);
l1err(iter)= norm(sig_all(:,iter)-sigTrue,1)/norm(sigTrue,1);
%l2err_sig(iter) = 1/l2err_sig(iter); %Added to see if this works (2/23/21)
% Residual error
%obj(iter) = norm(Vpred-Vmeas,1);
obj(iter)=norm(Vpred-Vmeas,2)/norm(Vpred,2);
obj_1(iter)=norm(Vpred-Vmeas,1)/norm(Vpred,1);
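In the notation of the errors tracked above, `l2err`/`l1err` are relative reconstruction errors against the true conductivity and `obj`/`obj_1` are normalized residuals of the predicted voltages:

```latex
e_p^{(k)} = \frac{\|\sigma_k - \sigma_{\mathrm{true}}\|_p}{\|\sigma_{\mathrm{true}}\|_p},
\qquad
E_p^{(k)} = \frac{\|V_{\mathrm{pred}}(\sigma_k) - V_{\mathrm{meas}}\|_p}{\|V_{\mathrm{pred}}(\sigma_k)\|_p},
\qquad p \in \{1,2\}.
```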
%obj(iter) = 1/obj(iter); %Added to see if this works (2/23/21)
% Tikhonov functional
tikfn(iter) = obj(iter)^2/2+.5*lambda^2*norm(W2*(sig_all(:,iter)-...
sig_prior),1)^2;
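The quantity accumulated in `tikfn` is the Tikhonov functional matching the two lines above, with a 1-norm penalty on the deviation from the prior (chosen, per the code comments, because of sparseness):

```latex
T_{\lambda}(\sigma_k) \;=\; \tfrac{1}{2}\,E(\sigma_k)^2
\;+\; \tfrac{1}{2}\,\lambda_k^2\,\bigl\|W_2(\sigma_k - \sigma_{\mathrm{prior}})\bigr\|_1^2,
\qquad
E(\sigma_k) = \frac{\|V_{\mathrm{pred}}(\sigma_k)-V_{\mathrm{meas}}\|_2}{\|V_{\mathrm{pred}}(\sigma_k)\|_2}.
```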
% Displaying current results from iterate
% [iter,obj(iter),step,lambda]
% gcf=figure('units','normalized','outerposition',[0 0 1 1]);
% g=100;
% x2=linspace(-1,1,g);
% y2=x2;
% tmp=pdeprtni(p,t,sig');
% I=tri2grid(p,t,tmp,x2,y2);
% figure;surf(x2,y2,I);
% % view(2);
% % pause;
% mask = zeros(size(I));
% mask(5:end-5,5:end-5) = 1;
% % imshow(mask)
% % title('Initial Contour Location')
% %I=sig_all(:,end-1);
% bw = activecontour(I,mask,300);
% bw=mean(mean(I.*bw))*(255-bw); % to invert the colormap (7/24/2020)
% bw1=double(bw)*mean(mean(I.*(bw)))+double(bw)*mean(mean(I.*(1-bw)));
%
% tmp2=interp2(x2,y2,bw1,p(1,:),p(2,:));
%
% % f=size(bw,1);
% % tmp2=zeros(f,f);
% % for g=1:f
% % tmp2(g,:)=interp2(x2,y2,bw(f,:),p(1,:),p(2,:));
% % end
% hi=pdeintrp(p,t,tmp2');
% figure;pdeplot(p,e,t,'xydata',hi,'mesh','off'); colormap(jet);
% sig_all(:,iter) = hi; % updating e.c. model
% l2err_sig(iter) = norm(sig_all(:,iter)-sigTrue',2)/norm(sigTrue,2); % Reconstruction error
%
% lambda = lambda1*c/(c+iter); %lambda1*c/(c+iter);
% lamb(iter) = lambda;
% obj(iter) = norm(Vpred-Vmeas,2); % Residual error
%
% [iter,obj(iter),step,lambda]
%
% tikfn(iter) = obj(iter)^2/2+.5*lambda^2*norm(W2*(sig_all(:,iter)-sig_prior),2)^2;% Tikhonov functional
end
h = figure('units','normalized','outerposition',[0 0 1 1]); % maximizing displayed figure;
subplot(2,2,1); pdeplot(p,e,t,'xydata',sigTrue,'mesh','off');
title('True E.C.'), colormap(jet);
subplot(2,2,2); pdeplot(p,e,t,'xydata',sig,'mesh','off');
title(['P.E.C. dsig=' num2str(dsig) ', \lambda=' num2str(lambda) ', iter=' num2str(numbit-1) ', \alpha_k=' num2str(st(end-1))]), colormap(jet);
subplot(2,2,3); plot(obj(1:numbit));
title('Residual error, E');
subplot(2,2,4); plot(l2err); title('Reconstruction error, e');
filename = sprintf('./Case1/pem_image_h%2.2d_ex%d_run%d.png',[hval1,ex,numbit]);
saveas(h,filename);
close all;
filename = sprintf('./Case1/l2err_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,numbit]);
save(filename,'l2err');
filename = sprintf('./Case1/obj_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,numbit]);
save(filename,'obj');
filename = sprintf('./Case1/sigall_pem_image_h%2.2d_ex%d_run%d.mat',[hval1,ex,numbit]);
save(filename,'sig_all');