Safety of Self-driving Cars: A Case Study on Lane Keeping Systems
Hao Xu
Thesis submitted to the Faculty of the
Virginia Polytechnic Institute and State University
in partial fulfillment of the requirements for the degree of
Master of Science
in
Computer Engineering
Haibo Zeng, Chair
A. Lynn Abbott
Michael S. Hsiao
May 20, 2020
Blacksburg, Virginia
Keywords: Self-driving, Neural Network, Lane Detection, Specification, Enforcement,
Delay, Prediction.
Copyright 2020, Hao Xu
Safety of Self-driving Cars: A Case Study on Lane Keeping Systems
Hao Xu
(ABSTRACT)
Machine learning is a powerful method for tackling the self-driving problem: researchers use it to construct a neural network and train it to drive the car. A self-driving car is a safety-critical system, yet a neural network is not necessarily reliable. Its output can be influenced by many factors, such as the quality of the training data and the runtime environment. Moreover, the neural network takes time to generate its output, so the self-driving car may not respond in time. Such weaknesses increase the risk of accidents. In this thesis, to improve the safety of self-driving cars, we apply a delay-aware shielding mechanism to the neural network to protect the car. Our approach builds on previous research on runtime safety enforcement for general cyber-physical systems, which did not consider the delay in generating the output. It contains two steps. The first is to specify the safety properties of the system in a formal language. The second is to synthesize the specifications into a delay-aware enforcer, called the shield, which modifies any violating output so that the specifications are satisfied throughout the delay. We use a lane keeping system, implemented as an end-to-end neural network, as a small but representative case study to evaluate our approach. Our shield supervises the outputs of the neural network and, using a prediction, verifies the safety properties over the whole delay period; it corrects the output if a violation exists. We test our approach with a 1/16-scale truck on a curvy lane we constructed, conducting experiments both on a simulator and on a real road. The results show the effectiveness of our approach in improving the safety of a self-driving car. We will consider more comprehensive driving scenarios and safety features in future work.
Safety of Self-driving Cars: A Case Study on Lane Keeping Systems
Hao Xu
(GENERAL AUDIENCE ABSTRACT)
Self-driving cars are a hot topic nowadays, and machine learning is a popular way to realize them: a neural network is constructed that imitates a human driver's behavior to drive the car. However, a neural network is not necessarily reliable. Many things can mislead it into making wrong decisions, such as insufficient training data or a complex driving environment. Thus, we need to guarantee the safety of self-driving cars. We are inspired to use a formal language to specify the safety properties of the self-driving system, which the system should always follow. The specifications are then synthesized into an enforcer called the shield. When the system's output violates the specifications, the shield modifies the output to satisfy them. Nevertheless, there is a problem with state-of-the-art research on specifications: when the specifications are synthesized into a shield, the delay in computing the output is not considered. As a result, the specifications may not always be satisfied during the delay. To solve this problem, we propose a delay-aware shielding mechanism that continually protects the self-driving system. We use a lane keeping system as a small self-driving case study and evaluate the effectiveness of our approach on both a simulation platform and a hardware platform. The experiments show that the safety of our self-driving car is enhanced. We intend to study more comprehensive driving scenarios and safety features in the future.
Dedication
To my family.
Acknowledgments
I would like to express my thanks to my thesis advisor, Professor Haibo Zeng. Without his enthusiastic help and careful guidance, I could not have finished my thesis smoothly. Whenever I was confused about my research, he always replied with a clear explanation. His rigorous attitude toward research and his open-minded approach to solving problems have helped me greatly, not only in my research but also in my life.
I also thank Professor A. Lynn Abbott and Professor Michael S. Hsiao for their willingness to serve on my advisory committee and for taking time out of their busy schedules to review my research. Their valuable comments and suggestions mean a lot to me.
Last but not least, I want to thank my family and my friends for supporting me through all the obstacles I encountered.
Contents
List of Figures viii
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Lane Keeping System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Contributions and Organization . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Introduction to End-to-end Self-driving Cars 12
2.1 Convolutional Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Fully Connected Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Introduction to Lane Detection 17
3.1 Canny . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Lane Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4 Delay-aware Shielding Mechanism 23
4.1 Introduction to Enforcement Method . . . . . . . . . . . . . . . . . . . . . . 24
4.2 Specification for a Case: Lane Keeping System . . . . . . . . . . . . . . . . . 28
4.2.1 Specification Construction and Shield Synthesis . . . . . . . . . . . . 28
4.2.2 Specification Verification . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Delay-aware Shield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 Experiments 39
5.1 Simulation Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.2 Hardware Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3 Attack Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6 Discussions and Conclusions 47
Appendices 52
Appendix A Specification Proof 53
Bibliography 55
List of Figures
1.1 A delay-aware shield. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 A simulation platform used for the experiment. . . . . . . . . . . . . . . . . 8
1.3 The self-driving car hardware. A car which is adapted to be controlled by a
Raspberry Pi. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 The hardware platform to configure and run the CNN. . . . . . . . . . . . . 10
2.1 The CNN architecture of the end-to-end convolutional neural network for
self-driving cars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 The CNN construction of our control model. . . . . . . . . . . . . . . . . . 14
2.3 The illustration of convolutional kernel. . . . . . . . . . . . . . . . . . . . . 15
2.4 A fully connected layer is a mapping for feature information. . . . . . . . . . 16
3.1 A plane coordinate system with the lane center as the origin. By analyzing
the lane situation, we get the lane width L and the distance d between the
car and the lane center. Staying in the lane means −L/2 ≤ d ≤ L/2. . . . . 17
3.2 Convert the RGB image to a gray image. A gray image contains gradient
information with less data size. . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 The result of Canny edge detection algorithm. . . . . . . . . . . . . . . . . . 19
3.4 An illustration of the Hough Transform, which is actually a space mapping.
Each edge pixel point in image space becomes a line in Hough space. . . . . 20
3.5 The lane detection results. The two red lines are the real lane edges and the
two green lines are the edges found by the algorithm. . . . . . . . . . . . . . 21
3.6 Compute the offset when the camera has captured a picture and lane detection
has been finished. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 A specification for keeping the safety distance to the front vehicle. One vi-
olation is shown as the red edge. When the distance is less than 50m, the
positive relative velocity is a violation. I: dis > 50, O1: v < vmax, O2: v < 0. 27
4.2 The shield for the distance keeping specification. This red edge is the modi-
fication for the mentioned scenario. When the distance is less than 50m, the
relative velocity is modified to negative to satisfy the specification. . . . . . 27
4.3 The car is heading for the left edge of the lane and is about to rush out of
the driveway. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4 Explain the function of specifications. . . . . . . . . . . . . . . . . . . . . . . 30
4.5 Hybrid program for the self-driving specifications. . . . . . . . . . . . . . . 32
4.6 Starting from a safe state S0 that satisfies the specifications, a system may
reach an unsafe state S2 where the specifications do not hold during the delay
ε. Checking φ(I, O) only ensures the safety property of the current state
without considering the dynamic process of the input data. A safety guarantee
for the whole dynamic evolution should be given within time ε such that
φ(I(t), O), t ∈ [0, ε] is consistently satisfied. . . . . . . . . . . . . . . . . . . 36
4.7 Part of the verification rules for d’s evolution. . . . . . . . . . . . . . . . . . 38
5.1 The simulator self-driving car controlled only by the CNN. The car moves
from the bottom to the top in each frame; frames are ordered left to right in
time, and the first frame of the lower row immediately follows the last frame
of the upper row. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.2 The simulator self-driving car controlled by CNN and the general shield. . . 40
5.3 The simulator self-driving car controlled by CNN and the delay-aware shield. 40
5.4 Driving on diverse lane shapes. . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.5 The self-driving car runs with the CNN only. The car moves from left to
right in each frame; frames are ordered left to right in time, and the first
frame of the lower row immediately follows the last frame of the upper row. 42
5.6 The self-driving car runs with the CNN and the general shield. The car
moves from left to right in each frame; frames are ordered left to right in
time, and the first frame of the lower row immediately follows the last frame
of the upper row. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.7 Given the same frame, the shield without prediction (left) takes no action as
long as the current frame satisfies the specifications, while the shield with
prediction (right) predicts ahead and issues a modification if a violation is
possible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.8 The self-driving car controlled by the CNN and the shield with delay prediction.
The car moves from the bottom to the top in each frame; frames are ordered
left to right in time, and the first frame of the lower row immediately follows
the last frame of the upper row. . . . . . . . . . . . . . . . . . . . . . . . . 43
5.9 The comparison of physical platform and simulation platform. . . . . . . . . 44
5.10 Attacks on the simulation platform. . . . . . . . . . . . . . . . . . . . . . . . 45
5.11 Attacks on the hardware platform. . . . . . . . . . . . . . . . . . . . . . . . 46
6.1 The lane detection misses the right lane edge, which leaves the car unable to
know its position. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2 Because of the wrong lane detection, our shield fails to work and the car
rushes out of the lane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.3 A car-like robot model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
List of Abbreviations
[a]P P after all runs of a
Γ Precondition
D Reactive system
S Specification shield
→ Implies
ε Delay
φ Specification
∧ And / Conjunction
ang Normalized steering angle
d Offset between car and lane center, unit: centimeter
I Input data
O Output data
P Postcondition
CNN Convolutional Neural Network
NN Neural Network
Chapter 1
Introduction
1.1 Motivation
A self-driving car, which is also called an autonomous car or a driverless car, is a car that is
able to detect its environment and move by itself with little or without the driver’s manip-
ulation [2]. The origin of self-driving cars dates to the 1920s [1].
There are many benefits of self-driving cars.
• A self-driving car can overcome a human driver’s weaknesses. While a person is emotional, negligent, and error-prone, technology is much more reliable. According to the National Highway Traffic Safety Administration (NHTSA), human mistakes have caused about 93% of fatal traffic accidents [2]. Looking deeper into those accidents, we find that the drivers were mostly drunk, distracted, or fatigued. A self-driving car can reduce the risk of these human factors.
• Self-driving cars can also help reduce traffic congestion, automatically adjusting their routes based on real-time traffic conditions [3].
• Not everyone can drive. Self-driving cars can make travel freer for the elderly, the disabled, and people without a driver’s license [4].
• Self-driving cars can accelerate and decelerate in a timely manner, minimizing CO2 emissions depending on road conditions [5].
People are attracted by these benefits of self-driving cars, but their wide application has always been restricted by one problem: safety. A self-driving car is a safety-critical system [7], and any little failure of the car is a threat to its passengers. There is still potential danger in current self-driving cars. So far, there have been a number of self-driving accidents; top self-driving research companies like Uber and Tesla have both been through fatal ones [30, 31, 32]. The National Transportation Safety Board (NTSB) pointed out that Autosteer is not robust enough to offer safe driving control. Autosteer is a lane keeping function of self-driving cars provided by Tesla, designed to autonomously adjust the car’s steering angle to keep it safely in the lane. Nevertheless, a deadly crash occurred when a driver activated this function on Tesla’s self-driving car Model X. In this case, the driver trusted the Autosteer function so much that he did not even keep his hands on the wheel; the car deviated from the lane and headed into the guardrail.
Self-driving cars should be sufficiently safe; otherwise, all their benefits are meaningless. Too many accidents have reminded us that self-driving cars are not safe yet, and we must minimize their risk. However, ensuring safety is no effortless task. Let us briefly analyze some of the difficulties.
A self-driving system is a complicated cyber-physical system [6]. There are so many sensors, actuators, and communication buses; with these devices, the car does not run independently and must communicate with its environment continuously. It is practically impossible to verify whether the entire system is safe. Also, we are not always able to access the details of the self-driving car. For example, machine learning is a popular way to achieve a self-driving car: with its help, an end-to-end neural network (NN) is constructed to control the car, and under the NN’s control the car can automatically adjust the steering angle [8, 9]. But even though machine learning is developing fast, the interpretability of machine learning methods is still an unsolved problem. There is research on interpretability, but we do not yet have a perfect interpretation [10, 11]. So far, a neural network is like a black box to us: we know what it can do, but we do not know why it does it. For a low-risk system like a movie recommendation system, a lack of interpretability may not be a serious problem. But for a safety-critical system like a self-driving system, how can we trust it if we do not know the principle behind it? Once the neural network makes a wrong decision, we may not be able to find the reason. Many factors can fail a neural network. Is the architecture of the NN unreasonable? Is the training data insufficient? Does the training data contain outliers? Is the wrong decision caused by outside factors, like the lane shape or objects on the road? Beyond these basic factors, there are malicious attacks that can easily mislead the self-driving car into making wrong decisions [12, 13]. If the car stubbornly carries out a wrong decision, accidents will happen. With so many uncertainties, how can we improve the safety of self-driving cars?
1.2 Related Work
For a cyber-physical system like a self-driving system, there have been some methods helping
to improve the safety of the system. We will go over them in general.
The first category comprises fault-tolerance techniques, which allow some faults in a system and compensate for them. N-modular redundancy (NMR) is a common fault-tolerance technique [14, 15]: the same computation is performed N times and the final output is decided by majority voting, so a fault occurring in one computation can be masked by the others. Error correction code (ECC) can also help with fault tolerance [16]. The basic idea of ECC is to add redundancy to the signal; with the redundant information, errors in the signal can be detected. N-version programming (NVP) is another choice for handling faults [17, 18]. Starting from the same initial requirement, functionally equivalent programs are developed independently; two independent programs are less likely to contain the same fault, so NVP is a redundancy mechanism for software. These methods are good at handling faults, but they do not address defects in the system design. If there is a design defect, the result will always be wrong: no matter how many computations are executed, how many code errors are corrected, or how many program versions are generated, these methods will fail.
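As a concrete illustration of the majority-voting idea behind NMR, here is a minimal sketch in Python. The function name and the fault-detection behavior are our own illustration, not taken from any of the cited works:

```python
from collections import Counter

def nmr_vote(outputs):
    """Majority vote over N redundant computations.

    A fault in a minority of modules is masked by the others;
    if no value reaches a strict majority, the fault is at
    least detected, even though it cannot be masked.
    """
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise RuntimeError("no majority: fault detected but not masked")

# Example: one of three redundant modules returns a faulty value.
assert nmr_vote([42, 42, 17]) == 42
```

As the text notes, a shared design defect defeats this scheme: if all N modules compute the same wrong value, the vote simply masks nothing.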
The second category is testing and verification [19, 20, 21], methods proposed to detect design defects. Testing the safety of self-driving cars is a long-term and formidable task: for a self-driving car, reliability needs to be demonstrated over at least billions of miles of safe driving [22]. Verification has its own scalability problems [23]: when the system is scaled up, it is hard to verify that the properties still hold.
The third category is runtime monitoring. Runtime monitoring platforms synthesize a runtime monitor from well-designed high-level system specifications (properties, requirements) [24, 25, 26, 27]. The monitor can be implemented on the target system to watch the system’s status. However, it only checks whether the specifications are met; it does not enforce them when they are violated. Thus, a monitor alone is not good enough for a self-driving system: if the enforcement has to be carried out by the human driver, we cannot call it a fully self-driving car.
The methods in the fourth category are enforcement methods. An enforcer is synthesized from specifications and supervises the system’s status; if the specifications are violated, enforcement is applied to the system. The enforcer in [28] halts the system when a violation occurs, so it is not a suitable choice for a reactive system like a self-driving system. The enforcer in [29] keeps the output in its buffer until the output satisfies the specifications, which causes a non-deterministic delay in the system. The enforcement mechanism in [39] synthesizes a Boolean enforcer from the specifications, which can detect a violating Boolean output and correct it to satisfy the specifications. The enforcer proposed in [56] extends the Boolean enforcer to deal with real values, but it does not take timing properties into account. Thus, this real-valued enforcer may not execute in time because of the delay in the target system: during the enforcer’s observation and checking, the specifications may already have been violated.
To sum up, none of the above methods can immediately correct violations in the self-driving system. Once the safety specifications are violated and the car continues to carry out the wrong control commands, it may rush into a fatal situation. We need an instantaneous enforcer on the system to continually watch for and correct the system’s dangerous outputs. That is what we set out to do in this thesis.
1.3 Lane Keeping System
Before presenting our approach, we first introduce our test case. Our ultimate target is to ensure the safety of an entire self-driving car, but a self-driving system is an extremely sophisticated cyber-physical system. Thus, in this first phase, we do not cover all the safety-related properties of a self-driving car. We start with some fundamental functions that a self-driving car offers and focus on the safety properties of those functions; further research on comprehensive properties is left as future work. In this thesis, we choose a lane keeping system as a simple self-driving case.
The function of a lane keeping system is to automatically steer the car and keep it safely in the lane. Several lane keeping systems already exist, such as Tesla’s Autosteer and Volvo’s Pilot Assist. According to the levels of self-driving [55], a fully self-driving car should be able to keep the car in the lane without the participation of a human driver. However, in the self-driving cars that have been put into use, this function is not yet reliable: providers such as Tesla advise that the driver still needs to pay attention to the driving situation while the function is active. Thus, we are inspired to improve the safety of a lane keeping system. We use an end-to-end convolutional neural network to construct our lane keeping system [49]. It simply takes the image from a single camera as input and outputs the steering angle. We manually drive the car on the lane, collect the data, and use it to train our neural network so that it can adjust the steering angle. We discuss the details of the neural network in Chapter 2.
1.4 Methodology
• Delay-aware Shield
To improve the safety of self-driving cars, we propose a delay-aware shielding mechanism to instantaneously protect the car. Our approach is also an enforcement method, but differing from previous work, we introduce the timing property into the enforcer. Thus, the enforcer can check whether the specifications are satisfied during the delay, and corrections are applied if a violation exists. Our approach works as follows. We first design specifications for the safety properties of the self-driving car. These specifications φ(I, O) are constraints on the car when it outputs its control commands [33, 34, 35, 39]. Here I is the essential input: information that can be used to judge the car’s status and condition. O is the control output of the reactive system D, which in our situation is the self-driving system. Below we list some example specifications for a self-driving system, written in natural language for better understanding; when we program, the specifications need to be written in a formal language, which we introduce in Chapter 4.
• Slow down and stop the car when the traffic light is turning red.
• Slow down when heading for a sharp curve without a clear view [37].
• Keep a safe distance from front vehicles, moving objects, and static objects.
• Keep a distance from the left and right lane edges to stay in the lane [38].
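As a hedged illustration, the last specification above can be expressed as a simple predicate over the offset d and lane width L introduced in Figure 3.1 (the function name is hypothetical; the real specifications are written in a formal language, as explained in Chapter 4):

```python
def phi_lane_keeping(d, L):
    """Specification sketch: the car's offset d (cm) from the lane
    center must keep it inside a lane of width L (cm), i.e.
    -L/2 <= d <= L/2 (see Figure 3.1)."""
    return -L / 2 <= d <= L / 2

assert phi_lane_keeping(d=10, L=40)      # 10 cm offset in a 40 cm lane: safe
assert not phi_lane_keeping(d=25, L=40)  # beyond the lane edge: a violation
```

This simplified predicate ignores the heading angle and vehicle dynamics; the shield described next is what checks such a predicate over time.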
Once the safety specifications φ(I, O) have been designed, we synthesize them into a delay-aware shield S(I(t), O, O′), an enforcer that takes the delay into account. Compared with other enforcement methods, our shield not only monitors and corrects but also predicts (Figure 1.1). For an input I, the output is computed after a delay ε. Previous enforcers like [56] do not consider this delay: they just check whether the specification φ(I, O) holds, even though I and O are not generated at the same time. But a violation can still occur during the delay even when φ(I, O) is satisfied. In our approach, we abstract a dynamic model M of our system that describes the system’s behavior. Our delay-aware shield uses the model M to predict I(t) as the system evolves, and then checks whether the specification is satisfied for the time-varying input I(t) throughout the delay, i.e., (I(t), O) ⊨ φ for all t ∈ [0, ε]. If a violation is found, our shield S(I(t), O, O′) modifies the output O into O′ such that the specification is always satisfied. The shield focuses only on the input and output of the reactive system, so it also works when we are not completely familiar with such a complex system and must treat it as a black box.
Figure 1.1: A delay-aware shield.
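The predict-check-correct logic described above can be sketched as follows. This is a simplified illustration with hypothetical names, which samples the delay interval at discrete points rather than verifying it symbolically as the actual shield synthesis does:

```python
def delay_aware_shield(I, O, phi, predict, eps, correct, steps=10):
    """Sketch of a delay-aware shield S(I(t), O, O').

    predict(I, t) -- dynamic model M, estimating the input at time t
    phi(I, O)     -- the safety specification
    correct(I, O) -- produces a corrected output O' when phi fails
    """
    for k in range(steps + 1):
        t = eps * k / steps              # sample times in [0, eps]
        I_t = predict(I, t)
        if not phi(I_t, O):
            # A violation somewhere during the delay: enforce O'.
            return correct(I_t, O)
    return O  # the specification holds over the whole delay

# Toy usage: the offset d drifts at 30 cm/s; lane half-width is 20 cm.
L_lane, eps = 40.0, 0.2
phi = lambda d, ang: abs(d) <= L_lane / 2
predict = lambda d, t: d + 30.0 * t              # a trivial model M
correct = lambda d, ang: -1.0 if d > 0 else 1.0  # steer back to center

# d = 16 cm is safe now, but the drift crosses the lane edge within eps,
# so the shield overrides the network's steering angle of 0.0:
assert delay_aware_shield(16.0, 0.0, phi, predict, eps, correct) == -1.0
```

The point of the sketch is that checking phi only at t = 0 would accept the unsafe output; checking the predicted I(t) over the whole interval catches the violation before it occurs.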
• Experiment Platforms
To evaluate the effectiveness of our approach, we combine the lane keeping system with our delay-aware shield and test the combined system on our experiment platforms.
We first set up a simulator to run our experiments (Figure 1.2). The simulator is built on the Unity game platform [51], and the computation core of the simulation platform is an Intel CPU.
Figure 1.2: A simulation platform used for the experiment.
In addition to the software platform, we also set up a hardware platform based on a Raspberry Pi to run the combined system. We refit a radio-controlled (RC) car to be driven by a Raspberry Pi rather than a remote controller; the car is shown in Figure 1.3. It has one camera to take pictures of the road. Each captured picture is sent to the neural network on the Raspberry Pi, and the network’s prediction controls the steering angle. We drive the car on a single lane (Figure 1.4a) to record the road information as 160×120 pictures (Figure 1.4b) together with the corresponding control commands such as the steering angle. These data are then used to train a CNN model to achieve the lane keeping objective. As this training process shows, the model’s reliability depends entirely on the quality and amount of the collected data. The car copies the human driver’s behavior: if the driver’s bad driving actions are not cleanly filtered out, there is a serious risk that the self-driving car will make a bad decision, and if there is not enough data covering diverse situations, the car will fail to handle a new situation. Thus, many things can put the car in danger. That is why we want to implement an enforcer to monitor the car’s behavior.
Evaluating our approach on both platforms makes the experimental results more convincing.
Figure 1.3: The self-driving car hardware. A car which is adapted to be controlled by a Raspberry Pi.
(a) A single lane for driving. Data are collected on this lane to train the CNN to achieve the self-driving function.
(b) The perspective of the car’s camera, with picture size 160 ∗ 120.
Figure 1.4: The hardware platform to configure and run the CNN.
1.5 Contributions and Organization
In our work, we aim to improve the safety of a self-driving car with an enforcement mechanism. We solve the delay problem in the enforcer so that we can check whether the specifications are satisfied during the delay. Specifically, our contributions are:
• We propose a delay-aware shielding mechanism to monitor the system’s status during the delay. If the specifications may be violated, the shield enforces a modification of the violating output.
• We construct a lane keeping system based on an end-to-end convolutional neural network and utilize it as a simple self-driving case to evaluate our approach.
• We design concrete specifications for a lane keeping system and generate the delay-aware shield from the specifications.
• We combine our shield with the lane keeping system and implement the combined system on a simulation platform and a hardware platform.
• We compare the effectiveness of our delay-aware shield with a previous enforcement method that is not delay-aware and with no enforcement at all.
The remainder of the thesis is organized as follows.
• In Chapter 2, we introduce the end-to-end convolutional neural network for a lane keeping system and discuss essential concepts such as what a convolutional layer is.
• In Chapter 3, we show how we collect, analyze, and extract the essential information we need, in preparation for the specifications.
• Chapter 4 explains how we design our specifications and how we realize our delay-aware shield.
• Chapter 5 discusses the experiments we conduct to test our approach.
• Chapter 6 concludes. We summarize our approach and discuss its general features, including assumptions, limitations, and possible improvements.
Chapter 2
Introduction to End-to-end
Self-driving Cars
The core control part of our system is a neural network (NN) [45]; more specifically, a convolutional neural network (CNN). In computer science, a neural network is an imitation of the human biological neural network. Since scientists are trying to teach the car to drive more like a human, a neural network is a natural choice. As mentioned before, the network we construct for our car is a CNN, a special type of neural network whose strength is its great ability at image feature extraction and classification. The features of each layer are produced by filtering the previous layer with shared convolutional kernels. The basic structure of a CNN includes the input layer, the output layer, and some hidden layers; the numbers and types of hidden layers are decided by the researchers. Typically there are convolutional layers, fully connected layers, flatten layers, pooling layers, and so on, and a convolutional neural network is a combination of such layers. There have been some classical CNN models, e.g. LeNet-5 for handwritten character recognition [46], AlexNet for image classification [47], and R-CNN for object detection [48]. Its great image-processing ability makes the CNN a strong tool in the self-driving area. NVIDIA proposed an end-to-end convolutional neural network for self-driving cars in 2016 [49]; Figure 2.1 shows the essential structure of this network. This CNN accomplishes a behavioral cloning function [50]. The control system learns
Figure 2.1: The CNN architecture of the end-to-end convolutional neural network for self-driving cars.
the behaviors of a human and imitates them. More specifically, the self-driving car clones the driving behaviors of human drivers by collecting a large amount of human driving data as training data. To make the neural network handle different situations, some adjustments may be made to the network model. The self-driving control system built in our research is based on an open-source self-driving tool [51]. The architecture and parameters of our network are shown in Figure 2.2. We will explain some main concepts of the CNN architecture below.
Figure 2.2: The CNN construction of our control model.
2.1 Convolutional Layer
The function of the convolutional layer is to extract the image's features. The core part of
a convolutional layer is the convolutional kernel. A convolutional kernel is a filter for an
image, Figure 2.3a. For the overlapping part of the image and the kernel, we compute the
sum of the element-wise products (a dot product). The computed result is one local feature
we extract. Once the kernel has slid over the whole image, we have a matrix of all the
extracted features. Such a matrix is called a feature map, Figure 2.3b. In practical
applications, there are often multiple convolutional kernels to scan for different features of
an image.
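As a toy illustration of this sliding dot product (the image and kernel values below are invented; real CNNs use learned kernels over multi-channel images), a "valid" 2-D convolution can be sketched in pure Python:

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image ('valid' mode); at each position the
    feature value is the sum of element-wise products (a dot product)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    feature_map = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        feature_map.append(row)
    return feature_map

# A vertical-edge kernel responds strongly at the dark-to-bright step.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
feature_map = convolve2d(image, kernel)  # largest response at the step column
```

Each entry of `feature_map` is one extracted local feature; sliding the same (shared) kernel over the whole image is exactly the weight sharing described above.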
(a) An example of a convolutional kernel.
(b) The convolutional process and the convolved feature.
Figure 2.3: An illustration of the convolutional kernel.
2.2 Fully Connected Layer
After the feature extraction is finished, a fully connected layer takes over the classification
task. The fully connected layer computes the weighted sum of the features of the previous
layer. It maps the distributed features to the label space, Figure 2.4. Actually, this process
can also be achieved by a convolutional layer. However, instead of focusing on a feature's
spatial position, the fully connected layer focuses on the feature's value. It can also lower
the data dimensions to reduce the computation load.
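The weighted-sum mapping can be sketched as follows (the feature values, weights, and biases are invented for illustration; a real layer has learned parameters and far more neurons):

```python
def relu(x):
    """A common activation: pass positive values, zero out the rest."""
    return x if x > 0 else 0.0

def fully_connected(features, weights, biases):
    """Each output neuron is the weighted sum of all input features plus a
    bias -- a mapping from feature space to label space."""
    return [sum(w * f for w, f in zip(ws, features)) + b
            for ws, b in zip(weights, biases)]

features = [0.5, -1.0, 2.0]            # flattened feature vector
weights = [[0.2, 0.4, 0.1],            # one weight row per output neuron
           [-0.3, 0.1, 0.5]]
biases = [0.0, 0.1]
logits = [relu(z) for z in fully_connected(features, weights, biases)]
```

Following the activation with a nonlinearity such as `relu` is what makes the stacked layers a nonlinear network rather than one big matrix product.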
When constructing a convolutional neural network, we need to add other hidden layers to
tune the model for better performance. A pooling layer can be used for downsampling
between the convolutional layers to trim the data. An activation layer makes the network
nonlinear. The combination of those layers gives us a workable CNN.
Figure 2.4: A fully connected layer is a mapping for feature information.
Chapter 3
Introduction to Lane Detection
Considering the necessity of specifications, we use safety specifications to check each
control output of the CNN. We enforce a dangerous output into a new output that satisfies
the specifications. But before we do that, we also need the car's situation as the input I for
the specification φ(I, O), such as where the car is in the lane. For example, if we want the
car to always stay in the lane, the first thing we need to know is the distance between the
car and the lane center and how wide the lane is, Figure 3.1. Our information source is the
camera. Once the car's camera takes a picture, we need to extract the useful information
from the picture. This involves computer vision. The following sections illustrate the core
idea of lane analysis.
Figure 3.1: A plane coordinate system with the lane center as the origin. By analyzing the lane situation, we get the lane width L and the distance d between the car and the lane center. Staying in the lane means −L/2 ≤ d ≤ L/2.
3.1 Canny
To know where the car is, we first need to know where the lane is. There are different
ways to detect a lane, such as machine learning approaches to lane detection [52] or classical
digital image processing (DIP). In our research, we choose the DIP way due to its
lower requirements on the CPU's computing capability. We'll discuss how we find the two
lane edges, the left and the right.
The Canny edge detection algorithm is an excellent algorithm to detect a lane. The Canny
algorithm was proposed by John F. Canny [53]. The objective of Canny is to recognize pixels
with obvious gray-value changes in the target image. The rate of change of a pixel's gray
value is called the gradient. We first convert the image to a gray image, which contains the
gradient information, Figure 3.2. An edge is a set of pixels with larger gradient values.
Canny is a multi-stage algorithm which mainly contains the following steps.
Figure 3.2: Convert the RGB image to a gray image. A gray image contains the gradient information with a smaller data size.
• Smooth the image and remove noise with a Gaussian filter.
• Find the intensity gradients of the image. The edge lies in the pixels with large gradients.
• Apply non-maximum suppression to eliminate false edge detections. In the
process of Gaussian filtering, the edge may be enlarged, so we need to suppress
such pixels.
• Use two thresholds to determine possible (potential) boundaries.
When we apply the Canny edge detection algorithm to our camera image, we get the result
shown in Figure 3.3. That is the edge we want to find. It may contain some edges that are
irrelevant to the lane edge, so we need some operations such as choosing a region of interest
in the image.
Figure 3.3: The result of Canny edge detection algorithm.
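Two of the stages above, the gradient computation and double thresholding, can be sketched in miniature on a toy 1-D gray-value profile (this is only an illustration of the idea, not the full 2-D Canny pipeline; the threshold values are invented):

```python
def gradient(row):
    """Central-difference gradient of a 1-D gray-value profile."""
    return [0.0] + [(row[i + 1] - row[i - 1]) / 2
                    for i in range(1, len(row) - 1)] + [0.0]

def double_threshold(grads, low, high):
    """Canny-style labeling: 'strong' above high, 'weak' between the two
    thresholds, 'none' below low."""
    return ['strong' if abs(g) >= high else 'weak' if abs(g) >= low else 'none'
            for g in grads]

row = [10, 10, 10, 120, 200, 200, 200]   # dark road, then a bright lane marking
labels = double_threshold(gradient(row), low=20, high=60)
# the steepest transition pixel is labeled 'strong'
```

In the real algorithm, "weak" pixels are kept only when connected to a "strong" one (hysteresis), which is what suppresses noise while preserving the lane edge.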
3.2 Lane Fitting
With the Canny edge detector, we can find the edge's position. But the result is just a set
of pixel points which cannot be directly used. Seen from the computer, these are only
numbers. The computer cannot tell where the edge is, what the direction of the edge is, or
even whether there is an edge. We need to find a line that mathematically describes the
lane. The method we use to find such a line is the Hough Transform [54]. The biggest
advantage of the Hough Transform is that even for a dashed line, a partially missing line,
or scattered points taken from a line, it can still detect the complete line. In the Hough
Transform, each edge pixel point gets the chance to vote for finding such a line.
Assume a line y = k ∗ x + b, where k is the slope and b is the intercept. A problem with
such an expression is that, when the line is parallel to the y-axis, the slope k becomes
arbitrarily large. There is another way to express the line, in a polar coordinate system: the
line can be written as ρ = x ∗ cos(θ) + y ∗ sin(θ). For points (x0, y0) and (x1, y1) on the line
y = k ∗ x + b, there are curves ρ = x0 ∗ cos(θ) + y0 ∗ sin(θ) and ρ = x1 ∗ cos(θ) + y1 ∗ sin(θ).
The value of (ρ, θ) describing the line is the same for both points. If we use θ and ρ as the
horizontal and vertical axes, the curves ρ = x0 ∗ cos(θ) + y0 ∗ sin(θ) and
ρ = x1 ∗ cos(θ) + y1 ∗ sin(θ) will intersect at one point, Figure 3.4.
Figure 3.4: An illustration of the Hough Transform, which is actually a space mapping. Each edge pixel point in image space becomes a curve in Hough space.
If we transform all the edge pixel points into the same Hough space, those points will vote
for their (ρ, θ) and the winner is selected.
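The voting can be sketched as a small accumulator (a coarsely discretized toy version; practical implementations use finer ρ/θ grids, vote thresholds, and many more points):

```python
import math

def hough_vote(points, theta_steps=180):
    """Each edge pixel (x, y) votes for every discretized (rho, theta) it
    could lie on; the accumulator cell with the most votes is the line."""
    votes = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = k * math.pi / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, k)] = votes.get((rho, k), 0) + 1
    best = max(votes, key=votes.get)
    return best, votes[best]

# Scattered points from a dashed vertical lane edge at x = 5:
(rho, k), count = hough_vote([(5, 0), (5, 2), (5, 7), (5, 9)])
# all four points agree on rho = 5, so that cell collects all four votes
```

This is why the transform tolerates dashed or partially missing lines: every surviving point still casts its vote for the same (ρ, θ) cell.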
Once the edge is detected and the line fitting those points is found, the lane detection
is finished. We can draw the line back on the original image to evaluate the detection effect.
The lane detection result is shown in Figure 3.5.
Figure 3.5: The lane detection results. The two red lines are the real lane edges and the two green lines are the edges found by the algorithm.
When we have prepared the necessary materials, we can compute the wanted value, e.g.,
the distance d between the self-driving car and the lane center, which we call the offset. To
get the offset, we first find the midpoint of the two lines' endpoints at the lower boundary
of the image. According to the principles of photography, the camera's position is in the
middle of the image, and our camera is installed at the center of the car. We can then
easily get the offset by subtracting the lane center from the car's position, Figure 3.6. The
unit is centimeters. A negative value means the car is to the left of the lane center.
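A minimal sketch of this computation (the pixel coordinates, image width, and pixel-to-centimeter scale below are invented for illustration):

```python
def lane_offset(left_x, right_x, image_width, cm_per_pixel):
    """Offset of the car from the lane center, in centimeters.
    A negative value means the car is to the left of the lane center."""
    lane_center = (left_x + right_x) / 2    # midpoint of the two edge lines
    car_position = image_width / 2          # camera mounted at the car center
    return (car_position - lane_center) * cm_per_pixel

# Hypothetical numbers: a 320-pixel-wide image, lane edges at x = 100 and 260.
offset = lane_offset(100, 260, 320, cm_per_pixel=0.25)  # car left of center
```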
Figure 3.6: Compute the offset after the camera has captured a picture and the lane detection has been finished.
Chapter 4
Delay-aware Shielding Mechanism
As we mentioned before, researchers have spent a lot of energy on self-driving cars. But
the development of this technology is still not mature enough to perfectly drive a car.
Self-driving cars will never be deployed at large scale as long as the safety concern is not
solved. According to the National Highway Traffic Safety Administration (NHTSA), we are
far away from the fully automated driving standard stipulated by NHTSA's policy [55]. The
news has reported a number of self-driving accidents since self-driving cars were put into
operation, like Tesla's Autosteer mentioned before. We do not want to see such tragedies
again, and something must be done to avoid them. Thus, we propose to run an enforcer
that observes whether the self-driving car's control output satisfies the safety specification
and modifies any output that violates it. We can prove that if the self-driving car always
follows the safety specification rules, the car will stay in a safe state. But there is still a
problem: the delay problem. On a platform with high computation capacity, the delay can
be ignored. But when the computation platform is not strong enough, the delay is too large
to be ignored. Thus, we suggest adding the time property into the specification and taking
the delay into account.
4.1 Introduction to Enforcement Method
We first explain some general concepts of the enforcement method. Specifications, denoted
as φ [39], stipulate the responses of a system to an inside or outside event [33, 34, 35, 56].
They are the safety properties that a system should always follow. A shield S is the enforcer
synthesized from the specifications, which corrects any output that violates them. Any
action the system takes should always obey the specifications. Such systems are called
reactive systems, denoted as D.
* Reactive systems. A reactive system D = (Q, q0, ΣI, ΣO, δ, λ) is expressed by a
Mealy machine, where Q is a finite set of states, q0 ∈ Q is the initial state where the
machine starts, ΣI is the input alphabet over a finite set of inputs I = {i1, ..., im},
ΣO is the output alphabet over a finite set of outputs O = {o1, ..., on},
δ : Q × ΣI → Q is a complete transition function, and λ : Q × ΣI → ΣO is the
complete output function. Clearly, for a given input σI ∈ ΣI in state qi ∈ Q, the
transition function δ gives the unique next state qi+1 = δ(qi, σI) with qi+1 ∈ Q. Also,
for a given input σI ∈ ΣI in state qi ∈ Q, the output function λ returns the unique
output σO = λ(qi, σI) with σO ∈ ΣO.
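A Mealy machine can be sketched directly from this definition (the two-state machine below is a made-up toy, not our control system):

```python
class MealyMachine:
    """D = (Q, q0, Sigma_I, Sigma_O, delta, lam): every input both moves the
    state via delta and produces an output via lam."""
    def __init__(self, q0, delta, lam):
        self.state = q0
        self.delta, self.lam = delta, lam

    def step(self, sigma_i):
        out = self.lam[(self.state, sigma_i)]   # output depends on state and input
        self.state = self.delta[(self.state, sigma_i)]
        return out

# Toy machine: input 'b' flips the state; the output depends on both.
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q1', ('q1', 'b'): 'q0'}
lam = {('q0', 'a'): 'x', ('q0', 'b'): 'y',
       ('q1', 'a'): 'y', ('q1', 'b'): 'x'}
m = MealyMachine('q0', delta, lam)
outputs = [m.step(s) for s in 'abba']
```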
* Specifications. A specification φ = (Qφ, q0, Σ, δφ, Fφ) is expressed as a finite
automaton, which specifies the safety properties of a reactive system D. Qφ is a finite
set of states, q0 ∈ Qφ is the initial state where the specification starts, and
Σ = ΣI × ΣO is the input alphabet of the specification, which consists of the input ΣI
and output ΣO of the reactive system D. δφ is the transition function and Fφ ⊆ Qφ is
the set of fatal error states.
Let σ̄ = σ0σ1 . . . be an input trace of a specification with σi ∈ Σ for all
i = 0, 1, . . . . Let q̄ = q0q1 . . . be the corresponding state sequence such that all the
state transitions are given by the transition function qi+1 = δφ(qi, σi).
The safety condition is that, for any input trace σ̄, the corresponding state sequence
q̄ always stays in the safe states, i.e., for any i = 0, 1, . . . , we have qi ∈ (Qφ \ Fφ);
then we say that D satisfies specification φ. Let L(φ) be the set of all input traces
satisfying φ, and let L(D) be the set of all input traces provided by D. D satisfies φ
if and only if L(D) ⊆ L(φ).
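The safety condition can be sketched as a run of the specification automaton over a trace (a toy automaton with one fatal state; the state names and alphabet are invented):

```python
def satisfies(trace, delta_phi, q0, fatal):
    """The trace satisfies phi iff the induced state sequence never
    enters a fatal error state."""
    q = q0
    for sigma in trace:
        q = delta_phi[(q, sigma)]
        if q in fatal:
            return False
    return True

# Toy specification: the input 'bad' leads to the fatal state q1 forever.
delta_phi = {('q0', 'ok'): 'q0', ('q0', 'bad'): 'q1',
             ('q1', 'ok'): 'q1', ('q1', 'bad'): 'q1'}
ok = satisfies(['ok', 'ok', 'ok'], delta_phi, 'q0', fatal={'q1'})
bad = satisfies(['ok', 'bad', 'ok'], delta_phi, 'q0', fatal={'q1'})
```

Checking every trace this way is exactly the condition L(D) ⊆ L(φ): every trace D can produce must be one the automaton accepts without touching Fφ.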
* Shield. A safety shield S = (S′, s′0, Σ, ΣO′, δ′, λ′) is defined as a correctness enforcer to
execute the specification. It is itself a reactive system such that, even if D
violates φ, the shield S will modify the output O into a new output O′ to ensure that
the combined system (D ◦ S) still satisfies φ. The parameters of the tuple S are defined
as follows: S′ is a finite set of states, s′0 ∈ S′ is the initial state, Σ = ΣI × ΣO is the
input alphabet (the same as that of the specification), ΣO′ is the output alphabet
composed of the possible output values, δ′ : S′ × Σ → S′ is the transition function,
and λ′ : S′ × Σ → ΣO′ is the output function. The composition is
D ◦ S = ⟨S′′, s′′0, ΣI, ΣO′, δ′′, λ′′⟩, where S′′ = (Q × S′), s′′0 = (q0, s′0), ΣI is the set
of input values of D, ΣO′ is defined as in S, δ′′ is the transition function, and λ′′ is
the output function. More specifically, λ′′((q, s′), σI) is defined as λ′(s′, σI, λ(q, σI)),
which means the output of the combined system is given by the shield. We first get
the output of system D, λ(q, σI), and then use σI and λ(q, σI) as the new input to get
the output of S. Similarly, δ′′ is defined as
δ′′((q, s′), σI) = (δ(q, σI), δ′(s′, σI · λ(q, σI))). Let L(D ◦ S) be the set of input traces
generated by the composed system. Clearly, if L(D) ⊆ L(φ), the shield S should not
interfere with the original output, which means σ′O = σO. But if L(D) ⊈ L(φ), the
shield S needs to modify the original output of D to eliminate the erroneous behaviors
in L(D)\L(φ).
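Stripped of the automaton state, the composition D ◦ S can be sketched as an output filter (a stateless simplification of the Mealy-machine shield; `phi`, `correct`, and `D` here are placeholder callables, not our actual system):

```python
class Shield:
    """Correctness enforcer S: pass D's output o when phi(i, o) holds,
    otherwise replace it with a satisfying output o'."""
    def __init__(self, phi, correct):
        self.phi = phi           # phi(i, o) -> bool: the specification check
        self.correct = correct   # correct(i, o) -> o': a safe replacement

    def enforce(self, i, o):
        return o if self.phi(i, o) else self.correct(i, o)

def composed(D, S, i):
    """D ∘ S: the combined system's output is the shield's output."""
    return S.enforce(i, D(i))

# Toy specification: the output must never exceed the input.
S = Shield(phi=lambda i, o: o <= i, correct=lambda i, o: i)
passed = composed(lambda i: i - 1, S, 5)    # 4 satisfies phi: unchanged
clamped = composed(lambda i: i + 1, S, 5)   # 6 violates phi: replaced by 5
```

This mirrors the non-interference property above: when D already satisfies φ, the shield is invisible; only violating outputs are rewritten.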
To illustrate the concepts of specification and shield, we use a distance keeping case as an
example. Mathematically, we can use linear temporal logic (LTL) to define the specification
formulas [40]. The specification for keeping a safe distance to a front vehicle is expressed as
φ = {0 < dis < 50 → v < 0, 50 ≤ dis → v < vmax}, where dis is the distance between the
self-driving car and the front vehicle, v is the relative velocity, and vmax is the maximum
physical velocity constraint. An automaton expression of the formulas is shown in Figure 4.1.
I is the input dis > 50, O1 is v < vmax, O2 is v < 0. State 0 is the situation where the car
is more than 50 m away from the front vehicle and state 1 is the less-than-50-m situation.
State 2 is the error state; all edges into state 2 are dangerous actions. The shield synthesized
from the specification is shown in Figure 4.2. When the self-driving car is more than 50
meters away from the front vehicle, the only restriction is the physical velocity constraint.
The shield does nothing to the car, and the car follows every driving command given by
the neural network, like steering and throttle. When the car gets within 50 meters of the
front vehicle, if the neural network lets the car slow down, then the output stays the
same; if the output would let the car speed up, the shield modifies the output to decelerate
the car. In fact, this is the safe driving strategy suggested by the official driver's manual
for human drivers.
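The behavior of this distance-keeping shield can be sketched as follows (`V_MAX` and the fallback values are invented; the real shield is an automaton synthesized from the specification, not an if-statement):

```python
V_MAX = 33.0   # assumed maximum physical velocity, not fixed by the text

def distance_shield(dis, v):
    """phi = {0 < dis < 50 -> v < 0, 50 <= dis -> v < V_MAX}: within 50 m
    of the front vehicle, only a negative relative velocity may pass."""
    if 0 < dis < 50:
        return v if v < 0 else -1.0          # force deceleration
    return v if v < V_MAX else V_MAX - 1.0   # physical constraint only

far = distance_shield(60, 10.0)     # beyond 50 m: command passes through
near = distance_shield(30, 5.0)     # within 50 m, speeding up: modified
safe = distance_shield(30, -3.0)    # within 50 m, slowing down: unchanged
```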
Figure 4.1: A specification for keeping a safe distance to the front vehicle. One violation is shown as the red edge. When the distance is less than 50 m, a positive relative velocity is a violation. I: dis > 50, O1: v < vmax, O2: v < 0.
Figure 4.2: The shield for the distance keeping specification. The red edge is the modification for the mentioned scenario. When the distance is less than 50 m, the relative velocity is modified to be negative to satisfy the specification.
How many specifications φ should be applied to a reactive system? It depends on the user's
safety purpose. Notice that more specifications will restrict the system more, and the
system may become too cautious to run normally. Thus, it is not a good idea to set too
many specifications. We'll design some specifications for a lane keeping system as a simple
test case.
4.2 Specification for a Case: Lane Keeping System
We'll start from a lane keeping system as a case to explain our method. It is a convenient
function for drivers that can adjust the steering angle automatically and keep the car driving
in the lane. As we showed in the introduction, the accidents caused by this function have
raised people's concerns over it. Let's build some specifications to let the car stay in the
lane safely.
4.2.1 Specification Construction and Shield Synthesis
The necessary parameters for a specification φ(I, O) are the input I and output O. Let the
control system be a reactive system D. I is the input information describing the situation
that the reactive system is experiencing. The output O is the control command given by
the reactive system D. In our case, the reactive system D is the CNN we have trained and
the input I is the information we extract from sensors like a camera, as shown in Chapter
3. The output O is the steering decision of D. In other words, if the CNN has been
trained to have the automatic steering function, then the output O is the output of that
CNN. For each known I and O, the specification φ offers its requirements for the output
O. A violating output O will be modified to satisfy φ. For a lane keeping system, what
kind of specification can realize such a goal?
Figure 4.3: The car is heading for the left edge of the lane and is about to rush out of the driveway.
As Figure 4.3 shows, what if the car is close to the left lane edge but the neural network
still commands the car to head for the edge? This is obviously an unsafe action. Faced
with such a situation, the car should discard the original decision made by the CNN and
steer to the right. When the car returns to the vicinity of the lane center and runs
normally, we had better let the CNN take charge again and make the specification less
rigorous, because if the specification is always strict, the self-driving car may be too
cautious to move. We make specifications like the following items.
• φ1 : −10 ≤ d ≤ 10 =⇒ −1 ≤ ang ≤ 1
• φ2 : 10 < d ≤ 20 =⇒ −1 ≤ ang ≤ 0
• φ3 : 20 < d ≤ 30 =⇒ −1 ≤ ang ≤ −0.5
• φ4 : −20 ≤ d < −10 =⇒ 0 ≤ ang ≤ 1
• φ5 : −30 ≤ d < −20 =⇒ 0.5 ≤ ang ≤ 1
Figure 4.4: Illustration of the function of the specifications.
The purpose of those specifications is shown in Figure 4.4. The lane is divided into 5 areas.
Here d is the offset we compute from the image; the unit for d is centimeters. The range of
d is decided by our lane width; on our hardware platform, the lane width is 60 centimeters.
The range of ang is the requirement for the steering angle, and the value of ang is the
normalized steering angle. For the first specification, when the car is driving around the
lane center, we impose no restriction on the control output and the car follows all the
CNN's commands. For the second specification, if the car has an obvious right deviation
from the center of the lane, we need to make sure that every steering angle given by the
CNN steers to the left. The third specification also covers right deviation; the difference is
that when the right offset is large, the steering angle to the left should also be large. φ4
and φ5 are similar to φ2 and φ3; they are specifications for left deviation.
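Putting φ1–φ5 together, the enforcement amounts to clamping the CNN's steering angle into the range the current offset permits (a sketch of the shield's effect only; the synthesized shield itself is an automaton, as in [56]):

```python
def allowed_range(d):
    """Steering-angle interval [lo, hi] permitted by phi1..phi5
    for offset d in centimeters."""
    if -10 <= d <= 10:
        return (-1.0, 1.0)    # phi1: near the center, no restriction
    if 10 < d <= 20:
        return (-1.0, 0.0)    # phi2: right deviation, steer left
    if 20 < d <= 30:
        return (-1.0, -0.5)   # phi3: large right deviation, steer hard left
    if -20 <= d < -10:
        return (0.0, 1.0)     # phi4: left deviation, steer right
    if -30 <= d < -20:
        return (0.5, 1.0)     # phi5: large left deviation, steer hard right
    return (-1.0, 1.0)        # outside the specified areas: unrestricted

def lane_shield(d, ang):
    """Clamp the CNN's normalized steering angle into the permitted range."""
    lo, hi = allowed_range(d)
    return min(max(ang, lo), hi)

kept = lane_shield(0, 0.7)      # near center: CNN output unchanged
forced = lane_shield(25, 0.3)   # far right yet steering right: forced to -0.5
```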
We synthesize the specifications into a general shield, just like the synthesis method in [56].
Assumption. Notice that those specifications are built based on some assumptions. We
assume that the essential input I and output O are accurate and readily accessible to the
specifications. For our specifications, for instance, the car's position is assumed to be the real value.
The position is given by the lane detection in our research. The detection quality has a
great impact on the result. The lane detection works well on our lane, but if we need
more accurate position information, there are other options besides lane detection. For
example, a high-precision map contains accurate lane information; combined with sensors
like Lidar, the car can easily know its position in the lane.
4.2.2 Specification Verification
If we have built a specification, we first need to know that this specification does keep the car
safe as long as the car always follows the specification's instructions. If the specification itself
is not right, the car controlled by it won't be safe. Thus, how to guarantee the correctness
of our specifications is an essential issue. This inspires us to prove them mathematically. We
need some theoretical support, like proof rules. These rules can be axioms, theorems, and
derived rules. There is a summary of the crucial rules for cyber-physical system verification
in [44]. We'll start our proof from scratch in the following.
To begin with, we need to model our system. We model the system with differential
equations.
Definition 4.1. Hybrid Program. A hybrid program is mathematically expressed as
Γ → [{x′ = f(x) & Q}; {c1 ∪ c2 ∪ · · · ∪ cn}]P, where x is the state variable; it is a function
of time t and x′ is the derivative of x with respect to t. Γ = {γ1, γ2, . . . } is the set of
preconditions we can assume. Q is the evolution domain constraint for the differential
equation x′ = f(x): the system follows x′ = f(x) within the domain Q and is not allowed
to leave Q. {c1 ∪ c2 ∪ · · · ∪ cn} are the control actions the system may take when the
control event is triggered. P is the postcondition whose correctness we want to check after
starting from the initial Γ and evolving in the domain Q.
Figure 4.5: Hybrid program for the self-driving specifications.
Under such a definition, we construct the hybrid program for our self-driving specifications,
displayed in Figure 4.5. What this hybrid program checks is the car's position: if the car
starts in the lane and drives while obeying the specifications, the car stays in the lane.
The definitions of the symbols are:
d: Car offset.
v: Velocity of car.
a: Acceleration of car.
R: Right steering coefficient.
L: Left steering coefficient.
aR: Range of acceleration in the right-hand direction; thus, −aR is the range to the left.
Now the problem formulation has been decided. Our next step is to solve the problem. We'll
discuss the proof rules we apply; the detailed proof steps are attached in Appendix A,
written with the syntax defined by [44, 57]. Some proof rules are listed below.
• implyR (→R): from the premise Γ, P ⊢ Q, ∆, conclude Γ ⊢ P → Q, ∆.
The implyR proof rule means that a proof of an implication P → Q can be carried out
by assuming its left-hand side P (by pushing it into the assumptions in the antecedent)
and proving its right-hand side Q.
• loop: from the premises Γ ⊢ J, ∆ and J ⊢ P and J ⊢ [a]J, conclude Γ ⊢ [a∗]P, ∆.
The loop rule proves by introducing a loop invariant J. Here ∗ means non-deterministic
repetition of model a. To prove that P is true after the loop, we first prove that J is
true initially, then that J implies P, and last that J is always true after one round of
a whenever J is true before.
• composeb ([;]): [a; b]P ↔ [a][b]P.
The composeb rule separates a sequentially composed model into successive models:
after every run of a, all runs of b satisfy P.
• choiceb ([∪]): [a ∪ b]P ↔ [a]P ∧ [b]P.
The choiceb rule means that for a non-deterministic model [a ∪ b], the system randomly
chooses a or b; P is true after every run of the model if and only if P is true after a
runs and P is also true after b runs, and vice versa.
• andR (∧R): from the premises Γ ⊢ P, ∆ and Γ ⊢ Q, ∆, conclude Γ ⊢ P ∧ Q, ∆.
The andR rule means that to show Γ implies P ∧ Q, we show that Γ implies P and
that Γ implies Q separately.
• assignb ([:=]): [x := e]p(x) ↔ p(e).
The assignb rule is a reduction of the proof expression: p as a function of x is true
after e is assigned to x if and only if p(e) is true at first.
• testb ([?]): [?Q]P ↔ (Q → P).
The testb rule states that the model continues only if Q is true; [?Q]P is equal to
Q → P.
Using those proof rules together, we can finally prove that our specifications are safe for
the self-driving car system. The car starts in the lane. If a decision made by the CNN
has the chance to let the car plunge off the lane, our shield will modify the decision. The
proof has shown that as long as the specification is obeyed, the car will safely stay in
the lane.
4.3 Delay-aware Shield
Since the specifications have been designed and have also been proved to safely protect the
self-driving car, the safety task seems to be finished. Is that true, or is there still something
we failed to notice? Unfortunately, we are not done. If we add the general shield to a
system, we will find that it cannot react to the system's violations in time.
In our proof, we did prove the safety property of our specification. But we ignored
a problem, which is the crucial reason: there is always a difference between theory and
reality. A theory may be proved to be valid while it does not fit the real world perfectly.
For the shield, there is also such a dilemma, brought about by the time delay.
The delay is usually an unavoidable and tough problem in a reactive system. A long time
delay always comes with potential risks to the system. During the period of the delay, the
parameters, properties, and features of the system may vary. Thus, a design for time-
invariant systems cannot be directly used in a delayed system. Considering specifications,
we intend to check whether the output O of the reactive system D(I, O) satisfies the
specification φ(I, O): ?D(I, O) |= φ(I, O). A modification of the output is activated by
the correctness enforcer (the shield S) if a violation exists, such that O is replaced via
S(I, O, O′) and the combined system D ◦ S(I, O′) |= φ(I, O′). Up to now, the general
thoughts work fine. But when the shield is applied to a physical system, the input data I
and the output data O are not generated simultaneously. The system's sensors collect the
necessary surrounding information and send it to the control part as input data I. With I,
the control part generates its control command as the control output O for the system's
actuator. Therefore, there is a computational delay ε between I and O in the controller.
When checking the specification, we are checking whether φ(I, O) holds. But the real
situation is that at this moment, the surrounding environment has changed from I to
Iε. Even if φ(I, O) holds, how do we guarantee that φ(Iε, O) still holds? Figure 4.6 shows
more details. During the period of the delay ε, will φ(I(t), O), t ∈ [0, ε], always hold? Thus,
the satisfaction of the specification should be checked over the total delay. However, the
problem is that we only know the original input data I. For input data I(t) with evolution
domain t ∈ (0, ε], we are unable to collect all the information.
Figure 4.6: Starting with a safe state S0 that satisfies the specifications, a system may also reach an unsafe state S2 where the specifications do not hold during the period of the delay ε. Checking φ(I, O) only ensures the safety property of the current state without considering the dynamic process of the input data. A safety guarantee for the whole dynamic evolution should be given within time ε such that φ(I(t), O), t ∈ [0, ε], is consistently satisfied.
One way to deal with such a problem is prediction. Based on reasonable computation,
we can predict the future input change. However, prediction is a challenging task, even if
the duration is not long. Research on prediction has a long history. There are many ways
to do prediction in control systems, like the widely applied Model-Predictive Control
(MPC) [41] and the Kalman filter [42]. However, those approaches are not sensible
choices for our purpose. Now we discuss how the mentioned approaches work and why they
are not good enough. They first find a model F which can describe the reactive system.
The model may not contain all the properties of the system, but it needs to characterize
the system behavior of interest. Then, for a state variable xk, k = 0, 1, 2, ..., the model
determines the corresponding next state xk+1 = F(xk) by applying some objective or
optimization functions. When the system evolves into timestamp k + 1, the model is
updated according to the current situation. The issue is that this process only gives a
discrete prediction for state k + 1 based on state k. That is, we can only check φ(I(ε), O)
without contemplating all the potential reachable states at time t ∈ (0, ε).
Robustness is another challenge [43]. Robustness is a system's capacity to keep its
properties under some unknown or uncontrollable parameter changes. Within the evolution
domain t ∈ [0, ε], the physical world is continuous and fickle. For example, the acceleration
of a moving car can be influenced by wind or pavement conditions. That is, some
parameters fi, . . . , fj of the model F = (f1, f2, . . . , fn) used for prediction have changed
while we treat them as stable values. We are supposed to take such model mutation into
account in case the mutation gets the chance to bring the reactive system into an unsafe
state.
Based on the analysis above, we propose a logical verification approach to solve these
problems. Instead of enumerating all the possible states, we use mathematical proof to
check the property. Instead of predicting with an invariant model, we allow those model
parameters to change by setting them as preconditions or evolution domain constraints in
our proof. The model is then designed as below.
Γ & t = 0 → [f(x) − δ1 ≤ x′ ≤ f(x) + δ2, t′ = 1 & Q ∧ t ≤ ε]P
Problem Description: Use one previous specification φ1 as an example.
φ1 : −10 ≤ d ≤ 10 =⇒ −1 ≤ ang ≤ 1. For the current −10 ≤ d ≤ 10, the specification
does not restrict ang.
We need to check d's evolution. If d is no longer within −10 ≤ d ≤ 10, there should be a
modification action. Similarly, we use the modeling and verification rules above. We build
the hybrid program as follows and combine this verification process into the shield so that
it can automatically predict. Part of the verification process is shown in Figure 4.7. This
combination is accomplished with the aid of Z3, a theorem prover provided by Microsoft [58].
d = d0 & −10 ⩽ d ⩽ 10 & t = 0 → [d′ = v, t′ = 1 & t ≤ ε ∧ v ≥ −20 ∧ v ≤ 20] (−10 ⩽ d ⩽ 10)
Figure 4.7: Part of the verification rules for d’s evolution.
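For φ1, the delay-aware check amounts to bounding every offset reachable during the delay window. Below is a pure-Python sketch of this interval reasoning (our shield discharges the condition with Z3; the velocity bounds ±20 come from the formula above, with units assumed to be cm/s, and the ε values are invented):

```python
def phi1_holds_during_delay(d0, eps, v_min=-20.0, v_max=20.0):
    """With d' = v and v in [v_min, v_max], the offsets reachable over
    t in [0, eps] form the interval [d0 + v_min*eps, d0 + v_max*eps].
    phi1's premise -10 <= d <= 10 must hold for every reachable offset."""
    lo, hi = d0 + v_min * eps, d0 + v_max * eps
    return -10 <= lo and hi <= 10

# With a 0.2 s delay the offset can drift up to 4 cm either way.
ok = phi1_holds_during_delay(5.0, 0.2)       # 5 ± 4 stays inside [-10, 10]
risky = phi1_holds_during_delay(8.0, 0.2)    # 8 + 4 = 12 can leave the band
```

When the check fails, the shield must already act on the current output rather than wait for the violation to materialize at time ε.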
Chapter 5
Experiments
5.1 Simulation Experiments
We have trained a CNN to implement a lane keeping system which can automatically steer
the car. We combine the system with our delay-aware shield and first test the combined
system on a simulator. We compare three situations: the car is controlled by the
convolutional neural network alone, the car is controlled by the CNN and also runs with a
general shield, and the car is controlled by the CNN and runs with a delay-aware shield. As
Figure 5.1 shows, if the car is controlled by the CNN alone, it actually gets out of the lane.
Figure 5.1: The simulator self-driving car controlled only by the CNN. The car moves from the bottom to the top in each frame. A frame's right neighbor is the next frame. The first frame of the lower row is the timestamp just after the last frame of the upper row.
If we use the shield without considering the delay to constrain the self-driving car's behavior,
the result is shown in Figure 5.2. The car does not rush out, but it still drives right on the
lane edge of the simulator.
Figure 5.2: The simulator self-driving car controlled by the CNN and the general shield.
When we consider the delay in the simulator, we have the result in Figure 5.3. The car
safely drives in the lane; the delay-aware shield is protecting the car.
Figure 5.3: The simulated self-driving car controlled by the CNN and the delay-aware shield.
A reliable self-driving car should be able to drive billions of miles safely. Although we have
not carried out such long-distance testing, we evaluate our approach in diverse road
situations on the simulator, and we intend to build more complex scenarios in future work to
provide driving scenes for more complete testing. Some lane shapes are shown in Figure 5.4.
Consistent with the results displayed above, our shield protects the car and keeps it driving
safely in the lane.
Figure 5.4: Driving on diverse lane shapes.
5.2 Hardware Experiments
We also test the effect of our approach on a hardware platform. The same CNN is implemented
on an RC car with a Raspberry Pi as its onboard computer. We compare three different
situations: the car is totally controlled by the CNN; the car is controlled by the CNN with
the general shield running simultaneously; and the car runs with both the CNN and the shield,
where the shield is combined with the prediction function. We collect the corresponding data.
When the CNN is the only controller, the result is shown in Figure 5.5. When the car
approaches a turn, we can clearly see that the CNN's decision lets the car drift away from
the lane. If a fence were set on the lane edge, the car would undoubtedly hit it.
To avoid such accidents, we generate the shield from our specification and add it to the CNN
to monitor the CNN's behavior and limit its unreliable decisions. The result is shown in
Figure 5.6: the car's situation is better than when it runs without the shield.
Figure 5.5: The self-driving car runs with the CNN only. The car moves from the left to the right in each frame. A frame's right neighbor is the next frame; the first frame of the lower row immediately follows the last frame of the upper row.
But as the figure shows, the car still passes right along the lane edge. The reason is that,
even though the RC car is far less complex than a real vehicle, the computing capacity of the
Raspberry Pi control center is weak: the computation time is the barrier that prevents the
shield from checking the control command instantly.
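To quantify this barrier on a given platform, one can simply time the shield's per-frame check. The sketch below is generic measurement code, not the thesis's implementation; check_fn and frames are placeholders for the shield's check function and a batch of recorded frames.

```python
import time

def measure_latency(check_fn, frames, repeats=10):
    """Measure the average wall-clock time the shield's check takes per
    frame, e.g. on the Raspberry Pi versus a desktop CPU."""
    start = time.perf_counter()
    for _ in range(repeats):
        for frame in frames:
            check_fn(frame)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(frames))
```

Comparing this number against the frame interval shows whether the shield can keep up with the camera, or whether a delay-aware design is needed.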
Figure 5.6: The self-driving car runs with the CNN and the general shield. The car moves from the left to the right in each frame. A frame's right neighbor is the next frame; the first frame of the lower row immediately follows the last frame of the upper row.
This is where the shield with the prediction function helps. As Figure 5.7 shows, the shields
without and with prediction take different actions on the same image: the shield with
prediction verifies that there is a hidden danger of violating the specification, so the
delay-aware shield makes a modification.
Figure 5.7: Given the same frame, the shield without prediction (left) takes no action as long as the current frame satisfies the specification. The shield with prediction (right) predicts ahead and issues a modification if a violation is possible.
Figure 5.8 shows how the delay-aware shield works. With the help of delay prediction, the car
safely passes the curve; the weak computing capacity is largely compensated by the
prediction, making the car both safer and more effective.
Figure 5.8: The self-driving car controlled by the CNN and the shield with delay prediction. The car moves from the bottom to the top in each frame. A frame's right neighbor is the next frame; the first frame of the lower row immediately follows the last frame of the upper row.
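The delay-aware behavior can be summarized by a simplified decision rule. The sketch below is our illustration, not the exact implementation: the bound names d_max and ang_max and the corrective action are assumptions modeled on the specification φ1.

```python
def delay_aware_shield(cmd_ang, d, v, delay, d_max=10.0, ang_max=1.0):
    """Predict the lane offset at the end of the computation delay and
    override the network's steering command if the specification
    |d| <= d_max could be violated before a new command takes effect."""
    d_pred = d + v * delay        # offset predicted over the delay (d' = v)
    if abs(d_pred) <= d_max and abs(cmd_ang) <= ang_max:
        return cmd_ang            # safe: pass the command through
    # otherwise steer back toward the lane center, within the allowed range
    return -ang_max if d_pred > 0 else ang_max
```

The general shield corresponds to the same rule with delay = 0: it reacts only once the current frame already violates the bound, which is exactly the too-late behavior seen on the Raspberry Pi.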
We also compare the performance of the shield on the two platforms, as shown in Figure 5.9.
The shield has a more obvious effect on the physical platform, because the simulator runs on
an Intel CPU with a much stronger computation capacity than the Raspberry Pi, so its delay is
shorter.
(a) Self-driving car on the physical platform with the shield.
(b) Self-driving car on the physical platform with the shield, delay considered.
(c) Self-driving car on the simulator with the shield.
(d) Self-driving car on the simulator with the shield, delay considered.
Figure 5.9: Comparison of the physical platform and the simulation platform.
5.3 Attack Experiments
Our shield can handle not only the accidental errors made by the neural network but also
continuously correct burst errors. There is a need for this capability: a self-driving car
can be attacked remotely so that the control commands are persistently wrong. As Figure 5.10
shows, we interrupt the neural network's control and send dangerous commands to the car. The
neural network's output is no longer executed; instead, we trick the car into recognizing us
as a user, and the car takes commands from the malicious user. Under the user's commands, the
steering angle is always set to the left, and as Figure 5.10a shows, the car rushes out of
the left edge. However, if we implement a shield on the self-driving system, both the general
shield and our shield constantly modify the error to protect the car (Figure 5.10b and
Figure 5.10c). The difference is that our shield reacts more instantaneously, as shown in the
previous experiments.
(a) The steering angle is always set to −0.3 (normalized) to viciously control the car. The car rushes out of the lane.
(b) The steering angle is always set to −0.3. The general shield can correct it, but the correction may not come in time due to the delay.
(c) The steering angle is always set to −0.3. Our shield instantly corrects it.
Figure 5.10: Attacks on the simulation platform.
We also test our shield's ability to handle attacks on the hardware platform by continuously
misleading the car toward the right edge. As Figure 5.11a shows, the car does get out of the
right edge when there is no shield to correct the dangerous violation. When we implement the
general shield or our shield on the self-driving car (Figure 5.11b and Figure 5.11c), the
shield modifies the attacked commands and the new control commands keep the car in the lane.
On the hardware platform, we can see even more clearly that our shield reacts faster than the
general shield.
(a) The steering angle is always set to 0.3 (normalized) to viciously control the car. The car rushes out of the right edge.
(b) The steering angle is always set to 0.3. The general shield can correct it, but the correction is not in time, so the car gets close to the right edge.
(c) The steering angle is always set to 0.3. Our shield instantly corrects it and the car safely drives through the curve.
Figure 5.11: Attacks on the hardware platform.
Chapter 6
Discussions and Conclusions
The experimental results show that our shield modifies dangerous decisions to protect the
self-driving car. Although the approach performs well in the experiments above, we still need
to discuss some assumptions, limitations, and possible improvements for our future work.
First, we have mentioned that the proper execution of the shield relies on correct lane
information. The safety of the self-driving car can be guaranteed only under the assumption
that the perception process is accurate. In our case, the only information source is a
camera: we apply a lane detection algorithm to the captured image to obtain the lane
information. For a small-scale car this method serves our purpose, but if the lane is
adjusted into an abnormally acute turn, the car may still fail to judge its position
correctly. If the accuracy of the lane information is not guaranteed, the experiment may not
work well. We also show an example of incorrect lane detection: we build an acute turn for
the car to drive through. Such a lane can easily make the car miss the lane edge
(Figure 6.1). Once the lane detection is wrong, the car is confused about its position, and
the goal of keeping the car safely in the lane cannot be achieved (Figure 6.2). There are
already many reliable perception methods for real vehicles; for example, a high-precision
map and Lidar can be equipped to offer sufficient and accurate information. The
high-precision map contains detailed environment information, Lidar detects the surrounding
environment, and by comparing the detected data with the high-precision map the car can be
located. For better perception, we also intend to add more sensors and functions to our
platform in the future.
Figure 6.1: The lane detection misses the right lane edge, so the car fails to know its position.
Figure 6.2: Due to the wrong lane detection, our shield fails to work and the car rushes out of the lane.
Second, each specification focuses on a specific type of safety requirement. While one safety
property is protected by a certain specification, the car's other safety properties may still
be violated. This dilemma inspires us to design more all-round specifications that cover more
safety properties of a self-driving car. To support the specification evaluation, we will
improve our experiment platforms to handle more complex situations.
Third, we use prediction to check whether the specifications are satisfied during the delay.
The choice of the prediction module has a large influence on the prediction result. In our
method, we use an ordinary differential equation (ODE) to model the system and describe its
behavior, and we check safety properties based on this ODE to achieve prediction. If the
system of interest is not suited to an ordinary differential equation, we may need another
way to predict, such as a difference equation for a discrete system; for example, a
self-driving car may need to predict the traffic flow to adjust its route, and traffic flow
is discrete. Even when the system can be described by an ODE, we still need to choose the
model carefully. The future is full of uncertainty and we cannot claim that any model is
entirely perfect; thus, our model must be accurate enough. In our case, we focus only on the
offset of the car, and there are mature models describing the relationship of the involved
variables, such as the kinematic bicycle model [59]. For other situations, we may need to
abstract a different model. For example, the Dubins path is a classical control method for
path planning [60]; in a Dubins path, the car may move along an arc. We can then treat the
lane as a plane and construct a two-dimensional model for the arc trace (Figure 6.3), such as
[x′ = v cos φ, y′ = v sin φ, φ′ = (v/L) tan δ & Q].
Figure 6.3: A car-like robot model.
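To illustrate how such an ODE model supports prediction, the following sketch numerically integrates the car-like robot dynamics with a simple Euler step. Here v is the speed, L the wheelbase, and δ (delta) the steering angle; the function name and step sizes are our assumptions, not the thesis's code.

```python
import math

def simulate_car(v, delta, L, x=0.0, y=0.0, phi=0.0, dt=0.01, steps=100):
    """Euler integration of the car-like robot ODE
    x' = v cos(phi), y' = v sin(phi), phi' = (v / L) tan(delta),
    predicting the pose over a horizon of steps * dt seconds."""
    for _ in range(steps):
        x += v * math.cos(phi) * dt
        y += v * math.sin(phi) * dt
        phi += (v / L) * math.tan(delta) * dt
    return x, y, phi
```

A shield could run such an integration over the delay window and check whether the predicted trace stays inside the safety envelope, in the same way the offset model is used in our approach.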
Fourth, the scalability of the synthesis algorithm can be improved. A fully self-driving car
has more safety properties than a lane keeping system, and for such a complicated system it
is not advisable to synthesize one shield S covering all the properties: the result would be
extremely monolithic and would suffer from state explosion. Instead, we can synthesize
several small-scale shields S1, S2, S3, . . . and combine them to achieve the function of the
comprehensive shield S.
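One hypothetical way to realize this composition, assuming each small shield is a function mapping a command and a state to a possibly modified command, and that the shields' corrections do not conflict:

```python
def compose_shields(shields):
    """Chain small shields S1, S2, ...: each may further modify the
    command output by the previous one, so the final command satisfies
    every individual specification (assuming corrections don't conflict)."""
    def composed(cmd, state):
        for shield in shields:
            cmd = shield(cmd, state)
        return cmd
    return composed

# Example: two toy shields that clamp the steering angle to [-1, 1].
clamp_low = lambda ang, state: max(ang, -1.0)
clamp_high = lambda ang, state: min(ang, 1.0)
shield = compose_shields([clamp_low, clamp_high])
```

Each small shield is synthesized from one specification, so the state space of each stays tractable even when the conjunction of all specifications would be too large to synthesize at once.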
Fifth, to synthesize the shield, our specification needs to be expressed in a formal
language. However, not all safety properties have such an expression. For example, suppose we
specify that the throttle varies inversely with the lane curvature, throttle = k/curvature,
where k is a positive coefficient. For every curvature value there is a corresponding
throttle specification, so writing down all the specifications in a formal language would
require an infinite alphabet; the shield would then contain infinitely many states and could
not be synthesized.
Sixth, our shield can protect the target system from attacks, as our experiments have shown.
However, the attacks must target the original reactive system D, e.g. the neural network; we
have not proposed a protection mechanism against attacks on the shield itself, so the shield
should run in a safe environment. One benefit of our shield is that it is much smaller than
the reactive system D, which makes it easier to build a safe running environment for the
shield S than for the system D.
In conclusion, a self-driving car is a safety-critical system; it will never be widely used
unless passenger safety can be guaranteed. We have designed specifications for self-driving
cars and synthesized a shield based on them. But even if a specification is proved safe, its
practical enforcement is imperfect because of the computation delay. Our approach uses a
logical verification method to achieve a delay-aware shield, which works on both our
self-driving car hardware platform and our simulation platform. Our shield can
instantaneously correct a violating output and modify it to satisfy the specification, even
when the violation is caused by continuous malicious attacks. The shield works without
knowing the details of the reactive system, so it can be synthesized separately from the
system. Focusing on the safety properties of self-driving cars, it is an encouraging result
that our delay-aware shield does make the car safer.
In the future, we intend to design more specifications for various self-driving functions,
such as keeping a distance from the vehicle ahead and stopping the car when the lane is
blocked. More sensors and functions can be added to our platform to obtain more accurate car
status, and more testing scenes can be built to evaluate the safety effect of the
specifications. Our aim is to make the car safer.
Appendices
Appendix A
Specification Proof
This is the detailed code used to prove the specifications' correctness.
ArchiveEntry "Specification"
Description "A specification for Autosteer".

Definitions
  Real aR;  /* Range of acceleration to the right hand direction */
  Real R;   /* Right steering coefficient */
  Real L;   /* Left steering coefficient */
End.

ProgramVariables
  Real d, v, a;  /* offset (distance from center), velocity, acceleration */
End.

Problem
  (30>=d & d>=-30 & (d=-30 -> a>=0) & (d=30 -> a<=0) & (d=-30 -> v>=0) & (d=30 -> v<=0)) &
  (-1<=a & a<=1 & R>=0 & L>=0 & 0<=aR & aR<=1)
  ->
  [
    {
      { {d'=v, v'=a & d>=-30 & d<=30} ++ {d'=v, v'=a & d<=-30} ++ {d'=v, v'=a & d>=30} }
      { ?(-30<=d & d<=-10 & v<=0); v:=-R*v; a:=aR;
        ++ ?(10<=d & d<=30 & v>=0); v:=-L*v; a:=-aR;
        ++ ?(-30>d | d>-10 & d<10 | d>30); }
    }* @invariant((-30<=d & d<=30) & (d=-30 -> a>=0) & (d=30 -> a<=0) & (d=-30 -> v>=0) & (d=30 -> v<=0))
  ] (-30<=d & d<=30)
End.

Tactic "08: Event-triggered Ping Pong Ball: Proof 1"
implyR(1); loop("(-30<=d&d<=30)&(d=-30->a>=0)&(d=30->a<=0)&(d=-30->v>=0)&(d=30->v<=0)", 1); <(
  QE,
  QE,
  composeb(1); choiceb(1.1); choiceb(1.1.1); composeb(1.1.0); composeb(1.1.1.0); choiceb(1); andR(1); <(
    composeb(1.1.0.1); assignb(1.1.0.1); assignb(1.1.0.1); composeb(1.1.1.0.1); assignb(1.1.1.0.1);
      assignb(1.1.1.0.1); testb(1.1.0); testb(1.1.1.0); testb(1.1.1.1); ODE(1),
    choiceb(1); andR(1); <(
      composeb(1.1.0.1); assignb(1.1.0.1); assignb(1.1.0.1); testb(1.1.0); composeb(1.1.1.0.1);
        assignb(1.1.1.0.1); assignb(1.1.1.0.1); testb(1.1.1.0); testb(1.1.1.1); ODE(1),
      testb(1.1.0); composeb(1.1.0.1); assignb(1.1.0.1); assignb(1.1.0.1); testb(1.1.1.0);
        composeb(1.1.1.0.1); assignb(1.1.1.0.1); assignb(1.1.1.0.1); testb(1.1.1.1); ODE(1)
    )
  )
)
End.
Bibliography
[1] Kröger F. (2016) Automated Driving in Its Social, Historical and Cultural Contexts.
In: Maurer M., Gerdes J., Lenz B., Winner H. (eds) Autonomous Driving. Springer,
Berlin, Heidelberg.
[2] Taeihagh, Araz; Lim, Hazel Si Min (2 January 2019). ”Governing autonomous vehi-
cles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks”.
Transport Reviews. 39 (1): 103–128. arXiv:1807.05720.
[3] Arne Bartels and Thomas Ruchatz. “Einführungsstrategie des Automatischen Fahrens”.
In: at - Automatisierungstechnik 63.3 (Jan. 2015), pp. 168–179.
[4] Federal Ministry of Transport and Digital Infrastructure. Ethics Commission: Auto-
mated and connected driving. June 2017
[5] Zushi, Keiichiro. “Driverless Vehicles: Opportunity for Further Greenhouse Gas Emission
Reductions.” Carbon & Climate Law Review, vol. 11, no. 2, 2017, pp. 136–149.
JSTOR, www.jstor.org/stable/26353861. Accessed 5 June 2020.
[6] R. Rajkumar, ”A Cyber–Physical Future,” in Proceedings of the IEEE, vol. 100, no. Spe-
cial Centennial Issue, pp. 1309-1312, 13 May 2012, doi: 10.1109/JPROC.2012.2189915.
[7] Marco Bozzano and Adolfo Villafiorita. Design and safety assessment of critical systems.
First. Auerbach Publications, 2010.
[8] D. A. Pomerleau, ”Alvinn: An autonomous land vehicle in a neural network.” Advances
in neural information processing systems. 1989.
[9] J. Jhung, I. Bae, J. Moon, T. Kim, J. Kim and S. Kim, ”End-to-End Steer-
ing Controller with CNN-based Closed-loop Feedback for Autonomous Vehicles,”
2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, 2018, pp. 617-622, doi:
10.1109/IVS.2018.8500440.
[10] Molnar, Christoph. ”Interpretable machine learning. A Guide for Making Black Box
Models Explainable”, 2019. https://christophm.github.io/interpretable-ml-book/.
[11] Rai, A. Explainable AI: from black box to glass box. J. of the Acad. Mark. Sci. 48,
137–141 (2020). https://doi.org/10.1007/s11747-019-00710-5
[12] M. T. Garip , M. E. Gursoy , P. Reiher , M. Gerla, Congestion Attacks to Autonomous
Cars Using Vehicular Botnets, SENT ’15, 8 February 2015, San Diego, CA, USA.
[13] Rasheed Hussain, Sherali Zeadally, ”Autonomous Cars: Research Results Issues and
Future Challenges”, Communications Surveys & Tutorials IEEE, vol. 21, no. 2, pp.
1275-1313, 2019.
[14] R. Lyons and W. Vanderkulk. 1962. The use of triple-modular redundancy to improve
computer reliability. IBM J. Research and Development 6, 2 (1962), 200–209
[15] Ahmad T. Sheikh, Aiman H. El-Maleh, Muhammad E. S. Elrabaa, Sadiq M. Sait, ”A
Fault Tolerance Technique for Combinational Circuits Based on Selective-Transistor
Redundancy”, Very Large Scale Integration (VLSI) Systems IEEE Transactions on,
vol. 25, no. 1, pp. 224-237, 2017.
[16] Glover, Neal; Dudley, Trent. Practical Error Correction Design For Engineers Revision
1.1, 2nd. CO, USA: Cirrus Logic. 1990. ISBN 0-927239-00-0. ISBN 978-0-927239-00-4
[17] N-Version Programming: A Fault-Tolerance Approach to Reliability of Software Oper-
ation, Liming Chen; Avizienis, A., Fault-Tolerant Computing, 1995, ’ Highlights from
Twenty-Five Years’., Twenty-Fifth International Symposium on, Vol., Iss., 27-30 Jun
1995.
[18] Algirdas A Avizienis. 1995. A Methodology of N-Version Programming. Software fault
tolerance 3 (1995), 23–46.
[19] PJ Roache, Verification and validation in computational science and engineering, 1998.
[20] E. Clarke, O. Grumberg, and D. Peled. 1999. Model checking. MIT press.
[21] Gerard Holzmann. 2003. The Spin Model Checker: Primer and Reference Manual (first
ed.). Addison-Wesley Professional.
[22] Nidhi Kalra, Susan M. Paddock, Driving to safety: How many miles of driving would it
take to demonstrate autonomous vehicle reliability? Transportation Research Part A:
Policy and Practice, Volume 94, 2016, Pages 182-193.
[23] Y. Chen and X. Sun, ”STAS: A Scalability Testing and Analysis System,” 2006
IEEE International Conference on Cluster Computing, Barcelona, 2006, pp. 1-10, doi:
10.1109/CLUSTR.2006.311882.
[24] Aaron Kane. 2015. Runtime Monitoring for Safety-Critical Embedded Systems. Ph.D.
Dissertation. Carnegie Mellon University.
[25] MoonZoo Kim, Mahesh Viswanathan, Sampath Kannan, Insup Lee & Oleg Sokolsky
(2004): Java-MaC: A Run-Time Assurance Approach for Java Programs. FMSD 24(2),
pp. 129–155, doi:10.1023/B:FORM.0000017719.43755.7c.
[26] Duncan Paul Attard, Ian Cassar, Adrian Francalanza, Luca Aceto & Anna Ingolfsdottir
(2017): A Runtime Monitoring Tool for Actor-Based Systems. In: Behavioural Types:
from Theory to Tools., chapter 3, River Publishers, Gistrup, Denmark, pp. 49–76,
doi:10.13052/rp-9788793519817.
[27] P. Daian, S. Shiraishi, A. Iwai, B. Manja, and G. Rosu. 2016. RV-ECU: Maximum
Assurance In-Vehicle Safety Monitoring. In SAE World Congress.
[28] John Rushby. 1989. Kernels for Safety? In Safe and Secure Computing Systems, T.
Anderson (Ed.). Blackwell Scientific Publications, Chapter 13, 210–220.
[29] Y. Falcone, J.-C. Fernandez, and L. Mounier. 2012. What can you verify and enforce at
runtime? STTT 14, 3 (2012), 349–382.
[30] N. Bomey, ”Uber self-driving car crash: Vehicle detected Arizona pedestrian 6 seconds
before accident”, May 2018.
[31] E. Ackerman, ”Fatal tesla self-driving car crash reminds us that robots aren’t perfect”,
IEEE Spectr., Jul. 2016.
[32] L Kolodny, NTSB calls out Tesla and Apple for neglecting driver safety, calls Tesla
Autosteer ‘completely inadequate’, FEB. 2020.
[33] K. Wan, K.L. Man, D. Hughes, Specification, Analyzing Challenges and Approaches for
Cyber-Physical Systems (CPS), Engineering Letters, 2010
[34] M. Wu, H. Zeng, and C. Wang, Synthesizing Runtime Enforcer of Safety Specification
under Burst Error, 2016, 10.1007/978-3-319-40648-0.
[35] Bettina Könighofer, Mohammed Alshiekh, Roderick Bloem, Laura R. Humphrey,
Robert Könighofer, Ufuk Topcu, and Chao Wang. Shield synthesis. Formal Methods
in System Design, 51(2):332–361, 2017.
[36] S. Zanero, ”Cyber-Physical Systems,” in Computer, vol. 50, no. 4, pp. 14-16, April 2017.
[37] Ivancic, Karolina & Hesketh, Beryl. (2001). Learning from errors in a driving
simulation: Effects on driving skill and self-confidence. Ergonomics. 43. 1966-84.
10.1080/00140130050201427.
[38] A. Furda and L. Vlacic, ”Enabling Safe Autonomous Driving in Real-World City Traffic
Using Multiple Criteria Decision Making,” in IEEE Intelligent Transportation Systems
Magazine, vol. 3, no. 1, pp. 4-17, Spring 2011.
[39] Bloem R., Könighofer B., Könighofer R., Wang C. (2015) Shield Synthesis: Runtime
Enforcement for Reactive Systems. In: Baier C., Tinelli C. (eds) Tools and Algorithms
for the Construction and Analysis of Systems. TACAS 2015. Lecture Notes in Computer
Science, vol 9035. Springer, Berlin, Heidelberg.
[40] Pnueli, Amir. “The temporal logic of programs.” 18th Annual Symposium on Founda-
tions of Computer Science (sfcs 1977) (1977): 46-57.
[41] Camacho E.F., Bordons C. (2007) Introduction to Model Predictive Control. In: Model
Predictive control. Advanced Textbooks in Control and Signal Processing. Springer,
London
[42] N.A. Thacker, A. Lacey, ”Tutorial: The Kalman Filter”, 1998.
[43] Fernandez JC., Mounier L., Pachon C. (2005) A Model-Based Approach for Robustness
Testing. In: Khendek F., Dssouli R. (eds) Testing of Communicating Systems. TestCom
2005. Lecture Notes in Computer Science, vol 3502. Springer, Berlin, Heidelberg.
[44] André Platzer. Logical Foundations of Cyber-Physical Systems. Springer, Cham, 2018.
659 pages. ISBN 978-3-319-63587-3.
[45] Hagan, Martin T., Howard B. Demuth and Mark Beale. “Neural Network Design.” 1995.
[46] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, ”Gradient-based learning applied to
document recognition,” in Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov.
1998.
[47] Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2017-05-24). ”ImageNet classifi-
cation with deep convolutional neural networks”. Communications of the ACM. 60 (6):
84–90.
[48] R Girshick, J Donahue, T Darrell, J Malik, Rich feature hierarchies for accurate object
detection and semantic segmentation, The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2014, pp. 580-587.
[49] Bojarski, M. et al, End to End Learning for Self-Driving Cars, 2016.
[50] R. Kulic and Z. Vukic, ”Autonomous Vehicle Obstacle Avoiding and Goal Position
Reaching by Behavioral Cloning,” IECON 2006 - 32nd Annual Conference on IEEE
Industrial Electronics, Paris, 2006, pp. 3939-3944.
[51] DonkeyCar, https://www.donkeycar.com.
[52] Neven, Davy & Brabandere, Bert & Georgoulis, Stamatios & Proesmans, Marc & Van
Gool, Luc. (2018). Towards End-to-End Lane Detection: an Instance Segmentation
Approach. 286-291. 10.1109/IVS.2018.8500547.
[53] Canny, J., A Computational Approach To Edge Detection, IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, 8(6):679–698, 1986.
[54] Duda, R. O. and P. E. Hart, ”Use of the Hough Transformation to Detect Lines and
Curves in Pictures,” Comm. ACM, Vol. 15, pp. 11–15 (January, 1972).
[55] National Highway Traffic Safety Administration(NHTSA), Federal Automated Vehicles
Policy, 2016.
[56] M. Wu, J. Wang, J. Deshmukh and C. Wang, ”Shield Synthesis for Real: Enforcing
Safety in Cyber-Physical Systems,” 2019 Formal Methods in Computer Aided Design
(FMCAD), San Jose, CA, USA, 2019, pp. 129-137
[57] Nathan Fulton, Stefan Mitsch, Jan-David Quesel, Marcus Völp, and André Platzer.
KeYmaera X: An axiomatic tactical theorem prover for hybrid systems. In Amy Felty
and Aart Middeldorp, editors, CADE, volume 9195 of LNCS, pages 527–538, Berlin,
2015.
[58] Microsoft Research, https://github.com/Z3Prover/z3.
[59] P. Polack, F. Altché, B. d’Andréa-Novel and A. de La Fortelle, ”The kinematic bicycle
model: A consistent model for planning feasible trajectories for autonomous vehicles?,”
2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, 2017, pp. 812-818,
doi: 10.1109/IVS.2017.7995816.
[60] Anisi, David (July 2003). ”Optimal Motion Control of a Ground Vehicle”. Swedish
Research Defence Agency. I650-1942.