
UNIVERSITY OF CALGARY

The Threat of

Biologically-Inspired Self-Stopping Worms

by

Ryan Andrew Vogt

A THESIS

SUBMITTED TO THE FACULTY OF GRADUATE STUDIES

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE

DEGREE OF MASTER OF SCIENCE

DEPARTMENT OF COMPUTER SCIENCE

CALGARY, ALBERTA

JUNE, 2008

© Ryan Andrew Vogt 2008


UNIVERSITY OF CALGARY

FACULTY OF GRADUATE STUDIES

The undersigned certify that they have read, and recommend to the Faculty of Graduate

Studies for acceptance, a thesis entitled “The Threat of Biologically-Inspired Self-Stopping

Worms” submitted by Ryan Andrew Vogt in partial fulfillment of the requirements for the

degree of Master of Science.

Dr. John Aycock, Supervisor, Department of Computer Science

Dr. Michael J. Jacobson, Jr., Co-Supervisor, Department of Computer Science

Dr. Rei Safavi-Naini, Internal Examiner, Department of Computer Science

Dr. Peter D. Vize, External Examiner, Department of Biological Sciences

Date


Abstract

Worms are malicious software that spread over a network, such as the Internet, often

without any user involvement. When a computer is infected, it begins scanning for ad-

ditional vulnerable hosts to infect. One scanning strategy popular with worm authors is

random scanning: infected computers continually choose random targets in the network to

attempt to infect.

Despite the simplicity of random scanning, it has a weakness from the perspective of a

worm author. Unceasing scanning activity quickly identifies infected hosts to defenders. A

worm author has clear motivation for this scanning activity to stop once most or all of the

vulnerable hosts have been infected.

In this thesis, I look at potential techniques adversaries could use to construct a self-

stopping worm, including one which could even be used to construct more resilient botnets

(collections of remotely-controlled compromised hosts). I also suggest proactive defenses

that should be employed against this threat.


Acknowledgments

First, I would like to thank my supervisors, John Aycock and Michael J. Jacobson, Jr.,

for their guidance and support over the course of my degree. Their feedback contributed a

great deal to the success of this project.

I would also like to thank Justin Ma for his time and help on the Sum-Count-X algorithm.

Also, Jonathan Hammell provided feedback and suggestions with many of the cryptographic

topics in this thesis, and Rennie deGraaf saved me a great deal of trouble with his assistance

in typesetting this document.

Finally, I would like to acknowledge the generous funding provided by the Natural Sci-

ences and Engineering Research Council of Canada, the Informatics Circle of Research

Excellence, and the Department of Computer Science at the University of Calgary, all of

which made this research possible.


Dedication

To my amazing wife, Stefanie. Your love and support, not to mention the many engaging

discussions we shared on my research, are the reasons for my success.


Table of Contents

Approval Page ii

Abstract iii

Acknowledgments iv

Dedication v

Table of Contents vi

List of Tables viii

List of Figures x

1 Introduction  1
1.1 Contributions of this Thesis  5
1.2 Previous Appearances of this Work  6
1.3 Outline of this Thesis  7

2 Background  8
2.1 Introduction to Networking  8
2.2 Introduction to Worms  10
2.3 Introduction to Defensive Technology  12
2.4 Context and History of the Self-Stopping Worm Problem  14
2.5 Statistical Methods  17

3 The Perfect Self-Stopping Worm  19
3.1 Description  19
3.2 Mathematical Model  20
3.3 Summary  27

4 Previous Self-Stopping Worms  29
4.1 Description of Sum-Count-X  30
4.2 Limitations of Sum-Count-X  32
4.3 Sampling Error in Model-Based Techniques  36
4.4 Random Approaches to Self-Stopping  40
4.5 Summary  46

5 The Quorum Sensing Worm  47
5.1 Background  47
5.2 Quorum Sensing Worm Design  49


5.3 Results  50
5.4 Worm Seeding  56
5.5 Reducing Extraneous Traffic  59
5.6 Social Cheating  71
5.7 Defenses  75
5.8 Summary  77

6 The Population Inference Worm  79
6.1 Statistical Background  80
6.2 Population Inference Worm Design  82
6.3 Results  83
6.4 Confidence Level  87
6.5 Data Sharing and Memory  89
6.6 Summary  94

7 The Tree-Based Super-Botnet Worm  95
7.1 Vulnerabilities of Traditional Botnets  96
7.2 Super-Botnet Feasibility  99
7.3 Inter-Botnet Communication  108
7.4 Time Bombs  119
7.5 Defense Against Super-Botnets  123
7.6 Summary  126

8 The Inference-Based Super-Botnet Worm  128
8.1 Description  129
8.2 Inter-Botnet Communication  136
8.3 Performance  138
8.4 Comparison to the Perfect Self-Stopping Worm  143
8.5 Super-Botnet Capabilities  147
8.6 Defense Against Self-Stopping Super-Botnets  152
8.7 Summary  156

9 Conclusions and Future Work 157

Bibliography 160

A Expected Infections by Random-Scanning Worms 170

B Secret Splitting in Super-Botnets  176
B.1 Capturing Each Piece of the Secret  177
B.2 Destroying One Piece of the Secret  180


List of Tables

4.1 Sum-Count-X worm infection percentages for different values of V and p0  35
4.2 Perfect-knowledge worm infection percentages for different values of V and p0  38
4.3 Blind strategy worm infection percentages for different values of V and the random halting constant, phalt  44
4.4 Redundant hit strategy worm infection percentages for different values of V and the random halting constant, phalt  45

5.1 Quorum sensing worm configurations  54
5.2 Quorum sensing worm infection percentages for different values of V under different configurations  55
5.3 Comparison of the quorum sensing worm under configuration 7 and the perfect self-stopping random-scanning worm, when V = 131072  56
5.4 The effect of multiple seed infections on the final infection percentage of a configuration 3 quorum sensing worm  58
5.5 The effect of multiple seed infections on the final infection percentage of a configuration 6 quorum sensing worm  59
5.6 The effect of differing memory sizes on the performance of a configuration 7 quorum sensing worm, relative to the perfect self-stopping worm  61
5.7 The effect of differing memory sizes on the final infection percentage of a configuration 3 quorum sensing worm  63
5.8 The effect of differing feedback limit values on the performance of a configuration 7 quorum sensing worm, relative to the perfect self-stopping worm, when M = 0 and F = 5  67
5.9 The effect of differing feedback multiplier values on the final infection percentage of a configuration 7 quorum sensing worm with M = 0 utilizing quadratic feedback  71

6.1 The effect of differing infection targets on the performance of a population inference worm using α = 0.05, relative to the perfect self-stopping worm, when V = 131072  86
6.2 The effect of differing infection targets on the performance of a population inference worm using α = 0.95, relative to the perfect self-stopping worm, when V = 131072  89
6.3 Infection percentages for a population inference worm using data sharing and memory for different values of V and p0  93

8.1 Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.05  139
8.2 Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.25  142


8.3 Comparison of the self-stopping super-botnet worm with p0 = 0.900 and the perfect self-stopping random-scanning worm, both with 100 seed infections, when V = 131072  144
8.4 Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.25 and inter-botnet sample sharing is disabled  155


List of Figures

2.1 The “S” curve, illustrating the number of hosts infected over time during a worm epidemic  12

3.1 Pseudocode to model the perfect self-stopping mechanism  22
3.2 The effect of the target infection percentage (p0) on the number of scans generated by the perfect self-stopping worm when V = 131072, I = 1, and γ = 1  24
3.3 The effect of the target infection percentage (p0) on the epidemic running time of the perfect self-stopping worm when V = 131072, I = 1, and γ = 1  24
3.4 The effect of the target infection percentage (p0) on the epidemic running time of the perfect self-stopping worm when V = 131072 and γ = 1 for various values of I  25
3.5 A comparison of the number of hosts infected over time in the discrete perfect worm model and the epidemiological model when V = 131072, I = 1, and γ = 1  26

4.1 An example of three hosts running a Sum-Count-X worm over time  31
4.2 Pseudocode for Ma et al.’s Sum-Count-X algorithm (part 1 of 2)  33
4.3 Pseudocode for Ma et al.’s Sum-Count-X algorithm (part 2 of 2)  34
4.4 Replacement pseudocode for a perfect-knowledge worm  37
4.5 A comparison of the number of hosts infected over time in the discrete worm model and the epidemiological model when V = 1500000, I = 1, and γ = 4000  40
4.6 Pseudocode for Ma et al.’s blind algorithm  41
4.7 Pseudocode for Ma et al.’s redundant hit algorithm  42

5.1 Pseudocode for the quorum sensing worm (part 1 of 2)  51
5.2 Pseudocode for the quorum sensing worm (part 2 of 2)  52
5.3 An example of six hosts running a quorum sensing worm over time  53
5.4 The number of infected and halted hosts over time for a configuration 7 quorum sensing worm, when V = 131072  57
5.5 Pseudocode for the simple feedback mechanism  65
5.6 The relation between initial autoinducer level and autoinducer level near the end of a quorum sensing worm epidemic for a configuration 7 worm with M = 0  69
5.7 A quadratic function approximating the relation between initial autoinducer level and autoinducer level near the end of a quorum sensing worm epidemic for a configuration 7 worm with M = 0  69
5.8 Pseudocode for the quadratic feedback mechanism  70
5.9 The effect of lazy and severe cheaters on the final infection percentage of a configuration 3 quorum sensing worm  74


5.10 The effect of lazy and severe cheaters on the final infection percentage of a configuration 6 quorum sensing worm  74

6.1 Pseudocode for the population inference worm (part 1 of 2)  84
6.2 Pseudocode for the population inference worm (part 2 of 2)  85
6.3 Propagation pseudocode for a population inference worm with data sharing and memory  92

7.1 The close proximity of C&C machines when a super-botnet is generated through a two-phase infection process  101
7.2 Pseudocode for a tree-based super-botnet worm  103
7.3 An example of a growing super-botnet over time using the constant values B = 4, HPB = 3, and S = 2 (part 1 of 2)  104
7.4 An example of a growing super-botnet over time using the constant values B = 4, HPB = 3, and S = 2 (part 2 of 2)  105
7.5 The total number of infected hosts and the number of C&C machines started over time as a perfect-luck super-botnet worm using S = 2 infects V = 1500000 hosts  107
7.6 The total number of infected hosts and the number of C&C machines started over time as a perfect-luck super-botnet worm using S = 25 infects V = 1500000 hosts  107
7.7 The effect of increasing S on the epidemic running time of a perfect-luck super-botnet worm when V = 1500000  108
7.8 Pseudocode for routing information exchange in super-botnets  112
7.9 The average number of botnets that receive a command sent to a single seed as random attacks bring down individual botnets, for various values of H  115
7.10 The average number of botnets that receive a command sent to a single seed as oracular attacks bring down individual botnets, for various values of H  115
7.11 The average number of botnets that receive a command sent to all seeds as random attacks bring down individual botnets, for various values of H  117
7.12 The average number of botnets that receive a command sent to all seeds as oracular attacks bring down individual botnets, for various values of H  118
7.13 Tracking infections backwards: the two-hop weakness  124

8.1 Pseudocode for a self-stopping super-botnet worm (part 1 of 2)  134
8.2 Pseudocode for a self-stopping super-botnet worm (part 2 of 2)  135
8.3 Pseudocode for inter-botnet communication during the spread of a self-stopping super-botnet worm (part 1 of 2)  137
8.4 Pseudocode for inter-botnet communication during the spread of a self-stopping super-botnet worm (part 2 of 2)  138
8.5 The effect of different α values on the absolute error in the final infection percentage for a self-stopping super-botnet worm with different target infection percentages (p0)  140


8.6 The effect of different p0 values on the computation of the z statistic when p is 0.02 smaller than p0, using the minimum acceptable sample size for p0 and α, with α = 0.95  141
8.7 The effect of different p0 values on the mean and standard deviation of a self-stopping super-botnet worm’s epidemic duration  145
8.8 The average number of botnets that receive a command sent to a single seed as random attacks bring down individual botnets created by the self-stopping super-botnet worm, for various values of H  150
8.9 The average number of botnets that receive a command sent to a single seed as oracular attacks bring down individual botnets created by the self-stopping super-botnet worm, for various values of H  151
8.10 The effect of honeypots serving false samples on the final infection percentage of the self-stopping super-botnet worm for different values of V  154

A.1 The effect of increasing the number of scans performed by a random-scanning worm on the expected number of hosts infected when V = 131072, I = 1, and A = 2^32  174

B.1 Marble placement spots for the missing marbles of color C − 1  179
B.2 An algorithm for computing E(T)  180
B.3 Marble placement spots for the missing marbles of color 2  181
B.4 An algorithm for computing E(S)  183


Chapter 1

Introduction

In his groundbreaking work The Selfish Gene, Richard Dawkins proposed the idea that

genes are the fundamental unit of evolution. Evolutionary forces — that is, natural selec-

tion — function on genes, rather than on individuals or species. Organisms, from bacteria

to humans, are nothing more than survival vehicles built by genes to allow them to replicate

and survive.

As a companion theory, Dawkins introduced memetics, the study of memes. Memes,

like genes, are replicators; but, far from being replicating sequences of nucleotides, memes

are ideas that replicate. Some memes are cultural: academic theories, artistic advances, re-

ligious beliefs, successful viral marketing campaigns [Ray96] — in short, units of imitation

that replicate among people [Daw89, p. 192].

But memes are not limited to the realm of culture. One realization of memes noted

by Dawkins exists in the computer world: viruses [Daw89, p. 329]. Computer viruses are

self-replicating, often malicious pieces of code — code that can be viewed as a concrete

implementation of an idea or set of ideas.

Indeed, the name “computer virus” is surprisingly apt, as computer viruses that infect

executable programs are a clear memetic analogue to viruses that infect biological targets.

Viruses that infect living hosts do so by co-opting the genetic replication system of the

host, in essence using the survival vehicle created by different genes to replicate their own

genes. Computer viruses perform the memetic equivalent, co-opting code located in another

survival machine (i.e., a computer) to replicate themselves.


It is amazing to see just how far this analogy can be taken. Different types of biological

viruses could even be compared to different types of computer viruses. Companion viruses,

a type of virus targeting executable files that “sits alongside” its target without actually

modifying the target’s code [Ayc06, pp. 30–32], are similar in a way to lytic viruses. Lytic

viruses do not integrate into the host chromosome; they simply make copies of themselves

in a host cell. A lysogenic virus following a provirus life cycle, on the other hand, integrates

directly into the host chromosome, inserting new information where there was none (details

are available in most introductory sources on microbiology, e.g., [MMP03, p. 247]). Such

a biological virus can be seen as a direct parallel to file infectors — viruses that actually

modify the code of a target executable by inserting their viral code [Ayc06, pp. 32–33].

While this analogy may not be perfect (companion viruses can, in some respect, be regarded

as stand-alone programs, whereas lytic viruses need to infect a host cell), it does show how

well computer viruses are a memetic parallel to biological viruses. It is even open to debate

whether computer viruses are a form of artificial life [Spa94], just as it is unclear whether

biological viruses are alive.

There is another meme in the computer world: worms. A computer worm is a complete,

typically-malicious program that moves from computer to computer. It may be tempting

to view computer worms in the same light as computer viruses, but I feel that a distinction

needs to be made. Unlike viruses, worms do not co-opt other programs. Rather, a worm

is a survival vehicle for its memes, which are the various fundamental units of code or

algorithms from which the worm is built. This vehicle infects the survival vehicle of other

memes — the computer.

Here, I resist defining exactly how to divide the worm code into component memes, as

it adds nothing to the topic at hand. The purpose of defining a worm in terms of memet-

ics is to draw a comparison to the living world. Bacteria, themselves vehicles for genetic


material, can infect host organisms, which are nothing more than vehicles for other genes.

The bacterial genes survive by using vehicles that replicate and spread between hosts in a

population.

What is the point to exploring this parallel between computer worms and memetics? The

purpose is to open a discussion about successful memes that reside in computer worms, and

the phenotype exhibited by their vehicle (the worm).

Just as the success of genes can be measured by their continued presence in the gene

pool, the success of memes can also be measured. Dawkins suggests three predictive mea-

sures for the long-term success of a gene: longevity, i.e., how long one copy of a gene sur-

vives (the vehicle organism’s lifetime); fecundity, i.e., the speed at which the gene replicates;

and, copying fidelity, i.e., how few errors are made when the gene replicates [Daw89, p. 17].

These same predictive measures can be applied to memes. And, as with genes, Dawkins sug-

gests that longevity is probably relatively unimportant compared to fecundity in measuring

the success of a meme [Daw89, p. 194].

Yet there is a key difference between most memes and those that constitute a computer

worm. Worms can be damaging to the host vehicles they infect, so such an infection can

provoke an “immune response.” This response could range from an irate user or system ad-

ministrator unplugging or formatting an infected computer, to anti-virus products detecting

and cleaning the infection.

And so it is with bacteria: infecting a host organism will often invoke an immune re-

sponse. One interesting result that follows is that, in the case of bacterial genes, the longevity

of individual copies of the genes is important for the survival of a large number of gene

copies. To clarify this point, let us consider the example of a Pseudomonas aeruginosa

infection in the lungs of a cystic fibrosis (CF) patient. P. aeruginosa conceals its presence

in the lung by eliminating its antigens, thereby making detection by the host immune sys-


tem difficult [GP07]. If many bacteria fail to conceal themselves properly and are detected

by the immune system, however, the host immune system will launch a “two-pronged” at-

tack against the bacteria (again see, e.g., [MMP03, p. 756]). One part of the attack will be

antigen-specific — that is, an attack that will target the detected P. aeruginosa. The other

part of the attack will be a general immune response, which will attempt to destroy anything

foreign in the lung. So, while the antigen-bearing copies of P. aeruginosa will likely be the

first to go, their detection puts all of the bacteria in the lungs at risk — and all copies of the

bacterial genes contained therein.

The same observation holds true with computer worms. Any copies of the worm that are

detected will likely be the first to perish (e.g., by the hand of the irate system administrator

alluded to above). However, there is unsurprisingly an analogue to the general immune re-

sponse: anti-virus software updates. One captured copy of a worm could result in patches

being shipping to anti-virus products that will allow them to locate and eradicate many

copies of the worm, and all of the associated worm memes. Interestingly, methods for au-

tomatically generating anti-virus updates based on the biological general immune response

have been developed [Kep94, KSSW97, WSP+99].

How might computer worms increase their longevity? It is reasonable to predict that

“stealth” memes, and especially combinations of stealth memes, will continue to evolve

within computer worms. These memes will change the “phenotype” of their survival vehi-

cle, making it more difficult to detect. One example could be a meme that hides a worm’s

activity on an infected computer (call this a rootkit meme). Memes which help worms avoid

a general (anti-virus) immune response are also predictable (a polymorphism meme, for

example).

Fecundity of computer worms is already very high [MPS+03]. Therefore, I feel that it

is unlikely for evolution along this line to be highly advantageous to adversaries. Rather, I


predict that longevity of the survival vehicle will be the most important evolutionary change

in future worm design, as alluded to above. As such, this thesis will focus on one such

longevity meme in worms: propagation code with self-stopping capabilities.

1.1 Contributions of this Thesis

A host infected by a typical random-scanning worm, such as one of the Code Red vari-

ants [MSc02] or Nimda [DDHR01], will continuously choose a random IP address, probe

the host at that address for a vulnerability, and infect that host if possible. An adversary

would choose to use this scanning method because it is easy to program; and, more impor-

tantly, it is easy to program correctly. Worm epidemics in the past have been less effective

than they could have been because of bugs in the target host-selection code [MSc02]. Be-

cause of the ease for an adversary in implementing a random-scanning worm, this thesis

will focus strictly on random-scanning worms.

Since modern worms can propagate quickly, response teams often must detect and clean

infected computers after most vulnerable hosts have already been infected [Sto03]. For

example, the Slammer worm infected 90% of the vulnerable population in only 10 min-

utes [MPS+03], and it has been theorized that a worm would infect most vulnerable hosts

in under one second [SMPW04]. Luckily for defenders, worms which utilize a random-

scanning strategy for propagation are often easy to detect — the unceasing scanning activity

exhibited by infected machines quickly raises red flags.

Imagine, though, a worm that is able to spread until the percentage of the vulnerable

population that is infected, p, reaches some predetermined constant, p0 (e.g., p0 = 0.95).

Once this limit is reached, the worm ceases all activity. Computers infected by this worm

would be far more difficult to identify, as they would not exhibit the continuous scanning

activity.


In exploring this worm behavior, there will be two key questions addressed by this thesis.

Specifically:

1. How could such a self-stopping random-scanning worm be constructed, especially if

the adversary does not know the size of the vulnerable population in advance; and,

2. What can we do to proactively defend against this potential future threat?

While this sort of “future threat research” can sometimes be controversial, I consider proac-

tively addressing potential threats and discussing defenses in advance to be not only ethical

but necessary. That adversaries are incapable of devising new attacks on their own is a

dangerous assumption to make, and it can necessitate the rapid development of reactive de-

fenses. A detailed look at the legal and ethical issues surrounding this form of research was

presented by Aycock and Maurushat [AM07].

1.2 Previous Appearances of this Work

Some of the work in this thesis has been presented previously. Significant portions

of Chapters 3, 4, and 5 were published at the 5th ACM Workshop on Recurring Mal-

code (WORM 2007) [VAJ07b]. John Aycock and Michael J. Jacobson, Jr. were co-authors

on this work, and the conference had an acceptance rate of 30%.

Chapter 7 also has considerable academic lineage. The work on this chapter began as

a technical report co-authored with John Aycock [VA06]. A significant portion of the final

version of this chapter, along with Appendix B, was later published at the 2007 Network

and Distributed System Security Symposium (NDSS 2007) [VAJ07a]. John Aycock and

Michael J. Jacobson, Jr. were once again co-authors, and NDSS 2007 had a 15% accep-

tance rate. This paper has been cited multiple times in third-party work since its publica-

tion [DGLL07, GSN+07, SAM07, WSZ07].


As an acknowledgment of the contributions of my co-authors, as well as to avoid general

grammatical silliness, I will write the remainder of this thesis in the first person plural. This

delineation also serves to separate my philosophical soliloquy in the first chapter from the

scientific endeavor presented henceforth.

1.3 Outline of this Thesis

We will begin with a discussion of the background material necessary to fully understand

this work in Chapter 2. Chapter 3 will then present a theoretical, perfect solution to the

self-stopping problem, which will serve as a baseline against which to compare all other

work. Chapter 4 will explore some previous work done on the self-stopping problem, before

Chapter 5 explores one new type of self-stopping worm design and associated defenses:

the quorum sensing worm. Chapter 6 will explore the potential of population inference

being used as a tool for constructing a self-stopping worm. We will then change direction

and explore the feasibility of a worm constructing a large, sub-divided set of compromised

hosts called a super-botnet, along with defenses against super-botnets, in Chapter 7. The

purpose for that change in direction will become evident in Chapter 8, where we will bring

together the ideas presented in Chapters 6 and 7 to investigate the possibility of a self-

stopping population-inference worm that builds a super-botnet. Finally, we will bring this

thesis to a close in Chapter 9 by summarizing its contributions and discussing future work.

Two appendices are also provided as a supplement to the reader. Appendix A gives

a rigorous mathematical treatment to the problem of how many hosts a random-scanning

worm can expect to infect as it performs more scans. Appendix B provides a full discussion

of a statistical problem first presented in Chapter 7: secret splitting in super-botnets.


Chapter 2

Background

This chapter presents a general overview of some topics in computer networks and malicious

software. It is intentionally brief, as most of the background specific to our work is

provided on an as-needed basis throughout the thesis; the goal here is to establish the

general context that the later chapters assume.

We begin in Section 2.1 with a brief overview of relevant networking concepts. We then

provide some high-level details about computer worms in Section 2.2 and associated defen-

sive techniques in Section 2.3. Section 2.4 will provide a framework for the self-stopping

problem by discussing different worm scanning techniques and a historical account of pre-

vious work on this problem. We will then conclude in Section 2.5 with a brief discussion of

the statistical methods used in this thesis.

2.1 Introduction to Networking

The architecture of the Internet is based on a four-layer stack [Bra89] that is related to

the OSI stack model [Zim80]. The four layers in the Internet stack are:

1. The link layer. This layer lies at the bottom of the stack. Protocols at this level, such

as Ethernet, are responsible for allowing directly-connected hosts (i.e., computers or

other connected devices) to communicate.

2. The Internet layer. Above the link layer is the Internet layer, which provides datagram

services between hosts on the Internet using global addressing. The Internet layer

does not provide any notion of a connection, nor is there any end-to-end delivery


guarantee of data packets. The most common Internet layer protocol in use currently

is the Internet Protocol version 4 (IPv4).

3. The transport layer. This layer is responsible for providing end-to-end services for

applications. Two common transport layer protocols are the transmission control pro-

tocol (TCP) and the user datagram protocol (UDP). TCP is a reliable service, guar-

anteeing delivery and correct ordering of packets. UDP has less overhead, but it is

connectionless and does not provide the same guarantees as TCP.

4. The application layer. The top layer in the stack is responsible for transmitting and

receiving data in a format known to a specific application. Examples of application

layer protocols include the file transfer protocol (FTP) and the simple mail transfer

protocol (SMTP).

The role of each layer in the stack is to provide services to the layer above it and pass

received information back up. For example, an application that wishes to make an HTTP

connection to another host would first encode the relevant data (e.g., the request for HTTP

data) according to the specifications of HTTP. Those data would be passed down the stack

to the TCP protocol, which would encode the data appropriately for TCP transmission. This

process would continue all the way to the link layer, which would be responsible for encod-

ing and transmitting the information onto, e.g., an Ethernet cable, sending the information

to another directly-connected host in the network. As data is received at a host, it is decoded

by the relevant layers and passed up the stack.
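
To make this layering concrete, the short Python sketch below hands application-layer data (an HTTP request) to the transport layer through the standard socket interface; the operating system then performs the TCP, Internet-layer, and link-layer encapsulation described above. This is only an illustration, with a placeholder host name and no error handling.

    # Illustrative sketch: send application-layer (HTTP) data over a
    # transport-layer (TCP) connection. The operating system performs the
    # Internet-layer (IP) and link-layer encapsulation transparently.
    import socket

    host = "www.example.com"  # placeholder target host

    # Open a TCP connection to port 80, the port typically used by HTTP servers.
    with socket.create_connection((host, 80), timeout=10) as sock:
        # Application-layer data, encoded according to the HTTP specification.
        request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))  # handed down to the TCP layer

        # Replies climb back up the stack and arrive as a byte stream.
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # e.g., "HTTP/1.1 200 OK"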

One reason for discussing the four-layer stack, aside from discussing the different trans-

port layer protocols (which, as will become apparent, can be relevant to worm propagation),

was to introduce the notion of global addressing at the Internet layer. With IPv4, each host

has a 32-bit address, typically written in the form of four octets (e.g., 136.159.5.39) [Pos81].

That is, there are 2^32 possible addresses that a host can have. While IPv4 is slowly being


superseded by the Internet Protocol version 6 (IPv6) [DH98], this thesis will focus strictly

on an IPv4 environment.

It is not necessarily the case that every host has a unique address. Certain blocks of

the IPv4 address space are reserved as private (e.g., 10.0.0.0–10.255.255.255, among oth-

ers) [RMK+96]. Packets destined for these addresses will not be routed across the Internet.

This allows many different hosts, e.g., in different organizations, to have the IP address

10.0.0.1; however, none of these hosts can be contacted at that address over the Inter-

net. These hosts would connect to the Internet via a network address translator (NAT),

which serves as an interface between machines with private addresses and the public In-

ternet [SH99]. Other portions of the IPv4 address space are reserved for other purposes,

such as the 127.0.0.0–127.255.255.255 block, which is used to loop connections back to the

sender [RP94]. Some hosts even have multiple addresses. A NAT will have at least two

addresses — one address in the internal network’s address space and another routable from

the outside. However, for the purposes of this thesis, we will abstract away all of the details

of private and reserved addresses, and we will assume that every host on the Internet has a

unique, publicly-accessible address, and that there are 2^32 potential addresses in total.
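
These address-space details are easy to explore with Python's standard ipaddress module; the sketch below (using example addresses only) classifies the addresses mentioned above and confirms the size of the IPv4 address space.

    # Illustrative sketch: classifying the example IPv4 addresses above.
    import ipaddress

    for text in ["136.159.5.39", "10.0.0.1", "127.0.0.1"]:
        addr = ipaddress.ip_address(text)
        kind = "private/reserved" if addr.is_private else "publicly routable"
        print(text, kind, "loopback" if addr.is_loopback else "", int(addr))

    # The IPv4 address space contains 2^32 = 4294967296 possible addresses.
    print(ipaddress.ip_network("0.0.0.0/0").num_addresses)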

2.2 Introduction to Worms

As mentioned in Chapter 1, computer worms are pieces of malicious software (malware)

that propagate between hosts. Specifically, the malicious software running on an infected

host will target another vulnerable host and attempt to spread onto it. The worm spreads

by attacking vulnerabilities such as buffer overflow errors [One96] in servers running on

the target machine. Note that this definition of a worm is more specific than some other

definitions, which may include malicious software that spreads through user interaction.

One example of this form of malware is an e-mail worm, which infects hosts when a user


opens an infected e-mail in his or her mail client [ZTG04]. This thesis, however, focuses

strictly on the type of worm that automatically targets vulnerable servers without user action.

Worms could be implemented to spread over TCP or UDP. The technical differences

between these transport layer protocols are unimportant to our adversary, the worm author.

What matters to the author is which transport protocol the vulnerable server uses. For ex-

ample, if a worm targeted vulnerable HTTP servers, that worm would spread using TCP,

since HTTP servers typically listen for traffic on TCP port 80. A worm that targeted vul-

nerable DNS servers, on the other hand, would typically target UDP port 53. Of course,

a worm could just as easily target multiple different vulnerable servers running on differ-

ent transport layer protocols. With knowledge of how to attack many kinds of servers, a

worm presumably increases the number of hosts vulnerable to it. Comparisons have been

performed on the performance of TCP worms versus UDP worms, however, showing that

UDP worms can be significantly faster than their TCP counterparts [SMPW04].

A worm’s propagation is slowest at the beginning of an epidemic. At this point, there

are few infected hosts scanning for additional hosts to infect. However, the propagation rate

increases as more hosts become infected. As the number of hosts infected approaches the

size of the vulnerable population, the worm’s propagation once again slows down, as there

are few vulnerable, uninfected hosts left to discover and infect. The result is that a graph of

the number of hosts infected over the course of a worm epidemic has a characteristic “S”

shape [Ayc06, p. 5], illustrated in Figure 2.1.

The nature of this “S” curve is perhaps best explained by the study of epidemiology. First

applied to malware by Murray [Mur88] and explored in greater detail by Kephart, White

and Chess [KW91, KWC93], epidemiology explores the progress of a malicious software

epidemic through differential equation models. Recent work has explored the impact of the


[Figure: hosts infected (vertical axis) versus time (horizontal axis)]

Figure 2.1: The “S” curve, illustrating the number of hosts infected over time during a worm epidemic

human element on the epidemic models, e.g., how long it takes for anti-virus patches to be

released [WL03].
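
For reference, the simple epidemic model that produces this “S” curve can be written as a single differential equation with a closed-form logistic solution. The notation below is a generic textbook formulation, not necessarily the exact model used later in this thesis: V is the size of the vulnerable population, I(t) is the number of infected hosts at time t, I_0 = I(0) is the number of seed infections, and β is a generic contact (scan) rate.

    % Simple epidemic (SI) model: I(t) infected hosts out of V vulnerable hosts.
    \frac{dI}{dt} = \frac{\beta}{V}\, I\, (V - I),
    \qquad
    I(t) = \frac{V I_0 e^{\beta t}}{V - I_0 + I_0 e^{\beta t}}

The logistic solution rises slowly while infections are rare, accelerates through the middle of the epidemic, and flattens as few vulnerable, uninfected hosts remain, which is exactly the shape shown in Figure 2.1.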

2.3 Introduction to Defensive Technology

On November 2, 1988, the Morris worm spread across the Internet [Spa89]. This water-

shed event marked the first worm to strike the modern Internet. Interestingly, there was no

malicious code included in the worm intended to run on infected hosts (a payload). Instead,

the worm simply spread from host to host; the damage caused by this worm was strictly

collateral, as repeatedly-infected systems thrashed under the load.

Worms have evolved significantly since then. Hosts infected by modern worms may

find themselves delivering spam messages or participating in massive distributed denial of

service (DDoS) attacks as part of a botnet — a set of hosts compromised by malicious

software that act on commands sent to them by some adversary. Alternately, the worm


payload could act as spyware, searching victims’ computers for valuable information to

steal, such as banking information or online passwords.

So how can users defend themselves against this threat? The most prominent method

for protecting systems is the installation of anti-virus software. Anti-virus techniques can be

divided into static techniques, such as signature-based scanning for known virus code and

heuristic-based file analysis that looks for “virus-like” characteristics, and dynamic tech-

niques, such as behavioral analysis of code running on the system or running code in an

emulator provided by the anti-virus software. Malicious code may employ anti-anti-virus

techniques, however, such as disabling anti-virus software or detecting when it is being run

in an emulated environment. A comprehensive analysis of anti-virus and anti-anti-virus

techniques is provided by Aycock [Ayc06].

Users may also protect themselves through the use of firewalls. Firewalls can exist either

as dedicated devices located in between most of the hosts in a network and the Internet, or as

protective pieces of software installed on host computers. In either case, their key function

from a security standpoint is the same: preventing unauthorized traffic from entering the

network. If a worm only propagates over TCP port p, but no traffic on port p is allowed to

enter a company’s network, that network will be safe from the worm. Similarly, if traffic

is allowed into a home network only if it originates from the University of Calgary, then

so long as no host at the University of Calgary is infected, and the worm does not spoof its

traffic to appear as though it is originating from the University of Calgary, that home network

will be safe from the worm. Some firewalls, called egress firewalls, also mandate what traffic

can leave a given network. For more information on firewalls, see [BC94, CBR03].

One technology often used in conjunction with a defensive firewall is an intrusion-

detection system (IDS). In many ways, IDSs are the network counterparts to host-based

anti-virus software. These systems monitor network traffic, looking either for signatures


of known malicious traffic or for traffic that triggers a heuristic identifying it as potentially

malicious [DDW99]. Equipped with real-time response capabilities, intrusion-prevention

systems (IPSs) are like IDSs, but are capable of thwarting an attack as it happens, at the

cost of potential false positives disrupting legitimate traffic. A related tool in the fight

against worm propagation is the virus throttle, which limits the number of new connec-

tions that can be made by any host per unit of time, delaying additional outbound connec-

tion attempts [TW03]. This delay was shown to significantly slow the spread of rapidly-

propagating worms and reduce the resulting damage.
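
To illustrate the idea behind the virus throttle (this is a simplified sketch of the concept only, not the mechanism published in [TW03]), the Python fragment below allows a host a small budget of new outbound connection attempts per second and delays anything beyond that budget; the per-second limit and destination names are assumptions made purely for the demonstration.

    # Simplified illustration of a connection throttle: permit at most
    # MAX_NEW_PER_SECOND new outbound connection attempts per second and
    # delay the rest. A real throttle also tracks a working set of recently
    # contacted hosts; that detail is omitted here.
    import time
    from collections import deque

    MAX_NEW_PER_SECOND = 5      # assumed per-second budget
    recent = deque()            # timestamps of recently permitted attempts

    def request_connection(destination):
        """Block until this outbound connection attempt may proceed."""
        while True:
            now = time.monotonic()
            # Discard timestamps older than one second (sliding window).
            while recent and now - recent[0] > 1.0:
                recent.popleft()
            if len(recent) < MAX_NEW_PER_SECOND:
                recent.append(now)
                print("allowing connection to", destination)
                return
            time.sleep(0.1)     # over budget: delay, as a throttle would

    for i in range(20):
        request_connection("host-" + str(i))   # hypothetical destinations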

2.4 Context and History of the Self-Stopping Worm Problem

Before discussing the self-stopping worm problem in detail, it is important to recog-

nize that there are many strategies a worm could use for locating additional hosts to infect.

There are three main classes of targeting strategies, described in extensive detail by Ay-

cock [Ayc06, pp. 150–154]. The first class of strategies is based on random selection:

• Uniform random scanning. When an infected host performs a scan, it chooses the

target IP address to scan without bias from all potential 2^32 IPv4 addresses (potentially

excluding its own IP address or private IP address blocks that are inaccessible). The

Code Red v2 worm used this strategy [MSc02].

• Biased random scanning. In this case, targets are still chosen at random; how-

ever, the distribution is not uniform. Code Red II, for example, was more likely to

choose targets with the same high-order bytes in their IP address as the scanning

host [MSc02], as was Nimda [DDHR01]. The rationale behind this design is that

similarly-configured (and hence similarly-vulnerable) machines are more likely to be

on the same network, thereby sharing high-order bytes in their IP addresses.


• Permutation scanning. Here, all copies of the worm share a predetermined permuta-

tion of all of the potential target IP addresses [SPW02]. When a new host is infected,

it chooses a random place in the permutation from which to begin iterating through

target hosts. Additionally, each time a host performs a redundant infection attempt

(that is, it scans a host which it determines to be already infected), that scanning host

chooses a new random spot in the permutation from which to continue scanning. This

approach reduces the number of redundant infection attempts early in the epidemic,

compared to either of the previous approaches, since two infected hosts will not con-

tinue to scan the same sections of the permutation.

The second class of scanning strategies is based on the worm code obtaining information

on potential targets from the machine on which it is running:

• Topological scanning. In this case, the worm chooses targets to scan based not on any

properties of the physical network, but rather on the information stored on the infected

host. For example, an e-mail worm could scan through any address books stored on

an infected host, then mail itself to all of the addresses therein. The Morris worm also

used this technique; one of the methods by which this worm discovered target hosts

was by examining the list of trusted hosts used for remote login procedures that was

maintained on the infected computer [Spa89].

• Passive scanning. With this scanning technique, the worm does not seek out potential

hosts to infect. Instead, the worm waits for an infected host to legitimately contact

another host [SPW02]. The worm can then target that host, knowing that it exists.

Since the infected host is already communicating with the target host, the worm’s

probe may be less likely to trigger anti-worm defenses such as throttling.


The final strategy, which stands in its own class, is based on the worm having fore-

knowledge about the addresses of vulnerable hosts (it may be perfect knowledge, or it may

be partial or out-of-date knowledge):

• Hit-list scanning. The last scanning technique that we will discuss here is a list-

based technique. Prior to releasing the worm, the adversary would construct a list

of vulnerable hosts for the worm to target. Each time a new host is infected, the

new host and the parent host (the host that performed the infection) will divide the

remaining hosts on the hit list in half, and each will take half of the list. The adversary

could include all targets on the list (i.e., the worm will halt after the list is exhausted),

or the worm could switch to one of the aforementioned scanning techniques upon

reaching the end of the list. A hit-list scanning worm has the potential to infect a large

number of hosts very quickly [SPW02], but the adversary would have to prepare the

list in advance. Also, if one of the first infections cannot propagate (e.g., after being

disinfected by anti-virus software or stopped by an egress firewall), a large portion of

the list could be lost.

With these different scanning strategies detailed, let us reiterate that this thesis will focus

strictly on uniform random-scanning worms. This strategy is the simplest for an adversary to

program, and as such holds an appeal for an adversary to use. From a research perspective,

the self-stopping problem for random-scanning worms is very interesting. How is it possible

for individual worm instances, each scanning essentially blindly, to coordinate a cessation of

their scanning activity once the percentage of the vulnerable population infected, p, reaches

some threshold p0?

The first proposed solution to this problem came in the form of Wiley’s Curious Yel-

low worm [Wil06]. Instances of this worm could use an overlay control network based on

Chord [SMK+01] to coordinate their spread and halt at the appropriate time. Unfortunately,


this solution was highly complex, relative to the simple scanning mechanism (random scan-

ning) being employed. Also, the proposed design did not detail how worm instances would

actually decide when to halt.

The next set of solutions to this problem came from Ma et al. [MVS05]. This work will

be discussed in detail in Chapter 4.

2.5 Statistical Methods

Before we discuss any work on the self-stopping problem, we will briefly introduce the

three different tests of statistical significance that are used in this thesis. The first is the two-

tailed one-sample t-test [Moo00, p. 370]. Given a set of experimental numerical results with

mean x, this statistical test evaluates the hypothesis µ = µ0, where µ would be the average

result over an infinite number of experimental trials and µ0 is some fixed value. We report

the results of this test as

t = τ, df = δ, P = π,

where the t-statistic value τ is computed from x, µ0, as well as the standard deviation and size

of the set of experimental results. We compute π as π = 2 · P (T ≥ |τ|), where T is a random

variable with the t-distribution having δ degrees of freedom. If π < α, where α is fixed at

0.05, we reject the hypothesis that µ = µ0 in favor of the hypothesis that µ ≠ µ0 — that is,

the experimental results indicate that µ is different than µ0 at significance level α = 0.05.

Otherwise, we retain the hypothesis that µ = µ0. The key detail for readers to remember is

that if π < 0.05, we conclude that there is a statistically-significant difference between µ and

µ0.

The second type of statistical test we use in this thesis is the two-tailed unpaired t-

test [Moo00, p. 395]. Here, we have one set of experimental numerical results with mean x1

and another set of results with mean x2. This test evaluates the hypothesis µ1 = µ2, where


µ1 would be the average result over an infinite number of trials of the first experiment and

µ2 would be the average result for the second experiment. We report the results of this test

identically to our previous test as

t = τ, df = δ, P = π,

where the t-statistic value τ is computed from x̄1, x̄2, and the standard deviation and size

of the two sets of experimental results. The value of π is computed the same as with the

previous test, and we conclude that there is a statistically-significant difference between µ1

and µ2 if and only if π < 0.05.

The final type of statistical test we use is the one-way analysis of variance (ANOVA)

test [Moo00, p. 518]. This test is the generalization of the previous test to three or more ex-

perimental results. Specifically, we have numerical results from I ≥ 3 different experiments

with means x̄1, x̄2, . . . , x̄I. This test evaluates the hypothesis µ1 = µ2 = · · · = µI, where the

µi values are defined as before. We report the results of this test as

F = φ, df = δ1, δ2, P = π,

where the F-statistic value φ is computed from the x̄i values, as well as the standard deviation

and size of the I sets of experimental results. We compute π as π = P (F ≥ φ), where F is

a random variable with the F-distribution having δ1 and δ2 degrees of freedom. Again,

we conclude that there is a statistically-significant difference among the µi if and only if

π < 0.05.
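These three tests are available in standard statistical software. As a point of reference only, the following minimal Python sketch shows how the reported quantities could be obtained with SciPy (our choice of tool, not one used in this thesis); the data values are purely illustrative.

from scipy import stats

sample = [5.1, 4.9, 5.3, 5.0, 5.2]                 # illustrative data only
t, p = stats.ttest_1samp(sample, popmean=5.0)      # two-tailed one-sample t-test
# report as t = ..., df = len(sample) - 1, P = ...; significant if p < 0.05

a, b = [5.1, 4.9, 5.3], [5.6, 5.4, 5.8]
t2, p2 = stats.ttest_ind(a, b)                     # two-tailed unpaired t-test
# df = len(a) + len(b) - 2 for the equal-variance form of the test

groups = [[5.1, 4.9, 5.0], [5.6, 5.4, 5.5], [6.0, 6.2, 6.1]]
f, p3 = stats.f_oneway(*groups)                    # one-way ANOVA over I = 3 groups
# df = I - 1 = 2 and N - I = 6; again, significant if p3 < 0.05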

Having discussed the statistical methods that will be used in this thesis, we can now

begin our investigation of self-stopping worms. We begin with a theoretical, perfect solution

to the random-scanning self-stopping problem, prior to discussing any realistic solutions.


Chapter 3

The Perfect Self-Stopping Worm

Before exploring various self-stopping mechanisms that can be applied to realistic

worms, it is important to first establish some upper bound on the performance of a self-

stopping mechanism. As such, this chapter will describe how a perfect random-scanning,

self-stopping worm would function. It is important to stress that the self-stopping method

described herein is strictly theoretical, as it relies on individual infected computers having

some form of perfect knowledge about the progress of the worm outbreak.

Section 3.1 will first describe how an optimal self-stopping worm would behave, at a

high level. Section 3.2 will then provide equations and code for modeling this behavior,

before Section 3.3 concludes the chapter with a summary of its contributions.

3.1 Description

Assume that the worm in question is a random-scanning worm. That is, on each attempt

to propagate, an instance of the worm will randomly select one host from the address space,

and infect that host if it is both vulnerable and uninfected. The optimal behavior for such

a worm is that each infected host will continue its propagation attempts, until p0 percent

of the vulnerable population has been infected (recall that p0 is the predetermined infection

percentage at which to halt, e.g., p0 = 0.95). At this point, all instances of the worm will

simultaneously stop scanning. This behavior can be considered optimal in the following two

ways:

1. No host stops early. If some hosts were to stop before the percentage of vulnerable

hosts that are infected reached p0, there would be fewer scans occurring over a given


time period, looking for the remaining vulnerable hosts. As such, the expected amount

of time until the outbreak reached an infection percentage of p0 would be greater than

necessary.

2. No host runs late. Though some additional infections may occur if an infected host

continues scanning for new victims, the goal of a self-stopping worm is not necessarily

to infect as many hosts as possible. Rather, it is to halt after infecting p0 percent of

the vulnerable hosts, so that less traffic is generated by the outbreak. Any additional

traffic generated after the target infection ratio has been reached can be viewed as

being superfluous to the infection goal.

It should be noted that these considerations assume a general symmetry among all of the

vulnerable hosts. For instance, there are no different types of firewalls or network address

translation (NAT) protecting different hosts (e.g., egress firewalls, which could motivate

hosts without the ability to perform infections to halt early). We maintain this assumption

throughout this chapter and this thesis, along with other symmetry assumptions, such as

each host having an equal amount of bandwidth.

3.2 Mathematical Model

Using the sum of the expected values on multiple geometric distributions, it is possi-

ble to predict the expected number of scans that will occur before the perfect self-stopping

worm halts (details on geometric distributions are available in most statistics textbooks,

e.g., [HKM95, pp. 111–116]). Assume that the worm outbreak occurs over an address space

of size A, and that there are V vulnerable hosts in that address space. Recall, we assume

the adversary does not know V . Assume also that there are I initial seeds to the outbreak

(I ≤ V ≤ A) — these seeds can be thought of as infections that occurred for free, without


any scans being performed by the worm. As such, the expected number of scans that will

occur, E(S), is the expected number of scans to infect hosts I + 1, I + 2, . . . , ⌈p0V⌉. The ex-

pected number of scans to infect host x ∈ [I + 1, ⌈p0V⌉] is just the inverse of the probability

of a random scan finding the x-th vulnerable host, P(x). Assuming that an infected host

performing a random scan will never scan itself, that probability is just

P(x) = (V − (x − 1)) / (A − 1) .

Hence,

E(S) = Σ_{i = I+1}^{⌈p0V⌉} 1/P(i) = Σ_{i = I+1}^{⌈p0V⌉} (A − 1)/(V − i + 1) .

Note that, if the worm forgoes the precaution of not scanning its own address during a random scan, the denominator of P(x) changes to A, and E(S) changes accordingly.
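This sum is also easy to evaluate numerically. The following Python sketch (the function and parameter names are ours) computes E(S) directly; with A = 2^32, V = 131072, I = 1, and p0 = 1 it yields roughly the 5.3 · 10^10 scans quoted below.

from math import ceil

def expected_scans(A, V, I, p0, scan_self=False):
    # E(S) = sum over i = I+1, ..., ceil(p0 * V) of 1/P(i); the denominator of
    # P(x) is A rather than A - 1 if the worm may scan its own address.
    numer = A if scan_self else A - 1
    return sum(numer / (V - i + 1) for i in range(I + 1, ceil(p0 * V) + 1))

# e.g., expected_scans(2**32, 131072, 1, 1.0) is approximately 5.3e10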

One might ask how long such an epidemic would last before halting. This question

is best answered by modeling the perfect self-stopping mechanism. The following model

assumes that time is divided into discrete units, and that each infected host can perform γ

scans (and infections, when applicable) in a single unit of time. Furthermore, if a host is

infected during time unit t, then it begins its own scanning activity during time unit t + 1.

That is, if inf hosts are infected at the beginning of a unit of time, then inf · γ scans will be

performed in that time period.

The model follows the expected values for the number of scans that it will take to find

and infect a new host: specifically, if inf hosts are already infected, host inf + 1 will be

infected after 1/P(inf + 1) = (A − 1)/(V − (inf + 1) + 1) = (A − 1)/(V − inf) scans. As such, many, one, or no new hosts may be infected in a given time period.

The pseudocode given in Figure 3.1 implements this model. Starting with I infections,

the code loops through units of time, until the number of infections, inf, reaches ⌈p0V⌉. At


function M-P(A, V, I, γ, p0)
    sp ← 0                               ▷ Total scans performed
    t ← 0                                ▷ Epidemic running time
    inf ← I
    nextIn ← (A − 1)/(V − inf)
    while T do
        thisTime ← inf · γ
        remaining ← thisTime
        while remaining > 0 do
            m ← M(nextIn, remaining)
            remaining ← remaining − m
            nextIn ← nextIn − m
            sp ← sp + m
            t ← t + m/thisTime
            if nextIn = 0 then
                inf ← inf + 1
                if inf ≥ ⌈p0V⌉ then
                    return t, sp
                end if
                nextIn ← (A − 1)/(V − inf)
            end if
        end while
    end while
end function

Figure 3.1: Pseudocode to model the perfect self-stopping mechanism

the beginning of each unit of time, the number of scans that will occur during that unit is

calculated to be inf · γ, as described above. New hosts may be infected as these scans are

performed. Once ⌈p0V⌉ hosts have been infected, the pseudocode returns the number of

time units taken and the number of scans performed.
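For readers who prefer running code, the following Python sketch (ours) mirrors the pseudocode in Figure 3.1, reading M(x, y) as the minimum of x and y and the loop condition T as "true".

from math import ceil

def model_perfect(A, V, I, gamma, p0):
    # Discrete model of the perfect self-stopping worm; assumes I < V.
    sp = 0.0                          # total scans performed
    t = 0.0                           # epidemic running time
    inf = I                           # currently infected hosts
    target = ceil(p0 * V)             # halt once this many hosts are infected
    next_in = (A - 1) / (V - inf)     # expected scans until the next infection
    while True:
        this_time = inf * gamma       # scans performed in this unit of time
        remaining = this_time
        while remaining > 0:
            m = min(next_in, remaining)
            remaining -= m
            next_in -= m
            sp += m
            t += m / this_time
            if next_in == 0:
                inf += 1
                if inf >= target:
                    return t, sp
                next_in = (A - 1) / (V - inf)

# e.g., model_perfect(2**32, 131072, 1, 1, 0.75) should roughly reproduce the
# 441039.90 units of time and 6.0e9 scans quoted below for the p0 = 0.75 case.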

From this model, it is clear why an adversary may choose a value of p0 ≠ 1. Consider 131072 vulnerable hosts inside an address space of size 2^32, targeted by a perfect

self-stopping worm with one seed infection. Were p0 = 1, the adversary would expect that

5.3 · 1010 scans would be generated. On the other hand, if p0 were set to 0.75, the adversary

could expect 6.0 · 109 scans to be generated — almost an order of magnitude fewer. Even


using p0 = 0.98 would yield an expected 1.7 · 1010 scans, or less than one-third as many as

in the p0 = 1 case. The running time of the epidemic also drops significantly; if γ were

4000 scans per unit of time (as in [MVS05]), the outbreak would be reduced from 208.31

units of time in the p0 = 1 case to 115.90 units of time in the p0 = 0.75 case. Similarly, if

γ were 1 scan per unit of time (as in [VAJ07a]), then the epidemic time would drop by over

45%, from 810077.26 units of time to 441039.90.

The results for the γ = 1 case are presented visually in Figures 3.2 and 3.3. The number

of scans generated steadily increases, with a sharp spike as p0 approaches 1, as seen in Fig-

ure 3.2. Interestingly, there are two spikes in the epidemic running time graph, Figure 3.3.

The spike as p0 nears 1 mirrors the spike in the expected number of scans required to locate

and infect the final few vulnerable hosts. However, there is another spike as p0 increases

from I/V = 1/131072 (the smallest possible final infection percentage). This increase oc-

curs because, while infecting I hosts requires no time (as they are seed infections), scanning

for more vulnerable hosts takes a long time for the small number of seed infections.

This sharp increase in the epidemic time as p0 increases from I/V yields some insight

into potential adversary behavior. If an adversary wants to reduce the running time of an

epidemic, it is likely that adversary will attempt to increase I (i.e., seed the worm on more

computers initially) to mitigate the time required for the epidemic to “get started.” Figure 3.4

compares the epidemic running time for I = 1, I = 10, and I = 100, demonstrating how

more initial seeds can significantly reduce the epidemic duration.

As an aside to these results: why did we use a discrete, algorithmic model to compute

the expected duration of a perfect self-stopping random-scanning worm epidemic? Why

did we not just solve the differential equations provided by epidemiology? One differential

equation, presented in a simplified form by Ma et al. [MVS05] and related to the equations

originally presented by Kephart and White [KW91], can be used to predict the number of


[Plot omitted: expected scans generated (y-axis) versus target infection percentage (x-axis).]
Figure 3.2: The effect of the target infection percentage (p0) on the number of scans generated by the perfect self-stopping worm when V = 131072, I = 1, and γ = 1

[Plot omitted: expected epidemic duration in units of time (y-axis) versus target infection percentage (x-axis).]
Figure 3.3: The effect of the target infection percentage (p0) on the epidemic running time of the perfect self-stopping worm when V = 131072, I = 1, and γ = 1


[Plot omitted: expected epidemic duration in units of time (y-axis) versus target infection percentage (x-axis), with one curve each for I = 1, I = 10, and I = 100.]
Figure 3.4: The effect of the target infection percentage (p0) on the epidemic running time of the perfect self-stopping worm when V = 131072 and γ = 1 for various values of I

hosts that will be infected by a continuously-scanning worm after t units of time. Specifi-

cally,

dN/dt = γN(t) · (V − N(t))/A   and   (3.1)

N(0) = I ,   (3.2)

where N(t) is the expected number of hosts infected after t units of time. The differential

equation is formed by multiplying the number of random scans performed in each time

step, γN(t), by the proportion of hosts in the address space that can still be infected, (V − N(t))/A.

Equations 3.1 and 3.2 can be solved to yield an expression for N(t). Namely,

N(t) = IV / (I + (V − I) e^(−γVt/A)) .   (3.3)

The first issue to note about Equation 3.3 is that N(t) < V for all t ≥ 0, provided that

I < V . As such, it is impossible to solve N(t) = V for t. This restriction occurs because

the expected number of infected hosts at time t, N(t), will always be less than V if I < V .


[Plot omitted: expected hosts infected (y-axis) versus units of time (x-axis), with one curve for the epidemiological model and one for the discrete model.]
Figure 3.5: A comparison of the number of hosts infected over time in the discrete perfect worm model and the epidemiological model when V = 131072, I = 1, and γ = 1

The discrete model in Figure 3.1, on the other hand, takes advantage of the fact that there

is a finite expected time until V hosts are infected — a subtle distinction between the two

different expected values.
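For reference, Equation 3.3 is straightforward to evaluate directly; a short Python sketch (names ours):

import math

def n_continuous(t, A, V, I, gamma):
    # Expected number of infected hosts at time t under the differential
    # (logistic) model of Equation 3.3.
    return I * V / (I + (V - I) * math.exp(-gamma * V * t / A))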

The second issue with Equation 3.3 is how accurate it is at measuring the progress of

an epidemic over time. This approach models a discrete process as a continuous equation.

Additionally, the number of hosts infected by γN(t) independent random scans will not be

γN(t) · (V − N(t))/A, as suggested by Equation 3.3 — the number of infections caused by a series of

independent random scans is discussed in detail in Appendix A. It is for these reasons that

we presented the discrete model in Figure 3.1. The difference between our discrete model

and the differential equation model is apparent in Figure 3.5, which displays the expected

number of infected hosts as time progresses under either model for V = 131072, I = 1,

γ = 1, and A = 2^32. Note not only the slightly different rates of propagation, but also that

the discrete model eventually reaches N(t) = V , at which point that line on the graph stops.


As a final point about the perfect self-stopping worm: notice that the model of this worm

in Figure 3.1 terminates after an integer number of hosts have been infected (represented by

the variable inf ). It may be useful, however, to have some measure of how this perfect

worm behaves at infecting “a fraction” of one host. Why? Consider some other, non-perfect

worm that we wish to compare to the perfect self-stopping worm. Assume that this non-

perfect worm infects an average of 131000.4 hosts over five trials. How could we compare

the average epidemic duration and average number of scans performed by this worm to the

performance of the perfect worm? We have no way of knowing how long it would take, or

how many scans it would take for the perfect self-stopping worm to infect 0.4 of a host.

To have some meaningful measure of how many scans and units of time the perfect

self-stopping worm would take to infect fractions of a host, simply perform linear interpo-

lation. That is, using the example number of 131000.4 hosts infected, choose p0 such that

⌈p0V⌉ = 131000 and p′0 such that ⌈p′0V⌉ = 131001. Then, compute

(1 − 0.4) · M-P(A, V, I, γ, p0) + 0.4 · M-P(A, V, I, γ, p′0) .

In general, to assess how many scans and units of time the perfect worm would take to infect

n + d hosts, where n ∈ Z≥0 and 0 ≤ d < 1, first find p0 such that ⌈p0V⌉ = n and p′0 such that ⌈p′0V⌉ = n + 1. Then, compute

(1 − d) · M-P(A, V, I, γ, p0) + d · M-P(A, V, I, γ, p′0) .
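Assuming the model_perfect sketch from earlier in this chapter, this interpolation could be coded as follows; the p0 arguments are offset by half a host so that the ceilings land exactly on n and n + 1.

def interpolate_perfect(A, V, I, gamma, hosts):
    # hosts = n + d, e.g., 131000.4 hosts infected by some non-perfect worm
    n = int(hosts)
    d = hosts - n
    t_lo, s_lo = model_perfect(A, V, I, gamma, (n - 0.5) / V)   # ceil(p0*V) = n
    t_hi, s_hi = model_perfect(A, V, I, gamma, (n + 0.5) / V)   # ceil(p0*V) = n + 1
    return (1 - d) * t_lo + d * t_hi, (1 - d) * s_lo + d * s_hi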

3.3 Summary

This chapter described the optimal behavior for a self-stopping, random-scanning worm.

From this description, a model was developed for predicting the number of scans that would

be performed by this perfect self-stopping worm, as well as the amount of time the worm

epidemic would take to halt. Not only is this model necessary to assess the performance of


realistic self-stopping worms, but it also serves to motivate why an adversary may choose to

target p0 < 1 of the vulnerable population.


Chapter 4

Previous Self-Stopping Worms

Having discussed a baseline, perfect self-stopping model for random-scanning worms,

we now shift focus to look at previous work on possible self-stopping mechanisms. Ma et al.

presented a comprehensive summary of different types of self-stopping mechanisms, which

were divided into two classes [MVS05]. One class of strategy works by having each worm

instance mathematically model the progress of the epidemic. When an infected host’s model

indicates that the percentage of vulnerable hosts that have been infected has passed p0, the

predetermined infection percentage at which the epidemic should halt, that host will cease

its attempts to propagate. The other class of strategy is based on random behavior, wherein

each worm instance halts with a fixed probability on some condition.

This chapter will begin with a detailed explanation of Ma et al.’s Sum-Count-X algo-

rithm, one of the mathematical model strategies, in Section 4.1. Section 4.2 will then

demonstrate that this algorithm performs poorly through a series of experiments testing

its accuracy. Section 4.3 will attempt, without success, to solve the problems with Sum-

Count-X’s accuracy, and will in turn demonstrate the limitations of all of the strategies in

the mathematical model class. We will then turn to the random strategies, which we will

investigate in Section 4.4. The limitations of these strategies will motivate future chapters’

exploration of other, more successful or versatile self-stopping techniques that could be used

by an adversary. The chapter concludes with a summary in Section 4.5.


4.1 Description of Sum-Count-X

The Sum-Count-X algorithm works by having each worm instance model the entire

epidemic. To model the epidemic progress, each infected host must know the size of the

address space, A, as well as the scan rate per unit time, γ, and the current number of units of

time that have passed since the beginning of the epidemic, t. As Ma et al. point out, the hosts

need not have access to a global clock to determine t. Instead, each host can pass along an

epoch counter to new victims as they are infected — an epoch counter the host increments

each time it performs γ scans. It should be noted, though, that this counter gives valuable

information about the propagation of the worm to a defender attempting to trace an outbreak

to its source.

In addition to the three aforementioned pieces of knowledge that an infected host re-

quires for its model, a host also maintains two counters: scans (representing the total number

of scans performed) and hits (representing the total number of vulnerable or already-infected

hosts found); both are initialized to 0. After a host infects a new vulnerable host, or when

a random scan finds another host that is already infected, the two hosts combine their scans

values, as well as their hits values. Specifically, the two hosts compute the sum of their

individual scans values, and both take the sum as their new scans value. These steps are

repeated for their hits counters. After this exchange takes place, the scanning host incre-

ments both its hits counter and its scans counter. Alternately, if a random scan does not find

a vulnerable or infected host, the scanning computer increments only its scans counter. An

illustrative example of this process with three vulnerable hosts is presented in Figure 4.1.
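The counter exchange itself is trivial to implement; a minimal sketch (ours, not code from Ma et al.) follows.

from dataclasses import dataclass

@dataclass
class Counters:
    hits: int = 0
    scans: int = 0

def combine(a, b):
    # Both parties adopt the sums of their hits and scans counters.
    total_hits = a.hits + b.hits
    total_scans = a.scans + b.scans
    a.hits = b.hits = total_hits
    a.scans = b.scans = total_scans

# After combining, the scanning host increments both of its counters (or only
# its scans counter when a scan finds nothing), as described above.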

Using the three pieces of global knowledge (A, γ, and t), as well as its two counters,

the host can model the progress of the entire epidemic using formulas derived from the

epidemic-modeling differential equation originally seen in Chapter 3. If we let N(t) be the

number of hosts infected at time t and V be the number of vulnerable hosts, that equation is


[Figure 4.1 panels; the per-host hits/scans labels are omitted.]
1. A cross-section of vulnerable hosts; the two infected (red) hosts have hits and scans counters. One host is vulnerable but uninfected (white).
2. The middle host performs a random scan, finding either an invulnerable host or no host at that address. Its scans counter is incremented.
3. The middle host performs another random scan, finding a host that is already infected. These two hosts will use this opportunity to share data.
4. The two hosts compute the sum of their hits counters, and use it as their new counter value. The procedure is repeated with the scans counters.
5. Since the previous scan successfully located a vulnerable or already-infected host, the middle host increments both hits and scans.
6. The next random scan locates an uninfected host. The worm propagates, and the newly-infected host initializes its counters to zero.
7. The two hosts compute the sum of their hits counters, and use it as their new counter value. The procedure is repeated with the scans counters.
8. Since the previous scan successfully located a vulnerable or already-infected host, the middle host increments both hits and scans.
Figure 4.1: An example of three hosts running a Sum-Count-X worm over time


dN/dt = γN(t) · (V − N(t))/A. The actual formulas used in Ma et al. [MVS05] are as follows. First, the

host computes an estimate for V,

V̂ = A · hits/scans .

Then, an estimate for N(t) is computed as

N̂(t) = V̂ e^(β(t−t0)) / (1 + e^(β(t−t0))) ,

where β and t0 are defined as

β = γV̂/A   and   t0 = ln(V̂ − 1)/β .

The estimates for V and N(t) are then used to predict whether the percentage of vulnerable

hosts that have been infected has passed p0; if so, the host halts. This behavior is described

more formally in the pseudocode in Figures 4.2 and 4.3. Note that, in the pseudocode, a

worm instance verifies that V̂ > 100 prior to halting; though this check was not described

in [MVS05], Ma notes that it improves the accuracy of Sum-Count-X [Ma06]. As a final

point of interest, note that the equations used in Ma et al. are equivalent to Equation 3.3,

using an estimate for V and setting I = 1. That is, the halting mechanism simply applies its

estimate for V to the epidemiological, differential equation-based model of worm propaga-

tion to estimate how much of the vulnerable population has been infected.
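Putting the formulas of this section together, the halting decision amounts to a few lines of arithmetic. The following Python sketch (ours, with a numerically safer form of the logistic expression) follows the pseudocode of Figure 4.3, including the V̂ > 100 sanity check.

import math

def should_halt(hits, scans, t, A, gamma, p0):
    if scans == 0:
        return False
    v_hat = A * hits / scans              # estimate of V
    if v_hat <= 1:
        return False                      # ln(v_hat - 1) undefined; too little data
    beta = gamma * v_hat / A
    t0 = math.log(v_hat - 1) / beta
    z = beta * (t - t0)
    # N_hat / V_hat is the logistic function of z; evaluate it without overflow.
    frac = 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))
    return frac >= p0 and v_hat > 100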

4.2 Limitations of Sum-Count-X

Unfortunately, Sum-Count-X suffers from a number of drawbacks. Worms implement-

ing this self-stopping mechanism will typically halt when the percentage of hosts infected

is far removed from p0. To demonstrate Sum-Count-X’s weakness, a worm implementing

Sum-Count-X was simulated. The epidemic always began with a single seed in an address

space of size 2^32, and the worm had a per-unit-time scanning rate of γ = 4000. However, V


1:  procedure S-C-X(host)
2:      host.hits ← 0
3:      host.scans ← 0
4:      while S-H(host) = N do          ▷ Iterate while loop once per unit of time
5:          for i ← 1 to γ do
6:              P(host)
7:          end for
8:      end while
9:  end procedure

10: procedure P(host)
11:     target ← a host chosen by a random scan
12:     if target is vulnerable & uninfected then
13:         host infects target
14:         target runs S-C-X(target)
15:         C(host, target)
16:         host.hits ← host.hits + 1
17:     else if target is vulnerable & infected then
18:         C(host, target)
19:         host.hits ← host.hits + 1
20:     end if
21:     host.scans ← host.scans + 1
22: end procedure

Figure 4.2: Pseudocode for Ma et al.’s Sum-Count-X algorithm (part 1 of 2)


23: function S-H(host)
24:     if host.scans > 0 then
25:         V̂ ← A · host.hits/host.scans
26:         β ← γ · V̂/A
27:         t0 ← ln(V̂ − 1)/β
28:         ebtt0 ← e ∧ (β · (t − t0))
29:         N̂ ← V̂ · ebtt0/(1 + ebtt0)
30:         if N̂/V̂ ≥ p0 and V̂ > 100 then
31:             return Y
32:         end if
33:     end if
34:     return N
35: end function

36: procedure C(hostA, hostB)
37:     sumHits ← hostA.hits + hostB.hits
38:     sumScans ← hostA.scans + hostB.scans
39:     hostA.hits ← sumHits
40:     hostA.scans ← sumScans
41:     hostB.hits ← sumHits
42:     hostB.scans ← sumScans
43: end procedure

Figure 4.3: Pseudocode for Ma et al.’s Sum-Count-X algorithm (part 2 of 2)


                Vulnerable population (V)
  p0          131072      750000     1500000
  0.500       77.03%      66.64%      52.84%
  0.750       88.71%      72.01%      56.98%
  0.900       90.89%      79.28%      63.32%
  0.990       99.36%      91.72%      74.38%
  0.999       99.87%      98.19%      82.91%

Table 4.1: Sum-Count-X worm infection percentages for different values of V and p0

was varied among 131072 (the size of the vulnerable population in [MVS05]), 1500000

(the size of the vulnerable population in [VAJ07a]), and a middle-ground value, 750000.

The other parameter that was varied was p0 — the values 0.500, 0.750, 0.900, 0.990, and

0.999 were all used.

The performance of Sum-Count-X was highly inconsistent. Table 4.1 lists the results of

the simulations. Each table entry lists the actual percentage of the vulnerable population that

the Sum-Count-X worm infected before halting. The value of each table entry was computed

by averaging the results of five trials, to compensate for randomness. As an example of the

inconsistent performance of Sum-Count-X, note that when p0 = 0.500 and V = 131072, the

worm overshot its target significantly and infected 77.03% of the vulnerable population, on

average. When p0 = 0.990 and V = 1500000, the worm undershot its target, infecting an

average of 74.38% of the vulnerable hosts.

It is possible that the inconsistent performance of Sum-Count-X results from two fac-

tors. First is the propagation of sampling error as hosts repeatedly add together their hits

and scans counters. That is, even though a given host, h, has a value scans = s, it is not

necessarily the case that s distinct scans were performed to compute h’s hits value. It may

be the case that the same scans were repeatedly summed into h’s scans and hits counters as

more and more redundant infection attempts occurred. We explore this conjecture further

in Section 4.3. The second factor that we suspect led to the inconsistent performance of


Sum-Count-X is the model it uses to estimate how many hosts are infected at any given

time. Might the differential equation-based model not accurately reflect the rate at which

the worm propagates?

Another weakness of Sum-Count-X is the sensitivity of the algorithm to small changes

in the method by which the worm collects data from its environment. As an example,

with reference to the pseudocode in Figure 4.2, Sum-Count-X could be modified so that an

infected host increments its scans and hits counters prior to communicating with another

host, rather than after. Why? In the wild, instances of a real worm will not be nicely

synchronized, as they are in a simulator. Rather, worm instances will receive information

asynchronously from other worms, and their scans and hits counters are liable to be modified

while that worm is performing its own scans.

The results of this modification were dramatic. Using the values A = 2^32,

V = 131072, γ = 4000, and I = 1, the simulator was run with both p0 = 0.500 and

p0 = 0.999. In the former case, the average percentage of hosts infected over five trials, before

the worm halted, was 3.06%. In the latter case, the average over five trials was 49.90%. Nei-

ther value was even close to its target percentage, demonstrating that a Sum-Count-X worm

in the wild (an asynchronous environment) may be even less accurate than a Sum-Count-X

worm running in a simulator. For an adversary to implement a Sum-Count-X worm, it is

likely that the accuracy would have to be improved. This issue is explored further in the

next section.

4.3 Sampling Error in Model-Based Techniques

In the previous section, we conjectured that the use of hits and scans counters to esti-

mate V , the size of the vulnerable population, could be part of the reason for the inaccurate

performance of the Sum-Count-X worm. It is possible that a better method for estimating


function S-H(host)
    if host.scans > 0 then
        V̂ ← V                           ▷ Perfect knowledge replaces estimation
        β ← γ · V̂/A
        t0 ← ln(V̂ − 1)/β
        ebtt0 ← e ∧ (β · (t − t0))
        N̂ ← V̂ · ebtt0/(1 + ebtt0)
        if N̂/V̂ ≥ p0 and V̂ > 100 then
            return Y
        end if
    end if
    return N
end function

Figure 4.4: Replacement pseudocode for a perfect-knowledge worm

V could improve the performance of Sum-Count-X. Ma et al. presented several approaches

for estimating V with a random-scanning worm other than Sum-Count-X. Any one of these

methods for estimating V could be used in conjunction with the model-based self-stopping

approach. However, we will take a more direct approach to investigating this question than

iterating through a multitude of estimation techniques.

To eliminate any potential error caused by the technique used to estimate V , we replace

line 25 in Figure 4.3 with a line that assigns V to V̂ — that is, every worm instance con-

sistently estimates V with perfect accuracy. The complete new version of the S-H

function is shown in Figure 4.4. Recall, we assume that the adversary does not know V , so

this perfect-knowledge behavior could not be introduced into an actual model-based worm.

However, by testing such an artificial modification to the Sum-Count-X worm, we should

gain some insight into the cause of the worm’s inaccuracy, detailed in the previous section.

After this modification was complete, the set of trials performed in the previous section

was repeated. Specifically, the perfect-knowledge worm was run in environments in which

the size of the vulnerable population, V, was varied among 131072, 750000, and 1500000.


                Vulnerable population (V)
  p0          131072      750000     1500000
  0.500       40.34%       6.60%       0.58%
  0.750       33.62%      10.28%       1.58%
  0.900       76.80%      18.50%       1.68%
  0.990       94.98%      62.42%      14.69%
  0.999       97.72%      95.44%      57.48%

Table 4.2: Perfect-knowledge worm infection percentages for different values of V and p0

The target infection percentage, p0, was again set to one of five values: 0.500, 0.750, 0.900,

0.990, or 0.999.

The results of these experiments are outlined in Table 4.2. Counter-intuitively, the ac-

curacy of the worm decreased from the Sum-Count-X design to the perfect-knowledge

design in all but one of the (V, p0) pairs (when V = 131072 and p0 = 0.500, the perfect-

knowledge version infected 40.34% of the vulnerable population, as opposed to the Sum-

Count-X version’s 77.03%). That is not the only anomaly in the new results, though. Notice

that, for V = 131072, the perfect-knowledge worm with target p0 = 0.500 actually infected

more than the worm with target p0 = 0.750 (namely, 40.34% versus 33.62%). This result

can be explained by the inconsistent performance of the perfect-knowledge worm within

each (V, p0) pair. With the Sum-Count-X worm, detailed in the previous section, the stan-

dard deviation on the percent of the vulnerable population infected over five trials, when

V = 131072 and p0 = 0.750, was 2.77%. Over the five trials performed with the same V

and p0 values using the perfect-knowledge worm, the standard deviation was 19.85%.

How can we explain the poor accuracy of the perfect-knowledge worm and its high

variability of results over multiple trials for a single (V, p0) pair? The high variability may

result from the idealized model the worm uses to estimate how many hosts are infected

at any given time. Unlike in this model, actual worms are variable in how many scans

they perform before a new host is infected; the model fails to account for this variability.


Interestingly, there may have been an unpredictable interaction between the propagation

model and the method of estimating V with two counters used by Sum-Count-X — an

interaction that actually helped to reduce this variability, compared to when the worm had

perfect knowledge. But what about the poor performance of the worm?

To account for the poor performance of this worm, recall the discrepancies between

a differential equation-based model of worm propagation and a discrete model, discussed

in Chapter 3. Recall that the epidemiological, differential equation-based model used by

Ma et al.’s model-based worms predicts faster worm propagation than a discrete model. Fig-

ure 4.5 illustrates this difference for an environment in which V = 1500000 and γ = 4000.

The differential model requires approximately 11.75 units of time for the worm to have in-

fected 90% of the vulnerable population. The discrete model, on the other hand, requires

approximately 18.08 units of time. In fact, at 11.75 units of time, the discrete model predicts

that approximately 26000 hosts will have been infected — a mere 1.73% of the vulnerable

population. Looking back to Table 4.2, we see that the perfect-knowledge worm infected

1.68% of the vulnerable population, which is not statistically different from our prediction

of 1.73% (two-tailed one-sample t-test: t = 0.1837, df = 4, P = 0.8632).

In short, the poor accuracy of the model-based worm is caused by discrepancies between

the model the worm is using (based on differential equations) and the actual propagation

curve of the worm (which is similar to the discrete model of worm propagation presented

in Chapter 3). Would it be possible for an adversary simply to use a different propagation

model in the halting calculations of a model-based worm such as Sum-Count-X? Of course.

However, the adversary would have no guarantees about the accuracy of that model. Con-

sider the following: cross-referencing Table 4.2 and Figure 4.5 shows that the discrete model

was far more accurate than the differential model used by Ma et al., at least inside our sim-

ulated environment. But which model would be more accurate in the wild? Perhaps a third,


[Plot omitted: expected hosts infected (y-axis) versus units of time (x-axis), with one curve for the epidemiological model and one for the discrete model.]
Figure 4.5: A comparison of the number of hosts infected over time in the discrete worm model and the epidemiological model when V = 1500000, I = 1, and γ = 4000

entirely different model would be needed, e.g., because both the differential and discrete

models assume that all hosts have equal bandwidth, which is not true in the wild. Because

of this uncertainty, not to mention the large difference in the performance of a model-based

worm that changing the underlying model can make, we do not believe it to be likely that

an adversary would use a model-based self-stopping worm. A different approach would be

more likely.

4.4 Random Approaches to Self-Stopping

Two other approaches to the self-stopping problem were described by Ma et al., both

based on random decisions made by each worm instance regarding whether to halt. The

first, called a blind strategy, sees each worm instance randomly halt with probability phalt at

the end of each unit of time (e.g., after each host has performed a multiple of γ scans). The

second, called a redundant hit strategy, sees each worm instance randomly halt with proba-


procedure B(host)
    while S-H(host) = N do          ▷ Iterate while loop once per unit of time
        for i ← 1 to γ do
            P(host)
        end for
    end while
end procedure

procedure P(host)
    target ← a host chosen by a random scan
    if target is vulnerable & uninfected then
        host infects target
        target runs B(target)
    end if
end procedure

function S-H(host)
    r ← a uniform random value in [0, 1]
    if r < phalt then
        return Y
    else
        return N
    end if
end function

Figure 4.6: Pseudocode for Ma et al.’s blind algorithm

bility phalt each time it performs a redundant infection attempt. Both of these strategies are

based on methods for controlling distributed database updates [DGH+87]. Ma et al. also dis-

cuss a tree-based algorithm, though this approach is not suitable when V is unknown — we

discuss the limitations of tree-based approaches to self-stopping worm design in more detail

in Chapter 7. For clarity, we have provided pseudocode for both of the random approaches.

The blind approach is described in Figure 4.6, and the redundant hit approach is described

in Figure 4.7.
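To make the random strategies concrete, here is a toy simulation of the redundant hit strategy (our own sketch, not the simulator used in this thesis). It simulates every scan individually, so it is only practical for small A and V, and it requires phalt > 0 to terminate.

import random

def simulate_redundant_hit(A, V, I, gamma, p_halt, seed=0):
    rng = random.Random(seed)
    infected = set(range(I))          # addresses 0 .. V-1 are the vulnerable hosts
    active = set(range(I))            # instances that are still scanning
    while active:
        for host in list(active):     # snapshot: new victims start scanning next unit
            for _ in range(gamma):
                target = rng.randrange(A)
                if target >= V:
                    continue          # invulnerable or unused address
                if target not in infected:
                    infected.add(target)       # new infection
                    active.add(target)
                elif rng.random() < p_halt:
                    active.discard(host)       # halt on a redundant hit
                    break
    return len(infected) / V          # final infection fraction

# e.g., simulate_redundant_hit(2**16, 2**10, 1, 10, 1.0)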

How well do these two techniques work at solving the self-stopping problem? To

find out, we simulated worms using each technique in an environment where A = 2^32, and


procedure R(host)
    host.halted ← N
    while host.halted = N do          ▷ Iterate while loop once per unit of time
        for i ← 1 to γ do
            if host.halted = N then
                P(host)
            end if
        end for
    end while
end procedure

procedure P(host)
    target ← a host chosen by a random scan
    if target is vulnerable & uninfected then
        host infects target
        target runs R(target)
    else if target is vulnerable & infected then
        host.halted ← S-H(host)
    end if
end procedure

function S-H(host)
    r ← a uniform random value in [0, 1]
    if r < phalt then
        return Y
    else
        return N
    end if
end function

Figure 4.7: Pseudocode for Ma et al.’s redundant hit algorithm


γ = 4000. Again, V was varied among 131072, 750000, and 1500000. To ascertain the

effect that the halting probability, phalt, has on worm performance, we also varied that value

among 0.10, 0.33, 0.50, and 1.00. For each (V, phalt) pair, we conducted five trials.

We began by testing the blind strategy by simulating an epidemic that begins with I = 1

initial seed, under the conditions described above. Under these conditions, many of the

epidemics never progressed beyond the initial single seed infection. For example, when

V = 131072, only three of the twenty trials conducted progressed beyond the seed infec-

tion — of those trials, the one that infected the most hosts (when phalt = 0.10) infected

seven. The problem, especially when V is small, is that the probability of a random scan

hitting a vulnerable host is too small. The seed infection halts before the worm has a chance

to spread effectively.

To improve the probability of the worm spreading effectively, we repeated the previous

experiment, this time increasing the number of initial seeds to I = 1000. By increasing the

number of initial seeds, we increase the number of hosts performing random scans during

the initial stage of the epidemic when few hosts are infected. The results of these trials are

summarized in Table 4.3. Increasing the number of initial seeds did allow the epidemic

to proceed to higher infection percentages. However, the blind strategy for self-stopping

produces highly inconsistent results for varying values of V . For example, when phalt = 0.33,

the final infection percentage was only 1.01% in the environment where V = 131072, but

it was 92.81% in the environment where V = 1500000. The difference among the three

final infection percentages for the three different values of V is unquestionably significant

(one-way ANOVA test: F = 729585.6389, df = 2, 12, P < 0.0001).

This result is not entirely surprising, however. Let us return to our model of the perfect

self-stopping worm from Chapter 3, and examine a worm that consistently infects a given

percentage of the vulnerable population over different population sizes. Starting with one


                Vulnerable population (V)
  phalt        131072      750000     1500000
  0.10         22.61%      99.81%     100.00%
  0.33          1.01%      52.82%      92.81%
  0.50          0.87%       0.45%      50.96%
  1.00          0.76%       0.13%       0.07%

Table 4.3: Blind strategy worm infection percentages for different values of V and the random halting constant, phalt

initial seed, and performing γ = 4000 scans per unit of time, a worm that infects 50% of the

vulnerable population will require different amounts of time to do so, depending on V . For

example, when V = 131072, the worm will require 106.70 units of time to infect 50% of

the vulnerable population, compared to 16.16 units of time when V = 1500000. However,

the blind halt strategy does not account for the varying amount of time a worm will require

to infect a fixed percentage of the vulnerable population, based on V . Instead, each worm

instance halts at the end of each unit of time with a single, fixed probability. In short, the

blind strategy is not suitable as a self-stopping mechanism when V is unknown.

The final technique from the literature that we will test is the redundant hit strategy for

self-stopping. To determine how effective this mechanism is at making a worm self-stop,

we simulated worms using this technique in the same environments as we used to test the

blind worm. For the redundant hit worm, we set I = 1. The results of this experiment are

shown in Table 4.4.

By reading across the rows of Table 4.4, we can see that the redundant hit strategy worm

infected a consistent percentage of the vulnerable population when using any fixed phalt

value, regardless of the size of the vulnerable population, V . For example, when phalt = 0.50,

there was no statistically significant difference between the final infection percentages for

any of the three values of V (one-way ANOVA test: F = 0.1588, df = 2, 12, P = 0.8549).

Even though there was a statistically significant difference when phalt = 1.00 (one-way


                Vulnerable population (V)
  phalt        131072      750000     1500000
  0.10        100.00%     100.00%     100.00%
  0.33         98.07%      98.08%      98.08%
  0.50         94.04%      94.05%      94.05%
  1.00         79.79%      79.64%      79.69%

Table 4.4: Redundant hit strategy worm infection percentages for different values of V and the random halting constant, phalt

ANOVA test: F = 6.2371, df = 2, 12, P = 0.0139), the difference between 79.64% and

79.79% would not be of any practical significance to an adversary. That is, the redundant

hit strategy is an effective solution to the self-stopping problem — a consistent percentage

of the vulnerable population can be infected by this worm, regardless of the size of the

vulnerable population.

That being said, the redundant hit strategy suffers from one significant limitation: it is not

possible to infect less than approximately 80% of the vulnerable population. After all, phalt

cannot be set any higher than phalt = 1.00. Could increasing the number of initial seeds,

for example, reduce the final infection percentage by increasing the probability of early

redundant hits? No. We reran the five trials for V = 131072 and phalt = 1.00, this time using

either I = 10, I = 100, or I = 1000 initial seeds. In combination with the initial I = 1 tests,

the final infection percentage ranged from an average of 79.66% when I = 10, to 79.81%

when I = 1000. There was no statistically significant difference among the final infection

percentages for the four different initial seed values (one-way ANOVA test: F = 2.6035,

df = 3, 16, P = 0.0878). So, while the redundant hit strategy can successfully solve the self-

stopping problem, it is highly limited — it cannot be made to target less than approximately

80% of the vulnerable population.


4.5 Summary

This chapter described several potential designs by Ma et al. for a random-scanning, self-

stopping worm. The first class of strategies was based on worm instances estimating the

size of the vulnerable population, V , and then modeling the progress of the epidemic based

on V . However, it was difficult for worms to effectively estimate the size of the vulnerable

population, as demonstrated by the large differences in the final infection percentage of a

Sum-Count-X worm that even minor changes to the estimation algorithm caused. Further-

more, inaccuracies in the epidemic progression model can lead to highly inaccurate halting

decisions by worm instances, even when they have perfect knowledge of V . For these rea-

sons, it is unlikely that an adversary would use a model-based self-stopping worm.

Out of the second class of self-stopping strategies, only one — the redundant hit strat-

egy — produced consistent results. However, this worm was limited to targeting only large

portions of the vulnerable population: between approximately 80% and 100%. This worm

design is certainly an excellent choice for an adversary wishing to target such a large portion

of the vulnerable population, but it is not an option for an adversary looking to infect fewer

hosts and generate less traffic.

The findings of this chapter motivate the investigation of alternate methods by which

an adversary may create a self-stopping, random-scanning worm. By understanding how a

more reliable or versatile worm of this type could be built, we can more accurately predict

what form future threats will take.


Chapter 5

The Quorum Sensing Worm

Given the inconsistency or lack of versatility in previous self-stopping worm designs, a

worm author may wish to deploy a better self-stopping mechanism instead. This chapter de-

scribes one novel mechanism: a self-stopping mechanism based on the biological technique

of quorum sensing [HB04].

This chapter will begin with background information on quorum sensing, both from a

biological and a computational perspective, in Section 5.1. We will then investigate how

the concept of quorum sensing could be adapted for use in a computer worm in Section 5.2.

Section 5.3 will present preliminary results on the performance of a quorum sensing worm.

This topic will be followed by a discussion on how an adversary may choose to seed a quo-

rum sensing worm in Section 5.4, and how an adversary may choose to reduce extraneous

traffic in Section 5.5. We then investigate the possibility that each infected host in a quorum

sensing worm epidemic need not behave identically in Section 5.6. Finally, we conclude

this chapter by discussing defenses against quorum sensing worms in Section 5.7, before

summarizing the contributions of this chapter in Section 5.8.

5.1 Background

Before discussing how computer worms could use techniques based on quorum sens-

ing to self-stop, we first describe quorum sensing. Reviewed in detail by Henke and

Bassler [HB04], quorum sensing is a process by which large communities of bacteria can

regulate gene expression through the use of signaling molecules, known as autoinducers.

When a small community of bacteria emit autoinducers, there is little or no effect on gene


expression in that community, as the concentration of signaling molecules is low. However,

in large bacterial communities, the autoinducer concentration grows beyond the necessary

threshold to induce changes in gene expression. Simply, quorum sensing can be thought of

as a mechanism by which bacterial behavior is changed when the population becomes dense

enough.

One classic example of quorum sensing is the symbiotic relationship between the Vib-

rio fischeri bacteria and the bobtail squid [Mil06]. V. fischeri, which inhabit the light or-

gan of the bobtail squid, release an autoinducer known as 3-oxo-hexanoyl homoserine lac-

tone (3-oxo-C6-HSL). When the population of V. fischeri is low, the autoinducer has no

effect. However, as the population of V. fischeri grows, the concentration of 3-oxo-C6-HSL

increases as well, eventually triggering transcription of the bacterial luminescence gene,

thereby producing enzymes that cause the squid’s light organ to glow. In the morning, the

squid vents its light organ, reducing the population of V. fischeri back to minimal levels, and

causing the light organ to dim until night (when the bacterial population has again grown

large).

Autoinducer molecules diffuse through the local environment as bacteria release them,

affecting nearby bacteria. Similar mechanisms exist in the computing world, as well. For

example, particle swarm optimization [KE95] utilizes multiple agents to optimize a func-

tion, each of which can affect other agents. Similarly, in ant colony optimization [DMC96],

the search agents place signals in the search space, which serve as positive reinforcement

to other agents. Quorum sensing itself has been used as inspiration for different advances

in the computing world, ranging from a computational model that is equivalent to a Turing

machine [BGK07], to a method for collecting votes in mobile ad-hoc networks using an

agent that walks through the network [PDMR06].


However, it should be noted that the previous examples use substantially more complex

techniques than bacteria simply releasing signaling molecules into their local environment;

and, these techniques are used to solve more complex problems than simply detecting the

density of the various agents involved. One key contribution of this chapter, therefore,

will be to show how simpler techniques derived from the original biological purpose of

quorum sensing (namely, providing a density-dependent mechanism for coordinating gene

expression; or, in computing terms, toggling the behavior of individual members of some

population when that population becomes dense enough) can be used to solve a new, simpler

problem: designing a self-stopping worm.

5.2 Quorum Sensing Worm Design

To achieve self-stopping functionality, we assume that each infected host remembers the

M (or fewer) previous vulnerable hosts that it has attempted to infect, for some global con-

stant M (recall that this worm is a random-scanning worm and that there can be redundant

infection attempts). In the context of this computer worm, the memory of recently-scanned

hosts serves to provide some notion of “proximity” of hosts, since autoinducers in the bac-

terial world affect other bacteria that are near to the one that released them.

Given this memory, each time a host performs a redundant infection attempt, it sends

an “autoinducer” message to itself, as well as to all of the other hosts in its memory. This

message increases the autoinducer level at each recipient by some global constant, σA. This

behavior mimics that of some types of bacterial communities, in which individual bacteria

actually produce more autoinducers as the concentration of bacteria becomes larger (rather

than more autoinducers being produced simply because there are more bacteria producing

a consistent number of autoinducers). One example of such a bacterial system is Pseu-

domonas aeruginosa: at high concentrations, the Las autoinducer (3-oxo-C12-HSL) acti-


vates expression of the lasI gene in P. aeruginosa. This gene produces an enzyme which is

responsible for greater synthesis of the autoinducer. This positive feedback loop, reviewed

in detail by de Kievit and Iglewski [dKI00], causes individual bacteria to produce autoin-

ducers at a greater rate at high cell densities.

Returning to the worm design, we can also include a different type of message. Specif-

ically, each time a host infects a new target, the host releases an “enzyme” message, which

“breaks down” previously received autoinducer messages (or, if there are none present at

the target, then the enzyme message waits for an autoinducer message to arrive to break

it down). That is, the autoinducer level at each recipient is reduced by some global

constant, σE. This behavior is based, in part, on that of Agrobacterium tumefaciens. As

discovered by Zhang et al. [ZWZ02], A. tumefaciens produces an enzyme that breaks down

its own autoinducer, 3-oxo-octanoyl homoserine lactone (3-oxo-C8-HSL), during phases of

the bacterial community’s growth curve where the quorum sensing system is not needed.

If, at any point, the autoinducer level at a given host passes some globally-constant

threshold, T , that host will permanently cease all propagation attempts. This behavior is

formalized with pseudocode in Figures 5.1 and 5.2. The initial infected host in an epidemic,

h, runs Q(h, 0). An illustrative example of the quorum sensing algorithm is also pre-

sented in Figure 5.3.

5.3 Results

To assess the performance of the quorum sensing worm, different configurations of the

worm were tested in a simulated environment. A configuration of a quorum sensing worm

is defined to be the triple of values σA, σE, and T used in an epidemic, representing the

“strength” of the autoinducers and enzymes released by the spreading worm instances, and

their threshold for halting. Seven different configurations, discovered through experimentation


procedure Q(host, initial)
    host.initial ← initial
    host.auto ← initial
    host.memory ← ∅
    host.shouldHalt ← N
    C(host)
    while host.shouldHalt = N do    ▷ Iterate while loop once per unit of time
        P(host)
    end while
end procedure

procedure P(host)
    target ← a host chosen by a random scan
    if target is vulnerable & uninfected then
        host infects target
        target runs Q(target, host.auto)
        M(host, target)
        R(host, −σE)
    else if target is vulnerable & infected then
        M(host, target)
        R(host, σA)
    end if
end procedure

Figure 5.1: Pseudocode for the quorum sensing worm (part 1 of 2)


procedure M(host, friend)
    host adds friend to host.memory
    if |host.memory| > M then
        Remove oldest entry from host.memory
    end if
end procedure

procedure R(host, amount)
    if amount ≠ 0 then
        host.auto ← host.auto + amount
        C(host)
        for friend ∈ host.memory do
            friend.auto ← friend.auto + amount
            C(friend)
        end for
    end if
end procedure

procedure C(host)
    if host.auto ≥ T then
        host.shouldHalt ← Y
    end if
end procedure

Figure 5.2: Pseudocode for the quorum sensing worm (part 2 of 2)
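
To make the pseudocode above concrete, the following is a minimal Python sketch of the same procedures, written against an abstract simulator (a set of vulnerable addresses and Host objects standing in for infected machines). The constants correspond to configuration 7 of Table 5.1 with M = 3; the driver loop at the end is illustrative only, since directly looping over a 2^32 address space is far too slow for real experiments.

    import random

    # Configuration 7 from Table 5.1, with the memory size M = 3 used in Section 5.3.
    SIGMA_A, SIGMA_E, T, M = 1, 1, 10, 3
    ADDRESS_SPACE = 2**32

    class Host:
        """One infected worm instance (the per-host state set up by procedure Q)."""
        def __init__(self, initial=0):
            self.initial = initial      # autoinducer level inherited from the parent
            self.auto = initial         # current autoinducer level
            self.memory = []            # up to M recently scanned vulnerable hosts
            self.should_halt = False
            self.check()

        def check(self):                # procedure C
            if self.auto >= T:
                self.should_halt = True

        def remember(self, friend):     # procedure M
            self.memory.append(friend)
            if len(self.memory) > M:
                self.memory.pop(0)

        def release(self, amount):      # procedure R
            if amount != 0:
                for h in [self] + self.memory:
                    h.auto += amount
                    h.check()

    def propagate(host, vulnerable, infected):   # procedure P: one scan per time step
        target = random.randrange(ADDRESS_SPACE)
        if target in vulnerable and target not in infected:
            infected[target] = Host(initial=host.auto)
            host.remember(infected[target])
            host.release(-SIGMA_E)      # "enzyme" message on a successful infection
        elif target in infected:
            host.remember(infected[target])
            host.release(SIGMA_A)       # "autoinducer" message on a redundant attempt

    # Illustrative driver: one seed, then every non-halted host scans once per step.
    # (A realistic simulation would need a far more efficient, event-driven design.)
    vulnerable = set(random.sample(range(ADDRESS_SPACE), 131072))
    infected = {next(iter(vulnerable)): Host(initial=0)}
    while any(not h.should_halt for h in infected.values()):
        for h in list(infected.values()):
            if not h.should_halt:
                propagate(h, vulnerable, infected)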


Figure 5.3: An example of six hosts running a quorum sensing worm over time. The figure's six panels are captioned as follows:

1. A cross-section of vulnerable hosts; infected hosts (red) each have local autoinducer levels. Only the memory of the middle host is displayed.

2. A random scan locates an uninfected host. The worm propagates, and the newly-infected host initializes its autoinducer level equal to its parent's.

3. The middle host updates its memory and releases an enzyme message. Its autoinducer level, and that of all recipients, decreases by one.

4. The middle host performs another random scan. This time, it locates a host that is already infected, so the worm does not propagate.

5. The host updates its memory and releases an autoinducer message. Its autoinducer level, and that of all recipients, increases by two.

6. Two hosts' autoinducer levels have passed the threshold. Those two hosts permanently cease all propagation attempts.


                  σA    σE     T
Configuration 1   10     0    10
Configuration 2    5     0    10
Configuration 3    4     0    10
Configuration 4    2     0    10
Configuration 5    1     0    10
Configuration 6    2     1    10
Configuration 7    1     1    10

Table 5.1: Quorum sensing worm configurations

to produce a wide range of final infection percentages, were tested. These configura-

tions are listed in Table 5.1.

As with the Sum-Count-X worm in the previous chapter, the quorum sensing worm was

simulated in an address space of size 2^32. The epidemic started with a single infected host;

and, in each time step of the simulation, each infected host performed a single random scan

(that is, with reference to Chapter 3, γ = 1). For this experiment, we arbitrarily fixed the

memory size of each worm instance at M = 3. The number of vulnerable hosts in the address

space was again varied among 131072, 750000, and 1500000.

Unlike Sum-Count-X, the performance of the quorum sensing worm was consistent

across the different values of V . Table 5.2 lists the results of the simulations. As before,

each table entry lists the percentage of the vulnerable population that the worm infected

before halting, averaged over five trials. The consistency of the quorum sensing worm over

different vulnerable populations can be seen by looking across each row. For example, a

quorum sensing worm using configuration 3 reliably infected approximately 80% of the

vulnerable population before halting; there was no statistical difference between the per-

centage of the vulnerable population infected for the three different vulnerable population

sizes (one-way ANOVA test: F = 0.1272, df = 2, 12, P = 0.8817). Interestingly, there were

statistically-significant differences in some cases, e.g., the configuration 4 worm (one-way


Configuration    V = 131072    V = 750000    V = 1500000
      1              61.78%        61.64%         61.70%
      2              72.08%        72.21%         72.19%
      3              80.40%        80.43%         80.41%
      4              90.39%        90.26%         90.32%
      5              98.19%        98.17%         98.19%
      6              99.94%        99.97%         99.98%
      7             100.00%       100.00%        100.00%

Table 5.2: Quorum sensing worm infection percentages for different values of V under different configurations

ANOVA test: F = 5.5267, df = 2, 12, P = 0.0199). However, while the difference may be

statistically significant, it is not a meaningful difference to an adversary.
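
As an illustration of the statistical test being reported here, the sketch below runs a one-way ANOVA over per-trial infection percentages using SciPy; the three lists are hypothetical stand-ins for the five trials at each population size, not the actual simulation data.

    from scipy.stats import f_oneway

    # Hypothetical per-trial infection percentages, five trials per population size.
    trials_v131072  = [80.42, 80.37, 80.45, 80.39, 80.38]
    trials_v750000  = [80.41, 80.44, 80.46, 80.40, 80.43]
    trials_v1500000 = [80.40, 80.43, 80.39, 80.42, 80.41]

    # One-way ANOVA across the three groups (df = 2, 12 for three groups of five).
    f_stat, p_value = f_oneway(trials_v131072, trials_v750000, trials_v1500000)
    print(f"F = {f_stat:.4f}, P = {p_value:.4f}")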

The next question to answer is how the quorum sensing worm compares to the perfect

self-stopping random-scanning worm. For clarity, we restrict our discussion to the results

achieved with configuration 7, when V = 131072. The comparison is summarized in Ta-

ble 5.3. Averaged over five trials, the quorum sensing worm used approximately 5.7 · 10^10

random scans — only slightly more than the 5.3 · 10^10 random scans that would be per-

formed by the perfect worm. Even with the average 4.8 · 10^6 release transmissions (i.e.,

autoinducer and enzyme messages sent), the quorum sensing worm performed only 7.3%

more communications than the perfect worm. Note though that the difference in the num-

ber of scans is statistically significant (two-tailed one-sample t-test: t = 16.7118, df = 4,

P < 0.0001).
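
The perfect worm's expected scan count can be sanity-checked with a short calculation, assuming it follows the usual coupon-collector-style argument for uniform random scanning (the rigorous treatment is in Appendix A; this is only a plausibility check):

    # Expected scans for a perfect self-stopping random-scanning worm to infect all
    # V vulnerable hosts in a 2^32 address space, assuming that when j uninfected
    # hosts remain, the next infection takes 2^32 / j scans in expectation
    # (starting from a single seed, so j runs from V - 1 down to 1).
    N, V = 2**32, 131072
    expected_scans = N * sum(1.0 / j for j in range(1, V))
    print(f"{expected_scans:.4g}")   # roughly 5.3e10, consistent with Table 5.3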

Unfortunately for the adversary, the quorum sensing worm took over 2.9 times as long

to come to a halt as the perfect worm would. The reason for this delay is apparent from

Figure 5.4. Namely, the infected hosts do not halt simultaneously; rather, some hosts halt

earlier than others, drawing out the epidemic duration. Despite this deficiency, quorum sens-

ing worms should still be seen as a serious threat, and as a useful tool to an adversary, since


Quorum sensing worm:
    Hosts infected              131072.0
    Time to halt               2350550.4
    Scans                  56957721634.4
    Release transmissions      4821728.6
    Total communication    56962543363.0

Perfect worm:
    Hosts infected              131072.0
    Time to halt                810077.3
    Scans                  53088862028.0
    Release transmissions            N/A
    Total communication    53088862028.0

Table 5.3: Comparison of the quorum sensing worm under configuration 7 and the perfect self-stopping random-scanning worm, when V = 131072

the simulations demonstrated that a quorum sensing worm could infect almost all vulnerable

hosts with a nearly-optimal number of communications. Additionally, if an optimal number

of communications are spread over a larger period of time, the traffic generated by the worm

may be less conspicuous to defenders.

5.4 Worm Seeding

When the performance of the quorum sensing worm was analyzed in the previous sec-

tion, one assumption made was that the worm epidemic began with only a single seed in-

fection. From the adversary’s perspective, this seed infection could have been planted, e.g.,

by having the adversary physically install the worm on the seed machine, or by having the

adversary e-mail the malicious code to a few victims, one of whom became infected.

The problem with this approach, from the point of view of the adversary, is that the

worm may not be able to spread to epidemic proportions if only a single seed infection

is used. What if outbound traffic from the infected machine is blocked by a firewall, for

example? For this reason, the adversary may wish to use many seed infections to begin the

epidemic, to increase the probability that the worm will successfully take hold in the wild.


Figure 5.4: The number of infected and halted hosts over time for a configuration 7 quorum sensing worm, when V = 131072. (Plot of hosts versus time in simulation steps, with curves for "Infected hosts" and "Infected halted hosts".)

The adversary may use mass-mailing techniques, for example, to distribute the viral code to

many computers, all of which could function as seed infections.

The question then, for the author of a quorum sensing worm, is how well such a worm

would perform when using multiple seed infections (i.e., with reference to the formulas in

Chapter 3, I > 1). To test whether multiple seed infections affect the accuracy of a quorum

sensing worm, we first reran the simulation of the quorum sensing worm using configu-

ration 3 over the same three vulnerable population sizes used in previous sections. This

time, however, the number of initial seeds was varied between 10 and the very large value

of 10000. Configuration 3 was chosen for this experiment as, with an infection average of

approximately 80% in previous tests, it will be easy to see if the final infection percentage

with multiple seeds is larger or smaller than the original outcome.
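
In terms of the Python driver sketched after Figure 5.2, multiple seeding changes only the initialization; an illustrative variant is:

    import random

    # Multi-seed variant of the earlier driver: start the epidemic from I randomly
    # chosen vulnerable hosts instead of a single seed, each with an initial level of 0.
    I = 100
    seeds = random.sample(sorted(vulnerable), I)
    infected = {addr: Host(initial=0) for addr in seeds}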

The results of this experiment are summarized in Table 5.4, where each table entry repre-

sents the percentage of the vulnerable population that the worm infected before halting, averaged


Initial Seeds (I)    V = 131072    V = 750000    V = 1500000
        1                80.40%        80.43%         80.41%
       10                80.38%        80.30%         80.41%
      100                80.46%        80.35%         80.40%
     1000                80.65%        80.43%         80.44%
    10000                81.89%        80.63%         80.52%

Table 5.4: The effect of multiple seed infections on the final infection percentage of a configuration 3 quorum sensing worm

over five trials. By reading down the columns, it is possible to see how an increasing

number of initial seeds affected the final infection percentage of the quorum sensing worm,

for each of the three vulnerable population sizes. The top row, showing the performance of

the worm with a single initial seed, is taken directly from Table 5.2 (which summarized the

initial results on the quorum sensing worm) for comparison purposes. The largest effect of

multiple seed infections was seen when V = 131072, and 10000 seed infections were used.

Even though this number of seed infections represented over 9% of the final number of hosts

infected in the I = 1 case, and over 7.5% of the total vulnerable population, the final infec-

tion percentage was changed by less than 1.5% when compared to the I = 1 case (increasing

from 80.40% to 81.89%). When a more reasonable number of seed infections is considered,

the largest change came when V = 131072 and 1000 seed infections were used; there, the

final infection percentage increased by 0.25% from the I = 1 case. While small, this differ-

ence is statistically significant (two-tailed unpaired t-test: t = 2.8630, df = 7, P = 0.0242).

When I = 10 or I = 100, there is not even a statistically-significant difference in the final

infection percentage from the I = 1 case (one-way ANOVA test: F = 0.4689, df = 2, 12,

P = 0.6367).

Recall that configuration 3 quorum sensing worms use only autoinducer messages; no

enzyme messages are sent, since σE = 0. In order to confirm that quorum sensing worms


Initial Seeds (I)    V = 131072    V = 750000    V = 1500000
        1                99.94%        99.97%         99.98%
       10                99.87%        99.94%         99.95%
      100                99.66%        99.84%         99.88%
     1000                99.08%        99.57%         99.69%
    10000                97.66%        98.85%         99.14%

Table 5.5: The effect of multiple seed infections on the final infection percentage of a configuration 6 quorum sensing worm

perform consistently regardless of the number of initial seeds, even when enzyme messages

are used, the previous experiment was repeated with a configuration 6 worm. The results are

similar, and are summarized in Table 5.5. One counter-intuitive result from this experiment

is that increasing the number of initial seeds actually caused a decrease in the final infection

percentage. This behavior is probably the result of fewer enzyme messages being produced in

the early stages of the epidemic, resulting from the higher number of initial infections. What

remains similar between the configuration 3 and configuration 6 experiments is that signifi-

cant effects on the final infection percentage only occurred in the I = 10000 case. Here the

final infection percentage decreased over 2%, from 99.94% to 97.66%, when V = 131072

and I was increased from 1 to 10000. In the cases where I ≤ 1000, all reductions to the

final infection percentage were under 1%. These results demonstrate that, regardless of the

configuration used by an adversary, the adversary is free to seed a quorum sensing worm

epidemic with either a large or small number of initially-infected hosts.

5.5 Reducing Extraneous Traffic

Having determined that an adversary may begin a quorum sensing worm epidemic using

any number of initial seed infections, we now turn to another aspect of the quorum sensing


worm’s behavior. Recall from Table 5.3 that the configuration 7 worm performed approx-

imately 7.3% more communication than the perfect random-scanning self-stopping worm

would to infect 100% of the vulnerable population. While this level of extraneous communi-

cation may not be of large concern to an adversary, it is important to understand if there are

any design changes an adversary may make to a quorum sensing worm to avoid it. We will

be looking only at the configuration 7 quorum sensing worm, since there are no expected

extraneous random scans produced by random-scanning worms that infect less than 100%

of the vulnerable population (a proof of this statement is provided in Appendix A).

We will investigate two different methods by which an adversary may reduce extraneous

random scans. In Section 5.5.1, we will investigate how the memory size, M, affects the

amount of communication. Section 5.5.2 will then look at how feedback loops could be

used to improve the efficiency of the quorum sensing system.

5.5.1 Memory Size

Until now, we have fixed the memory size, M, of the quorum sensing worm at M = 3.

Recall from Table 5.3, though, that the number of release transmissions performed by the

quorum sensing worm was approximately four orders of magnitude less than the number of

random scans. Is it possible, then, that the introduction of additional release transmissions

could make the quorum sensing system more efficient? That is, if M were increased (thereby

increasing the number of release transmissions), is it possible that individual hosts would

halt sooner after 100% of the vulnerable population is infected, thereby reducing extraneous

traffic?

While it may seem counter-intuitive, it is nonetheless worth testing this theory, as an

adversary may take this zero-effort approach to reducing extraneous traffic if it works. To

test whether memory size has any effect on extraneous traffic, we reran the simulation of

the quorum sensing worm using configuration 7 in an environment with V = 131072 (for


Memory (M)    Average Hosts Infected    Average Total Communication    Perfect Worm Expected Scans    Percent Difference
     0                      131072.0                  91718012719.2                 53088862028.04                72.76%
     1                      131072.0                  76387025238.0                 53088862028.04                43.89%
     2                      131071.8                  64871599638.2                 52229868569.05                24.20%
     3                      131072.0                  56962543363.0                 53088862028.04                 7.30%
     4                      131071.6                  51683513246.4                 51370875110.07                 0.61%
     5                      131070.2                  47974821400.0                 47075907815.04                 1.91%
    10                      131065.0                  42015180442.2                 41952625398.86                 0.15%

Table 5.6: The effect of differing memory sizes on the performance of a configuration 7 quorum sensing worm, relative to the perfect self-stopping worm

clarity of exposition, we fixed V , since we have already shown the quorum sensing worm to

perform consistently across various sizes of vulnerable population). This time, however, the

memory size, M, was varied between 0 and 5, and also set as high as 10.

What effect, if any, did this modification have on the amount of extraneous traffic pro-

duced? Table 5.6 displays the results of this experiment, with the row for M = 3 taken from

the experiment used to construct Table 5.2, where the initial results on the quorum sensing

worm were listed, for comparison purposes. For each different memory size, the table lists

the average number of hosts infected over the five trials, as well as the average communi-

cation generated by the epidemic (this is the sum of the number of scans and the number

of release transmissions; but, since the number of release transmissions is several orders of

magnitude lower than the number of random scans, this column can essentially be read as

the average number of random scans performed). Also listed in the table is the expected

number of random scans that the perfect random-scanning worm would perform in order to

infect the same number of hosts as infected by the quorum sensing worm in that set of trials,

as well as the percent difference between the quorum sensing worm’s communication total

and the perfect worm’s expected total.


In the M = 4, M = 5 and M = 10 cases, the quorum sensing worm infected fewer than

V = 131072 hosts, on average. It follows from Theorem A.6 in Appendix A that the number

of scans in those cases should be very near to the expected number of scans produced by the

perfect self-stopping worm. As Table 5.6 shows, this is indeed the case. For example, in the

M = 4 case, there was no statistical difference between the total communication generated

by the quorum sensing worm and the number of scans that would be produced by the perfect

self-stopping worm (two-tailed one-sample t-test: t = 2.0103, df = 4, P = 0.1148). The

same is true in the M = 5 and M = 10 cases.

What happened with respect to Theorem A.6 in the M = 2 case, though? On average,

less than 100% of the vulnerable population was infected over the five trials, yet

the worm produced a significantly larger amount of communication than the perfect self-

stopping worm would. Does this result contradict Theorem A.6? No. There was simply one

trial in which only 131071/131072 of the vulnerable hosts were infected, and by random

chance, a large number of scans were performed.

In general, though, the quorum sensing worm infected V = 131072 hosts when M ≤ 3,

so the potential for significant extraneous scans exists. However, increasing the memory

size, M, helped mitigate this problem. Increasing M from 0 to 3 reduced the percentage of

extraneous traffic by almost an order of magnitude, from 72.76% to 7.30%. It follows from

these results that an adversary wishing to infect approximately 100% of all vulnerable hosts

would choose a large memory size in a quorum sensing worm, thereby either infecting all

hosts with minimal extraneous traffic (as in the M = 3 case), or infecting just below 100%

of vulnerable hosts with no expected extraneous traffic (as in one of the M > 3 cases).

While not related to the topic of reducing extraneous scans, it is also important to de-

termine what effect, if any, memory size has on the propagation of a quorum sensing worm

using a configuration that infects less than 100% of the vulnerable population. To find out


Memory (M)    V = 131072    V = 750000    V = 1500000
     0            96.02%        96.03%         96.03%
     1            88.17%        88.12%         88.11%
     3            80.40%        80.43%         80.41%
     5            78.51%        78.49%         78.43%
    10            77.82%        77.87%         77.90%

Table 5.7: The effect of differing memory sizes on the final infection percentage of a configuration 3 quorum sensing worm

what this effect is, we reran the previous experiment, but this time we used a configura-

tion 3 quorum sensing worm. The results of the experiment are summarized in Table 5.7.

When the quorum sensing worm’s memory is varied between 1 and 10, the final infection

percentage does change, though not largely. The range of the final infection percentages for

those M values was approximately 10%, with around 88% infected when M = 1 and around

78% infected when M = 10 (compared to the original value of around 80% infected when

M = 3). Because of this variability, the adversary’s choice of M should also be considered

part of the configuration of a quorum sensing worm, along with the choices for σA, σE,

and T . It is important to note, though, that the infection percentage of the quorum sensing

worm was still consistent across multiple vulnerable population sizes for a fixed value of M,

regardless of what that value of M was. This result can be seen by reading across any row in

Table 5.7; for example, the configuration 3 quorum sensing worm with M = 5 consistently

infected approximately 78.5% of the vulnerable population.

One final important result followed from these experiments on memory size, as well.

Even with M = 0 — that is, only the host sending an autoinducer or enzyme message would

receive that message — the quorum sensing worm still halted. This result is important

because, until now, we have assumed that any host with an open infection vector (i.e., any of

the V vulnerable hosts), when infected, will also be able to send and receive autoinducer and


enzyme messages. The communication channel through which these messages are sent may

or may not be the same as the one used for the initial infection, in practice. If the release

transmission communication channel is distinct from the infection vector communication

channel, then firewalls that block the channel used for release transmissions could have a

similar effect on the quorum sensing worm as reducing M would (in the most extreme case,

the effect would be equivalent to setting M = 0). What we now know is that such firewalls

would not prevent the quorum sensing worm from working properly. They may, however,

harm the worm’s performance by increasing the total amount of network traffic generated;

and they may increase the number of hosts infected, depending on the configuration of the

worm.

5.5.2 Autoinducer Feedback Loop

Continuing our investigation of how to reduce extra traffic in the case of a quorum sens-

ing worm that infects 100% of the vulnerable population, we now turn our attention to the

concept of an autoinducer feedback loop. Recall from our discussion of the quorum sensing

worm design in Section 5.2 that one goal of this design was to have each individual infected

host release more autoinducers as the population density of infected hosts increased. This

concept was based on the positive feedback loop of P. aeruginosa, in which the Las autoin-

ducer activates the lasI gene, resulting in greater production of the autoinducer [dKI00].

In the original quorum sensing worm design, this feedback system was realized by having

infected hosts produce autoinducers on each redundant infection attempt — a higher density

of infected hosts led to greater autoinducer production.

It should be noted, though, that the original quorum sensing worm design did not truly

capture the subtlety of the biological feedback loop. It was not the case that a high autoin-

ducer level at a given host led to greater autoinducer production by that host. Instead, autoin-

ducer production levels were simply controlled by the results of a host’s random scans —


independent of the host’s autoinducer level. Is it possible to realize more completely the

feedback loop of P. aeruginosa in the quorum sensing worm? And, if so, how would such a

modification help reduce the amount of extraneous traffic?

Recall what happens when a given host, h, releases an autoinducer message. In the

original quorum sensing design, any recipient of this message would increase their local

autoinducer level by σA. What if, though, h’s autoinducer level affected the strength of the

autoinducer message? Consider a design in which, if h’s autoinducer level is greater than or

equal to some limit, L, then h will send a strong autoinducer message, rather than a regular

autoinducer message. Recipients of this strong autoinducer message will increase their local

autoinducer level by F · σA, instead of the usual σA, where F > 1 is a globally-constant

feedback multiplier. This design more accurately reflects the behavior of P. aeruginosa, in

which increasing autoinducer levels cause individual bacteria to produce more autoinducers.

Note that, to implement such a design in a quorum sensing worm, infected hosts would

now be required to exchange three types of messages: autoinducers, enzymes, and strong

autoinducers. Alternately, hosts could send F autoinducer messages to represent a strong

autoinducer, though such a design would be less efficient in terms of traffic generated. Pseu-

docode for the new version of the P function, originally detailed in Figure 5.2, is

shown in Figure 5.5.

To test how well this feedback system works at eliminating extraneous traffic, we limit

our tests to configuration 7 quorum sensing worms (which can infect 100% of the vulnerable

population and hence are liable to produce extraneous traffic) with M = 0. This value of

M was chosen for two reasons. First, these quorum sensing worm instances produced a

significant amount of extraneous traffic after infecting 100% of the vulnerable population

(a 72.76% increase in traffic over the perfect self-stopping worm, as seen by Table 5.6). As

such, it will be easy to gauge the effectiveness of the feedback system with this memory size.


procedure P(host)
    target ← a host chosen by a random scan
    if target is vulnerable & uninfected then
        host infects target
        target runs Q(target, host.auto)
        M(host, target)
        R(host, −σE)
    else if target is vulnerable & infected then
        M(host, target)
        if host.auto ≥ L then    ▷ New code uses stronger autoinducers in this case
            R(host, σA · F)
        else
            R(host, σA)
        end if
    end if
end procedure

Figure 5.5: Pseudocode for the simple feedback mechanism

Second, an adversary may wish to limit the number of release transmissions produced by a

quorum sensing worm, as it may be possible for defenders to rapidly identify the presence of

a quorum sensing worm by looking for release transmissions (this topic is discussed in detail

in Section 5.7, where we discuss defenses against quorum sensing worms). It is important,

therefore, to understand ways in which an adversary could design a quorum sensing worm

that produces no release transmissions, yet still produces minimal extraneous traffic.

We simulated the worm with this feedback mechanism in a vulnerable population size

of V = 131072. For clarity of exposition, we fixed the feedback multiplier at F = 5. The

feedback limit, L, was varied between 1 and 5 — lower values for L should presumably

trigger the feedback mechanism sooner, reducing the amount of extraneous traffic. The

results are shown in Table 5.8. A feedback limit of 10 (i.e., L = T ) disables the feedback

mechanism; as such, the corresponding results from Table 5.6 are displayed in the L = 10

row, for comparison purposes.


Feedback Limit (L)    Average Hosts Infected    Average Total Communication    Perfect Worm Expected Scans    Percent Difference
         1                          131071.8                  61238557064.4                 52229868569.05                17.25%
         2                          131072.0                  67224930746.0                 53088862028.04                26.63%
         3                          131072.0                  68581087772.0                 53088862028.04                29.18%
         4                          131072.0                  74695431826.0                 53088862028.04                40.70%
         5                          131072.0                  75105241210.6                 53088862028.04                41.47%
        10                          131072.0                  91718012719.2                 53088862028.04                72.76%

Table 5.8: The effect of differing feedback limit values on the performance of a configuration 7 quorum sensing worm, relative to the perfect self-stopping worm, when M = 0 and F = 5

Table 5.8: The effect of differing feedback limit values on the performance of a configura-tion 7 quorum sensing worm, relative to the perfect self-stopping worm, when M = 0 andF = 5

As illustrated by these results, introducing a low feedback limit does improve the effi-

ciency of the configuration 7 quorum sensing worm. Could we set the feedback limit even

lower? While it would be possible to set L ≤ 0, worm instances would be more likely to in-

voke the feedback mechanism well before nearing the end of the epidemic. This possibility

may or may not pose an issue to the performance of the worm. The largest issue, however,

is that there are several “magic numbers” in use in this feedback mechanism: the feedback

limit, L, and the multiplier, F. While it is entirely possible that an adversary could use this

feedback mechanism to reduce extraneous traffic when M = 0, we should ask the question:

is there a more meaningful way to choose, e.g., the feedback limit? Possibly.

We propose that each individual infected host, h, could have its own feedback limit

value, Lh. This value would be computed based on the host’s initial autoinducer level, Ih, as

Lh = L (Ih) .

The logic here is that Lh essentially measures when a given host should initiate its feedback

mechanism, based on the epidemic conditions when it was first infected. But, how do we

construct the function L?


We begin as we assume an adversary would: in a training phase, in which we simulated

a configuration 7 quorum sensing worm with M = 0. During the simulation, we recorded

the initial autoinducer level, Ih, for each host h. Then, we recorded the autoinducer level

at each infected host when 99% of the vulnerable population had been infected, Nh. That

is, each simulation provided a large number of (Ih,Nh) pairs. Because we assume that the

adversary does not know V in advance of releasing a worm (i.e., during this training phase),

we ran this simulation over V = 131072, V = 750000, and V = 1500000. For each value

of V , we performed twenty trials of this simulation, and we saved 100000 (Ih,Nh) pairs at

random from each trial (as opposed to all available pairs, which would yield more pairs from

the trials with the larger V values).

The results of this training phase are displayed in Figure 5.6. For each initial autoinducer

value recorded, Ih, the graph shows the mean value for Nh. In the cases where there was more

than one host recorded with a given Ih value, the graph also displays one standard deviation

in either direction from the mean.

Notice that the training data has a quadratic shape to it. Indeed, using least squares

analysis, we can approximate the trend in the training data with the function

L(x) = −0.00353791x² + 0.87813831x + 1.80014823 .

Figure 5.7 illustrates the training data alongside this quadratic function.
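
As an illustration of how such a curve can be produced, the sketch below fits a quadratic to a handful of (Ih, Nh) pairs with NumPy's least-squares polynomial fit; the pairs are hypothetical stand-ins, so the printed coefficients will not match the function above.

    import numpy as np

    # Hypothetical (initial level I_h, level at 99% infection N_h) training pairs;
    # in practice these would be harvested from the training-phase simulations.
    initial_levels = np.array([-30.0, -20.0, -10.0, -5.0, 0.0, 5.0, 9.0])
    levels_at_99   = np.array([-27.8, -17.2,  -7.3, -2.7, 1.8, 6.1, 9.4])

    # Least-squares quadratic fit; coefficients are returned highest degree first.
    a, b, c = np.polyfit(initial_levels, levels_at_99, deg=2)
    print(f"L(x) = {a:.8f}x^2 + {b:.8f}x + {c:.8f}")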

Using this newly-constructed function, L, each host can decide when it should initiate

its feedback mechanism (i.e., when it thinks the infected percentage is approximately 99%,

based on the relation between its current and initial autoinducer levels). Pseudocode for

the newest version of the P function, using this quadratic feedback mechanism, is

shown in Figure 5.8. This method of predicting future outcomes from past data based on

a polynomial model built from a training set has been used in, e.g., game-playing artificial

intelligence research [Bur95].


Figure 5.6: The relation between initial autoinducer level and autoinducer level near the end of a quorum sensing worm epidemic for a configuration 7 worm with M = 0. (Plot of the autoinducer level when 99% of vulnerable hosts are infected versus the initial autoinducer level, showing the training data.)

Figure 5.7: A quadratic function approximating the relation between initial autoinducer level and autoinducer level near the end of a quorum sensing worm epidemic for a configuration 7 worm with M = 0. (Same axes as Figure 5.6, showing the training data and the quadratic function fit to it.)


procedure P(host)
    target ← a host chosen by a random scan
    if target is vulnerable & uninfected then
        host infects target
        target runs Q(target, host.auto)
        M(host, target)
        R(host, −σE)
    else if target is vulnerable & infected then
        M(host, target)
        if host.auto ≥ L(host.initial) then    ▷ Quadratic function replaces constant
            R(host, σA · F)
        else
            R(host, σA)
        end if
    end if
end procedure

Figure 5.8: Pseudocode for the quadratic feedback mechanism

How well does this quadratic feedback mechanism actually work, though? To find out,

we tested this newest iteration of the quorum sensing worm, running configuration 7 and

with M = 0, over the standard three population sizes. We also took this opportunity to vary

the feedback multiplier, F, to determine its effect on the performance of the worm. The

percentage of the vulnerable population infected, averaged over five trials for each (V, F)

pair, is shown in Table 5.9.

These results demonstrate that it is possible for a quorum sensing worm with M = 0

to infect just under 100% of the vulnerable population (and hence, generate no extrane-

ous traffic). For example, when F = 5, we see that slightly under 100% of the vulnerable

population was infected, regardless of the vulnerable population size. However, there are

tradeoffs for the adversary. First, a quorum sensing worm using a feedback mechanism is

more complex than one without. Most notably, it may be difficult to predict the accuracy of

a quadratic model generated in a simulator during the training phase, relative to the behavior

of a quorum sensing worm once it is in the wild.


Feedback Multiplier (F)    V = 131072    V = 750000    V = 1500000
           1                  100.00%       100.00%        100.00%
           2                  100.00%       100.00%        100.00%
           5                   99.98%        99.99%         99.99%
          10                   99.92%        99.92%         99.93%
          20                   99.78%        99.80%         99.81%
          50                   99.76%        99.75%         99.75%

Table 5.9: The effect of differing feedback multiplier values on the final infection percentage of a configuration 7 quorum sensing worm with M = 0 utilizing quadratic feedback

5.6 Social Cheating

Until now, we have made the assumption that every infected host will behave in the

same fashion. What would happen, though, if some of the infected hosts “cheated” and

did not perform all of the actions that a host normally would? These cheating hosts could,

for example, not transmit any autoinducer or enzyme messages to other hosts (essentially

setting their local memory size to M = 0). Alternately, a cheating host may never attempt to

propagate at all, right from the moment of infection.

What would the benefit to the adversary be to allow some hosts to employ this social

cheating behavior? Ultimately, the adversary wants to reduce, or potentially eliminate, sus-

picious traffic being generated by specific hosts. If the quorum sensing worm includes a spy-

ware payload, the worm may want to reduce the possibility of its presence being detected

on hosts with particularly valuable information. For example, the worm could perform a

quick scan of the infected host prior to beginning propagation; and, if valuable information

such as tax returns or banking information is discovered on the computer, that instance of

the worm may decide to act as a social cheater. Alternately, each instance of the worm may

have some fixed probability, pcheat, of becoming a social cheater. The logic here is that an

adversary may not have a complete list of what would be valuable to steal, or the adversary


may simply not wish to program the worm to detect everything that could be valuable. So,

a fixed proportion of the infected hosts will become social cheaters to reduce the likelihood

that the infection will be detected; those hosts will then join a black market botnet [FAV08]

where other miscreants in the underground economy could purchase information from the

infected computers.

The concept of social cheating among infected hosts is inspired by similar behavior

among some bacteria. Quorum sensing exists in bacterial communities because individ-

ual bacteria benefit from some common good produced by all of the bacteria in response

to a signaling molecule (e.g., secreted enzymes useful in nutrient acquisition). However,

some individual bacteria exhibit cheating behavior with regard to quorum sensing. These

cheating bacteria will either decrease their production of a signaling molecule or decrease

their production of the common good that is regulated by the signaling molecule. How-

ever, the cheaters will still benefit from the common goods produced by surrounding bacte-

ria [SMS07]. In essence, the cheaters let other bacteria do more of the work that benefits the

population, giving the cheaters an evolutionary advantage: all of the benefit for less cost.

We wish to test how similar behavior, modified to better fit the survival needs of indi-

vidual worm instances, would affect the overall performance of a quorum sensing worm

epidemic. First, we define the two different types of cheating worm. A lazy cheater is a

worm instance that does not bother to transmit any autoinducer or enzyme messages to re-

mote hosts (i.e., it has a local value of M = 0), in order to somewhat mask its presence by

reducing the control traffic it generates. A severe cheater is a worm instance that halts im-

mediately after being spawned on a vulnerable host, so that it generates no traffic that could

give away its presence. We simulated both a configuration 3 and configuration 6 quorum

sensing worm in an environment where V = 131072. However, we varied the probability

that each worm instance would be a cheater from 0 to 1 in increments of 0.05, repeating the


experiment for both types of cheating. Each epidemic was seeded with 100 seed infections;

more seeds are necessary when the probability of severe cheating is large, otherwise the

epidemic would likely never progress beyond the initial seeds. For this same reason, twenty

trials were performed for each pair of cheating type and pcheat.
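
In terms of the Host sketch given after Figure 5.2, the cheating decision amounts to one random draw at infection time; a minimal illustration (the parameter names are examples, not from the thesis) is:

    import random

    P_CHEAT = 0.20        # probability that a newly infected host becomes a cheater
    CHEAT_TYPE = "lazy"   # "lazy": keep messages to itself; "severe": halt at once

    def apply_cheating(host):
        """Called once per newly infected host in the earlier Host sketch."""
        if random.random() < P_CHEAT:
            if CHEAT_TYPE == "lazy":
                # Lazy cheater: an always-empty memory means its release
                # transmissions only ever reach itself (a local M = 0).
                host.remember = lambda friend: None
            else:
                # Severe cheater: halts immediately and never scans or transmits.
                host.should_halt = True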

The effect of the increasing number of cheaters on the final infection percentage for the

configuration 3 worm is displayed in Figure 5.9. The configuration 6 results are shown in

Figure 5.10. An increasing number of lazy cheaters increases the final infection percentage,

likely because there are infected hosts performing random scans for more hosts to infect

without producing any autoinducer messages to halt hosts. Conversely, an increasing num-

ber of severe cheaters decreases the final infection percentage, likely because many of the

infected hosts never attempt to spread, but still appear as infected hosts when other hosts

scan them. In the extreme case, where 100% of infected hosts act as severe cheaters, the

epidemic never progresses beyond the seed infections. This effect is most pronounced in the

configuration 6 results, where increasing the severe cheating probability from 0.95 to 1.00

reduced the final infection percentage from 97.91% to 0.08%.

What these results demonstrate is that an adversary could have a small number of in-

fected hosts cheat, thereby reducing their chance of detection from their scans or release

transmissions, with minimal effect on the final infection percentage. For example, increas-

ing the severe cheating probability from 0.00 to 0.20 for the configuration 6 worm only

decreased the final infection percentage from 99.66% to 99.56%. While this result is sta-

tistically significant (two-tailed unpaired t-test: t = 12.2498, df = 33, P < 0.0001), it is not

practically significant to an adversary. The tradeoff for the adversary, however, is an in-

creased epidemic duration caused by the severe cheating worms not spreading. Even though

fewer hosts were infected, the epidemic was significantly longer (two-tailed unpaired t-test:

t = 3.8872, df = 36, P = 0.0004).


Figure 5.9: The effect of lazy and severe cheaters on the final infection percentage of a configuration 3 quorum sensing worm. (Plot of the percentage of the vulnerable population infected versus the probability of an infected host cheating, with curves for lazy cheating and severe cheating.)

Figure 5.10: The effect of lazy and severe cheaters on the final infection percentage of a configuration 6 quorum sensing worm. (Same axes and curves as Figure 5.9.)


Interestingly, increasing the number of lazy cheaters increases the efficiency of the

quorum sensing worm. For example, when the lazy cheating probability was increased

from 0.00 to 0.50 for the configuration 6 worm, the final infection percentage increased

significantly from 99.66% to 99.85% (two-tailed unpaired t-test: t = 34.5213, df = 33,

P < 0.0001). However, the epidemic duration fell significantly, from 1323491.7 simulation

steps on average to 1265666.2 (two-tailed unpaired t-test: t = 2.1770, df = 37, P = 0.0359).

Results were similar for the configuration 3 worm, where the final infection percentage in-

creased significantly, yet there was no significant difference in the epidemic duration. The

most likely cause of this anomaly is fewer hosts halting early, due to reduced autoinducer

production, increasing the overall efficiency of the worm.

5.7 Defenses

We now turn our attention to defenses against quorum sensing worms. What techniques

can defenders use to proactively protect users against this potential threat?

One key observation about quorum sensing worms is that their traffic patterns may be

highly characteristic. The combination of random scans and release transmissions may be

indicative of a quorum sensing worm, allowing for rapid identification and isolation of in-

fected computers by, e.g., an intrusion-prevention system (IPS). However, if the release

transmissions are what allows for easy identification of a quorum sensing worm, an adver-

sary may just set M = 0; the adversary may even use a feedback mechanism to compensate

for the decrease in performance when M = 0, at the risk that the additional complexity

introduces a bug into the worm. Alternately, an adversary may attempt to disguise release

transmissions as, e.g., HTTP traffic, or something equally unsuspicious.

Another possibility is to use the quorum sensing worm’s own self-stopping strategy

against it. If there were a way to transmit a large number of potentially-forged autoin-


ducer messages to an infected host, or some way to trick that host into generating a large

number of autoinducer messages, that host could be convinced to halt prematurely. This

idea is similar to how synthetic autoinducer molecules can be used in a biological setting to

trigger quorum sensing-regulated behavior in bacteria [PPIG95]. For example, an IPS could

send forged autoinducer messages to hosts displaying quorum sensing worm-like behavior;

if the IPS were mistaken about the host being infected (i.e., if this detection were a false

positive), there would presumably be no consequences, except for the additional network

traffic.

As a related option, defenders could “reflect” scan attempts to unallocated addresses.

That is, any attempt to contact a non-responsive host could be redirected by an egress fire-

wall back to the host initiating the connection. Such a security measure should have little

impact on typical usage of the network being protected, depending on the user base of the

network. For example, if a user attempts to initiate an HTTP connection to a web server

that is inactive, the HTTP request would be reflected back to the user’s machine. If there is

no web server on that machine, the connection request will simply time-out. However, with

this defensive measure in place, most of the random scans performed by an infected host

(which would normally attempt to connect to unused ports or addresses) would detect an

infected host. The infected host would produce more autoinducer messages, and hopefully

halt early. Other variations on this idea could work equally well, e.g., redirecting traffic

to a host configured to act as though it were infected with many viruses. Given sufficient

logging of this defensive technique, network administrators could quickly identify infected

hosts on their network during an epidemic. This idea is related to a defensive technology

called a tarpit. Tarpits listen for communication attempts to unallocated addresses, then re-

ply to those attempts in a manner that causes the offending program or host to wait idly for

the connection to time-out [Lis08, McC01].
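
One way to see the intended effect is to model the reflection defense in the abstract simulator sketched after Figure 5.2. The variant below is an idealized illustration in which every scan that would otherwise go unanswered is reflected, so the scanner effectively observes an infected host (itself) and releases an autoinducer message.

    def propagate_with_reflection(host, vulnerable, infected):
        """Variant of the earlier propagate(): scans to unallocated or invulnerable
        addresses are reflected back at the scanner by the defender."""
        target = random.randrange(ADDRESS_SPACE)
        if target in vulnerable and target not in infected:
            infected[target] = Host(initial=host.auto)
            host.remember(infected[target])
            host.release(-SIGMA_E)
        elif target in infected:
            host.remember(infected[target])
            host.release(SIGMA_A)
        else:
            # Reflected scan: no memory update (the "target" is the scanner itself),
            # but the scanner's autoinducer level still rises toward the threshold T.
            host.release(SIGMA_A)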


In any case, the goal of a defender should be to protect their local network. Why? Could

there not be a defensive technique which would help to “protect the Internet?” That is,

could a large-scale defense be developed to stop entire epidemics of quorum sensing worms?

Probably not. Consider the results of the social cheating experiments. Even when 95% of

hosts infected with a configuration 6 quorum sensing worm behaved as severe cheaters (i.e.,

they halted immediately after infection), the final infection percentage only fell from 99.66%

to 97.91%, on average. This result suggests that, even if some large-scale defense could

immediately halt the spread of most quorum sensing worm instances, the epidemic would

still be successful, from the point of view of the adversary. At best, such a defensive system

could only facilitate more rapid identification of infected hosts, which could probably be

accomplished more easily by network- or host-level defenses anyway. Firewalls, anti-virus

products, and intrusion-prevention systems are the best tools for defenders in preparing for

this potential threat.

5.8 Summary

This chapter demonstrated that a versatile and consistent solution to the random-

scanning self-stopping worm problem is possible. The quorum sensing worm performed

well across various vulnerable population sizes, infecting a consistent percentage of the

population. The worm design was also highly flexible, with the potential for feedback mech-

anisms or cheating behavior to be incorporated. However, this design flexibility opened the

door for many “magic numbers” to be added to a quorum sensing worm’s code. Between

the autoinducer and enzyme strengths, the autoinducer threshold, the memory size, poten-

tially a feedback multiplier and limit function, and so forth, there could be a large number

of seemingly random numbers contained within a quorum sensing worm. How would such


a worm perform outside of a simulated environment, in the wild? A self-stopping worm

design with fewer of these magic numbers is the topic of the next chapter.


Chapter 6

The Population Inference Worm

The quorum sensing worm presented in the previous chapter performed consistently

across various population sizes. That is, regardless of the size of the vulnerable population,

V , the final infection percentage of a quorum sensing worm with a fixed configuration re-

mained constant. However, there is no meaningful relation between the values used in a

quorum sensing worm configuration and the final infection percentage. Why does a config-

uration 3 quorum sensing worm reliably infect approximately 80% of the vulnerable popu-

lation? Similarly, what if network conditions in the wild are significantly different from in

the adversary’s simulator (where the adversary simulated and chose a configuration to use

for an upcoming release of a real worm)? What would the final infection percentage of a

“configuration 3” quorum sensing worm be then?

Because of this unclear association, an adversary may wish to design a self-stopping

worm where the target infection percentage, p0, is actually used in some meaningful way by

the spreading worm. Such a design will be the focus of this chapter. We begin by providing

the appropriate statistical background in Section 6.1, before moving on to a simple design

for a population inference worm in Section 6.2. The performance of this worm will then

be described in Section 6.3. Potential design changes and their effects will be detailed in

Sections 6.4 and 6.5. The findings of this chapter will then be summarized in Section 6.6,

providing the motivation for the next two chapters, which will explore how the techniques

described in this chapter could be adapted to work in a new style of worm.


6.1 Statistical Background

One well-known problem in statistics is to attempt to infer if the percentage of a pop-

ulation with some characteristic, p, is greater than some constant percentage, p0. One

method of making this inference [Moo00, p. 437] begins by drawing a simple random sam-

ple (SRS) [Moo00, p. 171] from the population – that is, if a sample of size n is drawn, then

every subset of individuals of size n must have an equal probability of being drawn as the

sample. Note, though, that some shortcuts can be taken with the sampling technique: for

example, drawing n individuals with replacement from the population will produce a distri-

bution of individuals possessing the characteristic nearly identical to that of an SRS, when

n is small relative to the size of the population [HKM95, p. 119]. As such, this method of

sampling can be used in place of drawing an SRS.

Once the random sample is drawn, the percentage of individuals in the sample that pos-

sess the characteristic, p, can be computed. From this value, the z statistic of the sample can

be calculated as

z = (p − p0) / √(p0(1 − p0)/n) .    (6.1)

The next step is to choose a confidence value, α (e.g., α = 0.05). That is, if we infer

that p > p0, we want to be able to make that statement with a confidence of at least 1 − α

(e.g., 95%). Using α, we compute zlimit, which is the value such that P (Z ≥ zlimit) = α,

where Z is a random variable with the standard normal distribution. Computing zlimit

from α is done by computing the inverse of the normal cumulative distribution function.

This computation can be performed using tables [Moo00, pp. 580–581] or algorithmic tech-

niques [Wic88, Ack08].

Finally, we use the computed z and zlimit values to complete our inference. Specifically,

if z ≥ zlimit, we can infer with confidence at least 1 − α that p > p0.
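
As a worked illustration of this procedure (the sample and the value of α are hypothetical), the inference could be computed as follows:

    from math import sqrt
    from scipy.stats import norm

    p0, alpha = 0.90, 0.05    # hypothetical target proportion and confidence value
    n, hits = 200, 190        # hypothetical sample: 190 of 200 sampled hosts infected

    p_hat = hits / n                                  # sample proportion
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)        # Equation 6.1
    z_limit = norm.ppf(1 - alpha)                     # value with P(Z >= z_limit) = alpha

    if z >= z_limit:
        print(f"infer p > {p0} with confidence at least {1 - alpha:.0%}")
    else:
        print("cannot conclude that p exceeds p0")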


However, there are a few caveats about the size of the sample, n. The first two caveats are

described by Moore [Moo00, p. 434]; namely, for this method of inference to be accurate, it

must be the case that both np0 ≥ 10, and n(1 − p0) ≥ 10. From these restrictions, we derive

that

n \ge \frac{10}{p_0}    (6.2)

and

n \ge \frac{10}{1 - p_0} .    (6.3)

We suggest that, when this form of inference is used for halting worm spread, there

should be an additional set of caveats. Namely, n should be large enough so that it is at least

possible that z ≥ zlimit (e.g., if the sample contains only individuals possessing the character-

istic). Similarly, n should be large enough so that it is at least possible that z < zlimit (e.g., if

the sample contains no individuals possessing the characteristic).

For z ≥ zlimit to be true, it must be the case that

z_{\text{limit}} \le z = \frac{p - p_0}{\sqrt{p_0 (1 - p_0)/n}} .

Assume that the sample contains only individuals with the characteristic (i.e., p = 1); there-

fore, it must be the case that

z_{\text{limit}} \le \frac{1 - p_0}{\sqrt{p_0 (1 - p_0)/n}} .

If zlimit ≤ 0, this restriction trivially holds, since the right-hand side is always > 0. However,

if zlimit > 0, it must be the case that

z_{\text{limit}}^2 \le \frac{n (1 - p_0)^2}{p_0 (1 - p_0)} \quad\Rightarrow\quad \frac{p_0 \cdot z_{\text{limit}}^2}{1 - p_0} \le n .

This result yields the third caveat. Namely,

(z_{\text{limit}} > 0) \Rightarrow \left( n \ge \frac{p_0 \cdot z_{\text{limit}}^2}{1 - p_0} \right) .    (6.4)


A similar set of calculations yields a lower bound on n which ensures that it is possible for z < zlimit if p = 0. The fourth caveat is

(z_{\text{limit}} < 0) \Rightarrow \left( n > \frac{(1 - p_0) \cdot z_{\text{limit}}^2}{p_0} \right) .    (6.5)
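Taken together, the four caveats define a minimum sample size. A minimal Python sketch of that computation follows (the ceiling handling and the function name are assumptions for illustration, not taken from the simulator used in this thesis):

    from math import ceil
    from statistics import NormalDist

    def min_sample_size(p0, alpha):
        # Combine Equations 6.2 through 6.5 into a single lower bound on n.
        z_limit = NormalDist().inv_cdf(1.0 - alpha)
        bounds = [10.0 / p0, 10.0 / (1.0 - p0)]             # Equations 6.2 and 6.3
        if z_limit > 0:
            bounds.append(p0 * z_limit ** 2 / (1.0 - p0))   # Equation 6.4
        if z_limit < 0:
            bounds.append((1.0 - p0) * z_limit ** 2 / p0)   # Equation 6.5
        return ceil(max(bounds))

For example, p0 = 0.5 and α = 0.05 give a minimum sample size of 20, which matches the minN value reported for that configuration in Section 6.3.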

Recall, though, that drawing n individuals from a population with replacement produces

a distribution nearly identical to an SRS only if n is small relative to the size of the popu-

lation. How do we know that following the four caveats will not produce a value of n that

is too large for this sampling technique to work? While not obvious yet, it will become

apparent that this concern is not relevant for our purposes. In the experiments we will per-

form in Section 6.3, the largest sample size required by the four caveats is 1000, whereas

the smallest population size on which we will experiment is 131072 — over two orders of

magnitude larger.

6.2 Population Inference Worm Design

Given this background on population inference, the question that now remains is: how

could an adversary use this technique to construct a self-stopping worm? One approach

would be for infected hosts to record the results of each random scan that finds a vulnerable

host, infected or otherwise; each host would essentially be collecting an SRS of the vulner-

able population. In this scenario, each infected host maintains two counters, infScans and

vulnScans. The former is a tally of the number of random scans which found vulnerable

hosts that have already been infected. The latter is a count of the total number of random

scans that found vulnerable hosts. Using these two counters, the host can estimate the pro-

portion of the vulnerable population that is already infected, p = infScans/vulnScans.

Recall, though, that for this estimate to be meaningful, there is a lower bound on the size

of the SRS (in this case, vulnScans). As such, each host would need to compute this lower


bound, call it minN, using Equations 6.2, 6.3, 6.4, and 6.5. The p0 and α values required

for these computations, and the computation of the associated zlimit value, would be chosen

by the adversary prior to releasing the worm. The choice of p0 and α would be based on

the adversary’s chosen target infection percentage and desired confidence for that infection

percentage to be reached.

Given this lower bound, we know that no host should bother computing a p value until

vulnScans reaches minN. But, what should be done once the number of vulnerable hosts

scanned reaches that minimum bound? If the host can infer, based on the z statistic computed

from p, that p > p0 with confidence 1 − α, then the host should halt. But, what if the host is

unable to make that inference (i.e., z < zlimit)? One possibility is that the host could continue

to grow its SRS with more scans, until that inference is possible. However, hosts that do so

would have samples filled largely with stale data. After all, the proportion of the vulnerable

population that is infected, p, is continually changing. When a host is unable to infer that

p > p0, that host can discard its SRS and start fresh with a new one (alternately, data in the

sample could age out gradually over time, but such an implementation would be far more

complex than just maintaining two counters). This behavior is formalized in Figures 6.1

and 6.2. The initial host, h, infected in an epidemic runs PI(h, p0, α),

where the values p0 and α are chosen by the adversary.

6.3 Results

Despite the seemingly-sound design of the population inference worm, its performance

is quite poor. To demonstrate this poor performance, population inference worm epidemics

were simulated in an address space of size 2^32, with epidemics beginning with a single seed

infection. As was done with previous worms, the number of vulnerable hosts in the address

space, V , was varied among 131072, 750000, and 1500000. The target infection percentage


procedure PI(host, p0, α)
    host.infScans ← 0
    host.vulnScans ← 0
    host.halted ← C(host, p0, α)
    while host.halted = false do        ▷ Iterate the while loop once per unit of time
        P(host, p0, α)
        host.halted ← C(host, p0, α)
    end while
end procedure

procedure P(host, p0, α)
    target ← a host chosen by a random scan
    if target is vulnerable then
        isInfected ← true if target is infected, or false otherwise
        if isInfected = false then
            host infects target
            target runs PI(target, p0, α)
        end if
        S(host, isInfected)
    end if
end procedure

procedure S(host, isInfected)
    host.vulnScans ← host.vulnScans + 1
    if isInfected = true then
        host.infScans ← host.infScans + 1
    end if
end procedure

Figure 6.1: Pseudocode for the population inference worm (part 1 of 2)


function C(host, p0, α)
    n ← host.vulnScans
    Calculate zlimit from α        ▷ See, e.g., [Moo00, pp. 580–581]
    Calculate minN from p0 and zlimit using Equations 6.2, 6.3, 6.4, and 6.5
    if n < minN then        ▷ Inference is ambiguous, so do nothing
        return false
    end if
    p ← host.infScans / n
    Calculate z from p, p0, and n using Equation 6.1
    if z ≥ zlimit then        ▷ Inferred that p > p0, so halt
        return true
    else        ▷ Could not infer that p > p0, so reset the test
        host.infScans ← 0
        host.vulnScans ← 0
        return false
    end if
end function

Figure 6.2: Pseudocode for the population inference worm (part 2 of 2)

of the population inference worm, p0, was also varied among 0.500, 0.750, 0.900, and

0.990. For each pair of V and p0 values, five trial simulations were performed, to account

for randomness. For now, α was fixed at 0.05 — that is, each infected host would halt when

it was 95% confident that p > p0, where p is the percentage of the vulnerable population

that is infected.

The results of these simulations can be succinctly stated: in every trial for every (V, p0)

pair, 100% of the vulnerable population was infected. Why? The problem with the popula-

tion inference worm design is that each host has to individually perform at least minN scans

of vulnerable hosts prior to concluding that it can halt. Performing these scans, especially

early in the epidemic, likely results in more hosts becoming infected; each of these newly-

infected hosts must then perform more scans. The resulting cascade of necessary scans

consistently results in all vulnerable hosts becoming infected, regardless of the adversary’s

choice for p0.


p0      Hosts Infected    Average Scans Performed    Perfect Worm Expected Scans    Percent Difference
0.500   131072            86588384289.6              53088862028.04                 63.10%
0.750   131072            174499115629.0             53088862028.04                 228.69%
0.900   131072            440554872409.0             53088862028.04                 729.84%
0.990   131072            4428510725555.8            53088862028.04                 8241.69%

Table 6.1: The effect of differing infection targets on the performance of a population inference worm using α = 0.05, relative to the perfect self-stopping worm, when V = 131072

The results of this cascade are best demonstrated by the number of extraneous scans

performed by the population inference worm — that is, the number of additional scans

performed above and beyond the number of scans that would be expected for the perfect

self-stopping random-scanning worm to infect all V hosts. As the value of p0 increases

through the values 0.500, 0.750, 0.900, and 0.990, the minimum size of each host’s SRSs

(minN) also increases: from 20, to 40, 101, and 1000, respectively. The result is that each

host must perform more scans to conclude that p > p0, driving up the number of extraneous

scans. For clarity, we limit our discussion to the V = 131072 results. In the p0 = 0.500

case, in which 100% of the vulnerable population was infected, the population inference

worm performed 63.10% more scans than would be expected for the perfect self-stopping

worm to infect all V hosts. In the p0 = 0.990 case, that number was driven up to over

8000%. These results, summarized in Table 6.1, demonstrate why the population inference

worm consistently infected 100% of the vulnerable population, regardless of the adversary’s

choice for p0: a cascade of extraneous scans necessary for each host to infer when p > p0.

Is there some way in which the population inference worm could be modified, allowing

hosts to halt sooner? Is there some way by which an adversary may reduce the number of

extraneous scans performed by this worm? These questions are the subject of the next two

sections.


6.4 Confidence Level

Is it possible to change the design of the population inference worm so that it does not

consistently infect all of the vulnerable population? Recall that each host infected by the

worm halts only when it is confident that the actual percentage of the vulnerable population

infected, p, is greater than p0 (with significance at level α). Could this required confidence

that p > p0 be part of the reason why so many scans are required by each host? What if, for

example, each host continued spreading only until it was no longer confident that p < p0?

Would this approach change the worm’s behavior?

To answer this question, it is important to note that this approach is essentially equivalent

to the old one. For a host to infer whether p < p0, it would have to compute a z statistic value

just as in Equation 6.1. Then, it would compute a different limit value, call it z′limit, such that P(Z ≤ z′limit) = α′, where Z is a random variable with the standard normal distribution. Then, if z ≤ z′limit, the host could infer with confidence at least 1 − α′ that p < p0. In other words, if z > z′limit, the host would halt, since it could no longer infer that p < p0.

The equivalence of this new approach and the original approach comes from the following observation. If P(Z ≤ z′limit) = α′, then P(Z > z′limit) = 1 − α′. That is, if the original approach is used, in which hosts halt upon inferring that p > p0 at significance level α, these hosts would halt at the same moment as hosts using the new approach, provided α = 1 − α′.

This equivalence does provide motivation for modifying the value of α used by a popu-

lation inference worm. Instead of α = 0.05, an adversary could use α = 1 − 0.05 = 0.95, as

suggested by the discussion above.
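A quick numerical check (a sketch for illustration only, not part of the experiments) confirms that the two formulations compare z against the same threshold:

    from statistics import NormalDist

    alpha = 0.95                  # original formulation: halt when z >= z_limit
    alpha_prime = 1.0 - alpha     # new formulation: halt when z > z'_limit

    z_limit = NormalDist().inv_cdf(1.0 - alpha)          # P(Z >= z_limit) = alpha
    z_prime_limit = NormalDist().inv_cdf(alpha_prime)    # P(Z <= z'_limit) = alpha'

    print(z_limit, z_prime_limit)   # both are approximately -1.645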

To test what effect this change in the value of α would have, the experiments performed in the previous section were repeated with α = 0.95. That is, V was varied among 131072, 750000, and 1500000, and p0 was varied among 0.500, 0.750, 0.900, and 0.990.


The results of these new experiments, summarized in Table 6.2, showed that adjusting

α alone is not enough to solve the issue with extraneous scans. Once again, in every trial

simulation for every (V, p0) pair, 100% of the vulnerable population was infected. This result

should not come as a large surprise. After all, despite the change in α, the minimum size

of each host’s SRSs (minN) remained the same. That is, for each of the p0 values tested

(0.500, 0.750, 0.900, and 0.990), minN remained unchanged from the previous section (20,

40, 101, and 1000, respectively). Each host still had to perform many scans in order to infer

that p > p0.

It should be noted, however, that this change in α did have some impact on the num-

ber of extraneous scans performed. Recall that each host does not necessarily perform

minN scans before halting; rather, it performs some multiple of minN scans, attempting to

make an inference each time it accumulates that minimum number of scans in its SRS. The

modification to α caused some hosts to require fewer SRSs before halting, as witnessed

by the number of extraneous scans performed during an epidemic. As indicated by Ta-

ble 6.2, the number of extraneous scans was still large. For example, when V = 131072

and p0 = 0.900, the population inference worm still performed 717.10% more scans than

the perfect worm would have to infect all V hosts. However, compared to the result when

α = 0.05 (from Table 6.1), we see that number has fallen from 729.84%. Looking at the

average number of scans performed, which fell from approximately 4.4055 · 10^11 to approximately 4.3379 · 10^11, there is a statistically-significant difference (two-tailed unpaired t-test: t = 84.1013, df = 5, P < 0.0001).
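For readers who wish to reproduce this style of comparison, one reasonable choice is an unpaired t-test that does not assume equal variances, as implemented in SciPy. The sketch below uses placeholder per-trial values, since only the averages are reported here:

    from scipy import stats

    # Hypothetical per-trial scan counts for the two configurations; only the
    # averages (about 4.4055e11 and 4.3379e11) are reported in the text.
    scans_alpha_005 = [4.405e11, 4.406e11, 4.405e11, 4.406e11, 4.405e11]
    scans_alpha_095 = [4.337e11, 4.338e11, 4.338e11, 4.338e11, 4.337e11]

    # Two-tailed unpaired t-test without the equal-variance assumption.
    t, p = stats.ttest_ind(scans_alpha_005, scans_alpha_095, equal_var=False)
    print(t, p)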

Even though the difference in the number of extraneous scans between the α = 0.05 and

α = 0.95 cases was small, there was nonetheless a difference. We can conclude, therefore,

that adjusting α may play a minor role in increasing the accuracy of a population inference

worm, if other issues could be solved. The primary issue that must be investigated, if we are


p0      Hosts Infected    Average Scans Performed    Perfect Worm Expected Scans    Percent Difference
0.500   131072            85927557108.0              53088862028.04                 61.86%
0.750   131072            171831935063.6             53088862028.04                 223.67%
0.900   131072            433789340464.0             53088862028.04                 717.10%
0.990   131072            4294906771925.8            53088862028.04                 7990.03%

Table 6.2: The effect of differing infection targets on the performance of a population inference worm using α = 0.95, relative to the perfect self-stopping worm, when V = 131072

to determine whether such a worm would ever likely be built by an adversary, is whether

it is possible to reduce the number of scans that each individual host must perform. This

question is the topic of the next section.

6.5 Data Sharing and Memory

So far, the population inference worm has failed to perform as intended. Cascades of

extraneous scans, caused by each host having to collect a large SRS prior to inferring that

p > p0 (or failing to infer that p > p0 and having to collect another SRS), result in population

inference worms consistently infecting 100% of the vulnerable population. Would it be

possible for an adversary to reduce the number of scans that each host performs? This

section will examine two potential methods that could be used, inspired by techniques seen

in previous chapters.

The first technique, data sharing, was inspired by the Sum-Count-X worm seen in Chap-

ter 4. Recall that hosts infected by a Sum-Count-X worm would maintain two counters,

called hits and scans, which would be incremented based on the outcome of that host’s ran-

dom scans. When one infected host scanned a previously-infected host, or when one host

infected a new vulnerable host, the two hosts would share and combine their two counters.


Specifically, both hosts would assign to their hits counters the sum of their two previous

values; the process would then be repeated for the scans counters.

Could this technique be adapted to work with the population inference worm? Ab-

solutely. Each host infected by a population inference worm maintains two counters:

vulnScans and infScans — these two counters represent the SRS being built by a given

host. We propose that population inference worms could share and combine their counters,

just like Sum-Count-X worms, whenever two infected hosts have the opportunity to commu-

nicate. As with Sum-Count-X worms, two such opportunities exist here: when one infected

host scans a host that is already infected, or when one host infects a fresh host. Note that

in the latter case, since the newly-infected host would have its two counters initialized to

zero, this sharing and combining behavior would be functionally equivalent to new hosts

initializing their counters to their parent infection’s counter values.

There is one caveat to consider, however. In the case of the Sum-Count-X worm, the

two counters maintained by any host would not become “stale.” The counters were used only to estimate the size of the vulnerable population, V, from the ratio hits/scans. Assuming V

is essentially static, any infected host h could share data with any other infected host h′,

thereby adding to both of their knowledge about V . In the case of the population inference

worm, however, the counters are used to estimate the proportion of the vulnerable population

that is infected, p, as p = infScans/vulnScans. Unlike V , p changes rapidly over the course

of an epidemic. So if one infected host, h, has an opportunity to share counters with another

infected host, h′, h must ensure that h′ has counters that contain recent data. So long as h′ is

still actively propagating, it must be maintaining a recent SRS. However, h should not share

data with h′ if h′ has already halted.
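The sharing step itself is simple. A minimal Python sketch of it (the class and function names are illustrative only, not drawn from the simulator used here) could look like:

    from dataclasses import dataclass

    @dataclass
    class Host:
        inf_scans: int = 0    # scans that found already-infected vulnerable hosts
        vuln_scans: int = 0   # scans that found vulnerable hosts, infected or not
        halted: bool = False

    def share_data(a: Host, b: Host) -> None:
        # Both hosts adopt the sums of their counters, as with the Sum-Count-X worm.
        total_inf = a.inf_scans + b.inf_scans
        total_vuln = a.vuln_scans + b.vuln_scans
        a.inf_scans = b.inf_scans = total_inf
        a.vuln_scans = b.vuln_scans = total_vuln

Consistent with the caveat above, a host would call share_data only with a peer whose halted flag is still false.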

The second technique, memory, was inspired by the quorum sensing worm seen in Chap-

ter 5. Recall that hosts infected by a quorum sensing worm would send autoinducer and


enzyme messages to recently-encountered infected hosts, in essence sharing the results of

their random scans with infected hosts that they remembered. We saw that while a quorum

sensing worm would still function with this memory feature disabled, the number of extra-

neous scans generated in this scenario was higher than when memory was enabled. When

each host informed a small group of other infected hosts about the results of its random

scans, the number of scans necessitated during the entire epidemic dropped.

How would this memory mechanism function in the case of a population inference

worm? Each host h could maintain a memory of the M active infected hosts most recently

scanned, for some global constant M > 0. Since redundant infection attempts can occur

with random-scanning worms, this memory could include hosts recently infected by h, as

well as hosts scanned by h that were already infected. Each time h performed a redundant

infection attempt, h could send a message to all of the hosts in its memory. Not only would h

increment its infScans and vulnScans counters; so too would each recipient of the message.

Similarly, each time h infects a new host, it would send a different type of message, causing

h and all recipients to increment their vulnScans counters. Just as with the quorum sensing

worm, there are two different types of messages sent by this population inference worm.

An adversary could add both the data sharing and the memory mechanisms to a popu-

lation inference worm in an effort to reduce the number of scans that each individual host

must perform. As such, it is important to test how effective these modifications, formalized

in Figure 6.3, would be. Note that Figure 6.3 contains only a new version of the population

inference worm’s propagation code; the PI, S, and C functions

from Figures 6.1 and 6.2 remain unchanged.

To test whether memory and data sharing would improve the accuracy of the population

inference worm, the experiments performed in the previous section were repeated. We kept

α fixed at 0.95; and, for clarity of exposition, the memory size was fixed at M = 5. As


procedure P(host, p0, α)
    target ← a host chosen by a random scan
    if target is vulnerable then
        isInfected ← true if target is infected, or false otherwise
        if isInfected = false then
            host infects target
            target runs PI(target, p0, α)
        end if
        if target.halted = false then        ▷ New code shares data opportunistically
            M(host, target)
            SD(host, target)
        end if
        S(host, isInfected)
        for friend ∈ host.memory do        ▷ New code uses memory to share the scan result
            S(friend, isInfected)
        end for
    end if
end procedure

procedure M(host, friend)
    host adds friend to host.memory
    if |host.memory| > M then
        Remove the oldest entry from host.memory
    end if
end procedure

procedure SD(hostA, hostB)
    sumInf ← hostA.infScans + hostB.infScans
    sumVuln ← hostA.vulnScans + hostB.vulnScans
    hostA.infScans ← sumInf
    hostA.vulnScans ← sumVuln
    hostB.infScans ← sumInf
    hostB.vulnScans ← sumVuln
end procedure

Figure 6.3: Propagation pseudocode for a population inference worm with data sharing and memory


            Vulnerable Population (V)
p0          131072      750000      1500000
0.500       97.43%      97.30%      97.25%
0.750       99.74%      99.76%      99.78%
0.900       100.00%     100.00%     99.99%
0.990       100.00%     100.00%     100.00%

Table 6.3: Infection percentages for a population inference worm using data sharing and memory for different values of V and p0

before, V was varied among 131072, 750000, and 1500000, and p0 was varied among

0.500, 0.750, 0.900, and 0.990.

The results of these experiments shed further light on the plausibility of an adversary

ever using a population inference worm. The numerical results, summarized in Table 6.3,

show the worm’s inability to accurately achieve a given infection target. For example, in

the p0 = 0.500 case, the worm consistently infected more than 97% of the vulnerable pop-

ulation. However, it should be noted that sharing of knowledge between infected hosts,

implemented in the form of data sharing and memory, did cause the worm to halt prior to

infecting 100% of the vulnerable population, for some choices of p0. Even in other cases,

where 100% of the vulnerable population was infected, there was a drop in the number

of scans performed with the introduction of memory and data sharing. For example, in the

case V = 131072 and p0 = 0.900, the average number of scans performed fell from approximately 4.3379 · 10^11 to approximately 4.2881 · 10^10 — a drop of over an order of magnitude,

which is most certainly statistically significant (two-tailed unpaired t-test: t = 5860.7425,

df = 6, P < 0.0001). The implications of this result will be explored further in the next two

chapters.


6.6 Summary

The goal of this chapter was to determine whether a population inference worm could

be built that uses an adversary’s choice of p0 directly in its halting algorithm. Such a design

would give an adversary some confidence that an epidemic in the wild would halt when the

proportion of the vulnerable population that is infected, p, is near p0. Such a design would be more likely to be used by an adversary, making it important to study proactively.

However, the population inference worm that we attempted to design in this chapter

failed to work properly. Even using a large α value, plus memory and data sharing tech-

niques, the worm stopped just short of infecting all vulnerable hosts when the adversary

was targeting only p0 = 0.500. There is no reason why an adversary would use a population

inference worm as opposed to the simpler quorum sensing worm, which offered a variety of

configuration choices.

Despite this poor performance, one important discovery came from this work on popu-

lation inference worms: increased cooperation and sharing between infected hosts helped to

reduce the number of scans that each host had to perform. This reduction raises the question:

could increased sharing of scan results between infected hosts lead to an accurate halting

algorithm, based on that of the population inference worm? If so, how would this sharing

be accomplished? This question is the topic of the next two chapters, which explore how

infected hosts can arrange themselves into small groups during propagation.


Chapter 7

The Tree-Based Super-Botnet Worm

In the previous chapter, we saw that a self-stopping random-scanning worm based on

population inference typically failed to halt near its target infection percentage, p0. The

cause of this failure was the large number of scans that each individual host had to perform

before attempting a population inference. This result raised the question of whether infected

hosts could cluster themselves in some fashion during propagation to share the results of

their individual scans.

For this chapter, we will temporarily ignore the problem of self-stopping worm design.

Rather, the focus of this chapter will be whether it is even feasible to assume that hosts can

cluster themselves in some meaningful fashion during propagation. From the point of view

of the adversary, that clustering could be useful for more than just solving the self-stopping

problem.

What if infected hosts could communicate as a group as a worm propagates? It is in this

vein that we come to the topic of botnets: collections of computers compromised by mali-

cious software, such as a worm, that receive and act on commands sent by an adversary over

some command-and-control (C&C) channel. The hosts within a botnet could conceivably

work together over the botnet’s C&C channel to decide when to halt propagation. However,

for reasons that will become apparent shortly, it is undesirable to the adversary for all of the

infected hosts to be part of one botnet, sharing one C&C channel. What if, instead, infected

hosts could cluster themselves into many small botnets — a network of botnets — and share

information within those smaller groups?


The remainder of this chapter examines the feasibility, structure, and potential defenses

against networks of botnets, which we call super-botnets, with no reference to the self-

stopping problem. The goal will strictly be to explore the structure of super-botnets, before

seeing how they could be used to solve the self-stopping problem in Chapter 8. Section 7.1

discusses the vulnerabilities inherent in traditional botnet design, to motivate in detail why

adversaries would utilize a super-botnet. Section 7.2 addresses whether it is feasible for

adversaries to construct large super-botnets, and Section 7.3 discusses the communication

mechanisms employed in such super-botnets. Section 7.4 discusses a new type of time-

delayed attack that can be launched using a super-botnet, to further motivate why an adver-

sary would use this construction. Section 7.5 discusses how defenders can combat both this

new form of attack and super-botnets in general. Finally, we summarize the results of this

chapter in Section 7.6.

7.1 Vulnerabilities of Traditional Botnets

Is it even reasonable to assume that a large number of hosts (e.g., 1500000) could be

used to form a botnet? It is. Botnets involving over 100000 zombie computers have been

claimed [DGLL07, HD03, Uni05], and there was even one case involving 1.5 million com-

promised computers [Ste05]. However, from an adversary’s perspective, these large botnets

are bad from the standpoint of survivability: someone is likely to notice a big botnet and

take steps to dismantle it.

In fact, the recent trend is toward smaller botnets with only several hundred to several

thousand zombies [CJM05]. This may reflect better defenses — the malware creating new

zombies may not be as effective — but it may be a conscious decision by adversaries to limit

botnet size to try to avoid detection. It has also been suggested that the wider availability of

broadband access makes smaller botnets as capable as the larger botnets of old [CJM05].


So where does this fact leave our notion of an adversary creating a large set of small

botnets? The goal of this chapter is not only to show that such a large network of botnets could be constructed, but also that an adversary would have the motivation to construct one.

At a high level, there are some apparent benefits for the adversary in this construction.

By themselves, the smaller botnets can be exploited by the adversary in the usual way, such

as being rented to spammers. The discovery and disabling of some of the adversary’s botnets

is not a concern, either, because the botnets are numerous, unlike in the aforementioned case

of a single large botnet. The greater threat arises, though, when the botnets are coordinated

into this network of botnets — a super-botnet. For example, an adversary could command

the super-botnet to launch a massive distributed denial of service (DDoS) attack on a chosen

target, or to pummel a critical piece of the Internet’s infrastructure like the DNS. A super-

botnet design potentially allows an adversary to surreptitiously amass enough machines for

attacks of enormous scale.

But, before we consider why the construction of super-botnets is dangerous, it is first

important to understand some weaknesses exhibited by traditional botnets. By recognizing

how the traditional botnets of old can be detected and disabled, it will become clearer why

a decentralized super-botnet poses such a threat.

A traditional botnet uses some centralized C&C channel to receive commands: for in-

stance, an IRC server. Alternately, a botnet may periodically poll an information source

that is unlikely to be blocked or raise suspicion, such as accessing a web site (perhaps lo-

cated via a web search engine), or making a DNS request to a domain under the adversary’s

control [IH05].

It is this centralized command-and-control mechanism that constitutes a weak point for

defenders to target. There are five main goals that defenders could have:


1. Locate or identify the adversary. The adversary is vulnerable to detection when they

issue commands to the botnet via this C&C channel. Defenders may not know the

nature of the adversary’s commands in advance, but assuming some of the zombies

have been detected and analyzed, the source of the adversary’s commands will be

known and alarms can be triggered when commands are sent. For example, a defender

may know that botnet commands will be found by periodically performing Google

searches for “haggis gargling” and sifting through the avalanche of results for the

adversary’s commands. Monitoring when and where Google finds such a web site

may provide a lead as to the whereabouts of the adversary. In turn, an adversary may

try to obfuscate their trail by issuing botnet commands through proxies or anonymity

networks like Tor [DMS04].

2. Reveal all the infected machines. Again, if the zombies are polling a known location

for the adversary’s commands, then the polling activity will reveal infected machines.

Meeting this goal and the previous one may require cooperation between law enforce-

ment and the private sector.

3. Command the botnet. A defender could attempt to send a command to the bot-

net to shut it down. A related concern for the adversary is that another ad-

versary may try to usurp control of the botnet. For these reasons, as pointed

out by Staniford et al. [SPW02], the adversary must digitally sign or en-

crypt botnet commands using asymmetric encryption (described in Section 8.1 of

Menezes et al. [MvOV01, p. 283]), so that the compromise of an infected machine

does not reveal a secret key (Staniford et al. presented their work in the context of up-

dating worms, but the technique could be applied to controlling botnets). Generating

a public/private key pair in advance, the adversary could send the public key along

with the worm that creates the botnet, solving the key distribution problem.


4. Disable the botnet. If a defender can shut down the C&C channel (along with any

redundant channels), the entire botnet will be rendered useless in a single blow. With

the zombies unable to receive commands, the botnet will cease to operate.

5. Disrupt botnet commands. If the C&C channel is one that cannot easily be shut down,

such as Google, other defense methods can be employed. The adversary’s commands

can be intentionally garbled by a defender regardless of whether or not they are en-

crypted or signed — changing a few bits is sufficient to achieve this goal, leaving the

adversary unable to control the botnet. This defense can be applied locally, e.g., by

an intrusion-prevention system (IPS) or firewall rules, or globally if a defender has

enough access to the botnet’s command source.

Large traditional botnets are vulnerable to defenders because of the easily-targeted C&C

channel. The benefits to the adversary of decentralizing their botnet into a super-botnet — a

network of independent botnets — are clear. But, we must still explore whether it is feasible

for an adversary to create a large super-botnet.

7.2 Super-Botnet Feasibility

Are large super-botnets feasible? To answer this question, we must consider how a

super-botnet can be constructed. We begin by abstracting away some details:

1. In Chapter 8, we will show how super-botnets can be used to solve the self-stopping

problem for a random-scanning worm. As such, we will assume that the super-botnet

is constructed using only one worm. This assumption does not imply that the worm’s

code is uniform across infections, of course, because the worm could be polymorphic

or metamorphic in nature.


However, more than one worm could be involved in practice. A single adversary

could create multiple worms or worm variants; multiple adversaries could conspire

to create multiple worms based on a common super-botnet specification. Neither

possibility is farfetched. For some malware, creating variants is practically a cottage

industry — Spybot, for example, has thousands of variants [Can05]. Adversaries have

collaborated in the past on malware [SF01], and N-version programming [Avi85] of

worms by adversaries within a single organization is definitely possible in the context

of information warfare. Implementation diversity can make worm detection more

difficult for signature-based detection methods.

2. As with the discussion of worms in previous chapters, the infection vector(s) the worm

uses are not relevant to the analysis and are not considered. However, for this single

chapter we make a further simplifying assumption, so that we can focus our efforts

on studying super-botnets themselves rather than the fine details of a propagation

algorithm. We will assume that the adversary knows (or has an accurate estimate of)

the size of the vulnerable population, V . While this assumption may be idealistic for

the adversary, it serves only to simplify the presentation of the material in this chapter.

Chapter 8 will reexamine this assumption.

3. The exact C&C mechanism(s) used within individual botnets does not matter. We

assume a centralized IRC-based C&C for concreteness, but it could just as easily be a

peer-to-peer architecture or some other method. Communication within an individual

botnet could even be covert in nature, as discussed by Aycock [Ayc05], to help reduce

the risk that a botnet would be detected through traffic analysis.

One way to establish a super-botnet could be with a two-phase process. In the first phase,

a worm is released which makes every new infection a C&C machine for a new, independent


[Figure: diagram distinguishing the initial C&C infections from the individual botnets they anchor]

Figure 7.1: The close proximity of C&C machines when a super-botnet is generated through a two-phase infection process

botnet. In the second phase, after enough C&C machines have been created, new infections

populate the C&C machines’ botnets. This method is risky for an adversary, because the

“backbone” of the super-botnet’s C&C infrastructure is all established by direct infections;

discovery of one C&C machine can easily lead to others using information from firewall

logs, for example. The close proximity of the C&C machines established using this method

is illustrated in Figure 7.1.

Instead, we use a tree-structured algorithm that is dependent on three constants: B, HPB, and S. This algorithm creates B individual botnets,

each consisting of HPB zombies. As each zombie is infected, it learns how

many additional zombies it will be responsible for adding to its botnet (to bring the size of

its botnet up to HPB), as well as how many C&C machines for new, indepen-

dent botnets it must infect (to bring the number of botnets up to B). If a zombie is not

a C&C machine for a new botnet, it also learns the location of its botnet’s C&C server.


However, each zombie infects at most S new hosts, with priority given to adding

zombies to its botnet. By choosing a smaller S value, the adversary can narrow the

breadth of the tree-structured growth of the super-botnet, thereby reducing suspicious traf-

fic and helping to avoid rate limiting defenses [SMPW04]. Since a given zombie may be

responsible for adding many additional hosts to its botnet and starting many new botnets

(i.e., performing more than S infections), the zombie delegates an equal amount of the

remaining work to each of the (up to S) new hosts it infects.
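This fair division of the remaining work is straightforward; a small Python sketch of it (the helper name is illustrative, and the corresponding pseudocode appears as the D function in Figure 7.2 below) is:

    def distribute(value, slots):
        # Divide `value` units of remaining work as evenly as possible among
        # `slots` children: the first (value mod slots) children get one extra unit.
        base, remainder = divmod(value, slots)
        return [base + 1 if i < remainder else base for i in range(slots)]

    # Example: a zombie still responsible for 10 infections, spreading to 3 children
    print(distribute(10, 3))   # prints [4, 3, 3]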

When a new botnet/C&C server is created, all information about the previous botnet

is discarded, leaving each botnet isolated. Furthermore, new C&C servers are placed as

far away from one another as the balanced tree structure allows: for nontrivial values of

HPB, C&C servers are separated by at least ⌊log_S HPB − 1⌋ intermediate infections.

This tree-based behavior is described more formally in the pseudocode shown in Fig-

ure 7.2, and a small example of the growth is shown in Figures 7.3 and 7.4. The seed

infection, h, in the pseudocode begins by running ST(h, 0, B, 0.0.0.0).

Intercepting a worm close to an initial infection can prune a large number of botnets

in this algorithm. Additionally, the adversary must have some estimate of the number of

vulnerable hosts in order to choose an initial value for B. However, these concerns

are just limitations of this algorithm. In practice, an adversary can seed multiple initial in-

fections, and the next chapter will focus on self-stopping algorithms that can be used with

super-botnets, rather than relying on information being propagated in a tree-structured man-

ner, thereby avoiding both of these issues. However, as will become apparent, the situation

looks bad for defenders, even under these simplifying assumptions.

We simulated the worm pseudocode, allowing one scan per active machine per time step

(γ = 1, with reference to the formulas in Chapter 3). Starting with a single seed, the sim-


procedure ST(host, hostsNeeded, botnetsNeeded, myCC)
    if hostsNeeded > 0 then        ▷ More hosts needed in this botnet
        host connects to the C&C server myCC
        hostsNeeded ← hostsNeeded − 1
    else        ▷ Start a new individual botnet
        host starts a new C&C server (e.g., IRC) on itself
        hostsNeeded ← HPB − 1
        botnetsNeeded ← botnetsNeeded − 1
        myCC ← host's IP address
    end if
    childHosts ← D(hostsNeeded, S)
    childBotnets ← D(botnetsNeeded, S)
    for i ← 0 to S − 1 do
        if childHosts[i] > 0 or childBotnets[i] > 0 then
            repeat
                target ← a host chosen by a random scan
            until target is a vulnerable, uninfected host
            host infects target
            target runs ST(target, childHosts[i], childBotnets[i], myCC)
        end if
    end for
end procedure

function D(value, slots)
    for i ← 0 to slots − 1 do
        distribution[i] ← value/slots        ▷ Integer division must be used here
    end for
    for i ← 0 to (value mod slots) − 1 do
        distribution[i] ← distribution[i] + 1
    end for
    return distribution        ▷ value distributed fairly into slots slots
end function

Figure 7.2: Pseudocode for a tree-based super-botnet worm


[Figure legend: C&C infection, client infection, individual botnet]

1. A single seed infection becomes the C&C server for a new individual botnet.
2. A new host is infected and it joins the first botnet as a client.
3. The worm continues to spread, spawning a second botnet.
4. The seed host, having spread twice, ceases propagation.

Figure 7.3: An example of a growing super-botnet over time using the constant values B = 4, HPB = 3, and S = 2 (part 1 of 2)


5. Hosts that have spread as many times as they ever will (sometimes not at all) halt.
6. The final new hosts are infected, bringing the super-botnet to its final size.
7. The propagation is now complete, and the super-botnet is fully formed.

Figure 7.4: An example of a growing super-botnet over time using the constant values B = 4, HPB = 3, and S = 2 (part 2 of 2)


ulation established 15000 botnets with 100 machines apiece — a super-botnet 1.5 million

strong. For these simulations, we assumed that the propagating worm had “perfect luck.”

That is, every random scan performed in the simulation hit a vulnerable, uninfected host.

This contrived luck allowed us to study the effect of the S constant without the highly

variable results of random scans that may or may not find vulnerable hosts. Figures 7.5

and 7.6 chart the growth of infections and C&C servers over time, for an S value of 2 and a more generous value of 25, respectively.

One interesting anomaly is that, even though the C&C growth curve is the least smooth

for a spread of 2, a spread of 3 or 4 infects 1.5 million machines the fastest. Figure 7.7

displays the amount of time required for the super-botnet to be built for all S values

between 2 and 25, under the perfect luck assumption. The non-smooth nature of the graph

demonstrates an interesting relationship between the S value and the total epidemic

time. In essence, some S values are better at keeping more infected hosts active at

once, reducing the epidemic time. In all cases, it is clear that a super-botnet can be estab-

lished in short order when the worm is perfectly lucky.

What happens when the contrived luck assumption is removed? Because very few

hosts are actively scanning for the final vulnerable hosts (in fact, only one infected host will

scan for the final vulnerable, uninfected host), the total epidemic time of the super-botnet

worm does not compare favorably against the perfect self-stopping worm. For example, a

super-botnet consisting of 15000 individual botnets each with 100 hosts was constructed

in an environment where V = 1500000 and A = 2^32 by a super-botnet worm using I = 1 and S = 3. Averaged over 5 trials, the super-botnet worm epidemic ran for just under 6.7 · 10^9 units of time. This duration is several orders of magnitude larger than the 8.1 · 10^5

units of time expected from the perfect self-stopping worm. Of course, by design, the super-

botnet worm stops all scanning after the final vulnerable host is infected, so the number of


[Figure: number of hosts (log scale) versus time in perfect-luck simulation steps, with curves for hosts infected and C&C servers]

Figure 7.5: The total number of infected hosts and the number of C&C machines started over time as a perfect-luck super-botnet worm using S = 2 infects V = 1500000 hosts

[Figure: number of hosts (log scale) versus time in perfect-luck simulation steps, with curves for hosts infected and C&C servers]

Figure 7.6: The total number of infected hosts and the number of C&C machines started over time as a perfect-luck super-botnet worm using S = 25 infects V = 1500000 hosts


[Figure: epidemic duration in perfect-luck simulation steps versus the spread value, S]

Figure 7.7: The effect of increasing S on the epidemic running time of a perfect-luck super-botnet worm when V = 1500000

scans produced by this random-scanning worm is optimal (see Appendix A for a complete

discussion on this topic). A super-botnet worm design more scalable in terms of epidemic

duration is presented in Chapter 8.

For now, though, we focus on the question: how can the adversary issue commands

to some or all of the independent botnets, thereby leveraging the full power of the super-

botnet? The adversary does not have a direct way to issue commands to the botnets and,

as shown, the adversary will not even know how to locate individual botnets. In the next

section, we explore advanced communication methods.

7.3 Inter-Botnet Communication

The adversary may try to send commands along the super-botnet itself, to avoid having

externally-issued commands be subject to attack. In this section, we consider how this task

is accomplished.


Following [SPW02], we assume that an adversary will want to encrypt communications

both within each botnet and between botnets. When a new C&C server for a botnet B is

created during worm propagation, it can generate a new symmetric encryption key, KB, as

well as a new public/private key pair, {PB, SB}. The symmetric key, KB, is provided to new

infections within the botnet and is used to encrypt communication within the botnet. The

public key, PB, is used to asymmetrically encrypt messages sent to the botnet from other

botnets. The two problems faced by the adversary then are how to distribute the symmetric

key to all hosts within a botnet, and how to distribute the public key to other botnets. We

will examine each of these problems in turn.

Perhaps the simplest method of distributing the symmetric key to each host in a botnet is

to include KB as part of the worm payload delivered to newly-infected hosts. Each new host

joining the botnet would then immediately have access to the symmetric key. The downside

for the adversary, though, is that defenders could potentially capture the symmetric key,

since it is sent in the clear. Of course, the adversary would have to judge how great a risk a

captured key poses.

Keep in mind that encrypting communication within each botnet really only serves

to raise the bar on how difficult it is for defenders to eavesdrop on communications,

not to make it perfectly secure. After all, newly-infected hosts (or anything posing as

a newly-infected host, such as a defender) will always need some way of acquiring KB

when they join their botnet. However, there are more secure options. For example, the

worm payload could include PB instead of KB; i.e., each newly-infected client host would

be provided with the C&C server’s public key. Then, that client host could initiate an

ElGamal key agreement protocol with the C&C server, as described in Section 12.6 of

Menezes et al. [MvOV01, p. 517]. The result of that agreement would be a shared session

key, and the client host could send a symmetric key-request message (encrypted with the


session key) to the C&C server. The C&C server would respond by encrypting the botnet’s

symmetric key, KB, along with a cryptographic hash of KB (for data integrity on decryption),

using the session key. The result of that encryption could then be sent to the client, provid-

ing it with KB. Once the number of hosts in that botnet reaches HPB, the C&C

server would stop responding to symmetric key-request messages. While defenders could

still discover KB by compromising a host in the botnet, or by acquiring PB while the botnet

was still growing then posing as an infected host, it would be far more difficult for defenders

than if the symmetric key were sent in the clear.

Having discussed distribution of each botnet’s symmetric key to each host in the botnet,

we turn our attention to distribution of a botnet’s public key, PB, to other botnets for inter-

botnet communication. First, we define the routing information needed for botnet A to talk

to botnet B as the pair

(PB, C&C IP address of B).

The adversary’s problem can now be viewed as how to propagate routing information around

the super-botnet. No one botnet can be allowed to have complete information about the

super-botnet, because the complete structure of the super-botnet would be revealed if a

botnet with complete routing information were compromised by a defender. Each botnet

can have partial routing information instead, and know how to contact a small finite set of

its “neighbors.” But how can partial information be gathered by a botnet?

One obvious time for a botnet to gather partial routing information is when its C&C

server is created, from the botnet that infected it. One less-obvious time is when a worm tries

to infect an already-infected machine (since the perfect-luck assumption would never actu-

ally hold, redundant infection attempts are an expected occurrence with random-scanning

worms). Normally this is considered something an adversary would want to avoid — worm

contention is a waste of effort — but, for super-botnets, this is an ideal opportunity to ex-


change information, just as it was for the various self-stopping worms described in previous

chapters. Such exchanges would presumably only occur while the super-botnet was be-

ing constructed, so that a defender could not trivially extract routing information later by

pretending to be a worm, or flood the super-botnet with routing information pointing to a

honeypot.

Note that we abstractly consider C&C servers to be performing the routing information

exchange here. In practice, a C&C server’s clients would be performing routing information

exchanges too, and sending new information to their C&C server.

This technique of exchanging routing information on redundant infection attempts is

also used by Midgard worms [RLK04]. In fact, super-botnet worms can be regarded as

specific instances of the more general Midgard worm design, in which an overlay network

is created among all infected hosts. However, the more specific super-botnet structure scales more easily, as the size of the peer-to-peer portion of the overlay (in which routing information is

exchanged) is based on the number of individual botnets rather than the number of infected

hosts. Additionally, the super-botnet design naturally lends itself to the adversary renting

out collections of compromised hosts.

In the remainder of this section, we simulate and analyze a routing information-exchange

algorithm over a population of 1.5 million vulnerable machines. Routing information ex-

change occurs in both of the above scenarios: creation of a new C&C server, and one in-

fected machine locating another during its random scan. We denote botnet A’s and botnet

B’s routing information by a and b, respectively.

Intuitively, our information-exchange algorithm mimics how children exchange sports

trading cards; we refer to it as the hockey card algorithm. Each C&C server has H slots

for routing information, and a newly-created C&C server for botnet A begins with all its

slots (HA) initialized to a. Thereafter, when either of the two opportunities arise to exchange


procedure E(A, B)
    if H > 1 then
        N ← random value ∈ [1, H − 1]
    else
        N ← 1
    end if
    A randomly partitions HA into XA and YA, where |XA| = N
    B randomly partitions HB into XB and YB, where |XB| = N
    A sends XA to B        ▷ A and B exchange routing information
    B sends XB to A
    HA ← the concatenation of YA and XB
    HB ← the concatenation of YB and XA
end procedure

Figure 7.8: Pseudocode for routing information exchange in super-botnets

information between botnets A and B, the two botnets first agree on a random number N,

where 1 ≤ N ≤ max(1, H − 1), then A and B each select N random slots and trade them as

described in Figure 7.8. This code maintains H links to each botnet within the super-botnet.

Although a botnet may end up with duplicates and self-links, our simulations showed that

this does not have any negative effects.
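To make the trading step concrete, the following Python sketch performs one hockey card exchange between two C&C servers. The class and function names (and the value of H) are ours, chosen purely for illustration; this is not the simulator used for the experiments below.

```python
import random

H = 5  # number of routing-information slots per C&C server (illustrative value)

class CncServer:
    def __init__(self, botnet_id):
        # A newly created C&C server starts with every slot pointing at its own botnet.
        self.slots = [botnet_id] * H

def hockey_card_exchange(a, b):
    """Trade N randomly chosen slots between C&C servers a and b."""
    n = random.randint(1, H - 1) if H > 1 else 1
    # Each side randomly partitions its slots: n slots are sent, the rest are kept.
    a_idx = random.sample(range(H), n)
    b_idx = random.sample(range(H), n)
    sent_a = [a.slots[i] for i in a_idx]
    sent_b = [b.slots[i] for i in b_idx]
    kept_a = [s for i, s in enumerate(a.slots) if i not in a_idx]
    kept_b = [s for i, s in enumerate(b.slots) if i not in b_idx]
    # Each server keeps what it did not send, concatenated with what it received.
    a.slots = kept_a + sent_b
    b.slots = kept_b + sent_a

a, b = CncServer("A"), CncServer("B")
hockey_card_exchange(a, b)
print(a.slots, b.slots)   # each server now holds a mix of routing information
```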

We assume that the adversary knows the addresses of seed infections, and that they will

want to communicate through the super-botnet as surreptitiously as possible. In other words,

the adversary will not broadcast a command to all seeds, but will select one seed and send a

super-botnet command through it. The important metric to the adversary is thus the amount

of connectivity from some seed. That is, how many botnets will a command reach?

To evaluate this metric, we fixed three simulation parameters on the basis that

the adversary would want a large super-botnet: 15000 botnets, 100 seeds, and a spread constant of S = 3 — so, the initial code run by each seed host, hi, with reference to Figure 7.2, is ST(hi, 0, 150, 0.0.0.0). Two parameters were adjusted, however. First, the value of

H was varied among 1, 2, 3, 4, 5, and 10 to ascertain its effect on connectivity. Second, the

number of hosts per botnet was varied in increments of 25 from 25 to 100 to study different


probabilities of finding other botnets during propagation. All simulations, run in an environ-

ment with V = 1500000, were repeated five times to minimize the effects of randomness.

We also set A = V , thereby eliminating random scans that do not find vulnerable (infected

or otherwise) hosts. We chose A as such simply to speed up the simulations; note that scans

that find invulnerable hosts have no effect on the outcome of the inter-botnet communication

structure.

The results were dramatic. In every simulation run where H was greater than 1, the

amount of connectivity from any single seed was 100%. An adversary would be able to

command all the botnets comprising the super-botnet using any one seed.

H does play a role in the resistance of the super-botnet to defensive measures. In par-

ticular, a defender may try to prevent an adversary from commanding their super-botnet by

reducing connectivity. Here we define the degree of a botnet to be the sum of its useful

in- and out-degrees, which are the links remaining once self-links and duplicate links are

removed.

Following research on attacks against both random and scale-free networks [AJB00],

we consider a defender who will try to disable a super-botnet using two different strategies.

First, a defender may disable high-degree botnets first. This is an effort to reduce super-

botnet connectivity more quickly than simply disabling botnets at random. It is important to

stress that this is the absolute best case scenario, where a defender has oracular knowledge

of the super-botnet’s connectivity. Second, a defender may disable botnets at random. This

is perhaps a more likely case, where uncoordinated defenders would simply be shutting

down botnets randomly upon discovery.

To test the resilience of a super-botnet’s communication structure, we first define a metric

that we call single-seed connectivity. This metric is computed by calculating how many

botnets would receive a message sent to a single seed (which may or may not be disabled)


and passed over the super-botnet’s inter-botnet communication structure, averaged over all

seeds. That is, single-seed connectivity is computed as

    ( Σ_{i=1}^{I} BFS(i) ) / I ,

where I is the number of botnets constructed from initial seeds and 0 ≤ BFS(i) ≤ B is

the number of botnets reached by performing a breadth-first search along the non-disabled

botnets in the inter-botnet communication structure, starting at the botnet constructed from

initial seed 1 ≤ i ≤ I. Because BFS(i) can take a wide range of values (i.e., the connectivity

from some seed infection can deviate substantially from the average connectivity over all

seed infections), single-seed connectivity is strictly an aggregate measure over the entire

super-botnet. In essence, the single-seed connectivity of a super-botnet measures how well

a command sent to a single seed will propagate, on average, through a super-botnet.
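Computing single-seed connectivity amounts to averaging a breadth-first search from each seed botnet over the non-disabled portion of the inter-botnet graph. A minimal Python sketch follows; the adjacency-list representation and the toy graph are our own, for illustration only.

```python
from collections import deque

def bfs_reach(graph, start, disabled):
    """Count botnets reachable from `start` along non-disabled botnets."""
    if start in disabled:
        return 0                      # a disabled seed relays nothing
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr not in disabled and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen)

def single_seed_connectivity(graph, seeds, disabled):
    """Average number of botnets reached by a command sent to one seed."""
    return sum(bfs_reach(graph, s, disabled) for s in seeds) / len(seeds)

# Toy example: five botnets, one seed (botnet 0), botnet 2 disabled.
graph = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
print(single_seed_connectivity(graph, seeds=[0], disabled={2}))  # -> 3.0 (botnets 0, 1, 3)
```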

We tested the resilience of the super-botnet communication structure by first build-

ing a simulated super-botnet consisting of 15000 botnets with 100 seed infections in an

environment with V = 1500000 (and, again, A = V). For clarity of exposition, we fixed

HPB = 25. We then disabled up to 5000 individual botnets using either high-

degree-first targeting or random selection, recording the resulting single-seed connectivity.

This procedure was repeated five times, to compensate for randomness. The results for ran-

dom attacks are shown in Figure 7.9, and those for oracular attacks are shown in Figure 7.10.

The strategy used by the defender does not make a difference once H ≥ 5. In this case, a

defender able to disable one-third of the botnets in a super-botnet still leaves the adversary

able to contact over one-third of their original botnets from some seed, on average. This is

still enough for a sizeable attack to be launched with the super-botnet. Two conclusions can

be drawn:

• The adversary would set H to at least 5. How does the number of botnets reachable

from a single seed after 5000 botnets have been disabled by oracular attacks com-


[Plot: number of botnets receiving the adversary's communication (0 to 20000) versus number of botnets disabled (0 to 5000), with one curve for each of H = 2, 3, 4, 5, and 10]

Figure 7.9: The average number of botnets that receive a command sent to a single seed as random attacks bring down individual botnets, for various values of H

[Plot: number of botnets receiving the adversary's communication (0 to 20000) versus number of botnets disabled (0 to 5000), with one curve for each of H = 2, 3, 4, 5, and 10]

Figure 7.10: The average number of botnets that receive a command sent to a single seed as oracular attacks bring down individual botnets, for various values of H


pare between the cases H = 4 and H = 5? The single-seed connectivity averaged

over five trials when H = 5 was 6455.39, compared to 5411.87 when H = 4 — the

H = 5 single-seed connectivity is significantly different (two-tailed unpaired t-test:

t = 3.3276, df = 6, P = 0.0159).

However, there is a tradeoff between robustness and disclosure. Too large a value

for H would reveal the location of a large number of other botnets, if a defender

discovers one botnet. Also, there was no significant difference in the single-seed con-

nectivity after 5000 oracular attacks, averaged over five trials, between when H = 5,

with 6455.39 botnets, and when H = 10, with 6334.81 botnets (two-tailed unpaired

t-test: t = 0.6352, df = 7, P = 0.5455).

• Communication through a super-botnet is robust to a defender’s countermeasures.

Finding and disabling 5000 botnets, randomly or otherwise, would be a difficult task.

The only defense that is deployed widely enough to feasibly do this is anti-virus soft-

ware.

This simulation also assumes that an adversary will only communicate through

one of the initial seeds. Botnets could announce their (encrypted) routing information

to an adversary instead; this would give the adversary many more communication

endpoints to use at the risk of leaking information to a defender. An adversary may

also use a larger number of seeds for communication, rather than just one, or alternate

which seed receives the command each time a command is sent. Either variation

by the adversary should reduce the effectiveness of the defender’s countermeasures.

The reduction in a super-botnet’s connectivity caused by disabled individual botnets

is mitigated, for example, if the adversary sends a command to all seeds rather than

to just one (the metric of the adversary’s success here could be called, e.g., total

seed connectivity, as opposed to single-seed connectivity). The results of this change


[Plot: number of botnets receiving the adversary's communication (0 to 20000) versus number of botnets disabled (0 to 5000), with one curve for each of H = 2, 3, 4, 5, and 10]

Figure 7.11: The average number of botnets that receive a command sent to all seeds as random attacks bring down individual botnets, for various values of H

in strategy are illustrated in Figures 7.11 and 7.12, which show a less pronounced

drop in the number of botnets receiving an adversary’s message compared to their

counterparts, Figures 7.9 and 7.10.

As a final comment on the issue of resilience: note that we fixed V = 1500000 through-

out these experiments. What happens if the size of the vulnerable population is much

smaller, e.g., V = 131072? To find out, we repeated all of the previous experiments in

this environment. This time, however, using 100 seed infections we created 1310 botnets,

each with 25 hosts, and disabled up to 440 of them.

The results were very similar. Again, as long as H ≥ 2, 100% connectivity was achieved

from any single seed when no botnets were disabled. As before, with approximately one-

third of an adversary’s botnets disabled, the single-seed connectivity of the super-botnet

was still greater than one-third of the original number of botnets, when H was sufficiently

large. However, with the smaller value of V , there was no significant difference in the single-


[Plot: number of botnets receiving the adversary's communication (0 to 20000) versus number of botnets disabled (0 to 5000), with one curve for each of H = 2, 3, 4, 5, and 10]

Figure 7.12: The average number of botnets that receive a command sent to all seeds as oracular attacks bring down individual botnets, for various values of H

seed connectivity when one-third of the super-botnet was disabled between when H = 4 and

when H = 5, unlike in the V = 1500000 case. This difference was probably caused by the

higher ratio of seed botnets to non-seed botnets in the V = 131072 case, allowing for greater

connectivity from the adversary using fewer edges in the inter-botnet communication graph.

Aside from that minor distinction, though, it is no surprise that the results are similar despite

a different vulnerable population size. The properties of the inter-botnet communication

graph should be the same regardless of the number of botnets in the graph (e.g., average

degree of each botnet and percentage of botnets disabled). In short, the super-botnet is

resilient regardless of its size.

These conclusions suggest that new, large-scale defenses are needed for the super-botnet

threat. We discuss defenses against the super-botnet threat in Section 7.5. First, however,

we discuss a novel application of the super-botnet’s decentralized structure that allows for a


new form of attack to be launched. Understanding how this new form of attack works will

further reveal the methodology that must be used to defend against super-botnets.

7.4 Time Bombs

Even with the decentralized structure of the super-botnet, the adversary is still vulnerable

to detection when a command is injected into the super-botnet. When the adversary gives

the command, “Attack CNN’s web site” to a seed infection, the seed will pass the command

on to other botnets, and will then launch its part in the attack. Analysis of traffic logs could

correlate the incoming message with the ensuing massive bandwidth usage.

Of course, the incoming message to a seed from the adversary will look no different than

the message being passed along from one botnet to another. Be that as it may, the adversary

may still worry about detection. In essence, the adversary would like to be “as far away as

possible” from the super-botnet when the attack begins, to minimize the chance of detection.

It would be safer for the attacker to issue the command, “Attack CNN’s web site six

hours from now” to the super-botnet. Though this method of commanding the super-botnet

does not make the adversary impervious to detection — for example, a seed infection could

have been disinfected and replaced with a honeypot, or a defender could capture this mes-

sage at some point, wait for the attack to occur, then look back six hours in the logs for

suspicious communication — it does increase the difficulty of correlating the communique

to the resulting attack. Unfortunately, the attacker cannot simply sign this time bomb com-

mand with a private key (assuming the botnets possess the corresponding public key for the

purpose of verifying commands) and release it into the super-botnet. If a defender compro-

mises a single botnet, the time bomb would be discovered well in advance of the attack, and

the future victim could be warned.


In this section, we describe a method which uses the super-botnet’s structure to allow

the adversary to inject time bomb commands into the super-botnet, with minimal risk that

the command will be uncovered ahead of time by a defender.

Until now, we have implicitly assumed that the adversary has a public/private key pair

(call the public key P and the private key S), and that every botnet in the super-botnet possesses P (which could have been, for example, included in the worm that infected the zombies). The adversary signs all commands with S, thereby ensuring that the botnets

execute only the adversary’s commands.

We now take the application of asymmetric cryptography one step further. Let C > 1 be

a constant, and let the adversary generate an additional C key pairs. That is, the adversary

generates key pairs {P, S}, {P1, S1}, . . . , {PC, SC}. Instead of including only P with the worm,

the adversary includes {P, P1, . . . , PC}. That is, each botnet receives and passes on C + 1

public keys. However, after a given botnet creates all of the new C&C servers that it is

required to, it deletes all but one of the extra C public keys. Specifically, the botnet chooses

a random number i ∈ [1,C], and it saves P and Pi; all of the other public keys are wiped

from that botnet.

Note that every botnet will possess P. Therefore, if the adversary wishes to issue a

command upon which the super-botnet should act instantly, the command should be signed

with S .

The additional C key pairs are used for time bomb attacks. Assume that the adver-

sary has some command, K, and desires that the super-botnet execute it after a delay of

T time. First, the adversary uses a secret splitting scheme, as described in Section 12.7

of Menezes et al. [MvOV01, p. 524], to divide K into C pieces: K1, . . . ,KC. Individually,

any of the pieces are meaningless. In fact, even possessing C − 1 of the C pieces does not

reveal K; all C pieces are needed to reconstruct K. We treat the secret splitting scheme as


a black box; any scheme that meets the aforementioned requirements would suffice. One

simple implementation, from Section 12.7.1 of Menezes et al. [MvOV01, p. 525], would

be to choose random values for Ki for all i ∈ [1,C − 1], and set KC = K ⊕ K1 ⊕ . . . ⊕ KC−1.

A more general scheme, which could even allow K to be reconstructed from any N of the

pieces, for some fixed N ≤ C, was described by Shamir [Sha79].
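As a concrete illustration of the simple XOR construction, the following Python sketch splits a command into C pieces and recovers it only when every piece is present. The byte-string handling and the sample command are our own choices, used only to demonstrate the idea.

```python
import os

def split_secret(k: bytes, c: int) -> list:
    """Split command k into c pieces; all c pieces are needed to recover k."""
    pieces = [os.urandom(len(k)) for _ in range(c - 1)]   # K_1, ..., K_{C-1} are random
    last = k
    for piece in pieces:
        last = bytes(x ^ y for x, y in zip(last, piece))
    pieces.append(last)            # K_C = K xor K_1 xor ... xor K_{C-1}
    return pieces

def join_secret(pieces: list) -> bytes:
    """Recover k by XOR-ing all c pieces together."""
    k = pieces[0]
    for piece in pieces[1:]:
        k = bytes(x ^ y for x, y in zip(k, piece))
    return k

command = b"attack example.test at 18:00 UTC"       # hypothetical command
pieces = split_secret(command, c=100)
assert join_secret(pieces) == command                # all 100 pieces recover the command
# Any 99 of the 100 pieces reveal nothing useful (with overwhelming probability).
assert join_secret(pieces[:99]) != command
```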

Regardless of what secret splitting scheme is used, the adversary constructs the follow-

ing message after the command is split:

{DS1(DS(K1)), . . . , DSC(DS(KC)), DS(T)} ,

where DS(x) is a digital signature scheme with message recovery, using private key S on

message x, that incorporates a suitable, non-multiplicative redundancy function. One ex-

ample of such a scheme is RSA with ISO/IEC 9796 formatting, described in Section 11.3.5

of Menezes et al. [MvOV01, pp. 442–444]. The redundancy function is necessary since the

pieces of K look random; without it, it would be impossible to distinguish a valid signature

from garbage [MvOV01, p. 430].

This message is delivered into the super-botnet as usual, and each botnet decodes what

part of the message it can. Since each botnet possesses only a single, randomly-chosen Pi, it

can decrypt only the corresponding DS (Ki). That is, after the message has traveled through

the entire super-botnet, each botnet will be in possession of a signed copy of T , and a signed

copy of one of the C pieces of K.

After a delay of T has elapsed, each botnet floods its DS (Ki) into the super-botnet. Since

Ki is signed by S , each botnet can confirm that any piece of K that it receives is, in fact,

one of the original C pieces created by the adversary. After receiving and confirming all C

pieces of K, each botnet can perform the command.

This scheme is highly resistant to attacks by individual defenders. There are two tasks

that a defender may wish to accomplish against this scheme:


• Reconstruct the command before it is executed. To do so, the defender would require

all C public keys, P1, . . . PC. It would be difficult for a defender to capture all C public

keys during the construction of the super-botnet. Each botnet deletes all but one of

these values after it finishes spreading, and compromising a botnet to extract all C keys

while it is still spreading would require either a honeypot capturing the worm while

it spreads, or near-instantaneous response from a defender to an infection. Short of a

honeypot capturing the worm, a defender’s best chance for capturing all C keys (or,

all C of the DS (Ki)) is, for each public key, to compromise at least one botnet that

knows that key. Then, the key can be extracted from the compromised botnet.

• Destroy at least one part of the command in the super-botnet, so that the command

cannot be reconstructed and executed. To do so, a defender would have to disable

each botnet that knows Pi for some i ∈ [1,C].

Assuming that a defender does not have oracular knowledge about how to find botnets

that match one of these two goals, and that botnets are compromised or disabled randomly

(as they are found by the defender), how many botnets can the defender expect to have to

locate and attack, on average, before accomplishing either of the goals?

We can answer these questions probabilistically, as described in detail in Appendix B.

Computing an expected value for the number of botnets that a defender must compromise to

reconstruct the command is the subject of Appendix B.1, and computing an expected value

for the number of botnets that a defender must disable to destroy one part of the command

is the subject of Appendix B.2. Numerical simulations support the results derived in the

appendices.

As an example of the analysis in Appendix B, let us assume that there are 15000 botnets

in the super-botnet, and that the command is split into 100 pieces (so, the values of R and

C used in the appendices are 150 and 100, respectively). To reconstruct the command, a


defender would have to compromise, on average, 509.38 botnets — roughly 3.4% of the

entire super-botnet. This sizeable task would likely be beyond the capabilities of a single

defender. Destroying a single piece of the command would require the defender to disable,

on average, 14491.62 botnets — roughly 96.6% of the entire super-botnet. This number is

so large that, in practice, the super-botnet structure would become so partitioned before the

defender succeeded in eradicating one piece of the command, that the command could not

be reassembled anyway.
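Both expected values can also be checked with a small Monte Carlo simulation, sketched below in Python with the same R = 150 and C = 100. The code is ours, not the derivation in Appendix B, but the estimates it prints should land near the 509.38 and 14491.62 figures quoted above.

```python
import random

R, C = 150, 100                      # 150 botnets hold each of the 100 extra public keys
TRIALS = 200

def compromises_to_collect_all_keys():
    """Compromise random botnets until every key index 0..C-1 has been captured."""
    botnets = [i for i in range(C) for _ in range(R)]   # key index held by each botnet
    random.shuffle(botnets)
    seen = set()
    for count, key in enumerate(botnets, start=1):
        seen.add(key)
        if len(seen) == C:
            return count

def disablings_to_destroy_a_piece():
    """Disable random botnets until some key index has no surviving holder."""
    remaining = [R] * C
    botnets = [i for i in range(C) for _ in range(R)]
    random.shuffle(botnets)
    for count, key in enumerate(botnets, start=1):
        remaining[key] -= 1
        if remaining[key] == 0:
            return count

print(sum(compromises_to_collect_all_keys() for _ in range(TRIALS)) / TRIALS)   # ~509
print(sum(disablings_to_destroy_a_piece() for _ in range(TRIALS)) / TRIALS)     # ~14492
```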

Clearly, defending against creative uses of the super-botnet structure is beyond the ca-

pabilities of a single defender, or multiple defenders working without shared knowledge or

coordination. To combat this new threat, new defenses must be developed.

7.5 Defense Against Super-Botnets

We now turn to defense. There would be no sense for an adversary to build a super-botnet

for immediate use; a traditional worm would be more effective in that scenario. The strength

of the decentralized super-botnet design is, after all, its resistance to defenders’ attacks over

time. It is reasonable to assume, therefore, that super-botnets would be deployed in advance

of an attack. Traditional anti-virus software would thus be useful against super-botnets, as

anti-virus software would have time for updates and detection.

Aside from the obvious use of anti-virus software, what defenses can be constructed

to specifically target super-botnets? As revealed by the resistance of the super-botnet’s

communication structure to the failure of individual botnets (discussed in Section 7.3), and

the super-botnet’s ability to distribute its attack plans beyond the reach of a single defender

(discussed in Section 7.4), super-botnets cannot be disabled by a single defender.

One solution to the threat of super-botnets is centralized defense. When anti-virus soft-

ware locates a super-botnet infection with routing information, it could pass the routing


[Diagram: a portion of the tree-structured infection path, with C&C servers sitting two hops apart (Hop 1, Hop 2)]

Figure 7.13: Tracking infections backwards: the two-hop weakness

information along to a central defense location. Given enough disinfected hosts, this tactic

should reveal a sizeable portion of the super-botnet’s structure. In fact, given enough infor-

mation, a centralized defense location may be able to locate the adversary commanding the

super-botnet. If communication logs are available from the disinfected C&C servers of the

individual botnets, potential suspects to be the adversary are all those machines for which

routing information has not been collected at the central defense location, but which have

sent a super-botnet command to a disinfected machine. After all, a command sent by the

adversary looks just like a command passed on to one individual botnet from another, except

that no botnet has routing information pointing to the adversary.

Defenders should also not underestimate the value of studying the algorithm by which a

given super-botnet implementation spreads. Our worm pseudocode, for example, spaces out

C&C servers along the (tree-structured) infection path. If infections can be tracked back-

wards from a C&C server to the machine that infected it, then there are up to S − 1

other C&C servers two hops away, as illustrated in Figure 7.13. The adversary has a clear

incentive to choose small spread values and to destroy any information that might link dif-

ferent botnets.


Aside from discerning the structure of the super-botnet, a centralized defense location

would also aid attempts to decipher time bombs before they are executed. When a machine

is disinfected, any public keys that are captured, as well as any un-executed time bombs,

can be sent to the central defense. Capturing all of the public keys (thereby giving advance

warning of all future time bombs sent into the super-botnet) is a less daunting task when all

available defenders combine their knowledge.

In fact, revisiting the five goals for a defender described in Section 7.1, we see that

a centralized defense mechanism can give defenders a starting point against the threat of

super-botnets:

1. Locate or identify the adversary. The adversary may issue a command to the super-

botnet in a wide variety of places. Only through central analysis of communication

logs and captured routing information could the adversary’s direct communications

be identified.

2. Reveal all the infected machines. This goal is promising: by collecting routing in-

formation from disinfected machines, a central defense should be able to construct at

least a partial image of the super-botnet’s structure.

3. Command the super-botnet. Obviously, an adversary who signs or encrypts com-

mands will effectively eliminate any possibility of defenders injecting their own com-

mands into the super-botnet.

4. Disable the super-botnet. With its decentralized C&C structure, a defender simply

cannot disable the entire super-botnet in a single stroke. There is no point at which

the C&C mechanism can be attacked to prevent commands from reaching all of the

botnets.


5. Disrupt super-botnet commands. Even if a defender compromises a botnet, garbling

super-botnet commands is unlikely to work well. The same randomness that makes

the super-botnet’s communication structure so resistant to disabled individual botnets

also makes it resistant to disrupted commands. There is no guarantee that the adver-

sary’s commands must travel through the compromised botnet, so the majority (if not

all) of the super-botnet is likely to receive the adversary’s commands intact.

Though a centralized defense does not aid defenders with all five goals, it will give

defenders a much needed edge against this coming evolution in botnets. Unfortunately, it is

impossible to predict all forms of future super-botnet; consequently, it is difficult to know if

a centralized defense will always work. However, a centralized defense will be applicable

against all forms of super-botnet in which individual botnets remember routing information,

because this information can be extracted when a botnet is discovered, and delivered to

the central defense. Hence, security vendors and organizations would be well-advised to

prepare centralized defense mechanisms in the near future.

7.6 Summary

Super-botnets provide adversaries with an enormous amount of virtual firepower that is

easy to construct yet hard to shut down, as there is no single C&C channel for defenders

to target. The loss of individual botnets (which can, by themselves, be farmed out for

spamming and other traditional uses) is not catastrophic.

The trend toward smaller botnets can be seen as an evolutionary step leading to super-

botnets. This means that attacks by millions of machines on the Internet’s infrastructure can

appear from nowhere, as multiple small botnets join forces. Super-botnets must be consid-

ered a serious threat that must be defended against with new centralized defense mecha-

nisms.


Be that as it may, the focus of this chapter was the structure and feasibility of super-

botnets; many points were left unresolved. The construction of super-botnets in this chapter

relied on the adversary knowing V prior to releasing the super-botnet worm, or at least hav-

ing an accurate estimate for it. It has been assumed in the rest of this thesis that V is unknown

to the worm author. Additionally, the epidemic duration of the tree-based super-botnet worm

was many orders of magnitude higher than that of a perfect self-stopping random-scanning

worm. These issues are the topic of the next chapter.


Chapter 8

The Inference-Based Super-Botnet Worm

The super-botnet worm presented in Chapter 7 propagated in a tree structure, with each

infected host spawning at most S new infections. As the worm spread, it partitioned

the infected hosts into individual botnets, each containing HPB hosts (the hosts-per-botnet constant). The number of botnets created was controlled by the B variable, which (like the other two variables) would be chosen by an adversary prior to releasing the worm. As such, an

adversary would need an estimate of the number of vulnerable hosts in the address space.

The estimate would likely have to be chosen conservatively; a poor estimate could cause the

worm to attempt to propagate endlessly.

This chapter will investigate whether it is possible to redesign the original super-botnet

worm to utilize the statistical inference self-stopping mechanism first seen in Chapter 6,

which failed to work properly because each individual host had to perform too many scans.

The focus of this chapter will be whether all of the hosts in an individual botnet can cooper-

ate to decide efficiently when to halt.

We begin the chapter with a description of the new super-botnet worm in Section 8.1,

and a brief discussion of the inter-botnet communication mechanism in Section 8.2. We then

test the performance of the new self-stopping worm in Section 8.3, and compare those re-

sults to the performance of the perfect self-stopping random-scanning worm in Section 8.4.

Section 8.5 will then discuss how effective this new worm is at constructing a super-botnet,

compared to the tree-based super-botnet worm from Chapter 7. Finally, we discuss defenses

against the self-stopping super-botnet worm in Section 8.6, before concluding the chapter in

Section 8.7.


8.1 Description

A self-stopping super-botnet worm should create individual botnets of approximately

some target size, HPB, allowing the adversary to choose the size of individual

botnets based on, e.g., economic demand. However, the number of botnets created should

be determined as the worm spreads, based on a target infection percentage, p0. One possible

approach for constructing a super-botnet as such would be the following: each zombie in a

given botnet infects more hosts to add to its botnet, until either the botnet reaches a size of

HPB, or until the botnet infers that it should halt (i.e., the infection percentage

has passed p0). If the botnet has not yet halted once it has HPB zombies, it could

then spawn some number of new botnets, L, and then halt. The L constant is a rough

analogue to the S constant in Chapter 7; however, instead of limiting how many times

each host spreads, it limits how many times each botnet creates a new individual botnet.

Both constants, though, function to narrow the tree-structured propagation of the worm.

Unfortunately, this method is likely to produce poor results for the adversary: if many

new botnets are spawned as the percentage of vulnerable hosts infected approaches p0, those

botnets could halt prior to reaching size HPB— the adversary could end up with

many small, essentially useless botnets. Note that the adversary could not compensate for

this flaw by having each botnet attempt to propagate until it is of size HPB,

simply because there may not be enough vulnerable hosts remaining to fill in all of the

smaller botnets that have been spawned.

As a way of demonstrating that this concern is well-founded, consider how the afore-

mentioned propagation method would function if the super-botnet worm started with a sin-

gle seed infection. Once that botnet grew to HPB hosts in size, it would spawn

L additional botnets and halt. Once each of those botnets grew to HPB in

size, yielding (L + 1) · HPB infected hosts in total, each of those botnets


would spawn L additional botnets and halt. In general, after this spawning and growth

activity had been repeated s times, there would be

    HPB · Σ_{i=0}^{s} L^i

infected hosts in

    Σ_{i=0}^{s} L^i

botnets. Of these botnets, L^s would be active, having just finished growing in size to

HPB, and would be ready to spawn L botnets each. Now, at some point,

after n rounds of spawning new botnets, the super-botnet worm would have to halt, having

infected p0 percent of the vulnerable population. That is,

    HPB · Σ_{i=0}^{n−1} L^i < p0V ≤ HPB · Σ_{i=0}^{n} L^i .

If we use some example numbers based on previous chapters, we can solve for n. For

example, if V = 1500000, HPB = 100, L = 3, and p0 = 0.500, we find that

n = 8. This means that once all of the botnets created have grown to HPB in

size, there will be

    HPB · Σ_{i=0}^{n} L^i = 984100

hosts infected, yielding a final infection percentage of 984100/1500000 = 0.656, which

is considerably larger than the adversary’s intended target of p0 = 0.500. The situation

is even worse for the adversary if p0 is set to 0.800. In this case, n = 9, meaning that

L^n = 19683 botnets will be spawned in the last round of botnet creation, but there will

be only 1500000 − 984100 = 515900 vulnerable, uninfected hosts remaining. As such, even

if the worm infects all remaining hosts (even though p0 < 1), the final 19683 botnets cre-

ated will have, on average, 515900/19683 = 26.2 hosts each. Not only will the worm have

infected all vulnerable hosts, instead of the smaller p0 percent, but there will be a large

number of botnets with significantly fewer than HPB hosts. While it may be


possible to mitigate this issue with some form of inter-botnet planning (as opposed to each

botnet blindly spawning L new botnets if the target infection percentage has not yet been

reached), we suggest that an adversary would likely use a simpler approach.
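The arithmetic in the preceding example can be verified directly; the short Python sketch below (variable names ours) reproduces the n = 8, 984100-host, and 26.2-hosts-per-botnet figures.

```python
V, HPB, L, P0 = 1_500_000, 100, 3, 0.500      # values from the worked example

def hosts_after(rounds):
    """Hosts infected after `rounds` rounds of spawning: HPB * sum_{i=0}^{rounds} L^i."""
    return HPB * sum(L ** i for i in range(rounds + 1))

# Find the smallest n with HPB * sum_{i=0}^{n-1} L^i < p0*V <= HPB * sum_{i=0}^{n} L^i.
n = 0
while hosts_after(n) < P0 * V:
    n += 1
print(n)                       # -> 8
print(hosts_after(n))          # -> 984100 infected hosts
print(hosts_after(n) / V)      # -> 0.656..., well above the target of 0.500

# With p0 = 0.800, one more round (n = 9) is needed, and the 3**9 botnets spawned in
# that last round must share only the remaining vulnerable hosts.
print((V - hosts_after(8)) / 3 ** 9)   # -> about 26.2 hosts per late botnet
```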

The approach that the new super-botnet worm will take is to grow each individual botnet

until its size, s, reaches or surpasses 2 · HPB; at that point, the botnet will split

into two. Prior to a split, there will be a single command-and-control (C&C) machine in

the botnet, and the remaining machines will be clients to that C&C. When the botnet splits

into two, ⌊s/2⌋ of the non-C&C machines will be selected at random to leave the botnet and form a new individual botnet. One of those ⌊s/2⌋ hosts is chosen at random to be

the new C&C server for that botnet. Once the split is complete, the two botnets will be

completely separate entities — no information about the old botnet is maintained in the new

one. This splitting scheme guarantees that every botnet in the super-botnet will contain at

least HPB hosts when the worm stops propagating (for non-pathological values

of V , I, and HPB), and at most 2 · HPB − 1.
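A minimal Python sketch of this splitting rule follows; the Botnet class and the host identifiers are illustrative only, not part of the simulator used later in this chapter.

```python
import random

HPB = 100   # target hosts per botnet (illustrative value of the hosts-per-botnet constant)

class Botnet:
    def __init__(self, cnc, clients):
        self.cnc = cnc             # command-and-control host
        self.clients = clients     # non-C&C hosts

    def size(self):
        return 1 + len(self.clients)

    def maybe_split(self):
        """If the botnet has reached 2*HPB hosts, split off a new, independent botnet."""
        if self.size() < 2 * HPB:
            return None
        leaving = random.sample(self.clients, self.size() // 2)   # floor(s/2) clients leave
        self.clients = [h for h in self.clients if h not in leaving]
        new_cnc = random.choice(leaving)                          # new C&C chosen at random
        new_clients = [h for h in leaving if h != new_cnc]
        # The new botnet keeps no state from the old one.
        return Botnet(new_cnc, new_clients)

b = Botnet("cnc-0", ["host-%d" % i for i in range(2 * HPB - 1)])
new_b = b.maybe_split()
print(b.size(), new_b.size())   # both sizes fall between HPB and 2*HPB - 1
```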

Following the work done in Chapter 7, we assume that each botnet, B, will use a sym-

metric key, KB, to encrypt all communication within the botnet. Each botnet will also gener-

ate a public/private key pair, {PB, S B}, and distribute PB to allow for encrypted inter-botnet

communication. Techniques for distributing KB and PB were discussed in Chapter 7.

One new problem, though, is how a new symmetric encryption key can be generated

and distributed to all of the members of a new individual botnet after they divide off from

an old botnet. The simplest approach would be for the C&C server of the new botnet, B′,

to generate a new symmetric key, KB′ . Then, the server could encrypt KB′ (along with a

cryptographic hash of KB′ to provide data integrity on decryption) using the shared symmet-

ric key from the old botnet, KB. The encrypted data could then be transmitted to all of the

clients of the new botnet. While it would be possible for hosts in the old botnet (potentially


compromised by a defender) to discover the symmetric key for the new botnet, KB′ (since

they know KB), this is likely not a concern in practice. Why not? If one host in a botnet were

compromised by a defender, that host would have approximately a 50% chance of splitting

off into the new botnet anyway, at which point the defender would learn KB′ legitimately.

That probability increases if more than one host is compromised. Alternately, if no hosts are

compromised, then there is no reason not to use this simple approach, as KB′ would be se-

cured by KB during transmission. However, Rafaeli and Hutchison [RH03] can be consulted

for details on more advanced group key management schemes.
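One possible realization of this simple key hand-off is sketched below using the symmetric primitives of the third-party cryptography package; the library choice and message layout are ours. (Fernet already authenticates its ciphertexts, so the appended hash is kept only to mirror the description above.)

```python
import hashlib
from cryptography.fernet import Fernet   # third-party package: pip install cryptography

old_key = Fernet.generate_key()          # K_B, already shared by the old botnet

# The new botnet's C&C server generates K_B' and appends a hash for integrity checking.
new_key = Fernet.generate_key()          # K_B'
payload = new_key + hashlib.sha256(new_key).digest()
ciphertext = Fernet(old_key).encrypt(payload)   # broadcast to the new botnet's clients

# Each client of the new botnet decrypts with K_B and verifies the hash before using K_B'.
plaintext = Fernet(old_key).decrypt(ciphertext)
received_key, digest = plaintext[:-32], plaintext[-32:]
assert hashlib.sha256(received_key).digest() == digest
```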

With these details discussed, we now come to the key question of a self-stopping super-

botnet design: how do the individual botnets know when they should stop infecting more

hosts? The C&C for the botnet can decide to halt when it infers that the percentage of

vulnerable hosts that are infected, p, is greater than p0, using the inference techniques first

discussed in Chapter 6.

Specifically, the C&C maintains two counters: infScans and vulnScans. Just as in Chap-

ter 6, the first counter records the number of random scans that found vulnerable hosts that

have already been infected, and the second counter records the total number of random

scans that found vulnerable hosts. Using these two counters, the C&C server can estimate

the proportion of the vulnerable population that is already infected, p = infScans/vulnScans.

Following the work done in Chapter 6, there is a lower bound, minN, on vulnScans be-

fore p should be computed (derived from Equations 6.2, 6.3, 6.4, and 6.5). Once vulnScans

reaches minN, the C&C computes the z statistic of the sample, as described in Equation 6.1,

then compares z to zlimit, where zlimit is computed as described in Section 6.1 from p0 and

the global confidence constant α, both chosen by the adversary prior to releasing the worm.

If z ≥ zlimit, the botnet will halt. If z < zlimit, the C&C resets its infScans and vulnScans

counters, and begins collecting a new sample.
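The halting decision is a one-sided test on a sample proportion. The Python sketch below assumes that Equation 6.1 is the usual one-sample z statistic for a proportion and treats the minimum sample size minN as a precomputed input, since Equations 6.2 through 6.5 are not reproduced in this chapter; the function name and example values are ours.

```python
import math
from statistics import NormalDist

def should_halt(inf_scans, vuln_scans, p0, alpha, min_n):
    """Return True if the C&C can infer, at confidence 1 - alpha, that p > p0."""
    if vuln_scans < min_n:
        return False                                  # sample too small; keep collecting
    z_limit = NormalDist().inv_cdf(1 - alpha)         # assumed upper critical value from alpha
    p_hat = inf_scans / vuln_scans
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / vuln_scans)   # assumed form of Equation 6.1
    return z >= z_limit
    # (In the worm's pseudocode, a failed test also resets both counters.)

# Example: 620 of 1000 scanned vulnerable hosts were already infected, target p0 = 0.5.
print(should_halt(620, 1000, p0=0.500, alpha=0.25, min_n=100))   # -> True
```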


How do the C&C’s counters grow, though? If the C&C does not decide to halt in a

given unit of time, it will send a single command to its clients. If the size of the botnet

has reached or surpassed 2 · HPB hosts, that command will instruct the botnet

to divide itself, as described previously. Otherwise, the botnet will instruct its hosts to

attempt to propagate, and each host will initiate a random scan. If a scan finds a vulnerable,

uninfected host, that host will be infected, and it will join the botnet; the C&C will increment

its vulnScans counter. If the scan finds a vulnerable but infected host, the C&C will be

informed by the scanning host, and the C&C will increment both its infScans and vulnScans

counters. Alternately, if the scan finds an invulnerable host, or fails for any other reason, the

counters will not be affected by that scan.

Simply, the C&C server draws its simple random sample (SRS) to test its inference

directly from the results of its clients’ propagation attempts. This behavior is defined more

formally with pseudocode in Figures 8.1 and 8.2, in which any initially-infected host in an

epidemic, h, acts as a C&C server and runs SPI(h, p0, α, 0, 0). Note

that a botnet may split in the same round in which it decides to halt — this design ensures

that no botnet is larger than 2 · HPB − 1 hosts when it halts. An additional

feature has also been added to the pseudocode: an L constant. This constant is similar

in function to the L constant discussed earlier in this chapter. In the new super-botnet

worm, a botnet can split at most L times before automatically halting. If the adversary

chooses a small value for L, the amount of traffic generated by a given botnet will be

reduced; however, the tradeoff is that many hosts in the epidemic will stop early, presumably

increasing the amount of time for which the worm is spreading, relative to the perfect self-

stopping worm. If speed is more important to the adversary, an L value of ∞ may be

used.


procedure SPI(cc, p0, α, initInf, initVuln)
    cc.infScans ← initInf
    cc.vulnScans ← initVuln
    cc.splits ← 0
    cc.halted ← C(cc, p0, α)
    while cc.halted = N do        ▷ Iterate while loop once per unit of time
        BA(cc, p0, α)
        cc.halted ← C(cc, p0, α)
    end while
end procedure

procedure BA(cc, p0, α)
    if cc controls ≥ 2 · HPB hosts then
        Split into two botnets, choosing a new C&C server cc′ for the new botnet
        cc.splits ← cc.splits + 1
        cc′ runs SPI(cc′, p0, α, cc.infScans, cc.vulnScans)
    else
        Command each host in the botnet to attempt to propagate
        Newly-infected hosts join this botnet
        cc.infScans ← cc.infScans + the number of infected hosts found
        cc.vulnScans ← cc.vulnScans + the number of vulnerable hosts found
    end if
end procedure

Figure 8.1: Pseudocode for a self-stopping super-botnet worm (part 1 of 2)


function C(cc, p0, α)
    if cc.splits = L then        ▷ Automatic halting condition
        return Y
    end if
    n ← cc.vulnScans
    Calculate zlimit from α        ▷ See, e.g., [Moo00, pp. 580–581]
    Calculate minN from p0 and zlimit using Equations 6.2, 6.3, 6.4, and 6.5
    if n < minN then        ▷ Inference is ambiguous, so do nothing
        return N
    end if
    p ← cc.infScans/n
    Calculate z from p, p0, and n using Equation 6.1
    if z ≥ zlimit then        ▷ Inferred that p > p0, so halt
        return Y
    else        ▷ Could not infer that p > p0, so reset the test
        cc.infScans ← 0
        cc.vulnScans ← 0
        return N
    end if
end function

Figure 8.2: Pseudocode for a self-stopping super-botnet worm (part 2 of 2)


8.2 Inter-Botnet Communication

The final feature of the self-stopping super-botnet worm to be described is the inter-

botnet communication mechanism. The first component of the inter-botnet communication

design functions to distribute botnets’ public keys around the super-botnet. Each C&C

server maintains H slots in its memory containing routing information describing how to

contact some botnet (that botnet’s public key and its C&C server’s IP address). Each C&C

server initializes all H slots to contain its own routing information. Every time a new bot-

net splits off from an old one, the two botnets trade routing information. Similarly, each

time a redundant infection attempt occurs, the two infected hosts contact their C&C servers

to get routing information to trade, and deliver the results of that trade to their respective

C&C servers. This trading process uses the hockey card algorithm originally presented in

Section 7.3.

The second component of the inter-botnet communication design allows for botnets to

share the samples they are collecting to make their inferences, similar to the concept of data

sharing presented in Section 6.5. When a redundant infection attempt occurs involving two

hosts in different botnets, those hosts can “piggyback” the current random samples being

collected by their C&C servers onto the routing information exchange. In this way, one

botnet can acquire the other’s infScans and vulnScans counters and add them to its own, al-

lowing its sample to grow more quickly. To prevent one C&C from acquiring stale data from

another that has halted (and hence not performed any scans for some time), halted botnets

will not engage in these data exchanges. This behavior, running alongside the hockey card

algorithm, is formalized in Figures 8.3 and 8.4, which show pseudocode from the point of

view of one infected host and its C&C server. The P procedure is executed by each

host in a botnet when it receives a command from the C&C server to attempt to propagate.


procedure P(host, cc)
    target ← a host chosen by a random scan performed by host
    if target is vulnerable then
        if target is already infected then
            host informs cc of the scan result
            cc increments both cc.infScans and cc.vulnScans
            host sends an identifier for cc to target (e.g., cc's IP address)
            host informs target that cc's botnet is actively propagating
            host receives an identifier for target's botnet
            host learns whether target's botnet is actively propagating
            if host and target are in different botnets then
                HC(host, cc, target)
                if both botnets are actively propagating then
                    SE(host, cc, target)
                end if
            end if
        else
            host informs cc of the scan result
            cc increments only cc.vulnScans
            host infects target
            target joins the botnet commanded by C&C server cc
        end if
    end if
end procedure

procedure HC(host, cc, target)
    host and target agree on a random number N ∈ [1, max(1, H − 1)]
    host sends N to cc
    cc partitions its routing information memory into X and Y, where |X| = N
    cc sends X to host
    host sends X to target
    host waits to receive X′ from target
    host sends X′ to cc
    cc saves the concatenation of Y and X′ as its new routing information memory
end procedure

Figure 8.3: Pseudocode for inter-botnet communication during the spread of a self-stopping super-botnet worm (part 1 of 2)


procedure SE(host, cc, target)
    host requests sample information from cc
    cc sends cc.infScans and cc.vulnScans to host
    host sends both values to target
    host waits to receive infScans′ and vulnScans′ from target
    host sends infScans′ and vulnScans′ to cc
    cc.infScans ← cc.infScans + infScans′
    cc.vulnScans ← cc.vulnScans + vulnScans′
end procedure

Figure 8.4: Pseudocode for inter-botnet communication during the spread of a self-stopping super-botnet worm (part 2 of 2)

8.3 Performance

To test the performance of the self-stopping super-botnet worm, we simulated its perfor-

mance in an address space of size 2^32 with either 131072, 750000, or 1500000 vulnerable

hosts. We varied the target infection percentage between p0 = 0.500, 0.750, 0.900, 0.990,

and 0.999. Based on the results of the inter-botnet communication experiments performed

in Chapter 7, we seeded the infection with 100 hosts; also, for consistency with that chapter,

we set HPB = 100. For now, we let L = ∞. Finally, as in Chapter 6, we

begin by setting α = 0.05.

The results of this experiment, averaged over five trials for each (V, p0) pair, are sum-

marized in Table 8.1. Notice that the percent of the vulnerable population that is infected

is consistent across different vulnerable population sizes for each p0 value, as seen by read-

ing across the rows of the table. For example, when p0 = 0.500, the self-stopping super-

botnet worm consistently infected approximately 58% of the vulnerable population. There

is no significant difference between the percent infected when V = 131072, which was ap-

proximately 58.19% on average, when V = 750000 at approximately 58.06%, and when

V = 1500000 at approximately 57.79% on average (one-way ANOVA test: F = 1.5224,

df = 2, 12, P = 0.2575).


                   Vulnerable population (V)
                131072      750000      1500000
     0.500      58.19%      58.06%      57.79%
     0.750      81.23%      80.90%      81.02%
p0   0.900      93.01%      92.99%      93.05%
     0.990      99.42%      99.43%      99.42%
     0.999      99.95%      99.96%      99.96%

Table 8.1: Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.05

Despite this consistency, the worm did overshoot the target infection percentage. Read-

ing across the rows of Table 8.1, we see that the percent of the vulnerable population is

higher than the target p0 for all tested combinations of p0 and V . As discussed in Chapter 6,

this is not surprising, since the worm must infer with confidence 1 − α that the percent in-

fected, p, is greater than p0. So, drawing inspiration from the results of Chapter 6, we ask:

is it possible to reduce the number of hosts infected by adjusting α?

To answer this question, we first define a metric that we call absolute error. Given a trial

measuring the final infection percentage of a self-stopping super-botnet worm for a single

(V, p0) pair, the absolute error of the trial is computed as

|100 · p0 − inf %| ,

where inf % is the final infection percentage of the trial. For example, if the final infection

percentage in a test of a self-stopping super-botnet worm is 58.2%, but p0 is 0.500, then the

absolute error is 8.2. In short, the absolute error metric measures how far an actual infection

percentage is from the intended infection percentage.

To test the effect of adjusting α, we repeated the previous experiment for different values

of α, ranging from 0.05 to 0.95. We restricted our tests to the case where V = 131072, both

for clarity, and because the self-stopping super-botnet worm performs consistently across

different sizes of vulnerable population. After each test of the super-botnet worm, we mea-


[Plot: absolute error (0 to 25) versus α (0 to 1), with one curve for each target p0 = 0.500, 0.750, 0.900, 0.990, and 0.999]

Figure 8.5: The effect of different α values on the absolute error in the final infection percentage for a self-stopping super-botnet worm with different target infection percentages (p0)

sured the absolute error in the final infection percentage. The test for each (p0, α) pair was

repeated five times; the average absolute error over these tests is displayed in Figure 8.5.

Predictably, the tests where p0 was large (namely, p0 = 0.990 and p0 = 0.999) had small

absolute error, regardless of α. The only way to get a large error measurement would be if

the worm undershot its target infection percentage by a sizeable amount, which is itself un-

likely given the large p0 value used in the inference calculations. Why is it unlikely? Recall

that the inference technique used by the self-stopping super-botnet worm first computes a

value called zlimit from α. Then, the worm computes a z statistic based on p0, the sample

proportion p, and the sample size n, as per Equation 6.1, where the sample size is dictated

by p0 and zlimit, as per Equations 6.2, 6.3, 6.4, and 6.5. Interestingly, if we fix p = p0 − c for

some constant c (e.g., c = 0.02, meaning that the worm collected a sample with 2% fewer

infected hosts than p0), then z actually decreases as p0 approaches either 0 or 1. This de-

crease means that a self-stopping super-botnet worm is more likely to undershoot its target


[Plot: z statistic values (−10 to 0) versus target infection percentage (0 to 1), showing the curves for z and zlimit]

Figure 8.6: The effect of different p0 values on the computation of the z statistic when p is 0.02 smaller than p0, using the minimum acceptable sample size for p0 and α, with α = 0.95

when p0 is near 0.5 than when p0 is a more extreme value, because the worm is then more likely to infer that p > p0 (since z ≥ zlimit) even though the true proportion is below the target. This phenomenon is illustrated in Figure 8.6, which shows how z

compares to zlimit as p0 varies from 0.02 to 0.999, with α = 0.95 and c = 0.02.
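The effect can be reproduced numerically. The Python sketch below fixes the sample size n instead of recomputing the minimum acceptable sample size for each p0 (as Figure 8.6 does), and the value of n is our own choice; even so, it shows the same qualitative behaviour, with z falling further below zlimit as p0 moves away from 0.5.

```python
import math
from statistics import NormalDist

ALPHA, C_OFFSET, N = 0.95, 0.02, 400          # alpha and c from the text; n is our choice
z_limit = NormalDist().inv_cdf(1 - ALPHA)     # about -1.645 for alpha = 0.95

for p0 in (0.02, 0.1, 0.3, 0.5, 0.7, 0.9, 0.999):
    p = p0 - C_OFFSET                         # sample proportion 2% below the target
    z = (p - p0) / math.sqrt(p0 * (1 - p0) / N)
    print("p0 = %5.3f  z = %7.2f  halts early: %s" % (p0, z, z >= z_limit))
```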

Returning to our observations on absolute error, we see that the value of α did have

a more meaningful effect on the error experienced by worms with targets p0 = 0.500,

p0 = 0.750, and p0 = 0.900. By observing Figure 8.5, we see that the absolute error was

generally the smallest when α was approximately 0.20 to 0.30. Note that the adversary’s

choice of p0 affected which α value produced the smallest absolute error. For example, when

p0 = 0.900, the average absolute error over five trials when α = 0.35 was only 0.19; how-

ever, when α = 0.20, the average absolute error grew to 1.02. This is a significant difference

(two-tailed unpaired t-test: t = 8.7929, df = 6, P = 0.0001). However, when p0 = 0.500,

α = 0.20 produced the best results. The average absolute error in this case was 0.36, as


                   Vulnerable population (V)
                131072      750000      1500000
     0.500      49.24%      49.00%      48.94%
     0.750      74.93%      74.96%      74.91%
p0   0.900      90.68%      90.49%      90.53%
     0.990      99.20%      99.17%      99.21%
     0.999      99.91%      99.94%      99.94%

Table 8.2: Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.25

opposed to 4.53 in the case α = 0.35. Again, this is a significant difference (two-tailed

unpaired t-test: t = 17.2476, df = 6, P < 0.0001).

This experiment demonstrated that an adversary may base their choice of α on their

choice of p0. However, it also demonstrated that α = 0.25 is a reasonable choice for any

of the tested p0 values, based on a visual inspection of the absolute error graph in Fig-

ure 8.5. As such, we will fix α = 0.25 for future experiments. As a concrete demonstration

of the absolute error produced by a self-stopping super-botnet worm with α = 0.25, we re-

peated the experiment in which the worm was simulated in different sizes of vulnerable

population with different target infections. As before, there were 100 seed infections and

HPB = 100. The results of this experiment, in which the final infection percent-

ages are closer to the target infection percentage than before, are summarized in Table 8.2.

For example, in the case V = 131072 and p0 = 0.500, the average infection percentage when

α = 0.05 was 58.19. With α = 0.25, the average infection percentage over five trials was

49.24. That is, the average absolute error dropped from 8.19 to 0.76, which is a significant

difference (two-tailed unpaired t-test: t = 16.8023, df = 7, P < 0.0001).

These results illustrate that the self-stopping super-botnet worm is quite accurate, rela-

tive to its target infection percentage, p0. We have yet to consider, though, how this worm's epidemic compares to that of a perfect self-stopping worm. This issue will be the topic of

the next section.

8.4 Comparison to the Perfect Self-Stopping Worm

To fully explore the self-stopping performance of the new super-botnet worm, we must

not only focus on the accuracy of the worm, but also on its performance relative to the

perfect self-stopping worm. That is, how much network traffic does this worm generate, and

how long do its epidemics last?

Based on the results in Table 8.2, we see that the new super-botnet worm consistently in-

fects less than 100% of the vulnerable population. As such, by the results of Appendix A, the

number of scans performed during an epidemic is optimal, relative to the perfect random-

scanning self-stopping worm. However, this measurement does not take into account the

overhead of communication between a C&C server and its client hosts, nor messages passed

between hosts in different botnets to facilitate information exchanges. How much additional

communication do these control messages introduce? Very little. For example, averaged

over the five trials in which V = 131072 and p0 = 0.900, there were 9.3 · 10^7 overhead messages, compared to 1.0 · 10^10 random scans — over two orders of magnitude fewer. In

essence, the control messages have little effect on the amount of traffic generated by this

worm.

The self-stopping super-botnet worm does not perform optimally, however, when we

measure the epidemic duration. In those same trials of the super-botnet worm, the epidemic

lasted an average of 7.0 · 10^5 simulation steps to infect 118854.6 hosts. The perfect self-stopping worm, seeded with 100 infections as the super-botnet worm was, would take only 3.1 · 10^5 units of time to infect the same number of hosts. The super-botnet worm took over

twice as long to halt simply because there is no guarantee that all of the botnets will halt

simultaneously. This comparison is summarized in Table 8.3.

                          Super-botnet worm    Perfect worm
   Hosts infected         118854.6             118854.6
   Time to halt           698503.8             309909.3
   Scans                  10173778141.2        10188028735.0
   Overhead messages      93454947.0           N/A
   Total communication    10267233088.2        10188028735.0

Table 8.3: Comparison of the self-stopping super-botnet worm with p0 = 0.900 and the perfect self-stopping random-scanning worm, both with 100 seed infections, when V = 131072

This result also demonstrates

that a self-stopping worm that generates only a single traditional botnet with a centralized

C&C to perform population inference would likely outperform a super-botnet worm, in

terms of epidemic duration at least, though the tradeoff for the adversary would be a single

point of failure for the self-stopping algorithm and the resulting botnet.

The comparison between the perfect worm and the super-botnet worm does not fully

illustrate the sub-optimal performance of the super-botnet worm, with respect to its epidemic

duration. Another issue with its performance is the high variability in the epidemic duration

as p0 approaches 1. To demonstrate this point, we simulated the super-botnet worm in

an environment with V = 131072 beginning with 100 seed infections. We varied p0 from

0.500 to 0.990, and recorded the epidemic duration over twenty trials for each p0 value.

The results of this experiment are summarized in Figure 8.7; the error bars represent one

standard deviation in either direction from the mean. Note that as p0 gets large, so too does

the standard deviation of the epidemic duration.

Figure 8.7: The effect of different p0 values on the mean and standard deviation of a self-stopping super-botnet worm's epidemic duration

There are two likely reasons for this increased variability in epidemic duration as p0 gets large. First, there is the highly random operation of scanning for the last of the vulnerable

hosts to infect. Additionally, the decision of any botnet to halt early will halt the propagation

attempts of many hosts — all of the hosts in that botnet. As such, there could be many or

few hosts performing the later infection attempts, depending on the outcome of each bot-

net’s repeated inference calculations. This presents an interesting tradeoff for the adversary:

while large p0 values result in epidemics with low absolute error, as we saw earlier, these

epidemics are highly variable in duration.

Another interesting topic to consider is how the L constant affects the performance

of the self-stopping super-botnet worm. Until now, we have not set any limit on the number

of times each botnet can split prior to halting — that is, we have used L = ∞. It is

reasonable to assume that setting a finite limit should increase the duration of the epidemic,

as there could be fewer hosts actively propagating near its end. To test whether this outcome

is the case, we reran the previous experiment in which p0 was varied and twenty trials of the


super-botnet worm epidemic were simulated for each p0 value. This time, however, we set

L to either be a moderate value of 3 or a restrictive value of 2.

When L = 3, there was typically no significant effect on the performance of the

worm. For example, when p0 = 0.500, the epidemic duration was approximately 4.1 · 10^5, compared to approximately 4.2 · 10^5 when L = ∞. The difference was not statistically significant (two-tailed unpaired t-test: t = 0.3777, df = 35, P = 0.7080). This result was mirrored for all of the other p0 values except, interestingly, p0 = 0.750. In this case, the epidemic duration increased significantly, from approximately 4.8 · 10^5 when L = ∞ to approximately 5.5 · 10^5 when L = 3 (two-tailed unpaired t-test: t = 2.5556, df = 36,

P = 0.0150). What is more, significantly fewer hosts were infected over this longer time pe-

riod: approximately 74.6% of the vulnerable population, compared to approximately 74.8%

when L = ∞ (two-tailed unpaired t-test: t = 2.2891, df = 34, P = 0.0284). One likely

cause of this unforeseen result is that the finite L constant was “artificially” halting the

activity of a large number of botnets when the epidemic was just shy of the target infec-

tion percentage, leaving fewer hosts scanning for additional targets to infect. The end result

is not only a longer epidemic duration, but also fewer hosts infected near the end of the

outbreak.

The L = 2 results add credence to this hypothesis. Here, the effect of changing

L from ∞ to 2 typically resulted in no significant change to the epidemic duration, but

decreased the final infection percentage. This result illustrates that a restrictive L value leaves fewer active hosts performing the final scans, resulting in smaller infection percentages after

a given period of time. The only exception to this result was when p0 = 0.500. Here, the

epidemic duration fell significantly with the introduction of a finite limit, but so too did the

infection percentage. As such, this case offers little insight into the effects of a finite L

value.


The conclusion that we can draw from these results is that an adversary may use a finite

L value to prevent any individual botnet from generating too much traffic, though there

is a risk of difficult-to-predict side effects on the worm’s performance as a tradeoff. If an

adversary is concerned about this risk, an L value of ∞ is more appropriate.

8.5 Super-Botnet Capabilities

Until now, we have focused on how well this new worm design could function as a self-

stopping worm. That is, how accurate it is relative to the target percentage p0, and how long its epidemics last. This self-stopping super-botnet worm poses another potential threat: the

creation of a super-botnet. This section will analyze how effective this worm is at generating

a decentralized super-botnet.

From the perspective of the adversary, there are three important metrics that define the

usefulness of a super-botnet:

1. The size of the individual botnets. There is a tradeoff between botnets being large

enough to be useful to the adversary (e.g., for renting to spammers) versus the poten-

tial for a large number of infected hosts being lost to the adversary should an individ-

ual botnet be disabled.

2. How many individual botnets the adversary can command. In Chapter 7, we defined

the single-seed connectivity of a super-botnet as the number of individual botnets that

would receive a command sent to a single seed infection, on average. The command

would traverse the inter-botnet communication graph, in which there were H links

(including potential duplicate links and self links) to each botnet.


3. The resilience of the super-botnet communication structure. Ideally, the adversary

would like the single-seed connectivity of the super-botnet to remain high even when

many individual botnets are disabled.

The tree-based super-botnet worm in Chapter 7 performed well on all three counts. The

size of the individual botnets was fixed at HPB, and our experiments showed

that the initial single-seed connectivity of a super-botnet generated with this worm was

100% when H > 1, and the resilience was highest for H ≥ 5. However, this worm was not

capable of self-stopping when the size of the vulnerable population, V , was unknown to the

adversary. Does the self-stopping capability in the new super-botnet worm interfere with

the usefulness of the generated super-botnet?

While the size of the individual botnets generated by this new worm will not be fixed

at HPB, the adversary is still guaranteed that the size of each botnet will be be-

tween HPB and 2 · HPB − 1. As such, the size of individual botnets

is not a concern. However, how will the self-stopping feature in this new worm affect the

single-seed connectivity of the resulting super-botnet?

To investigate this issue, we simulated a self-stopping super-botnet worm starting with

100 seed infections in a vulnerable population of size V = 1500000. We fixed α = 0.25,

HPB = 100, and L = ∞. We varied the number of links to each botnet in

the communication graph from H = 1 to H = 10, and we also varied the target infection

percentage, p0, among the values 0.500, 0.750, 0.900, 0.990, and 0.999, to vary the number

of communication opportunities between botnets (caused by redundant infection attempts).

The experiment for each (H, p0) pair was repeated five times, to account for randomness.

As with the tree-based super-botnet worm, the single-seed connectivity of the resulting

inter-botnet communication graph was 100% for every trial in which H > 1, demonstrating


that the new self-stopping algorithm does not interfere with establishing a highly-connected

super-botnet.

To test the resilience of the super-botnet communication structure, we fixed p0 = 0.999,

for clarity of exposition. We retained the same V , α, HPB, and L values as in

the previous experiment. We then varied H among 2, 3, 4, 5, and 10, to ascertain the effect

that the number of links in the communication graph would have on resilience. For each

H value, we generated twenty simulated super-botnets. This number was increased from

the typical five trials used throughout this thesis to improve the results of some borderline

statistical tests. Then, for each super-botnet, we disabled up to 5000 individual botnets,

using both random and high-degree-first targeting (as in Chapter 7), recording the single-

seed connectivity of the super-botnets.
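As an illustration of the connectivity measurements reported below, the sketch here computes single-seed connectivity by a breadth-first traversal of an inter-botnet communication graph after a set of botnets has been disabled. The graph construction and the random-attack model are simplified stand-ins, not the simulator used in this thesis.

import random
from collections import deque

def single_seed_connectivity(links, seed, disabled):
    # links: botnet id -> list of botnet ids it forwards commands to
    # (duplicate links and self links are permitted, as in the text).
    # Returns the number of live botnets, including the seed's own,
    # that receive a command originating at `seed`.
    if seed in disabled:
        return 0
    seen, queue = {seed}, deque([seed])
    while queue:
        botnet = queue.popleft()
        for neighbour in links.get(botnet, ()):
            if neighbour not in seen and neighbour not in disabled:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen)

# Toy super-botnet: 1000 botnets, each holding H = 4 links to random botnets.
rng = random.Random(1)
links = {b: [rng.randrange(1000) for _ in range(4)] for b in range(1000)}
disabled = set(rng.sample(range(1000), 300))   # a random attack on 300 botnets
print(single_seed_connectivity(links, seed=0, disabled=disabled))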

The results were similar, though not identical, to the results achieved with the tree-based

super-botnet worm. Unlike in Chapter 7, these simulations did not generate precisely 15000

individual botnets. The number of botnets generated in each simulation was determined at

run-time by the propagating worm. For example, in the H = 4 trials, an average of 10836.7

individual botnets were created. As such, 5000 botnets being disabled could potentially

have a larger effect than with the tree-based super-botnet worm.

Based on the results of the random attack experiments, illustrated in Figure 8.8, it is

reasonable to assume that the adversary would choose H = 4. In the H = 4 case, the av-

erage single-seed connectivity after 5000 botnets were disabled was 2899.40, compared to

2123.80 in the H = 3 case — a statistically-significant difference (two-tailed unpaired t-test:

t = 8.0727, df = 37, P < 0.0001). When we compare the H = 4 case to the H = 5 case, in

which the average single-seed connectivity was 3046.96, we do not see a significant differ-

ence (two-tailed unpaired t-test: t = 1.6059, df = 37, P = 0.1168).

Figure 8.8: The average number of botnets that receive a command sent to a single seed as random attacks bring down individual botnets created by the self-stopping super-botnet worm, for various values of H

The self-stopping version of the super-botnet worm benefited more than the tree-based

version from a larger H value when faced with oracular attacks dismantling 5000 individual

botnets, as illustrated in Figure 8.9. With this version of the super-botnet worm, unlike

with the tree-based version, there was a statistically-significant difference in the single-seed

connectivity between the H = 5 and H = 10 cases. With H = 5, the average single-seed

connectivity was 2378.43, compared to 3218.87 when H = 10 (two-tailed unpaired t-test:

t = 9.2039, df = 36, P < 0.0001).

However, if fewer individual botnets are disabled (e.g., 3000 instead of 5000), then there

is no incentive for the adversary to choose such a large H value. With 3000 disabled botnets,

there is a statistically-significant difference between the H = 3 case, in which the average

single-seed connectivity was 5100.12, and the H = 4 case with 5653.97 (two-tailed unpaired

t-test: t = 5.4944, df = 37, P < 0.0001). However, there is no significant difference between

the H = 4, H = 5, and H = 10 cases, with an average connectivity between 5653.97 and 5832.16 (one-way ANOVA test: F = 1.5886, df = 2, 57, P = 0.2131).

Figure 8.9: The average number of botnets that receive a command sent to a single seed as oracular attacks bring down individual botnets created by the self-stopping super-botnet worm, for various values of H

Aside from motivating why H = 4 is a likely choice for an adversary to make, these

experiments also show that the inter-botnet communication structure created by this worm

is resilient in the face of defenders’ attacks. Even with 3000 botnets out of the original

average of 10836.7 disabled by oracular attacks (almost one-third), the adversary was still

able to command over one-half of the original botnets from any one seed, on average. In

short, the self-stopping capabilities of this worm did not negatively impact the usefulness to

the adversary of the super-botnet.


8.6 Defense Against Self-Stopping Super-Botnets

Given the success of the self-stopping super-botnet worm, the next step is to turn to

defense: how can defenders protect themselves and others against the possible future release

of this type of worm? One method available to defenders is to inject false information about

p into a botnet as it spreads. Specifically, honeypots can be configured that pretend to be

worm instances; if a real worm instance hits a honeypot on a random scan, the honeypot will

share its “proportion data” with the infected host. The intent is that, if the shared proportion

data is designed properly, the infected host’s botnet will decide to halt soon after receiving

the fake data. In the best case for the defender, over 2 · HPB infected hosts

could halt simultaneously, if the attacked botnet was about to split. Even if the botnet was

not about to split when it received the defender’s data, it is still possible for a defender to

simultaneously halt up to 2 · HPB − 1 infected hosts.
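The following toy calculation (not from the thesis) illustrates why a single honeypot reply can tip a botnet's inference. It assumes, as the infScans and vulnScans counters in the experiment below suggest, that a botnet estimates the infected proportion as the ratio of those pooled counters.

def pooled_proportion(samples):
    # samples: list of (infScans, vulnScans) pairs pooled at a C&C server.
    inf = sum(i for i, _ in samples)
    vuln = sum(v for _, v in samples)
    return inf / vuln, vuln

# A botnet's own modest sample: 400 vulnerable hosts seen, 180 already infected.
own = [(180, 400)]
print(pooled_proportion(own))                        # proportion 0.45, n = 400

# One scan hits a honeypot claiming infScans = vulnScans = 100000.  The pooled
# proportion is dragged to roughly 0.998 and the sample size balloons, so an
# inference test against any plausible p0 concludes p > p0 and the botnet halts.
print(pooled_proportion(own + [(100000, 100000)]))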

Unfortunately for defenders, the worm’s self-stopping mechanism centers around sam-

ples being drawn from the population of vulnerable hosts. As such, if fewer vulnerable hosts

are infected at a given time because a defender shut down some of the botnets, the other bot-

nets will compensate by splitting and propagating more. The mantra for a defender, then,

will likely be to prune the spread of the worm early, hopefully before too many individual

botnets have been created.

To test the effectiveness of this defensive mechanism, a subset of the experiments used to

construct Table 8.2 were repeated; this time, however, there were honeypots in the address

space, each willing to share “sample data” of infScans = 100000 and vulnScans = 100000.

For brevity, p0 was fixed at 0.500, since the target infection percentage has nothing to do

with the goal of early pruning. Otherwise, the experiment remains unchanged from before:

HPB = 100, L = ∞, α = 0.25, and there were 100 initial seeds. V was var-

ied between 131072, 750000, and 1500000, to gauge the effect on defense effectiveness of


the probability of a random scan hitting a honeypot, relative to the probability of a scan hit-

ting a vulnerable or infected host. As for the actual number of honeypots, that number was

varied between 0 and 10000 in increments of 1000, to ascertain how increasing the number

of honeypots increased their effectiveness.

Each simulation was repeated twenty times, to compensate for the potential of a high

degree of randomness in how frequently honeypots are hit. In the V = 131072 simulations,

where the probability of hitting a honeypot on a random scan was high relative to the prob-

ability of finding a vulnerable or infected host, even the addition of 1000 honeypot hosts

quickly reduced the average final infection percentage. A larger number of honeypots was

necessary when V = 1500000. One conclusion to draw from these results is that if there are

a large number of vulnerable hosts, a defender should (if possible) set up more honeypots,

to increase the probability of a random scan hitting a honeypot. The result of an increas-

ing number of honeypots on the final infection percentage for the three different sizes of

vulnerable population is illustrated in Figure 8.10.

One interesting point in these results is that even a small number of honeypots can help

reduce the number of hosts infected. For example, when V = 1500000, even introducing

1000 honeypots reduced the final infection percentage from 48.9% to 48.4% on average.

While this difference is by no means large, it is highly significant (two-tailed unpaired t-

test: t = 9.6349, df = 35, P < 0.0001). This result demonstrates that the mantra of early

pruning discussed earlier is not correct. That is, honeypot defenses do not have to stop an

epidemic in its tracks early on in order to be effective.

Because of this effectiveness, an adversary may attempt to include anti-defensive mea-

sures in the worm. For example, when host A communicates with host B (which may be a

honeypot), host A may choose to ignore host B’s shared proportion data, if it differs signif-

icantly from host A’s. In this way, infected hosts could still share their presumably similar

data, to increase the sample sizes held by their C&C servers faster; however, the avenue of attack for defenders targeting the botnets would be reduced.

Figure 8.10: The effect of honeypots serving false samples on the final infection percentage of the self-stopping super-botnet worm for different values of V

Alternately, an adversary may simply disable inter-botnet sharing of sample data. The

tradeoff is that if the individual botnets do not share their proportion data, it will take longer

for each botnet to collect a large enough sample of vulnerable hosts to perform statistical

inference. As such, we would predict that the worm would be more likely to overshoot its

target infection percentage, p0. To demonstrate the effect of disabling inter-botnet sample

sharing, we repeated the previous set of experiments using zero honeypots, but with inter-

botnet sharing of sample data disabled.

The results of this experiment, summarized in Table 8.4, were as predicted. Because

each botnet had to perform more scans prior to performing each inference calculation,

the worm consistently overshot its target infection percentage. The effect was highly pro-

nounced in the p0 = 0.999 case. When V = 131072, the worm performed an average of

6.1 · 10^11 scans during the epidemic, which infected all V vulnerable hosts. This is over an order of magnitude larger than the 5.3 · 10^10 scans that the perfect self-stopping worm would take to infect the same number of hosts, were it seeded with 100 infections.

              Vulnerable population (V)
   p0        131072     750000     1500000
   0.500     56.57%     56.41%     56.33%
   0.750     82.44%     82.40%     82.41%
   0.900     95.90%     95.87%     95.88%
   0.990     100.00%    100.00%    100.00%
   0.999     100.00%    100.00%    100.00%

Table 8.4: Self-stopping super-botnet worm infection percentages for different values of V and p0 when α = 0.25 and inter-botnet sample sharing is disabled

The adversary could adjust α to restore the worm’s accuracy when p0 = 0.500 or

p0 = 0.750. However, even with α = 0.95, the worm still consistently overshoots its tar-

get for the p0 values of at least 0.900. For example, when V = 131072 and p0 = 0.900, the

worm infected an average of 92.26% of the vulnerable population over five trials, yielding

an average absolute error of 2.26. Looking back at Table 8.2, we see that the average ab-

solute error when V = 131072 and p0 = 0.900, with sample sharing enabled and α = 0.25,

was only 0.68. Disabling data sharing caused a significant difference in the worm’s absolute

error here (two-tailed unpaired t-test: t = 13.2432, df = 6, P < 0.0001). The picture is even

worse for the adversary without data sharing when p0 = 0.999. Even with α = 0.95, the

worm still performed an average of 6.1 · 10^11 scans during the epidemic, as before the ad-

justment to α. What these results illustrate is that an adversary may disable sample sharing

to make the worm immune to honeypots providing false data, though the worm’s perfor-

mance and accuracy suffer for higher values of p0.

If the adversary does elect to disable sample sharing, defenders would have to take a

different approach to combating the worm. For example, when scanned, a honeypot could


pretend to join the scanning botnet, and it could record information about the C&C server

and other infected computers. From there, the centralized defense mechanisms discussed

in Chapter 7 could be employed to target the resulting super-botnet directly. This defensive

technique could even be used if sample sharing were enabled, giving defenders two viable

options for combating this threat.

8.7 Summary

A super-botnet, characterized by firepower and resilience, can be constructed by a self-

stopping random-scanning worm. As such, it is not necessary for an adversary to know the

size of the vulnerable population prior to releasing a super-botnet worm. Also, compared to

the tree-based worm, the risk of a large portion of the infection tree being pruned (e.g., by

early disinfection or an egress firewall) is smaller with this new worm.

However, this new propagation algorithm does introduce new defensive avenues against

super-botnets. The use of honeypots providing false samples to propagating hosts can help

to reduce the size of super-botnets created by the self-stopping super-botnet worm. The de-

sign of further defensive techniques, along with the development of the centralized defense

mechanisms developed in the previous chapter, must continue to be a priority, in advance of

a super-botnet attack in the wild.


Chapter 9

Conclusions and Future Work

The focus of this thesis has been resilience of worm code. This resilience ranges from

the longevity of worm code on individual infected hosts, improved through the use of self-

stopping techniques, to the durable structure of a botnet generated by the worm code, in the

case of the worms that construct super-botnets.

We began this thesis by investigating a theoretical, perfect solution to the self-stopping

random-scanning worm problem in Chapter 3. While we were unable to demonstrate any

self-stopping worm design that could match the perfect worm’s performance, in terms of

epidemic duration, we showed in Appendix A that any random-scanning worm infecting

less than 100% of the vulnerable population will match the perfect worm’s performance

in terms of scans performed. Understanding what steps an adversary may take to further

reduce the epidemic duration of self-stopping worms by limiting the number of hosts that

halt prematurely is an interesting future direction of research.

In Chapter 4, we explored some previous work on the self-stopping problem, before

demonstrating in Chapter 5 how a self-stopping worm with consistent performance and a

wide range of target infection percentages could be designed. This worm design, based on

the biological technique of quorum sensing, was highly versatile. It could be augmented

with feedback loops to improve performance, and individual worm instances could employ

cheating behavior to decrease their chances of detection, at a potential cost to the overall

epidemic performance. Future work on this worm could focus on establishing a mathemati-

cal relation between many of the numbers used in the worm code (ranging from autoinducer

and enzyme message strengths and the halting threshold, to feedback values and cheating


percentages) to the final infection percentage achieved by the worm. A better understand-

ing of this relationship, as opposed to empirical, simulation-based results, may help to shed

some light on what configurations of a quorum sensing worm an adversary may use, espe-

cially since there could be significant differences in performance between a worm released

in the wild and one in a simulator.

Chapter 6 attempted to address whether a self-stopping worm could be designed which

used fewer of these empirically-determined numbers. That is, could a self-stopping worm

be designed which used only a few more numbers than the target infection percentage, p0?

We were unable to demonstrate such a design that did not require extensive cooperation

between infected hosts. However, this chapter provides the groundwork for potential future

investigation into this problem.

The notion of cooperation between infected hosts, however, led us to the idea of a super-

botnet — a collection of independent, small botnets, each composed of many infected hosts.

In Chapter 7, we demonstrated how an adversary could build a super-botnet using a tree-

based method, though we ignored the self-stopping problem during this investigation. We

also demonstrated a new form of time-delayed attack that could be launched with a super-

botnet, and we gave a detailed mathematical analysis of the attack in Appendix B. Finally,

we discussed how centralized defensive techniques could be used to combat the decentral-

ized structure of the super-botnet. A potential area of future research would be to explore

how these defensive techniques would work from a usability standpoint. What would be

the best way for network administrators to capture routing information from an individual

botnet and deliver it to some central repository? How could, e.g., anti-virus software be

adapted to easily allow home users to participate in the defense against super-botnets?

Having determined that it is possible to construct a super-botnet, we turned our atten-

tion to how a super-botnet could solve the self-stopping problem. In Chapter 8, we applied


the population inference model first presented in Chapter 6 to a worm that builds a super-

botnet. We demonstrated that infected hosts in a super-botnet topology can easily share their

scan results and use population inference to halt appropriately. Predictably, there was a fair

amount of overhead communication involved in this self-stopping solution (though there

were still several orders of magnitude fewer control messages than random scans). Under-

standing how an adversary may reduce this overhead communication could give defenders

significant insight into future developmental directions that super-botnet worms may take.

More important, however, is for future research to address how the defenses proposed in

Chapter 8 could be quickly deployed. If a self-stopping super-botnet worm spreads rapidly,

how could defenders deploy honeypots rapidly enough to provide false information to the

spreading worm? Having the honeypot act on heuristics may be one possible solution.

Another interesting topic for future research would be to investigate how the eventual

introduction of IPv6 will impact the random-scanning worm paradigm. Random scanning is

not practical in a sparsely populated address space of size 2^128. However, given the simplic-

ity and reliability of random scanning, it is unlikely that this approach to designing worms

will disappear completely. Future research should address the potential for hybrid worm

designs (e.g., a worm may use topological, passive, or hit-list scanning to locate clusters

of hosts in the address space, then random scan within that cluster), and how those designs

could exhibit self-stopping behavior.

Future directions of research aside, the threat posed by self-stopping and super-botnet

worms is very real. Development and deployment of defensive technologies to protect

against these worms is essential to ensuring the safety of users, especially given the poten-

tial for these worms to steal valuable information or even launch large-scale attacks against

unprepared targets.



Appendix A

Expected Infections by Random-Scanning Worms

This appendix will assess the expected number of infections that would occur during

a random-scanning worm epidemic, if the total number of scans performed by all infected

hosts is known. This result, in turn, will yield a theorem about the optimal nature of any

random-scanning worm that halts prior to infecting all hosts. We begin our discussion on

this topic with a definition.

Definition A.1. Let Ns be a random variable that represents the number of hosts that are

infected after s random scans have been performed during a random-scanning worm epi-

demic. If we assume that an epidemic starts with 0 < I ≤ V seed infections, where V is the

size of the vulnerable population, then we have that N0 = I.

What we would like to know is the expected value for Ns when s > 0. We denote the

expected value of Ns as E(Ns) for integers s ≥ 0. Note that since N0 = I, we trivially have that

E(N0) = I.

Lemma A.2. E(Ns) = E(Ns−1) + (V − E(Ns−1))/(A − 1) for all integers s > 0, where V is the size of the vulnerable population and A is the size of the address space.

Proof. Denote Xi as the number of additional hosts that are infected by the i-th random scan. Specifically, Xi is 1 if the host hit by the scan is vulnerable and uninfected, and hence becomes infected. Xi is 0 if the scan was a redundant infection attempt or if the scan does not find a vulnerable host.

From this definition, it follows directly that, for any integer s > 0,

E(Ns) = I + Σ_{i=1}^{s} P(Xi = 1) .

Note that the Xi are not independent variables — the outcome of one scan affects the probability of subsequent scans finding a vulnerable, uninfected host. However, we can split the sum apart, yielding

E(Ns) = I + Σ_{i=1}^{s−1} P(Xi = 1) + P(Xs = 1) ,

which simplifies, given the definition of E(Ns−1), to

E(Ns) = E(Ns−1) + P(Xs = 1) .    (A.1)

Because the Xi are not independent variables, we must split apart the term P(Xs = 1) into all of its component cases. That is,

P(Xs = 1) = Σ_{i=0}^{V} P(Ns−1 = i) · P(Xs = 1 | Ns−1 = i) .

Recalling that the epidemic in question starts with I > 0 initial seed infections, we know that P(Ns−1 = i) = 0 for all i < I. This observation allows us to change the limits on the sum, yielding

P(Xs = 1) = Σ_{i=I}^{V} P(Ns−1 = i) · P(Xs = 1 | Ns−1 = i) .    (A.2)

Assuming that no infected host will ever scan its own address, the probability of a random scan finding a vulnerable, uninfected host, if i out of the V vulnerable hosts are already infected, is (V − i)/(A − 1). Hence, we can rewrite Equation A.2 as

P(Xs = 1) = Σ_{i=I}^{V} P(Ns−1 = i) · (V − i)/(A − 1)
          = (V/(A − 1)) · Σ_{i=I}^{V} P(Ns−1 = i) − (1/(A − 1)) · Σ_{i=I}^{V} i · P(Ns−1 = i) .    (A.3)

Recall that we are guaranteed that I ≤ Ns−1 ≤ V, since there are at least I infections at any time, and there can be at most V. Hence, we know that

Σ_{i=I}^{V} P(Ns−1 = i) = 1 .

Similarly, by the definition of an expected value, we have that

Σ_{i=I}^{V} i · P(Ns−1 = i) = E(Ns−1) .

As such, we can rewrite Equation A.3 as

P(Xs = 1) = V/(A − 1) − E(Ns−1)/(A − 1) = (V − E(Ns−1))/(A − 1) .    (A.4)

By rewriting Equation A.1 using Equation A.4, we get our proof. Namely, given s > 0, we know that

E(Ns) = E(Ns−1) + (V − E(Ns−1))/(A − 1) . ∎

What Lemma A.2 gives us is a recurrence relationship for predicting the number of hosts that will be infected by a random-scanning worm that performs s scans in total. Namely,

E(Ns) = I                                          if s = 0
E(Ns) = E(Ns−1) + (V − E(Ns−1))/(A − 1)            if s > 0 .

Remark A.3. The recurrence formula for E(Ns) can be simplified to

E(Ns) = (I − V) · ((A − 2)/(A − 1))^s + V .

Proof. Define E(Ns) using the established recurrence relationship and let f(s) = (I − V) · ((A − 2)/(A − 1))^s + V. We will show by induction on s that E(Ns) = f(s) for all s ≥ 0.

Component 1. The base case, s = 0, follows directly from the fact that

E(N0) = I = (I − V) · 1 + V = (I − V) · ((A − 2)/(A − 1))^0 + V = f(0) .

Component 2. For the inductive step, we let s ≥ 0 and assume that E(Ns) = f(s). Then,

f(s + 1) = (I − V) · ((A − 2)/(A − 1))^(s+1) + V
         = ((A − 2)/(A − 1)) · [(I − V) · ((A − 2)/(A − 1))^s + V] − ((A − 2)/(A − 1)) · V + V
         = ((A − 2)/(A − 1)) · E(Ns) − ((A − 2)/(A − 1)) · V + V
         = ((A − 2)/(A − 1)) · E(Ns) + V/(A − 1)
         = E(Ns) + (V − E(Ns))/(A − 1)
         = E(Ns+1) . ∎

Remark A.4. If it is possible for infected hosts to scan their own address, all instances of A − 1 in the formula for E(Ns) become A. That is, the modified formula would be E(Ns) = (I − V) · ((A − 1)/A)^s + V.
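As a quick numerical check (not part of the thesis), the closed form of Remark A.3 can be compared against the recurrence of Lemma A.2 directly:

def expected_infected_recurrence(s, V, I, A):
    e = float(I)
    for _ in range(s):
        e += (V - e) / (A - 1)     # Lemma A.2, applied s times
    return e

def expected_infected_closed_form(s, V, I, A):
    return (I - V) * ((A - 2) / (A - 1)) ** s + V   # Remark A.3

V, I, A = 131072, 1, 2**32
for s in (0, 10**6, 10**7):
    print(s,
          round(expected_infected_recurrence(s, V, I, A), 3),
          round(expected_infected_closed_form(s, V, I, A), 3))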

Lemma A.2 and Remark A.3 allow us to visualize the expected number of hosts to be

infected as the numbers of scans performed increases. Figure A.1 shows the diminishing

returns on each scan in the case where V = 131072, I = 1, and A = 2^32. While the expected

number of infected hosts steadily increases at first, that value plateaus as it approaches

V . Note in both Figure A.1 and in the recurrence relationship provided by Lemma A.2

that E(Ns) increases monotonically with s (since Ns−1 ≤ V). It likely comes as no surprise

that increasing the number of scans performed will not decrease the expected number of

infections, though.

But, what is it that Lemma A.2 and Remark A.3 really tell us, aside from the ex-

pected number of hosts to be infected by a random-scanning worm? The real strength of

Lemma A.2 is that it applies to any random-scanning worm. That is, it applies equally well

to the perfect self-stopping random-scanning worm discussed in Chapter 3, as well as any of

the other random-scanning worm designs in this thesis.

Figure A.1: The effect of increasing the number of scans performed by a random-scanning worm on the expected number of hosts infected when V = 131072, I = 1, and A = 2^32

From this fact, we get the following corollary.

Corollary A.5. Let Ns be a random variable representing the number of hosts infected after

s random scans have been performed during a random-scanning worm epidemic that started

with I initial seeds, for an arbitrary random-scanning worm. Let Ps be a random variable

representing the number of hosts infected after s random scans have been performed during

a perfect self-stopping random-scanning worm epidemic that started with I initial seeds.

Then E(Ns) = E(Ps).

This corollary leads to the following theorem about the optimal nature of any random-

scanning worm that halts prior to infecting all vulnerable hosts. We use the same definitions

for Ns and Ps as in the previous corollary, and we define the function M such that M(x) is the

expected number of scans performed by the perfect self-stopping random-scanning worm


to infect x vulnerable hosts (for complete details on the function M, see the M-P

function in Chapter 3).

Theorem A.6. Let i = E(Ns) < V, for some number of scans s. Then M(i) ≤ s < M(i + 1).

Proof. Assume s ≥ M(i + 1). Then E(Ps) ≥ i + 1, by the definitions of M and Ps. But since E(Ns) = E(Ps), this means that i ≥ i + 1, which is a contradiction. Similarly, if s < M(i), then E(Ps) < i, and hence i = E(Ns) = E(Ps) < i, which is also a contradiction. Therefore M(i) ≤ s < M(i + 1). $\square$

What this theorem tells us is that, if the difference between M(i) and M(i + 1) is negligible (that is, the difference between the number of scans the perfect self-stopping random-scanning worm would take to infect i hosts and i + 1 hosts), then the number of scans performed by an arbitrary random-scanning worm to infect i < V vulnerable hosts is optimal, in the sense that it performs the same number of scans that the perfect self-stopping random-scanning worm would need to infect the same number of hosts.

Whether the difference between M(i) and M(i + 1) is negligible depends largely on context. Using the M-P function from Chapter 3, we can fix $A = 2^{32}$, $V = 131072$, and $I = 1$, and compute some example values. The difference between M(100000) = 6182260931.08 and M(100001) = 6182399157.37 is only 138226.29 scans, less than a 0.00224% increase in the number of scans. The difference between M(131071) = 48793894733.04 and M(131072) = 53088862028.04, on the other hand, is 4294967295.00 scans, or more than an 8.8% increase. However, even in cases where the difference between M(i) and M(i + 1) is large, Theorem A.6 still bounds the amount of effort a random-scanning worm may "waste" between infecting host i and halting, measured as the number of unsuccessful random scans performed while looking for an (i + 1)-th host to infect before stopping.
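The exact M-P function is defined in Chapter 3 and is not reproduced here. As a rough illustration only, the following Python sketch assumes that M(x) can be computed as a sum of expected geometric waiting times, with (A − 1)/(V − k) scans expected to move from k to k + 1 infections; under that assumption it reproduces the example differences quoted above. The function name and default parameters are my own.

def M(x, V=131072, I=1, A=2**32):
    # Assumed form: expected scans for the perfect self-stopping worm to
    # infect x hosts, starting from I seeds, summing one geometric
    # expectation per additional infection.
    return sum((A - 1) / (V - k) for k in range(I, x))

if __name__ == "__main__":
    print(M(100001) - M(100000))   # roughly 138226.29 extra scans
    print(M(131072) - M(131071))   # roughly 4294967295 extra scans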


Appendix B

Secret Splitting in Super-Botnets

Assume that there are B individual botnets in a super-botnet, and that a secret is split

into C pieces, with each botnet knowing one random piece of the secret. How many botnets

would a defender have to compromise, on average, to learn the entire secret? How many

botnets would a defender have to disable, on average, to destroy one piece of the secret

entirely?

So long as the piece of the secret that each botnet knows is chosen at random, there are approximately R = B/C botnets that know each piece of the secret.

As such, the problems of how many botnets need be compromised or disabled reduce to

more easily-stated problems. Namely, assume that there are C different colors of marbles,

and R marbles of each color are placed into a bag. What is the expected number of marbles

that would have to be drawn from the bag, without replacement, until at least one of each

color of marble has been drawn? Similarly, what is the expected number of marbles that

would have to be drawn from the bag, without replacement, until all the marbles of one

color have been drawn?

It may be tempting to analyze these problems using a sum of the expected values on

multiple geometric distributions, as was done during the analysis of the perfect random-

scanning self-stopping worm in Chapter 3. For example, one could conceive of the number

of marbles that would have to be drawn to get at least one of each color as the sum of the

expected values of C different geometric distributions, which is equal to

$$\sum_{i=1}^{C} \frac{C}{i} \,.$$


That form of analysis, however, assumes that each marble is placed back into the bag after

being drawn. So, while using a sum of expected values of geometric distributions would

provide a good approximation to the expected number of marbles that would have to be

drawn, the following methods are more accurate. This analysis has been confirmed by

simulating drawing marbles from a bag without replacement.
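The simulation itself is not reproduced in the thesis text; the following Python sketch shows one way it could be carried out, estimating both expectations at once by drawing from a shuffled bag. The function name and the trial parameters are mine.

import random

def simulate(C, R, trials=10000):
    # Estimate E(T), the draws needed to see every colour at least once, and
    # E(S), the draws needed to exhaust some colour, without replacement.
    total_T = total_S = 0
    for _ in range(trials):
        bag = [colour for colour in range(C) for _ in range(R)]
        random.shuffle(bag)
        seen = set()
        remaining = [R] * C
        T = S = None
        for draws, colour in enumerate(bag, start=1):
            seen.add(colour)
            remaining[colour] -= 1
            if T is None and len(seen) == C:
                T = draws
            if S is None and remaining[colour] == 0:
                S = draws
            if T is not None and S is not None:
                break
        total_T += T
        total_S += S
    return total_T / trials, total_S / trials

if __name__ == "__main__":
    print(simulate(C=10, R=20))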

B.1 Capturing Each Piece of the Secret

Recall that the number of botnets that a defender would have to compromise, on average,

to learn each piece of the secret is equivalent to the solution of the following problem. How

many marbles would one expect to draw without replacement, if there are C different colors

of marble and R marbles of each color, in order to have drawn at least one of each color? We

present a solution to this problem, a non-trivial variant of the collector’s problem [Wol06].

Imagine that, instead of stopping drawing marbles from the bag once one of each color

has finally been drawn, the marbles are drawn one at a time from the bag and placed on

a table from left to right, until there are no marbles left in the bag. Denote the first color

of marble that is drawn as color 1. One may or may not draw additional marbles of color

1 before drawing a different color. Denote this next color as color 2. Continue as such,

denoting the final new color of marble that is drawn as color C. If R > 1, more marbles will

be drawn (R − 1 of color C, and up to R − 1 of each of the other colors) to completely empty

the bag; these marbles will have no effect on how the C different colors are labeled.

Using this notation, we have a way of describing T , the total number of marbles drawn

before we have at least one of each color. Namely, T is the number of marbles drawn up to

and including the first time a marble of color C was drawn.


Obviously, T ≥ C, since we need to draw at least one marble of each color. The question

is: how many extra marbles of colors 1 . . .C − 1 are drawn before the first marble of color

C? We denote the number of extra marbles of color i drawn as ei. Hence,

$$T = C + \sum_{i=1}^{C-1} e_i \,,$$

where $e_i \geq 0$ for all $i \in [1, C-1]$.

To compute the expected value of T , denoted E(T ), we need only compute the expected

values for the ei, E(ei). We start by computing E(eC−1).

Envision the marbles lying on the table, sorted from left to right in the order they were

drawn. Now, take away all of the marbles except those of color C and the leftmost marble

of color C − 1. If we were to put the R − 1 missing marbles of color C − 1 back on the table

where they were just lying, where would we expect to find them?

Since all R−1 of those marbles must have been drawn from the bag after the first marble

of color C − 1, all of the missing marbles must appear to the right of the lone marble of

color C − 1. However, as demonstrated by Figure B.1, there are R + 1 distinct placement

positions (relative to the remaining marbles of color C) to the right of the lone marble of

color C − 1 into which any of the missing marbles of color C − 1 could be placed (note that

multiple missing marbles may occupy a single placement position once they are returned

to the table). Namely, a missing marble will be placed to the left of λ marbles of color C,

where λ ∈ [0,R].

Note that only one of those R + 1 positions is to the left of all of the marbles of color C.

So, we expect that $\frac{1}{R+1}$ of the missing marbles will appear to the left of the marbles of color C. Since there are R − 1 missing marbles, we conclude that

$$E(e_{C-1}) = \frac{1}{R+1} \cdot (R - 1) \,.$$

A similar argument can be made to compute the value of E(eC−2) (for expository pur-

poses, we assume C ≥ 3). This time, when we start with all of the marbles on the table,


[Figure B.1 appears here: the lone marble of color C − 1 at the left, followed by the R marbles of color C, with the R + 1 placement spots marked between and around them.]

Figure B.1: Marble placement spots for the missing marbles of color C − 1

sorted from left to right by draw order, we take a slightly different course of action. We take

away all of the marbles except those of colors C and C − 1, and the leftmost marble of color

C − 2. Where would we expect to find the missing R − 1 marbles of color C − 2 were they

replaced? This time, there are 2R + 1 possible placement locations for the missing marbles.

But, how many of them lie to the left of the first marble of color C?

Obviously, the location directly to the right of the lone marble of color C − 2 and the location directly to the right of the first marble of color C − 1 both meet this criterion. Do

not forget, however, that we expect to find E(eC−1) extra marbles of color C − 1 to the left

of the marbles of color C; the locations directly to the right of those marbles are also on the

left side of the marbles of color C. As such, 2 + E(eC−1) of the possible 2R + 1 placement

locations for the R − 1 missing marbles are located to the left of the first marble of color C,

yielding

$$E(e_{C-2}) = \frac{2 + E(e_{C-1})}{2R + 1} \cdot (R - 1) \,.$$

The general form of the above argument tells us that

$$E(e_{C-i}) = \frac{i + \sum_{j=1}^{i-1} E(e_{C-j})}{iR + 1} \cdot (R - 1) \,.$$

Inserting the now-computable values for the E(ei) into the equation

$$E(T) = C + \sum_{i=1}^{C-1} E(e_i)$$


function C-ET(C, R)
    sum ← 0
    for i ← 1 to C − 1 do
        sum ← sum + (sum + i)(R − 1)/(iR + 1)
    end for
    return sum + C
end function

Figure B.2: An algorithm for computing E(T )

yields the final solution to our problem. A pseudocode algorithm for computing E(T ) is

included in Figure B.2.
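For convenience, the pseudocode of Figure B.2 translates directly into Python; the sketch below is my own transcription, with an illustrative (hypothetical) choice of C and R.

def expected_to_compromise(C, R):
    # Direct transcription of Figure B.2: the running total holds the
    # partial sum of the E(e_i) terms, and E(T) = C plus that sum.
    total = 0.0
    for i in range(1, C):          # i = 1 .. C-1
        total += (total + i) * (R - 1) / (i * R + 1)
    return total + C

# Example only: a secret split into C = 10 pieces, R = 100 botnets per piece.
# print(expected_to_compromise(10, 100))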

B.2 Destroying One Piece of the Secret

Recall that the number of botnets that a defender would have to disable, on average, to

completely eliminate one piece of the secret is equivalent to the solution of the following

problem. How many marbles would one expect to draw without replacement, if there are

C different colors of marble and R marbles of each color, in order to have drawn all of the

marbles of one color? The solution is, in fact, quite similar in form to the solution of the

previous section’s problem.

Again, imagine that the marbles are drawn one at a time from the bag and placed on a

table from left to right, until there are no marbles left in the bag. We number the colors from

1 to C; however, this time we use a different scheme to assign the numbers. Denote the first

color of marble for which we succeed in drawing all R marbles as color 1. The next color of

marble for which we drew all R marbles is denoted color 2. Continuing as such, the color

of the last marble drawn is denoted as color C.

Using this numbering scheme, we want to describe S , the total number of marbles drawn

before we have all the marbles of one color. Namely, S is the number of marbles drawn up

to and including the last time a marble of color 1 was drawn.


[Figure B.3 appears here: the R marbles of color 1 at the left, followed by the rightmost marble of color 2, with the R + 1 placement spots marked between and around them.]

Figure B.3: Marble placement spots for the missing marbles of color 2

As before, we have an obvious lower bound: S ≥ R, since there are R marbles of color

1. The question is: how many marbles of colors 2 . . .C are drawn before the last marble of

color 1? We denote the number of marbles of color i drawn before the last marble of color

1 as xi. Hence,

$$S = R + \sum_{i=2}^{C} x_i \,,$$

where $x_i \geq 0$ for all $i \in [2, C]$.

Similar to before, we wish to calculate E(S ) by computing the E(xi). We start with E(x2).

Envision the marbles lying on the table, sorted from left to right in the order they were

drawn. Now, take away all of the marbles except those of color 1 and the rightmost marble

of color 2. If we were to put the R − 1 missing marbles of color 2 back on the table where

they were just lying, where would we expect to find them?

Since all R − 1 of those marbles must have been drawn from the bag before the last

marble of color 2, all of the missing marbles must appear to the left of the lone marble

of color 2. However, as demonstrated by Figure B.3, there are R + 1 distinct placement

positions (relative to the remaining marbles of color 1) to the left of the lone marble of color

2 into which any of the missing marbles of color 2 could be placed (as before, multiple

missing marbles may occupy a single placement position once they are returned to the table).

Namely, a missing marble will be placed to the left of λ marbles of color 1, where λ ∈ [0, R].


Note that all but one of those R+1 positions is to the left of some of the marbles of color

1. So, we expect that $\frac{R}{R+1}$ of the missing marbles will appear to the left of a marble of color 1. Since there are R − 1 missing marbles, we conclude that

$$E(x_2) = \frac{R}{R+1} \cdot (R - 1) \,.$$

Unsurprisingly, a similar argument can be made to compute the value of E(x3) (as before,

for expository purposes, we assume C ≥ 3). This time, when we start with all of the marbles

on the table, sorted from left to right by draw order, we take away all of the marbles except

those of colors 1 and 2, and the rightmost marble of color 3. Where would we expect to

find the missing R − 1 marbles of color 3 were they replaced? This time, there are 2R + 1

possible placement locations for the missing marbles. But, how many of them lie to the left

of the last marble of color 1?

Obviously, the locations directly to the left of any of the marbles of color 1 meet this

criterion. Do not forget, however, that we expect to find E(x2) marbles of color 2 to the left of

the last marble of color 1; the locations directly to the left of those marbles are also on the

left side of the last marble of color 1. As such, R + E(x2) of the possible 2R + 1 placement

locations for the R − 1 missing marbles are located to the left of the last marble of color 1,

yielding

$$E(x_3) = \frac{R + E(x_2)}{2R + 1} \cdot (R - 1) \,.$$

The general form of the above argument tells us that

$$E(x_i) = \frac{R + \sum_{j=2}^{i-1} E(x_j)}{(i-1)R + 1} \cdot (R - 1) \,.$$

Inserting the now-computable values for the E(xi) into the equation

$$E(S) = R + \sum_{i=2}^{C} E(x_i)$$


function C-ES(C, R)
    sum ← 0
    for i ← 1 to C − 1 do
        sum ← sum + (sum + R)(R − 1)/(iR + 1)
    end for
    return sum + R
end function

Figure B.4: An algorithm for computing E(S )

yields the final solution to our problem. A pseudocode algorithm for computing E(S ) is

included in Figure B.4 (the for loop has been changed from i ∈ [2,C] to i ∈ [1,C − 1] to

simplify the i − 1 into an i inside the loop).
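As with Figure B.2, the pseudocode of Figure B.4 translates directly into Python; again, the transcription and the example parameters below are mine, not the thesis's.

def expected_to_disable(C, R):
    # Direct transcription of Figure B.4 (loop already reindexed to
    # i = 1 .. C-1): the running total holds the partial sum of the E(x_i)
    # terms, and E(S) = R plus that sum.
    total = 0.0
    for i in range(1, C):
        total += (total + R) * (R - 1) / (i * R + 1)
    return total + R

# Example only: with C = 10 pieces and R = 100 botnets per piece.
# print(expected_to_disable(10, 100))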