[IEEE 2009 Fourth International IEEE Workshop on Systematic Approaches to Digital Forensic Engineering (SADFE) - Berkeley, California, USA (2009.05.21-2009.05.21)]

High Assurance Digital Forensics: A Panelist’s Perspective

Steven J. Greenwald
Independent Information Security Consultant
SteveGreenwald.com
North Miami, Florida

Email: [email protected]

Abstract

In these times of trendy LPU1 papers, you may consider this as two papers in one. If you like controversial positions and observations, then I suggest you focus on the second part of this paper (sections V and VI). If you like history, institutional knowledge, definitions, and the wisdom of senior scientist-practitioners, then I suggest you focus on the first part of this paper (sections II through IV). I wrote both as parts of an integrated whole in the hope that you will like that as well.

In the first part of this paper (sections II through IV), I attempt to give an adequate working definition of the term “high assurance” for use in the context of “high assurance digital forensics,” with assistance from many luminaries in the field.

In the second part of this paper (sections V and VI), I give my observations and reactions to my panelist experience for the “High Assurance Digital Forensics” panel for the Fourth International IEEE Workshop on Systematic Approaches to Digital Forensic Engineering2 (SADFE). I also examine my overall workshop experiences. In particular, I examine how the computer science paradigm does not compose very well with the legal paradigm and the truly massive problems and dangers that this causes. I sum up with a list of questions that we must answer if we truly wish high assurance digital forensics to be used properly.

Keywords

K Computing Milieux; K.4 Computers and Society; K.4.1 Public Policy Issues; K.4.1.g Regulation; K.4.1.i Use/abuse of power; K.4.2 Social Issues; K.4.2.a Abuse and crime involving computers; K.4.3 Organizational Impacts; K.4.4.d Intellectual property; K.4.4.f Security; K.4.4.g Internet security policies; K.4.4.h Mobile code security; K.4.4.i Economic and other policies; K.4.m Miscellaneous; K.5 Legal Aspects of Computing; K.5.0 General; K.5.m Miscellaneous; K.6 Management of Computing and Information Systems; K.6.m Miscellaneous; K.6.m.b Security; K.7 The Computing Profession; K.7.0 General; K.7.0.a Career Management; K.7.1 Occupations; K.7.2 Organizations; K.7.3 Testing, Certification, and Licensing; K.7.4 Professional Ethics; K.7.4.a Codes of ethics; K.7.4.b Codes of good practice; K.7.4.c Ethical dilemmas; K.7.m Miscellaneous; K.7.m.a Codes of good practice; K.m Miscellaneous; K.m.g Legal

I. INTRODUCTION

From the American Heritage Dictionary [1]:

fo-ren-sic adj. 1. Relating to, used in, or appropriate for courts of law or for public discussion or argumentation. 2. Of, relating to, or used in debate or argument; rhetorical. 3. Relating to the use of science or technology in the investigation and establishment of facts or evidence in a court of law: a forensic laboratory.

Please note that anything in this paper that could possibly get me sued enters the realm of my opinion. If anything in this paper appears wishy-washy, please attribute it to my truly great fear of that ravening beast, the U.S. legal system.

On May 21, 2009, I had the extremely interesting experience of working as a panelist for the Fourth International IEEE Workshop on Systematic Approaches to Digital Forensic Engineering (IEEE/SADFE-2009), along with Rebecca Gurley Bace of Infidel Inc., and Fred Chris Smith, a former Assistant U.S. Attorney. The opportunity to write this paper allows me to recapture some of what happened before and during the workshop, as well as some lessons learned.

When Sean Peisert, the SADFE program co-chair (along with Matt Bishop), invited me to work as a panelist on the topic of “High Assurance Digital Forensics,” I knew little about the field of digital forensics, but also had an interest in learning more and had already planned on attending the workshop. Sean told me that he wanted me as a panelist due to my knowledge of high assurance systems and because I used to work at the Naval Research Laboratory’s Center for High

1 Least publishable unit.
2 In conjunction with the 30th IEEE Symposium on Security & Privacy.

2009 Fourth IEEE International Workshop on Systematic Approaches to Digital Forensic Engineering

978-0-7695-3792-4/09 $25.00 © 2009 IEEE. DOI 10.1109/SADFE.2009.17



Assurance Computer Systems as a civilian computer scientist in the Formal Methods section (code 5543). The opportunity to work with Becky Bace and Fred Smith didn’t hurt in terms of persuading me either (in fact, everyone whom I consider as having wisdom in the field of digital forensics viewed Fred and Becky’s book, A Guide to Forensic Testimony: The Art and Practice of Presenting Testimony As An Expert Technical Witness [2], as a landmark in the field and required reading).

The experience would radically change my view of things. During one of our pre-panel meetings, Fred asked the following challenging questions.

1) Can we develop methods for educating juries about high assurance computer systems?
2) Could we provide him with an example of a high assurance computer system product that he could study so that he could better understand the high assurance process?

The rather severe page-limit constraints for this paper restricted my ability to fully answer the first question in this medium, but, even so, I hope this will provide a good starting or diverging point for an answer.

Before the workshop I studied how forensics, per se, works (due to Fred and Becky’s books, and detailed discussions with Fred, who gave generously of his time, as well as other sources including discussions with a local county medical examiner). The workshop, of course, provided even more material for study and thought. I wish to convey, via this paper, some of what I, as a previously naı̈ve computer scientist who specialized in security, learned about forensics and the legal system.

In the rest of this paper I have the following two goals.

1) To attempt a quick definition of the term “high assurance,” realizing that due to many factors it will probably contain some subtle (and maybe not so subtle) points of contention.

2) To report on some of the ways the panel and workshop affected me, and to reflect on this remarkable experience from the perspective of a legally-naı̈ve, legal-outsider computer scientist with a primary interest (and passion) for computer security and the scientific method (as opposed to someone who routinely works with the legal system/method/paradigm).

I view these as radically different goals. As such, you may, if you wish, view this as two different papers. I know in this era of “least publishable units” for paper-writing that I break with fashion by having two papers’ worth of contents in one paper, but I hope you will view that as a bonus.

II. HIGH ASSURANCE: DO I HAVE A LEG TO STAND ON?

Jewish history recounts one of many stories about the tolerant and mild-mannered great sage Hillel the Elder, who lived very shortly before the time of Jesus and who had a major impact on subsequent Jewish law and tradition. Once upon a time, a gentile who considered converting to Judaism asked Hillel to explain the whole of Jewish law while “standing on one foot” (an idiom back then that meant “extremely concise”). While Hillel’s intellectual rival, the irascible Shammai, had previously rudely dismissed the fellow, Hillel gave a complete answer to the question while standing on one leg [3].

So when Fred C. Smith asked me for a definition/explanation of the term “high assurance” and for a simple and concise example of a high assurance system, I realized that I certainly could not live up to Hillel. So I require at least two legs here.

A. Assurance

The U.S. Department of Defense (DoD) often uses the term information assurance (IA) as almost synonymous with the terms computer security, information security, and cybersecurity and even has a snazzy emblem for information assurance with the motto Defenders of the Domain [4]. I have noticed that some people in the DoD information assurance areas, when pressed for a definition of the term, often attempt to claim IA as a superset of one or more of these synonymous terms and sometimes invoke terms such as risk management. However, I do not agree with them3. I believe that by “assurance” they probably mean systems that attempt to enforce the second most common computer security paradigm of confidentiality, integrity, availability, and non-repudiation (the first paradigm did not contain non-repudiation, which, near as I can recall, dates as a paradigm shift to about 1994).

However, in particular, I (and others) often use the term assurance to mean, ultimately, a convincing argument that a computer system has the properties that the creators claim. Therefore, by using the term high assurance we merely claim that a system should meet higher assurance standards (in other words, a better assurance argument). Note that high assurance does not mean “highest assurance,” and no competent person in the field would ever claim that we can currently attain, in practice, perfect security.

Clearly this raises the question of what we mean by the term assurance argument.

3 When really pressed for a definition, some of them will invoke U.S. DoD directives, such as the 8500 series, which do not actually define assurance, or the still partially-classified document National Security Directive 42 [5].



B. The Nature of an Assurance Argument

A proper assurance argument must have at least two properties.

1) Rigor: Statements tend to logically follow from one another.
2) Consistency: The argument does not contain contradictions (even one contradiction will destroy the credibility of the entire argument under the rules of common first-order logic).

A high-assurance argument almost always uses formal methods of some sort to achieve these properties. By “formal methods” I mean the use of rigorous and logically consistent logical/mathematical techniques to specify the properties of a system as a model. We use the term “model” to mean a formalized abstraction of the system that we wish to create; for example, we might model a system by “abstracting out” the programming language required for implementation (this abstraction might have the very useful purpose of not unduly constraining the implementors in their choice of programming language).
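As a minimal, entirely hypothetical sketch of what “abstracting out” the implementation looks like (my illustration, not drawn from this paper’s sources), consider specifying a sort as a predicate over (input, output) pairs: the predicate states what any correct implementation must do while saying nothing about how, or in which language, it does it.

```python
from collections import Counter

# Hedged sketch: a formal-style specification of sorting, written as a
# predicate over (input, output) pairs. It "abstracts out" the algorithm
# and the programming language entirely -- any implementation, in any
# language, either satisfies this predicate on all inputs or it does not.

def satisfies_sort_spec(inp: list, out: list) -> bool:
    """The model: the output is ordered and is a permutation of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

# One candidate implementation among many; the specification doesn't care
# which algorithm (or language) realizes it.
def my_sort(xs: list) -> list:
    return sorted(xs)
```

A real assurance argument would prove that the predicate holds for all inputs; testing can only sample it, which previews the model-versus-implementation gap discussed next.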

Ideally, a high-assurance argument will therefore contain proofs of correctness that demonstrate that the model adheres to desirable and explicitly stated security properties (such as confidentiality).

However, in the “real world” we need more than just a nice model or specification of a system; we need to actually implement these systems. Ideally, implementation requires the accurate translation of the model to the implementation. The implementation translation step has notorious difficulties, and often works as the major point of failure.

Some failures between models and implementations include the following issues, to name just a few (we have literally thousands of examples of these types of failures).

• The formal specification requires the use of a random number generator, but the implementation uses a pseudo-random number generator.
• The formal specification requires the use of a provably secure method (such as a one-time pad, a provably 100% secure cryptographic algorithm) that we cannot implement in any feasible way in the “real world” [6].
• The formal specification requires that a particular number get used only once and never, ever, re-used (e.g., a “nonce,” a term derived from the phrase “number used once”), but the implementation re-uses the number [7].
• The designers and implementors assume that they can trust the compiler used for implementing the system even though we generally have no basis whatsoever for that trust [8].
• Everyone assumes that they can trust that the hardware does things correctly, such as floating-point division, even though we almost always have no real reason to make such an assumption [9].
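To make the nonce failure above concrete, here is a hedged toy sketch (the function names are mine, invented for illustration): a spec-compliant generator that tracks used values and can never repeat, next to a buggy one that draws from a small pseudo-random range and therefore must eventually repeat.

```python
import random
import secrets

def fresh_nonce(used: set) -> int:
    """Spec-compliant: a 128-bit value, redrawn on the (astronomically
    unlikely) collision with a previously issued value, so a repeat is
    impossible by construction."""
    while True:
        n = secrets.randbits(128)
        if n not in used:
            used.add(n)
            return n

def buggy_nonce() -> int:
    """Buggy 'implementation': a small PRNG range guarantees, by the
    pigeonhole principle, that the 'number used once' requirement is
    violated once we draw more than 1000 values."""
    return random.randrange(1000)
```

Drawing 5000 buggy nonces must produce repeats, while the spec-compliant generator cannot; this is exactly the model-versus-implementation gap, shrunk to a few lines.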

III. DEFINITION OF HIGH ASSURANCE: INSTITUTIONAL KNOWLEDGE

Prior to the workshop I tried to get an official and historical definition of high assurance, but failed for the simple reason that the Department of Defense has classified the definition of “high assurance” [5, page 10]. However, I did get some very interesting comments about high assurance from some “seniors” in the field (for whom I have great respect) via personal correspondence (quoted with their permissions and approvals).

A. Peter Neumann’s Definition

Perhaps one of the best definitions of “high assurance” came from Peter G. Neumann, a formal methodist and high assurance expert.

Assurance is a measure of how likely a system is to satisfy its requirements, whatever they are. High assurance means that you are doing things that ordinary folks never would – such as formal requirements, formal specifications, formal analysis that the specs are consistent with the requirements, perhaps code analysis that considers whether code is consistent with specs, run-time checks as well as static analysis, and so on. But none of that assures anything definitively. . . . The essence of “high assurance” is that we must do much better than ‘commonly accepted best practices’, especially for critical systems with critical requirements.

(Personal correspondence on May 11, 2009 with Peter G. Neumann, Moderator of the ACM Risks Forum, Principal Scientist in the SRI International Computer Science Lab, http://www.csl.sri.com/neumann.)

B. Bob Blakley’s Recollection

Bob Blakley, VP at Burton Group, gave the following recollection. Bob, due to his famous cryptographer father, literally grew up with this field.

My memory of the mid-80s conversations was that high-assurance was applied as a term only to artifacts whose assurance arguments were formal specifications, formal proofs of correctness (presence & absence of properties) of the specs, and formal proofs of correspondence of the implementations to the specs.



C. Brian Snow’s References

Brian Snow, a retired Technical Director for the National Security Agency’s Information Assurance Directorate, gave me these references.

Two references for more detail on how to get to high assurance:

1) Peter Neumann’s “Principled Assuredly Trustworthy Composable Architectures” available at: http://www.csl.sri.com/neumann/chats4.pdf (236 pages), and

2) a more lightweight paper by me (keynote address at ACSAC 2005), “We Need Assurance,” at: http://www.acsac.org/2005/papers/Snow.pdf (7 pages).

D. Ken Olthoff’s Comments

Ken Olthoff works in the field of information assurance for NSA and gave the following perspective.

In any case, it may be (not sure one way or another) that sufficiently high assurance is attainable for a well-bounded set of capabilities in a well-defined context. Limited, well-defined functionality is a wonderful thing. I’m not saying that all limited-functionality contexts are “securable” to whatever definition of “secure” [we use], merely observing that some may be, and that the “constrained environment, constrained function” approach is likely the only one where we can limit the complexity and permutations enough to even reason about the level of security.

The main problem is that we in the security community have followed our customers down the rat hole with the sign over the entry saying “Surely it must be possible to ‘secure’ a global network of general purpose computers, running commercial bloatware, using protocols and interfaces which give one-and-all more than enough rope to collectively hang ourselves, with said network controlled and populated by only The Flying Spaghetti Monster knows who.” And then all parties wonder why it doesn’t work.

E. Marv Schaefer’s Comments and Commentary

Marv Schaefer, former chief scientist for the National Computer Security Center (among other things) said the following.

There’s a problem here in terms of definition. ‘High assurance’, like ‘assurance’, is a With-Respect-To quality. Just as we empirically have very high assurance that Windows-based systems will crash from time-to-time, we also have high assurance that faithful implementations of specific PRNGs, sorting algorithms, encryption algorithms, etc. are faithful to their specifications within certain well-defined bounds.

In the information security context, one would expect to see certain properties designed and implemented with specific techniques, documentation and evidence that would together offer high assurance within those bounds that the process, subsystem, system or component has those particular properties. The problems are manifold, alas, because there does not appear to be any way to control or limit complexity and configurations of fielded systems to be simple enough or stable enough to do much with in terms of the few kinds of high assurance that can be produced.

So in light of Peter’s and Brian’s characterisations, it depends on both the with-respect-to requirements and the particular configuration as to whether or not the requisite assurance can ever be provided. But, without question, there are useful (to some!) simple configurations and applications for which very high degrees of assurance can be provided. Examples are the Secure Release Workstation or the Multinet Gateway from a couple of decades ago.

Much of the open R&D that came out of the NRL, SRI, MITRE, Aerospace Corp. and SDC (among others) has produced designs and worked examples of high assurance systems and subsystems. Additionally, these institutions produced credible suites of logical and implemented tools capable of abetting the creation of high assurance. (Much of this was sponsored, and unclassified(!), by the NCSC, ARPA, and the CIA.) The old S-group at the NSA also produced, with its commercial partners, numerous worked examples and suites of often classified tools that were/are useful for producing specific degrees of high assurance. Emphatic assertion was not part of the usual high assurance in a great number of these cases. On a purely commercial level, IBM and Amdahl Corporations produced fully-functional separation kernel implementations that were (are!) commercially viable and that offer very high assurance that their separation kernels do just that.

So where are these solutions now? What they produced is often said to be too trivial, too limited, or too obsolete for to-day’s applications. Enter Ken’s meatball-filled Rat-Hole. . . .

IV. HIGH ASSURANCE: A DEFINITION

I feel totally presumptuous giving a definition of high assurance after the previous luminaries weighed in, and no doubt many will disagree with me, but here goes.



A. Greenwald’s Definition of Assurance

I define assurance as confidence that a system behaves the way the creators or providers of the system claim. Assurance must always come with an assurance argument that makes it possible, in principle, to verify the assurance claims of the creators or providers.

B. Definition of High Assurance

I define high assurance as that development process that leads to a high assurance argument using at least the following tools.

1) Modeling of the system using formal specifications with proofs of correctness that
   a) have logical rigor, and
   b) have logical consistency.
2) A logical proof that the formal model accurately models the system requirements.
3) A proof that the implementation accurately translates the formal model.
4) Testing of the final product.
5) A review of the process by the best available practitioners.
6) Production of an assurance argument that details all of the above and allows, in principle, replicability of results in the scientific method sense (in other words, falsifiability, a null hypothesis, etc.).
7) Some method consistent with the above to handle changes to the system (in other words, new versions) due to changes in the requirements, bug correction, etc.

As an example of a good high assurance system that also has a size that makes it amenable to relatively quick study, I recommend the NRL Pump [10]. I would have written a lot more about the NRL Pump as an example high assurance system, but the severe page-limit constraints make that impossible.
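The first three tools above can be shrunk to a toy sketch (all names here are invented for illustration, and exhaustive enumeration stands in for real machine-checked proof): a formal model of a “no read up” confidentiality rule, an independently written implementation, and a check that the two agree over the entire (finite) state space.

```python
# Hedged illustration: for a finite state space, the "proof of
# correspondence" between model and implementation degenerates to
# exhaustive checking. Real high assurance systems face infinite state
# spaces and need genuine proof, not enumeration.

LEVELS = ["unclassified", "secret", "top_secret"]
RANK = {lvl: i for i, lvl in enumerate(LEVELS)}

def model_may_read(subject: str, obj: str) -> bool:
    """The formal model: read access only at or below the subject's level."""
    return RANK[subject] >= RANK[obj]

def impl_may_read(subject: str, obj: str) -> bool:
    """The implementation under review (written independently of the model)."""
    return LEVELS.index(subject) >= LEVELS.index(obj)

def implementation_matches_model() -> bool:
    """Tool 3 in miniature: verify implementation == model everywhere."""
    return all(
        impl_may_read(s, o) == model_may_read(s, o)
        for s in LEVELS
        for o in LEVELS
    )
```

Tools 4 through 7 then wrap this correspondence in testing, review, a documented assurance argument, and a disciplined change process.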

To sum up, please note that I gave a very sketchy and simplified version of the issues involving high assurance, with a few different definitions, or at least perspectives.

V. WHAT HAPPENED DURING THE PANEL AND WORKSHOP

“Believe those who are seeking the truth. Doubt those who find it.”
— André Gide, French critic, essayist, novelist, and Nobel laureate (1869–1951)

The workshop worked very dynamically and interactively (as designed) and so did the panel. I hope to give a little bit of the flavor of it in this section by bringing up some of the points that came up during the panel.

A. The Panel

1) I mentioned (and Matt Bishop agreed) that Philip K. Dick’s 1956 short story, The Minority Report [11], addresses a lot of the issues that came up during the panel, such as

• reliance on computer systems for evidence,
• using digital forensics to detect crime,
• using digital forensics to prevent crime,
• using digital forensics to actually commit crimes,
• problem-causing cybernetic positive feedback loops due to the use of digital forensics,
• tampering with digital forensic evidence,
• conflict resolution monitor failures leading to a failure of digital forensic evidence, and
• making a convincing argument of someone’s guilt (or innocence!) based on digital forensic evidence.

2) Given that the legal community will certainly use our (high) assurance creations for digital forensics, then we must ask them the following question and get an acceptable answer: “How does the legal community wish us to build this stuff for their digital forensics use?” The silence when I asked this question seemed quite deafening to me.

3) Fred C. Smith mentioned that the legal community views, as the “gold standard” of traditional (“first generation”) forensics, the field of medical examination. Of course, many of us do not tend to view the medical examiners’ field in terms of a “gold standard” because so many of their expert witnesses really stink to high heaven according to current and former prosecutors with whom I spoke.

4) Becky and I both mentioned that we can use digital forensics in areas other than the legal one. For example, we can use digital forensics in industrial espionage (e.g., “who stole/leaked intellectual property?”), venture capitalism (ditto), and the intelligence community (e.g., “should we really commit to a certain course-of-action based on the digital forensic evidence?”), to name but a few.



B. What Happened During the Workshop

Before and after the panel, many interesting issues came up. I will skip many of them because they will undoubtedly appear in other places in the workshop proceedings, covered by other authors.

1) In her outstanding workshop keynote, while Erin Murphy did not explicitly use the term “paradigm,” she still clearly delineated many of the paradigm issues that can cause a clash between science and law. In particular, she mentioned the fact that the legal paradigm can tolerate blatant inconsistencies due to it seemingly ascribing more importance to process rather than justice. She gave one example of two judges in the same area coming up with totally contradictory rulings, after which an appellate court ruled them both correct and upheld their contradictory conclusions/rulings because they both followed the correct legal process.

Early on in her keynote I thought her wrong on many points regarding computer science, computer security, and science in general. But during her talk, and due to the interactive nature of the workshop, I realized that she had gotten incorrect information/education from people she regarded as experts. She earned my admiration; while I believe that she has excellent intentions and has gone out of her way to educate herself about the scientific paradigm used by computer scientists who specialize in security, I also believe (based on some of the exchanges that happened during her keynote) that she has gotten extremely bad educational information from some of the people that the legal profession appears to consider “experts” in the field of computer security. For example, some of these people whom they regard as experts did not use the term “falsifiable” correctly, instead using it quite incorrectly to mean falsified (I comment more on this in the conclusions). Normally I would ascribe this to mere misspeaking at a very interactive workshop, but two people that she appeared to regard as experts misused the term “falsifiable” so consistently that it left me in a state of shock.

2) During the workshop I repeatedly thought about the issue of quackery, a term used by the medical profession to describe fraudulent or ignorant practitioners; we have no analogous term in the field of computer security and I think that we definitely now need one. I also wondered what makes people regarded as “experts” (using the legal definition presented in Bace & Smith [2] and not the usual definition) act in conceited ways and totally misunderstand basic scientific terms such as “falsifiable” (again: shockingly and erroneously used to mean “falsified”). Worse yet, I wondered how and why the legal profession regards such people as the top experts in our field. Clearly the legal paradigm does not compose well with the scientific paradigm, and worse yet, the legal paradigm does not seem to encourage the use of the scientific paradigm; it appears to prefer the use of persuasion/rhetoric or force instead of scientific reasoning.

3) I gathered during the workshop that everything in the legal profession boils down to the mere opinion of one “expert” over that of another in terms of their ability to persuade a jury and/or a judge. Of course, I think this works to totally taint the scientific method (and perhaps, using a very unscientific term, the very soul of science).

VI. CONCLUDING REMARKS

Clearly, I lack the gift of tact, especially considering that I must restrict my remarks to a very limited number of pages due to this medium. However, I feel that the issues I noticed at the workshop strike to the very core of the very important issue of how some people can pervert science via the legal system. Because I now view this as a crisis situation, I feel compelled to state my opinions in a particularly strong set of statements that lack the ambiguity that often accompanies tact and diplomacy.

1) If I can believe what I heard at the workshop, then the U.S. legal community, as a practical matter, does not seek justice as its primary desideratum; instead, it prefers to adhere to a process model as its ne plus ultra. As one example of this, during the workshop I heard many lawyers remark (and I heard no lawyers disagree!) that they would greatly prefer that thousands of innocent people remain in prison rather than effect a disruptive and inconvenient, but fundamentally just, change to the legal system. Many of them cited the well-known disgraceful example of how the basic work that “proved” the uniqueness of fingerprints among individuals, and that led to the use of fingerprints as a “gold standard” in first-generation forensics, actually got created out of whole cloth by scientific fraud [12]. From what I gather, many lawyers would greatly prefer that we still continue to use fingerprints, even knowing that they will punish innocent people or exonerate guilty ones and even knowing that no good evidence exists for using them, because of their perception that changing this might lead to an inconvenient temporary upset in the legal system. Please note my use of the term “legal system” instead of “justice system” for this reason.

So what primary goal should a “forensic scientist” have? In his 2005 novel The Twelfth Card, Jeffrey Deaver expresses this through his description of the fictional character Mel Cooper (all emphasis in the original):

Cooper was a born scientist, but even more important he was a born forensic scientist, which is very different. It’s often thought that “forensic” refers to crime scene work, but in fact the word means any aspect of debating



issues in courts of law. To be a successful criminalist you have to translate raw facts into a form that’ll be useful to the prosecutor. It’s not enough, for instance, to simply determine the presence of nux vomica plant materials at a suspected crime scene—many of which are used for such innocuous medical purposes as treating ear inflammations. A true forensic scientist like Mel Cooper would know instantly that those same materials produce the deadly alkaloid poison strychnine. [13, page 58]

2) Due to the paradigm issues elucidated earlier, does the legal community need, quite literally, “translators” between lawyers/judges/juries and true (as opposed to fraudulent or conceited) experts in the sciences?

3) Can a true scientist survive long-term exposure to the legal paradigm? Based on what I learned at the workshop I feel compelled to raise the conjecture that the odds that a true expert scientist will maintain his scientific standards after a great deal of exposure and work with the legal community/system seem very small indeed.

4) The legal profession (with a few exceptions) does not seem to have a very high regard or understanding of even the basics of how the scientific method works. Even the simplest and most fundamental things about the scientific method that, I desperately hope, children in elementary school learn, seem to either elude them or serve as an impediment to their work. I can only conclude that the legal paradigm views science, in general, as an impediment; “don’t confuse us with the facts,” as the old saying goes.

5) I saw some people that the legal community regards as scientific experts committing egregious scientific mistakes such as talking about empirical absolute Truth [sic; with a capital “T”], misusing the term “falsifiable”4 and therefore showing their total lack of understanding of Karl Popper’s defining use of that term [14]5, having a misunderstanding of modern physics [15], [16], confusing symbols with their referents [17], making elementary and consistent mistakes with first-order logic and set theory, misusing standard terminology and nomenclature, confusing epistemology with phenomenology [18], and, in general, showing a lack of understanding regarding the scientific method [14] and scientific paradigms [19]. I noticed that while they did have excellent rhetorical skills they seemed to lack even the most superficial understanding of things like basic epistemology. I could continue to greatly expand this list, but I hope I have made my point.

6) Do we need self-regulation within the various scientific communities regarding whom we consider experts? Do we need some way to sanction people who commit fraud, commit ethical violations [20], or simply do not measure up in terms of basic knowledge (despite their excellent rhetorical skills, etc.)?

7) What do we do with cases where experts go outside their area of expertise?

8) Do scientists fully realize that the adversarial nature of the legal system has some fundamental incompatibilities with the scientific method? Do lawyers, judges and (more importantly) juries realize that in science a valid (in other words, properly formed) argument does not necessarily equal a sound argument (in other words, a valid argument might not have a true conclusion)?

This leaves us with the following questions. Given that the legal community will use computer systems for forensics, what course should we take? We can certainly run away screaming from the field of digital forensics and vow never to deal with it, but that will not solve the problem. Does the issue of high assurance even matter at this point? How can we ensure that self-respecting scientists and engineers who deal with digital forensics will maintain their integrity during contact with the legal system?

ACKNOWLEDGMENTS

I owe Sean Peisert a debt of gratitude for thinking of me and inviting me to work with such luminaries as Becky Bace and Fred C. Smith. Matt Bishop gave me much help and support in too many ways to mention. Fred C. Smith asked the right questions and spent his valuable time helping to educate me on many legal things. Laura Corriss helped with proofreading, too many suggestions to enumerate, and moral support, and also gave me the reference to the Deaver quote. Paul Syverson made me aware of the wonderful quote by Gide. Carrie Gates gave generously of her time in her role as SADFE publication chair.

I owe special thanks to Peter G. Neumann, Bob Blakley, Brian Snow, Ken Olthoff, and Marv Schaefer for their time and excellent discussion of how to precisely define the term “high assurance” from an institutional knowledge point of view. I also owe a lot of gratitude to many of the members of a very special closed mailing list who also contributed to my education on this topic.

Note: I wrote this paper in E-Prime [21], [22] and risk a self-serving self-citation by mentioning that I also wrote it in accordance with the principles I espoused for writing papers in the field of computer security in [23].

4I mention this error, quite deliberately, for a third time, to emphasize, for non-scientists, the sheer magnitude of this error.

5Even worse (in terms of the scale of the mistake), the legal community should understand the proper use of the term falsifiability: in 1981, Judge William Overton made a landmark decision in McLean v. Arkansas Board of Education, where he listed science as having five “essential characteristics” and listed the fifth as falsifiability (for a transcript of the decision see http://www.talkorigins.org/faqs/mclean-v-arkansas.html).



REFERENCES

[1] AHD Editors, “Forensic,” The American Heritage Dictionary of the English Language, Fourth Edition, June 2009. [Online]. Available: http://dictionary.reference.com/browse/forensic

[2] F. C. Smith and R. G. Bace, A Guide to Forensic Testimony: The Art and Practice of Presenting Testimony As An Expert Technical Witness. Addison-Wesley Professional, October 2002, ISBN-10: 0201752794, ISBN-13: 978-0201752793. Available at: http://books.google.com/books?id=KmZqqM8nlaUC&dq=bace+smith+expert&printsec=frontcover&source=bl&ots=xaJWlzMEkb&sig=63XIViraby3zawLmzXbkmeKfdmk&hl=en&ei=IRNESri1EIq_twemzOmrAQ&sa=X&oi=book_result&ct=result&resnum=1.

[3] New World Encyclopedia contributors, “Hillel the Elder,” New World Encyclopedia, April 2008. Retrieved on June 26, 2009 from http://www.newworldencyclopedia.org/entry/Hillel_the_Elder?oldid=680256.

[4] Wikipedia, “Information assurance,” 2009. [Online; accessed 25-June-2009]. Available: http://en.wikipedia.org/w/index.php?title=Information_assurance&oldid=298559114

[5] G. H. W. Bush, National Security Directive 42. Washington, D.C.: The White House, July 1990. N.B.: Everything after section 10.f and until section 11 remains censored. [Online]. Available: http://bushlibrary.tamu.edu/research/pdfs/nsd/nsd42.pdf

[6] Wikipedia, “One-time pad,” 2009. [Online; accessed 25-June-2009]. Available: http://en.wikipedia.org/w/index.php?title=One-time_pad&oldid=298641321

[7] R. J. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, April 2008, ISBN-10: 0470068523, ISBN-13: 978-0470068526.

[8] K. Thompson, “Reflections on trusting trust,” Communications of the ACM, vol. 27, no. 8, pp. 761–763, August 1984. Available at http://cm.bell-labs.com/who/ken/trust.html.

[9] M. Janeba, “The Pentium problem,” 1995. Available at http://www.willamette.edu/∼mjaneba/pentprob.html.

[10] M. Kang, A. Moore, and I. Moskowitz, “Design and assurance strategy for the NRL pump,” Naval Research Laboratory, Washington, D.C., Technical Report NRL Memo 5540-97-7991, 1998.

[11] P. K. Dick, “The minority report,” The Collected Stories of Philip K. Dick, vol. 4, pp. 71–102, January 1998, ISBN-10: 0806512768, ISBN-13: 978-0806512761.

[12] J. M. Williams, “Biometrics or . . . biohazards?” in Proceedings of the 2002 New Security Paradigms Workshop. New York, NY, USA: ACM, September 2002, pp. 97–107.

[13] J. Deaver, The Twelfth Card. New York: Pocket Star Books, 2005, ISBN-10: 0743491564, ISBN-13: 978-0743491563.

[14] K. Popper, The Logic of Scientific Discovery. Routledge; New English edition (March 29, 2002), 1959, ISBN-10: 0415278449, ISBN-13: 978-0415278447.

[15] A. Einstein, Relativity. Great Britain: Routledge; 2nd edition (May 29, 2001), 1916, ISBN-10: 0415253845, ISBN-13: 978-0415253840.

[16] B. Russell, ABC of Relativity. Great Britain: Routledge; 6th edition (September 1, 2001), 1958, ISBN-10: 0415154294, ISBN-13: 978-0415154291.

[17] A. Korzybski, Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Fort Worth, Texas: Institute of General Semantics, 1933. Available in its entirety at http://www.esgs.org/uk/art/sands.htm.

[18] B. Russell, Our Knowledge of the External World. Chicago: University of Chicago Press, 1914.

[19] T. Kuhn, The Structure of Scientific Revolutions. University of Chicago Press, 1970.

[20] S. J. Greenwald, B. D. Snow, R. Ford, and R. Thieme, “Towards an ethical code for information security?” in Proceedings of the 2008 New Security Paradigms Workshop. Lake Tahoe, California, USA: ACM, September 2008.

[21] D. Bourland, Jr., “A linguistic note: Writing in E-Prime,” in General Semantics Bulletin, no. 32 & 33, Fort Worth, Texas, 1965. Available from http://www.esgs.org/uk/art/epr1.htm.

[22] D. Bourland, Jr., “TO BE OR NOT TO BE: E-Prime as a tool for critical thinking,” in ETC: A Review of General Semantics, vol. 46, no. 3. Fort Worth, Texas: International Society for General Semantics, Fall 1989.

[23] S. J. Greenwald, “E-Prime for security: A new security paradigm,” in Proceedings of the 2006 New Security Paradigms Workshop. Schloss Dagstuhl, Germany: ACM, September 2006, pp. 87–95.
