
Computers & Security, 7 (1988) 489-494

Refereed Article

Maintaining A Poor Person's Information Integrity

Fred Cohen
Department of Electrical and Computer Engineering, University of Cincinnati, Cincinnati, OH, U.S.A.

This paper presents some methods for maintaining the integrity of information in a sometimes corrupt information environment. We demonstrate their practical value by presenting methods to ensure that programs entering a computing environment are relatively benign.

We begin by defining, in a non-mathematical sense, an information physics based on three laws of information: non-conservation, transitivity, and undecidability. We explain some ramifications of this physics, and expand on it with a practical approach to the maintenance of integrity in untrusted computing environments.

Keywords: Integrity, Trusted systems.

This research was sponsored by a grant from The Randon Project.

1. Introduction

A typical world power may spend billions of dollars per year to research, develop, acquire, and maintain computer systems intended to protect sensitive information from illicit dissemination and modification. Even with all of this fiscal power at their disposal, these systems sometimes act more like leaky sieves than impregnable barriers. Unfortunately, we are not all so well endowed that we can throw money at our problems. What then is the poor person to do to maintain integrity in a world full of corruption?

The integrity problem stems from the inability to determine, on its face, whether or not information is suitable for a particular purpose. Almost every software package on the market has a warning to this effect as a condition of sale, and this is in spite of the fact that the vendor knows all the details of how the program was designed and implemented. A great deal of theory has been put forth to attempt to prove properties of programs. As a result, for a limited class of properties, for a limited type of program specification, using a tremendous amount of computer time, this can be done. Unfortunately, the vast majority of problems related to verifying the integrity of information are clearly undecidable. We can therefore expect that we will never be able to ensure perfectly the integrity of information in a general purpose information system, and thus we can never be perfectly sure of a system's behavior.

A typical example of the type of behavior that it would be desirable to prevent would be the unintentional reformatting of the hard disk on a personal computer. There are many programs in the public domain that have been found to do this or other similarly harmful activities. Hundreds of such cases have been reported on electronic bulletin boards. In some cases, sellers of programs have attempted to use this sort of mechanism to prevent illicit copying of their software, usually to no avail and at considerable expense to their customers and their reputations.

In this paper, we introduce some basic properties of information and some practical steps that may reduce the risk of corruption. None of these techniques guarantees that corruption will not occur. Rather, they serve as a guide for the implementation of a poor person's integrity maintenance mechanism.

2. Some Basic Properties of Information

There are some basic properties of information that must be understood if we are to understand how to maintain integrity. These may be considered to form the basis of an information physics: a set of laws that apply to all information in general purpose communicating information networks, and that dictate what is and is not possible in this information universe.

0167-4048/88/$3.50 © 1988, Elsevier Science Publishers Ltd.

(1) Information is not conserved.

(2) Information flow is transitive [1].

(3) Cause and effect relationships are undecidable [2].

Law 1 states that information can be created and/or destroyed. This leaves us in a basic bind. We rely on information being accurately stored in an information system in order to accomplish our information goals. Since it can be created and/or destroyed, we can never be completely sure that the information we provide will not be deleted and replaced with other information which is not equally suitable for our purpose. The solution lies in two basic principles that arise out of the field of fault-tolerant computing: fault intolerance and fault tolerance.

Fault intolerance is the design and use of systems to minimize the likelihood of corruption. Fault tolerance is the design and use of redundancy to minimize the likelihood that, in the unavoidable event of corruption, significant damage will result.
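The fault-tolerant side of this distinction can be illustrated with a minimal sketch: keep redundant copies of a value and recover it by majority vote. This example is mine, not the paper's; the function names and the use of three copies are illustrative assumptions.

```python
from collections import Counter

def majority_vote(copies):
    """Return the value held by a strict majority of redundant copies.

    If corruption strikes fewer than half of the copies, the original
    value is still recovered; with no majority, we refuse to guess.
    """
    tally = Counter(copies)
    value, count = tally.most_common(1)[0]
    if count * 2 > len(copies):
        return value
    raise ValueError("no majority: too many copies corrupted")

# Three redundant copies, one corrupted in storage:
recovered = majority_vote(["pay $100", "pay $100", "pay $900"])  # "pay $100"
```

Note the trade-off the paper describes: the redundancy costs storage up front, in exchange for limiting the damage a single corruption can do.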

Law 2 states that if any system gets information, it can give it away. This is most unnerving to those who attempt to maintain the secrecy of information, and is commonly ignored by those charged with maintaining its integrity. The integrity issue is the undesirable spread of corrupt information. Since information tends to spread transitively, so does corrupt information. A little bit of corruption may corrupt virtually all of the information in an information network [1]. Again, we consider two distinct methods: prevention and cure.

We may try to prevent the spread of information, and thus corruption, by limiting its transitive flow. We may also attempt to detect and cure corruption so that in the inevitable event of corruption, we minimize the likelihood that the corruption will cause significant damage.

Law 3 states that it is impossible to tell how any particular information affects any other information. More specifically, we cannot determine whether interpretation will cause or prevent information creation, destruction, and/or flow. It is even impossible to determine whether or not information has been created or destroyed, or has flowed.

Now we are truly in a bind, for our only hope of avoiding the consequences of laws 1 and 2 is inevitably imperfect according to law 3. By virtue of these laws, it is impossible to achieve ideal maintenance of integrity in an information network. The best we can ever hope to do is maximize the benefit for the cost. We will now present a number of recent issues in order to get a better understanding of how poor people can maintain their information integrity.

3. The Value of Variety

In an environment that is based on survival of the fittest, as we may be certain the information environment is, variety may be the strongest defense against mass destruction. By way of analogy, we note that 50 years ago, the grain crop of the world was based on a tremendous variety of grain strains. Through the years, the United States developed a relatively small set of very hardy strains of wheat, barley, corn, etc., and these few strains now completely dominate all other varieties of those grains in the world.

What would happen then, if the world climate went through a several degree temperature change, or if a terrorist group managed to devise a virus that could rapidly kill off those strains? The answer is simple: the vast majority of the world's population would likely die in a matter of months. It is of course likely that some other strains would rapidly develop, but not at a rate sufficient to feed the world. Even if the wheat crop failed but the rye crop survived, there would likely be world-wide destabilization.

This very thing nearly happened a few years ago when about a fifth of the United States corn crop was lost because a virus attacked the grain near the end of the season. It took a monumental effort to generate enough of another strain of corn to provide a sufficient crop the next year. The USDA now maintains a back-up supply of seeds for each vital crop, but still, a set of about 5 viruses could severely cripple the entire United States agricultural industry.

Similarly, if the entire world computer market stabilized on a single strain of operating system, corruptions in that strain could have devastating effects on the survival of information networks. Even if only 50% of the world's computers failed, and the other 50% survived, there would likely be tremendous world-wide destabilization. The IBM PC operating system and the Unix operating system are currently dominant, and it is likely that computer viruses attacking these two operating systems could have this effect.

The point here is that one system's information is another system's noise. What one system may treat as a severe integrity corruption capable of reformatting the disks, another system may treat as illegal instructions or a no-operation. This is the value of variety.

It is certainly simple to design variations on systems that allow virtually all functions to operate identically with the exception of those considered to be extremely dangerous. Those operations would have to be implemented by non-standard programs, and for the protection of integrity, they might be limited in their usability. However, this is a very small price to pay for the level of protection provided. Similarly, if we have two word processors, one that runs Wordstar on a DOS system, and the other that runs Emacs on a Unix system, it is unlikely that a corruption in one will cause the other to fail. This then is a form of redundancy used to increase the integrity of the pair of systems as an information network. In the inevitable event of a failure of one system, the other can be used with very little chance of multiple simultaneous failures. The price to be paid for such protection is the cost of an additional system, additional training time, the purchase of multiple sets of software, etc.

This leads us to a general principle of redundancy: in order to be effective, redundant systems must be both separate and different from the systems they back up. If they are not separate, they will likely fail along with the system being backed up, and if they are not different, the same thing that causes one system to fail may cause the other to fail.

4. The Poor Person's Trusted Computing Base

As an alternative to variety, which is a form of fault tolerance, we may choose to take the fault-intolerant approach, which is based on the principle of sound initial design and implementation. The best hope for preventing attacks on otherwise unprotected systems would seem to be the assurance that software placed in the system has the highest possible degree of integrity. We begin with a core set of hardware and software that we consider trusted by assumption. This forms the poor person's version of a trusted computing base (PPTCB). With the PPTCB, we may then strive to generate more trusted software and test previously untrusted software before use to reduce the likelihood that it has undesirable properties. We are striving for an environment of controlled growth, wherein the PPTCB grows with each addition of newly trusted software. This strategy is destined to eventual failure, but it can control the exposure level, and thus limit the cost of each corruption.

A rational basis for trusting that systems provided by large manufacturers have integrity is that business basically depends on confidence. If you lose the confidence of your customers, you will most likely be out of business very soon. If you have nothing, you have nothing to lose, and thus those with little to lose can take bigger risks with less potential for loss.

From another perspective, large companies have many more employees, and since information flow is transitive, the more people involved, the higher the chances are that one of them has introduced a transitive corruption. Small companies live in a world where a small, tight-knit group works on the solutions to problems. They are much less vulnerable to the "one bad apple spoils the whole bunch" syndrome because there are fewer apples in the bunch.

In 1985, the popular press reported that a system in the United States embassy in Moscow had been illicitly modified to transmit sensitive information to the U.S.S.R. This is of course a system that is built by a large manufacturer, certified by the national computer security center, and guarded by the United States Marines. This should shake any confidence in the integrity of systems provided by large companies, no matter how much money they have to throw at problems. With limited resources, we must decide to trust some finite set of sources, and we must concede that if a sufficient number of them collude to corrupt information, it will be corrupted.

If we have to trust someone, we should like some sort of guidelines as to whom. Clearly, the less we are willing to trust a priori, the better it is for system integrity, but the more expensive it is to verify that the rest of the system is trustworthy or to generate the trusted software necessary for the desired functionality. This then may serve as the basis for a cost-benefit analysis wherein we associate costs with increasing the probable integrity of programs, and benefits with their use.

The exposure on an otherwise unprotected computer is the corruption of all its information. Under more sophisticated attacks, the exposure extends to computers to which it is attached, and through transitivity, to the entire world-wide information network. Benefits and exposures are relatively straightforward to analyze given an appropriate set of assumptions, but we always have the problem of determining the likelihood of attack as a function of the cost we put into ensuring integrity.

5. The Poor Person's Method for Integrity Assurance

There are a variety of ways to expend resources in an attempt to decrease the likelihood of integrity corruption. We present a feasible alternative for those with little to spend and a great deal to lose. The basic principle is that many poor people together may be richer than even the richest person alone.

One technique to employ to ensure the integrity of a program at low expense is to obtain a copy of the source program, examine it to ensure that it contains no obvious integrity corruption, and compile it using a trusted compiler which is part of the PPTCB. Once the program is found to have high integrity and is compiled with a high integrity compiler, the executable code becomes another part of the PPTCB, and thus we can expand the PPTCB to meet our needs.

There are some problems with this technique. We may not be able to obtain a copy of the source code, or it may cost far too much for our budget. Even if we can get a copy of the source code, it may be very difficult to ensure that it is of high integrity. As we have seen, in general, this problem is undecidable.

One solution to the cost problem is the use of a software certification bureau which examines source code under a non-disclosure agreement. The service bureau then produces a certification that states that after in-depth examination of the source code, the program appears to be free of obvious attempts to subvert system integrity, that the program appears to be well written using common techniques that are well understood, and that when subjected to a well-known and published cryptographic checksum, the compiled code produces the right value.

The non-disclosure agreement protects the owner of the source code from losing control of the program. The certification is intended to ensure that obvious attacks are not present and that subtle attacks are probably not present because of the stylistic nature of the source software. The cryptographic checksum is intended to ensure that a flaw in the compilation and/or distribution mechanism has not produced corruption.
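The checksum step can be sketched as follows. The paper does not name a particular checksum; SHA-256 here is a modern stand-in (it postdates the paper), and the separation of "compute" from "verify against a published value" is the essential idea:

```python
import hashlib

def file_digest(path, algorithm="sha256"):
    """Compute a cryptographic checksum of a compiled program, reading
    in chunks so that large binaries need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    """Compare a local binary against the digest the bureau published."""
    return file_digest(path) == published_digest
```

For the scheme to work as the paper intends, the published digest must travel separately from the program itself, so that a corrupted distribution channel cannot substitute both at once.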

The use of a service bureau allows costs to be shared among a large number of customers, thus reducing the cost for attaining a given probability of attack, and allowing higher integrity for a given expense. The service bureau serves as a means by which many poor people may combine funds to provide more service to each than any could afford for themselves. If we wish to remain true to our principle of variety, we may endow a number of these integrity bureaus, and thus be better assured that software they all have certified has a higher assurance of integrity than software certified by only a subset of them.

In cases where source code is not available, machine code may be decompiled into assembler language form, which is more amenable to human examination than pure machine code. For relatively small programs, this is feasible, but for large programs it may become extremely difficult to maintain a high degree of assurance. Decompilers for most commonly used machines are available at relatively low cost.

A service bureau is still effective in the compiled code case, but it is likely that the certification will not be as good as that given for source code, simply because information tends to be lost in the translation from source code to machine code to decompiled code.

Additional assurance can be provided to the extent that no self-modifying code is apparent, and the program appears to be incapable of producing hard disk reformatting instructions, calling external programs, etc. This does not prevent a program from replacing an external executable file with code to perform these undesirable functions, or any of the infinite variety of more subtle attacks. It simply eliminates a large, commonly used subset of the possible corruptions that can occur.
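A crude sketch of this kind of static screen, not taken from the paper: scan a DOS-era binary for the INT 13h instruction (bytes CD 13 hex), the BIOS interrupt through which low-level disk writes and formats were issued. The pattern table is an illustrative assumption, and a clean scan is weak evidence for exactly the reason the paper gives: self-modifying code can build the instruction at run time and evade any static check.

```python
# Map suspect byte patterns to human-readable descriptions.
# INT 13h (CD 13) invokes BIOS disk services on a PC.
SUSPECT_PATTERNS = {
    b"\xcd\x13": "INT 13h (BIOS disk services)",
}

def scan_binary(data):
    """Return a list of (offset, description) for each suspect pattern
    found in the raw bytes of a program image."""
    findings = []
    for pattern, description in SUSPECT_PATTERNS.items():
        start = 0
        while (offset := data.find(pattern, start)) != -1:
            findings.append((offset, description))
            start = offset + 1
    return findings

hits = scan_binary(b"\x90\x90\xcd\x13\xc3")
# hits == [(2, "INT 13h (BIOS disk services)")]
```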

6. The Poorest Person's Guide to Immediate Defense

Even the poor person's method for integrity assurance may be too costly for some of us, and some of us may have to rely on even less expensive integrity protection mechanisms. At a recent meeting of poor but well qualified experts in the integrity field (held in a soup kitchen of course), the following list was considered basic to immediate protection from integrity attacks on untrusted computers.

(1) Maintain user awareness of integrity issues. Integrity is not the same as secrecy. In fact, secrecy and integrity are often at odds with each other. A good example is the conflict between the secrecy of source code and the resulting inability to certify that it is free of harmful side effects.

(2) If a questionable program must be used, it should be used on a test computer, or with the disks disconnected. The most often reported problem we have seen, particularly in the microcomputer area, is the reformatting of a disk by a low integrity program. It is easy to prevent this damage, but few users are willing to make the effort. The cost of the corruption quickly outweighs the inconvenience.

(3) A program that comes in executable form from a bulletin board should never be run. Nearly every known type of integrity attack has been put on a bulletin board at one time or another.

Always try to get sources or reports from other users before using a program. Unlike cars, few manufacturers of programs allow buyers to try programs before purchase. If you cannot take a test run, ask a friend who has it for an opinion, or take it for a run on a test machine.

Always run new programs the first several times on a non-critical system or a system with other disks disconnected, and do disk comparisons to test the disk for corruptions. Some programs work well the first several times, and then launch attacks. In one case, a program designed to attack PCs was set up to reformat the disk after the 10th time the system was rebooted. This gave it time to spread to about 60 PCs before it started causing crashes. By looking for changes where none should appear, we may detect such behavior and prevent its spread.

The following list is a small sample of corrupt programs found and reported by the owners of bulletin boards.

ARC513 — A version of the "arc" archival program that appears normal, but overwrites the disk access tables.
BALKTALK — A utility, changed to destroy sectors on the disk.
DISKSCAN, SCANBAD, BADDISK, etc. — A modified PC Magazine program changed to write bad disk sectors.
DOSKNOWS — A "File Allocation Table" destroyer given the same name as a system status utility.
EGABTR — Deletes everything it can and prints "Arf! Arf! Got you!". This was reported widely in the popular press in 1986.
FILER — Labelled "Great new filing system", this program reformats the hard disk.
QUIKRBBS — Copies the security file (RBBS-PC.DEF) into an ASC. file which can be read to examine access codes.
STRIPES — Draws an American flag and copies the security file.
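The disk-comparison advice above can be sketched as a baseline-and-compare pass: record a manifest of file checksums while the system is believed clean, then diff against it after running an untrusted program. This sketch is mine; the function names are illustrative, and SHA-256 stands in for whatever comparison a reader of the era would have used.

```python
import hashlib
import os

def snapshot(root):
    """Map each file path under root to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                manifest[path] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def compare(before, after):
    """Report files changed, added, or deleted since the baseline."""
    changed = [p for p in before if p in after and before[p] != after[p]]
    added = [p for p in after if p not in before]
    deleted = [p for p in before if p not in after]
    return changed, added, deleted
```

Any entry in `changed`, `added`, or `deleted` that the untrusted program had no legitimate reason to touch is exactly the "change where none should appear" the text describes.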

Always use your PPTCB operating system to boot any machine you use. If another operating system is used, corruption in it may spread to your PPTCB.

If you must use a non-PPTCB, reformat disks when returning to your PPTCB. There are programs that survive deleting everything on a disk under standard deletion commands. Untrusted machines may be able to write these programs onto a disk even if it is write protected.

Always keep back-ups of vital files and verify them using an original copy of your PPTCB. Verifying files with the PPTCB ensures that a program has not written corrupt back-ups in anticipation of later causing the data to be destroyed, leaving you with useless back-ups.

Always power a system down and back up before booting a PPTCB. Powering down will usually clear memory and guarantee to as high a degree as possible that no residual information remains. It also ensures that the hardware uses the PPTCB to boot instead of some residual software.

After suffering an attack, try to remember which program(s) were recently run. Restore the system from back-ups, and reduce trust in these programs.

7. Summary and Conclusions

There is little we can really do to protect the integrity of a poor person's computer, but there are many ways in which we can make sensible decisions and take reasonable precautions against attacks. Probably the best long-range solution to the problem is the use of independent bureaux for the certification of software integrity.

Integrity problems are widespread, and they extend past the computer domain. Remember that computer integrity is intimately linked to human integrity. Consider the source before trusting information. Remember that integrity corruption is transitive, so that improperly placed trust may corrupt others. Finally, remember that we cannot know for sure whether or not information is corrupt, so we should always keep our guard up, but we should not let concern yield to paranoia. A poor person has little to lose, but cannot afford to lose much of it.

References

[1] F. Cohen, Computer Viruses, Ph.D. Thesis, University of Southern California, 1986.
[2] A. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, Series 2, Vol. 42, November 12, 1936, pp. 230-265.

Fred Cohen received a B.S. in Electrical Engineering from Carnegie-Mellon University in 1977, an M.S. in Information Science from the University of Pittsburgh in 1981, and a Ph.D. in Electrical Engineering from the University of Southern California in 1986. He was a professor of Computer Science and Electrical Engineering at Lehigh University from January 1985 through April 1987, director of the Randon Project from May 1987 through August 1987, and is currently a professor of Electrical and Computer Engineering at The University of Cincinnati (89 Rhodes Hall, Cincinnati, OH 45221-0030). He is a member of the ACM, a member of the board of directors of the Foundation for Computer Integrity Research, and a member of the international board of editors of the IFIP journal "Computers and Security".

Dr. Cohen has published over 20 professional articles, has recently completed a graduate text titled "Introductory Information Protection", and has designed and implemented numerous devices and systems. He is most well known for his ground-breaking work in computer viruses, where he did the first in-depth mathematical analysis, performed many startling experiments which have since been widely confirmed, and developed the first protection mechanisms, many of which are now in widespread use. His current research interests are concentrated in information network design, genetic models of computation, and evolutionary systems.
