ce.sharif.edu/courses/93-94/2/ce678-1/resources/...

Formal Methods for Information Security

Morteza Amini

Winter 1393


Contents

1 Preliminaries
  1.1 Introduction to the Course
    1.1.1 Aim
    1.1.2 Evaluation Policy
    1.1.3 References
  1.2 The Concept of Formal Method
  1.3 Formal Methods
    1.3.1 Set, Relation, Partial-Order
    1.3.2 Logics

2 Formal Methods for Security Modeling
  2.1 Discretionary Security Models
    2.1.1 Lampson's Model (1971)
    2.1.2 HRU Model (1976)
    2.1.3 Safety
  2.2 Mandatory Security Models
    2.2.1 BLP Model (1976)
    2.2.2 Denning's Lattice Model of Secure Information Flow (1976)
  2.3 Information Flow Control
    2.3.1 Noninterference for Deterministic Systems (1986)
    2.3.2 Noninterference for Nondeterministic Systems
    2.3.3 Nondeducibility (1986)
    2.3.4 Generalized Noninterference (GNI)
    2.3.5 Restrictiveness
  2.4 Role Based Access Control Models
    2.4.1 Core RBAC (RBAC0)
    2.4.2 Hierarchical RBAC (RBAC1)
    2.4.3 Constrained RBAC (RBAC2)
    2.4.4 RBAC3 Model
  2.5 Logics for Access Control
    2.5.1 Abadi's Calculus for Access Control
    2.5.2 A Calculus of Principals
    2.5.3 A Logic of Principals and Their Statements

3 Exercise Answers


Chapter 1

Preliminaries

1.1 Introduction to the Course

1.1.1 Aim

The diversity of computer security requirements has led to the introduction of different kinds of security models. In fact, each security model is an abstraction of a security policy. The importance of computer security motivates us to precisely specify and verify such security models using formal methods (such as set theory and different types of logics). In the first part of this course, different approaches for formal modeling and specification of security and access control (authorization) models are introduced and surveyed. In the second part of the course, formal specification and verification of security properties in security protocols using formal methods (especially different types of modal logics) are introduced. The introduction of BAN logic, as well as epistemic and belief logics, and their use in verifying some famous security protocols are the main topics of this part.

During this course, students learn how to use formal methods to formally and precisely specify their required security model or security protocol, and how to verify them using existing formal approaches and tools.

1.1.2 Evaluation Policy

1. Mid-term Exam (35%)

2. Final Exam (25%)

3. Theoretical & Practical Assignments (15%)


4. Research Project (20%)

5. Class Activities (5%)

1.1.3 References

• G. Bella, Formal Correctness of Security Protocols, Springer, 2007.

• P. Ryan, S. Schneider, and M.H. Goldsmith, Modeling and Analysis of Security Protocols, Addison-Wesley, 2000.

• M. Bishop, Computer Security, Addison-Wesley, 2003.

• Related papers and technical reports such as

– D. E. Bell and L. J. La Padula, Secure Computer System: Unified Exposition and Multics Interpretation, Technical Report ESD-TR-75-306, Mitre Corporation, Bedford, MA, March 1976.

– M. Abadi, M. Burrows, B. Lampson, and G. Plotkin, A Calculus for Access Control in Distributed Systems, ACM Transactions on Programming Languages and Systems, Vol. 15, No. 4, pp. 706-734, 1993.

– D.F. Ferraiolo, R. Sandhu, S. Gavrila, D.R. Kuhn, and R. Chandramouli, Proposed NIST Standard for Role-Based Access Control, ACM Transactions on Information and System Security (TISSEC), Vol. 4, No. 3, pp. 224-274, ACM Press, 2001.

– D. Wijesekera and S. Jajodia, A Propositional Policy Algebra for Access Control, ACM Transactions on Information and System Security, Vol. 6, No. 2, pp. 286-325, ACM Press, 2003.

– J.M. Rushby, Noninterference, Transitivity, and Channel Control Security Policies, Technical Report CSL-92-02, SRI International, 1992.

– K.J. Biba, Integrity Considerations for Secure Computing Systems, Technical Report TR-3153, Mitre Corporation, Bedford, MA, April 1977.

– D. E. Denning, A Lattice Model of Secure Information Flow, Communications of the ACM, Vol. 19, No. 5, pp. 236-243, 1976.

– M. Burrows, M. Abadi, and R. Needham, A Logic of Authentication, ACM Transactions on Computer Systems, Vol. 8, pp. 18-36, 1990.

1.2 The Concept of Formal Method

Formal: The term formal relates to form or outward appearance.


Formal in Dictionaries

Definition of formal from Heritage:

• Relating to or involving outward form or structure, often in contrast to content or meaning. Being or relating to essential form or constitution: a formal principle.

• Following or being in accord with accepted or prescribed forms, conventions, or regulations: had little formal education; went to a formal party.

• Characterized by strict or meticulous observation of forms; methodical: very formal in their business transactions. Stiffly ceremonious: a formal greeting.

• Characterized by technical or polysyllabic vocabulary, complex sentence structure, and explicit transitions; not colloquial or informal: formal discourse.

• Having the outward appearance but lacking in substance: a formal requirement that is usually ignored.

Definition of formal from Oxford:

• Of or concerned with outward form or appearance as distinct from content.

• Done in accordance with convention or etiquette; suitable for or constituting an official or important occasion.

• Officially sanctioned or recognized.

Example: The Turing machine, which models a computing system, contains abstract concepts (constructing or specifying the outward appearance of a computing system) such as the following:

• States

• Alphabet

• Transitions

Method: A method is a means or manner of procedure, especially a regular and systematic way of accomplishing something.

It is also a set of principles for selecting and applying a number of construction techniques and tools in order to construct an efficient artifact (here, a secure system).


Example: axiomatic method (based on axioms in Mathematics) or empiricalmethod (based on experiments in Physics).

Methodology: is the study of and the knowledge about methods.

Abstract: means a thing considered apart from concrete existence. It does not exist in reality or real experience, and cannot be perceived through any of the senses. It is also thought of or stated without reference to a specific instance.

Model: A model is an abstraction of some physical phenomenon that accountsfor its known or inferred properties and may be used for further study of itscharacteristics.

Formal Method: means a method which has a mathematical foundation, and thus employs techniques and tools based on mathematics and mathematical logic that support the modelling, specification, and reasoning about (verification of) hardware/software/... systems.

Examples of formal techniques and tools:

• Program logics (Hoare logic, dynamic logic)

• Temporal logics (CTL, LTL)

• Process algebras (CSP, PI-calculus)

• Abstract data types (CASL, Z)

• Development tools (B-tool, PVS, VSE)

• Theorem provers (Inka, Isabelle)

• Model checkers (Murphi, OFMC, Spin)

Security: is a property of a computer system whereby unauthorized access to and modification of information and data, as well as unauthorized use of resources, are prevented.

Information Security: is CIA:

• Confidentiality: the nonoccurrence of unauthorized disclosure of information.

• Integrity: the nonoccurrence of unauthorized modification of programs or data.

• Availability: the degree to which a system or component is operational and accessible when required for use.


Other security properties can be seen as special cases of confidentiality, integrity, and availability, such as the following:

• Anonymity: a condition in which your true identity is not known; confidentiality of your identity.

• Privacy: you choose what you let other people know; confidentiality of information you don't want to share.

• Authenticity: being who you claim to be; being original, not false; integrity of the claimed identity.

• Non-repudiation: a message has been sent (or received) by a party and the party cannot deny having done so; integrity of the sender's (or receiver's) claimed identity and integrity of the proof that the message has been sent by the sender (or received by the receiver).

Note: Formal methods for confidentiality and integrity are rather mature; formal methods for availability are not yet. The focus of this course is on confidentiality and integrity.

Security Policy: captures the security requirements of an enterprise, or describes the steps that have to be taken to achieve security. It discriminates between what is authorized and what is unauthorized in a secure system.

Security Model: is an abstraction of a security policy. It identifies the relations among the entities of a system (such as subjects and objects) from a security point of view.

Security mechanisms and security models are not the same thing.

Examples of security mechanism:

• Login procedure

• Firewalls

• Access control systems

Examples of security models:

• The access matrix model

• The BLP model

• The RBAC model


What does formal approach mean?

A formal approach to security is the employment of a formal method in analyzing the security of a given computing system or constructing a secure one.

Note that Computing System = Hardware + Software.

Formal methods can be applied on different levels of abstraction and duringdifferent development phases.

Objective of Using Formal Methods for Security: clarifying requirements and analyzing systems such that security incidents are prevented (or at least identified).

Three Steps in Using Formal Methods for Security:

1. System Specification: abstraction and modelling with a well-defined syntactic and semantic structure. It documents how the system operates or should operate.

2. Requirement Specification: security modelling (e.g., BLP). It documents the security requirements in an unambiguous way.

3. Verification: validates the system w.r.t. its requirements and can be formally done in different ways, including:

• model checking (by checking whether a given property is satisfied in the possible models)

• theorem proving (by inferring a given property using syntactic inference rules in a proof theory)

Applying formal methods does not mean that all three steps must be performed.

E.g., one may decide to only model the behavior and the requirements of the system without any verification.

It is also possible to apply formal methods only to a particularly critical part of the system rather than to the whole system.


Advantages and Disadvantages of Formal Methods:

Some advantages are:

• clean foundation,

• abstraction; separation of policies from implementation mechanisms,

• preciseness,

• verifiability.

Some disadvantages are:

• difficulty in specification and verification (especially for complicated and big systems),

• need for specialists in the field.

1.3 Formal Methods

1.3.1 Set, Relation, Partial-Order

Set theory is the branch of mathematics that studies sets, which are collections of objects. In the theory, objects are abstract and can be of any type.

The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the discovery of paradoxes in naive set theory, numerous axiom systems were proposed in the early twentieth century, of which the Zermelo-Fraenkel axioms with the axiom of choice (the collection named ZFC set theory) are the best known.

The formalism we consider in this course is based on ZFC set theory.

Basic Concepts of Sets

Sets: A, B, C, ...

Members: a, b, c, ...

Membership: ∈ (a ∈ A means a is a member of set A)

Set Inclusion: ⊆ (A ⊆ B means for all a ∈ A we have a ∈ B)

Union: ∪ (A ∪B is the set of all objects that are a member of A or B)


Intersection: ∩ (A ∩ B is the set of all objects that are members of both A and B)

Set Difference: ∖ (A ∖ B is the set of all members of A that are not members of B)

Cartesian Product: × (A × B is the set whose members are all possible ordered pairs ⟨a, b⟩ where a is a member of A and b is a member of B)

Power Set: P() (P(A) is the set whose members are all possible subsets of A)

Empty Set: ∅ (∅ is the unique set containing no elements, also denoted by {})
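These operations map directly onto Python's built-in set type, which can serve as a quick sanity-check tool when working with small finite sets. A minimal sketch (the power set is built with itertools, since Python has no primitive for it; the sets A and B are our own illustrative choices):

```python
from itertools import chain, combinations, product

A = {1, 2, 3}
B = {3, 4}

# Union, intersection, difference, and Cartesian product.
assert A | B == {1, 2, 3, 4}
assert A & B == {3}
assert A - B == {1, 2}
assert set(product(A, B)) == {(a, b) for a in A for b in B}

def power_set(s):
    """P(s): the set of all subsets of s, represented as frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

assert len(power_set(A)) == 2 ** len(A)  # |P(A)| = 2^|A|
assert frozenset() in power_set(A)       # ∅ is a subset of every set
```

Frozensets are used for the members of the power set because Python's mutable sets are not hashable and hence cannot themselves be set members.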

Basic Concepts of Relations

Relation: A k-ary relation over the nonempty sets X1, X2, ..., Xk is a subset of the Cartesian product X1 × X2 × ... × Xk. For example, a binary relation R can be defined as a subset of A × B.

Each member of a k-ary relation is a k-tuple ⟨x1, x2, ..., xk⟩ ∈ R where x1 ∈ X1, x2 ∈ X2, ..., xk ∈ Xk.

Function: A binary relation f is a function from X to Y (denoted by f ∶ X → Y) if for every x ∈ X there is exactly one element y ∈ Y such that the ordered pair ⟨x, y⟩ is contained in the subset defining the function.

There are different types of functions, including injective functions, surjective functions, bijective functions, identity functions, constant functions, and invertible functions.

Partial Order: A partial order, denoted by (P,≤), is a binary relation ≤ over a set P which is reflexive, antisymmetric, and transitive, i.e., for all a, b, and c in P, we have that:

• a ≤ a (reflexivity),

• if a ≤ b and b ≤ a then a = b (antisymmetry),

• if a ≤ b and b ≤ c then a ≤ c (transitivity).

Total Order: A total order, denoted by (P,≤), is a binary relation ≤ over a set P which is antisymmetric, transitive, and total, i.e., for all a, b, and c in P, we have that:

• if a ≤ b and b ≤ a then a = b (antisymmetry),


• if a ≤ b and b ≤ c then a ≤ c (transitivity),

• a ≤ b or b ≤ a (totality).

Totality implies reflexivity; thus a total order is also a partial order. Moreover, every two elements of P are comparable under a total order.

Lattice: A lattice, denoted by (L,≤), is a partially ordered set in which any two elements have a supremum (also called a least upper bound or join) and an infimum (also called a greatest lower bound or meet).
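On a finite set, the partial-order axioms and the lattice condition can be checked by brute force. A minimal sketch, taking divisibility on {1, 2, 3, 6} as the order (the choice of set and relation is our own illustration):

```python
from itertools import product

P = {1, 2, 3, 6}
leq = lambda a, b: b % a == 0   # divisibility as the order relation

def is_partial_order(P, leq):
    """Check reflexivity, antisymmetry, and transitivity exhaustively."""
    refl = all(leq(a, a) for a in P)
    antisym = all(not (leq(a, b) and leq(b, a)) or a == b
                  for a, b in product(P, repeat=2))
    trans = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                for a, b, c in product(P, repeat=3))
    return refl and antisym and trans

def least(S, leq):
    """Least element of S under leq, or None if S has no least element."""
    for m in S:
        if all(leq(m, x) for x in S):
            return m
    return None

def is_lattice(P, leq):
    """Every pair must have a join (least upper bound) and a meet
    (greatest lower bound, i.e. greatest element of the lower bounds)."""
    for a, b in product(P, repeat=2):
        upper = {x for x in P if leq(a, x) and leq(b, x)}
        lower = {x for x in P if leq(x, a) and leq(x, b)}
        if least(upper, leq) is None:
            return False
        if least(lower, lambda u, v: leq(v, u)) is None:  # flipped order
            return False
    return True

assert is_partial_order(P, leq)
assert is_lattice(P, leq)   # ({1, 2, 3, 6}, divisibility) is a lattice
```

Removing 6 breaks the lattice property: in {1, 2, 3} the pair (2, 3) has no upper bound at all, although the divisibility relation is still a partial order.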

Exercise 1: Let (L,⪯) be a lattice and (T,≤) be a total order. Is (L × T, ⊑), where ⊑ is defined as follows, a lattice?

⟨a, b⟩ ⊑ ⟨c, d⟩ ⇔ (a ⪯ c) ∧ (b ≤ d)

1.3.2 Logics

Logic refers to the study of modes of reasoning.

Each logical framework may contain:

• Syntax: containing the alphabets and sentences (i.e., formulae) of a logical language.

• Semantics (Model Theory): containing the interpretation or meaning of the symbols and formulae defined in the syntax of a logical language. Each interpretation is called a model, which describes a possible world.

• Proof Theory: containing a set of axioms and inference rules enabling inference over a given set of formulae.

There are different types of logics:

• classical logics: bi-valued logics without any modal operator, such as propositional logic and predicate logic.

• non-classical logics: such as different types of modal logics (deontic logic, epistemic logic, belief logic, ...), fuzzy logic, multi-valued logic, and default logic.

Note: Modal logics are more interesting than the other ones for use in security specification and verification.


Propositional Logic

A propositional calculus or logic is a formal system in which formulae representing propositions can be formed by combining atomic propositions using logical connectives, and in which a system of formal proof rules allows certain formulae to be established as theorems.

Syntax

Formula:

• Each proposition is a formula, and ⊥ is a formula.

• If A and B are formulae, then ¬A, A ∧B, A ∨B, A→ B are formulae.

Semantics

A model in propositional logic is an interpretation function.

We define an interpretation function I for atomic propositions as

I ∶ AtomicPropositions → {0,1}

and extend it to other formulae as follows:

• I(A ∧B) = 1 iff I(A) = 1 and I(B) = 1

• I(A ∨B) = 1 iff I(A) = 1 or I(B) = 1 (or both hold)

• I(A → B) = 1 iff I(A) = 0 or I(B) = 1

• I(¬A) = 1 iff I(A) = 0

• I(⊥) = 0

Truth: A formula A is true in model I if and only if I(A) = 1.

Some definitions:

• I is a model of A, denoted by I ⊧ A, iff I(A) = 1.

• If Γ is a set of formulae, then I ⊧ Γ iff for all A ∈ Γ, we have I ⊧ A.

• We say A is inferred from Γ (denoted by Γ ⊧ A) iff for every model I, if I ⊧ Γ, then I ⊧ A.

• If Γ is empty (i.e., ⊧ A), then A is a tautology. In other words, for every model I, we have I ⊧ A.


The proof procedure in propositional logic is decidable (i.e., we can build the truth table for a given formula).
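The truth-table method can be mechanized directly: enumerate every interpretation of the atoms and evaluate the formula under each. A small sketch in which formulae are encoded as nested tuples (the encoding and connective names are our own):

```python
from itertools import product

def atoms(f):
    """Collect the atomic proposition names occurring in a formula."""
    if isinstance(f, str):
        return {f} if f != "bot" else set()   # "bot" encodes ⊥
    op, *args = f
    return set().union(*(atoms(a) for a in args))

def evaluate(f, I):
    """I maps atom names to 0/1; this mirrors the semantics above."""
    if f == "bot":
        return 0
    if isinstance(f, str):
        return I[f]
    op, *args = f
    if op == "not": return 1 - evaluate(args[0], I)
    if op == "and": return min(evaluate(args[0], I), evaluate(args[1], I))
    if op == "or":  return max(evaluate(args[0], I), evaluate(args[1], I))
    if op == "imp": return max(1 - evaluate(args[0], I), evaluate(args[1], I))

def is_tautology(f):
    """⊧ f iff f evaluates to 1 under every interpretation of its atoms."""
    names = sorted(atoms(f))
    return all(evaluate(f, dict(zip(names, bits))) == 1
               for bits in product([0, 1], repeat=len(names)))

# A -> (B -> A) is a tautology; A -> B is not.
assert is_tautology(("imp", "A", ("imp", "B", "A")))
assert not is_tautology(("imp", "A", "B"))
```

The cost is 2^n evaluations for n atoms, which is exactly why decidability here does not mean efficiency.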

A proof theory or a proof procedure should be

• sound: each provable formula is a tautology (if ⊢ A then ⊧ A).

• complete: each tautology is provable (if ⊧ A then ⊢ A).

First-Order Logic

Syntax

Term: Each constant and each variable is a term; and if t1, ..., tn are terms and f is an n-ary function symbol, then f(t1, ..., tn) is a term.

Formula:

• Each formula defined in propositional logic is a formula in FOL.

• If t1, ..., tn are terms and P is an n-ary predicate, then P(t1, ..., tn) is a formula.

• If A is a formula, then ∀x,A and ∃x,A are formulae.

Semantics

A model in FOL is denoted by M = ⟨∆, I⟩.

∆ is the domain (the set of elements, objects, things we want to describe or reason about).

I is an interpretation function which is defined as follows:

• I(a) = dᵢ ∈ ∆ (an individual element of the domain)

• I(x) ∈ ∆ (any individual element of the domain)

• I(fⁿ) ∶ ∆ × ... × ∆ → ∆ (an n-ary function on the domain)

• I(Pⁿ) ⊆ ∆ × ... × ∆ (a set of n-tuples)

• I(P⁰) ∈ {0,1}

Truth:

• M ⊧ P⁰ᵢ iff I(P⁰ᵢ) = 1.


• M ⊧ Pⁿⱼ(t1, ..., tn) iff ⟨I(t1), ..., I(tn)⟩ ∈ I(Pⁿⱼ).

• M ⊧ ∀x,P(x) iff for every element d of the domain ∆, M ⊧ P[x∣d] (where x is substituted by d).

• M ⊧ ∃x,P(x) iff there is at least one element d of the domain ∆ such that M ⊧ P[x∣d] (where x is substituted by d).

Example: ∆ = {d1, d2, ce441, ce971}
I(Ahmadi) = d1
I(Bahmani) = d2
I(CE441) = ce441
I(CE971) = ce971
I(Lecturer) = {d1, d2}
I(Course) = {ce441, ce971}
I(Student) = ∅
I(Teaches) = {⟨d1, ce441⟩, ⟨d1, ce971⟩, ⟨d2, ce971⟩}

By the above interpretation, the following relations hold:
M ⊧ Lecturer(Ahmadi), M ⊧ Lecturer(Bahmani)
M ⊧ Course(CE441), M ⊧ Course(CE971)
M ⊧ {Teaches(Ahmadi, CE441), Teaches(Bahmani, CE971)}
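Because the model is finite, truth of atomic and quantified formulae can be checked mechanically. A sketch of the example model (d1 and d2 stand for the two unnamed individuals of the domain):

```python
# The model M = <Delta, I> from the example above.
domain = {"d1", "d2", "ce441", "ce971"}
const = {"Ahmadi": "d1", "Bahmani": "d2",
         "CE441": "ce441", "CE971": "ce971"}
pred = {
    "Lecturer": {("d1",), ("d2",)},
    "Course": {("ce441",), ("ce971",)},
    "Student": set(),
    "Teaches": {("d1", "ce441"), ("d1", "ce971"), ("d2", "ce971")},
}

def holds(p, *terms):
    """M |= P(t1, ..., tn): interpret each constant, test membership in I(P)."""
    return tuple(const[t] for t in terms) in pred[p]

assert holds("Lecturer", "Ahmadi") and holds("Course", "CE441")
assert holds("Teaches", "Bahmani", "CE971")

# Quantifiers range over the finite domain, so they become any/all:
# exists x. Teaches(x, ce971)
assert any((x, "ce971") in pred["Teaches"] for x in domain)
# forall x. Lecturer(x) -> exists y. Teaches(x, y)
assert all(any((x, y) in pred["Teaches"] for y in domain)
           for (x,) in pred["Lecturer"])
```

Over a finite domain, ∀ and ∃ reduce to all and any over the domain's elements, which is exactly the semantics M ⊧ P[x∣d] quantified over d.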

Decidability: First-order logic is undecidable in general; more precisely, it is semi-decidable. A logical system is semi-decidable if there is an effective method for generating theorems (and only theorems) such that every theorem will eventually be generated. This differs from decidability because in a semi-decidable system there may be no effective procedure for checking that a formula is not a theorem.

Decidable Fragments of FOL

• Two-Variable FOL: there are just two variables and only monadic and binary predicates; formulae like ∃y, (∀x,P(x, y) ∧ ∃x,Q(x, y)).

• Guarded Fragment of FOL: all quantifiers are relativized (guarded) by atomic formulae, in the form ∃y(α(x, y) ∧ ψ(x, y)) or ∀y(α(x, y) → ψ(x, y)), where α is atomic, ψ is in GF, and free(ψ) ⊆ free(α) = {x, y}.

• Horn Clauses of FOL: represent a subset of the sentences representable in FOL, of the form P1(x) ∧ P2(x) ∧ ... ∧ Pn(x) → Q(x).

Modal Logics

A modal is an expression (like necessarily or possibly) that is used to qualify the truth of a judgement. Modal logic is, strictly speaking, the study of the deductive behavior of the expressions it is necessary that (denoted by ◻p) and it is possible that (denoted by ◇p). However, the term modal logic may be used more broadly for a family of related systems.

There are different types of modal logics such as:

• Epistemic Logic

• Belief Logic

• Deontic Logic

• Temporal Logic

More details on different types of modal logics will be presented later in thiscourse.

Propositional Modal Logic The most famous type of modal logic.

Syntax

Formula:

• Each formula defined in propositional logic is a formula in PML.

• If A is a formula in PML, then ◻A is a formula.

• If A is a formula in PML, then ◇A is a formula.

Semantics

We usually use Kripke semantics for modal logics. A Kripke model is denoted by M = ⟨W,R,I⟩, where

• W is a set of possible worlds.

• R ⊆ W × W is a relation between the possible worlds (the relation has different meanings in different types of modal logics and hence has different properties in them, such as seriality, transitivity, and reflexivity).

• I ∶ Propositions → P(W) is an interpretation function that maps each proposition to the set of possible worlds where the proposition holds (is true).

Truth:

• ⊧Mα p (p is a proposition) iff α ∈ I(p)


• ⊧Mα ◻A iff in all worlds β such that ⟨α,β⟩ ∈ R, we have ⊧Mβ A.

• ⊧Mα ◇A iff there exists a possible world β such that ⟨α,β⟩ ∈ R and ⊧Mβ A.

Propositional modal logic is decidable.
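Over a finite Kripke model, the ◻ and ◇ clauses evaluate directly: ◻A quantifies over all accessible worlds, ◇A over at least one. A sketch with a small three-world model of our own choosing:

```python
# A Kripke model M = <W, R, I>: I[p] is the set of worlds where p holds.
W = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w3")}
I = {"p": {"w2", "w3"}, "q": {"w3"}}

def sat(w, f):
    """|=M_w f, for formulae built from atoms, 'not', 'and', 'box', 'dia'."""
    if isinstance(f, str):
        return w in I[f]
    op, *args = f
    if op == "not":
        return not sat(w, args[0])
    if op == "and":
        return sat(w, args[0]) and sat(w, args[1])
    succ = [v for (u, v) in R if u == w]   # worlds accessible from w
    if op == "box":                        # ◻A: A in every accessible world
        return all(sat(v, args[0]) for v in succ)
    if op == "dia":                        # ◇A: A in some accessible world
        return any(sat(v, args[0]) for v in succ)

assert sat("w1", ("box", "p"))      # p holds in both w2 and w3
assert sat("w1", ("dia", "q"))      # q holds in the accessible world w3
assert not sat("w1", ("box", "q"))  # q fails in the accessible world w2
```

Note that ◻A is vacuously true in a world with no accessible worlds (all over an empty set), which is why properties of R such as seriality matter in specific modal systems.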


Chapter 2

Formal Methods for Security Modeling

2.1 Discretionary Security Models

In the Orange Book (the book on trusted computer system evaluation criteria, TCSEC, 1985), two types of access control are defined.

• DAC (Discretionary Access Control): is a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restricted by mandatory access control). [For commercial and non-governmental purposes, based on the need-to-know principle.]

• MAC (Mandatory Access Control): is a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity. [For military or governmental purposes.]

In further classifications of access control systems and models, other types such as role-based access control and attribute-based access control were introduced.

In this part of the course, we concentrate on some important DAC models and the safety problem in these models.


2.1.1 Lampson’s Model (1971)

Reference Paper: Butler W. Lampson, “Protection”, in Proceedings of the 5th Princeton Conference on Information Sciences and Systems, p. 437, Princeton, 1971.

For the first time, Lampson defined protection as follows.

Protection: is a general term for all the mechanisms that control the access of a program to other things in the system.

Example: samples of protection

• supervisor/user mode

• memory relocation

• access control by user to file directory

The foundation of any protection system is the idea of different protection environments or contexts. Depending on the context in which a process finds itself, it has certain powers. In Lampson's model the following terms are equivalent: domain / protection context / environment / state or sphere / ring / capability list / subject.

The major component of Lampson's object system is a triple ⟨X,D,A⟩ where:

• X: a set of objects, the things in the system which have to be protected (e.g., files, processes, segments, terminals).

• D: a set of domains (subjects), the entities that have access to objects. A subject may be the owner of an object.

• A: an access matrix that determines the access of subjects to objects.

In the access matrix A, rows are labeled by domain names and columns by object names. Each element Ai,j consists of strings called access attributes (such as read, write, owner, ...) that specify the access which domain i has to object j. Attached to each attribute is a bit called the copy flag, which controls the transfer of access in the way described by the rules below.

Note: If we look at X or D alone, they are just sets; to add semantics, we specify that X is a set of objects, etc. Thus, generally, accompanying the formal specification, we need to provide an informal specification of symbols to give meaning (soul) to the formal specification.


Figure 2.1: Portion of an access matrix in Lampson's model.

Note: How can we specify the access matrix in Lampson's model more formally? Given a set of rights or access attributes R, it can be defined as a function A ∶ D × X → P(R). Thus, A maps each tuple ⟨d, x⟩ to a subset of access rights.
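This functional view translates directly into code: a mapping from (domain, object) pairs to sets of rights, defaulting to the empty set where no entry exists. A minimal sketch (all names are illustrative, and marking the copy flag with a trailing '*' is our own encoding, not Lampson's):

```python
from collections import defaultdict

# A : D x X -> P(R); missing entries default to the empty set of rights.
A = defaultdict(set)
A[("domain1", "file1")] = {"read*", "write*", "owner"}  # '*' = copy flag set
A[("domain2", "file1")] = {"read"}

def has_right(d, x, r):
    """True if attribute r (ignoring the copy flag) is in A[d, x]."""
    return any(attr.rstrip("*") == r for attr in A[(d, x)])

def copy_flag_set(d, x, r):
    """True if d holds attribute r for x with the copy flag set."""
    return (r + "*") in A[(d, x)]

assert has_right("domain1", "file1", "owner")
assert copy_flag_set("domain1", "file1", "read")
assert not has_right("domain3", "file1", "read")  # no entry: no access
```

Using a default of the empty set matches the formal definition: A is total on D × X, and "no access" is simply A(d, x) = ∅.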

Rules:

• Rule (a): d can remove access attributes from Ad′,x if it has ‘control’ access to d′. Example: domain1 can remove attributes from rows 1 and 2.

• Rule (b): d can copy to Ad′,x any access attribute it has for x whose copy flag is set, and can decide whether the copied attribute shall have the copy flag set or not. Example: domain1 can copy ‘write’ to A2,file1.

• Rule (c): d can add any access attribute to Ad′,x, with or without the copy flag, if it has ‘owner’ access to x. Example: domain3 can add ‘write’ to A2,file2.

• Rule (d): d can remove access attributes from Ad′,x if d has ‘owner’ access to x, provided d′ does not have ‘protected’ access to x. The ‘protected’ restriction allows one owner to defend his access from other owners. Its most important application is to prevent a program being debugged from taking away the debugger's access.

In the above rules, there are commands such as add, copy, and remove which can be defined precisely and formally. Each command has some preconditions and, as a result, some effects on the access matrix.

Exercise 2: Define add, copy, and remove formally in the way stated above.

In fact, the above rules specify a reference monitor. Now, we should verify our required properties. One of these requirements is the safety problem. It has been proved that the safety problem in the access matrix model is undecidable.


2.1.2 HRU Model (1976)

Reference Paper: Michael A. Harrison, Walter L. Ruzzo, and Jeffrey D. Ullman, “Protection in Operating Systems”, Communications of the ACM, 19 (8), pp. 461-471, 1976.

HRU is a general model of protection mechanisms in computing systems, proposed for reasoning about the safety problem.

A Formal Model of Protection Systems

Definition– A protection system consists of

1. R as a finite set of generic rights, and

2. C as a finite set of commands of the form:

command α(X1, ..., Xk)
  if r1 in (Xs1, Xo1) and
     r2 in (Xs2, Xo2) and
     ...
     rm in (Xsm, Xom)
  then
     op1
     op2
     ...
     opn
end

or, if m is zero, simply

command α(X1, ..., Xk)
     op1
     ...
     opn
end

Here, α is a name, and X1, ..., Xk are formal parameters. Each opi is one of the primitive operations:

enter r into (Xs, Xo)
delete r from (Xs, Xo)
create subject Xs
create object Xo
destroy subject Xs
destroy object Xo

Also, r, r1, ..., rm are generic rights, and s, s1, ..., sm and o, o1, ..., om are integers between 1 and k.

We may call the predicate following the “if” the conditions of α, and the sequence of operations op1, ..., opn the body of α.


Figure 2.2: HRU access matrix.

Definition– A configuration of a protection system is a triple (S, O, P), where S is the set of current subjects, O is the set of current objects, S ⊆ O, and P is an access matrix, with a row for every subject in S and a column for every object in O. P[s, o] is a subset of R, the generic rights. P[s, o] gives the rights to object o possessed by subject s.

Example: R = {own, read, write, execute}

1. A process creates a new file.

   command CREATE(process, file)
     create object file
     enter own into (process, file)
   end

2. The owner of a file may confer any right to that file, other than own, on any subject (including the owner himself).

   command CONFERr(owner, friend, file)
     if own in (owner, file)
     then enter r into (friend, file)
   end

   [where r ∈ {read, write, execute}]
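These two commands can be made concrete with a short Python sketch (this encoding, with the matrix P as a dictionary keyed by (subject, object) pairs, is my own illustration, not part of the HRU paper):

```python
# Sketch of the two example commands against an access matrix P.
# A missing (subject, object) key stands for the empty rights set.

def create(P, objects, process, file):
    """CREATE(process, file): create object file; enter own into (process, file)."""
    objects.add(file)
    P[(process, file)] = {"own"}

def confer(P, r, owner, friend, file):
    """CONFERr(owner, friend, file): if own in (owner, file) then enter r."""
    assert r in {"read", "write", "execute"}        # own itself cannot be conferred
    if "own" in P.get((owner, file), set()):        # condition part of the command
        P.setdefault((friend, file), set()).add(r)  # body: enter r into (friend, file)

subjects, objects = {"p", "q"}, {"p", "q"}          # S ⊆ O
P = {}
create(P, objects, "p", "f")
confer(P, "read", "p", "q", "f")
print(P[("q", "f")])   # {'read'}
```

Note how the condition is tested against the current matrix before the body runs, exactly as in the command syntax above.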

Exercise 3: Write Lampson’s rules in the form of HRU commands.

Definition– Let (S, O, P) and (S′, O′, P′) be configurations of a protection system, and let op be a primitive operation. We say that (S, O, P) ⇒op (S′, O′, P′) if either:

1. op = enter r into (s, o), and S = S′, O = O′, s ∈ S, o ∈ O, P′[s1, o1] = P[s1, o1] if (s1, o1) ≠ (s, o), and P′[s, o] = P[s, o] ∪ {r}.


2. op = delete r from (s, o), and S = S′, O = O′, s ∈ S, o ∈ O, P′[s1, o1] = P[s1, o1] if (s1, o1) ≠ (s, o), and P′[s, o] = P[s, o] − {r}.

3. op = create subject s′, where s′ is a new symbol not in O, S′ = S ∪ {s′}, O′ = O ∪ {s′}, P′[s, o] = P[s, o] for all (s, o) ∈ S × O, P′[s′, o] = ∅ for all o ∈ O′, and P′[s, s′] = ∅ for all s ∈ S′.

4. op = create object o′, where o′ is a new symbol not in O, S′ = S, O′ = O ∪ {o′}, P′[s, o] = P[s, o] for all (s, o) ∈ S × O, and P′[s, o′] = ∅ for all s ∈ S.

5. op = destroy subject s′, where s′ ∈ S, S′ = S − {s′}, O′ = O − {s′}, and P′[s, o] = P[s, o] for all (s, o) ∈ S′ × O′.

6. op = destroy object o′, where o′ ∈ O − S, S′ = S, O′ = O − {o′}, and P′[s, o] = P[s, o] for all (s, o) ∈ S′ × O′.
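The six primitive operations can be sketched as transitions on a configuration (S, O, P). The Python below is a hedged illustration (the function names and the dictionary encoding of P are my own); a missing (s, o) key stands for the empty rights set, and each function returns a new configuration rather than mutating the old one:

```python
from copy import deepcopy

def enter(conf, r, s, o):
    S, O, P = deepcopy(conf)
    assert s in S and o in O
    P.setdefault((s, o), set()).add(r)
    return S, O, P

def delete(conf, r, s, o):
    S, O, P = deepcopy(conf)
    assert s in S and o in O
    P.get((s, o), set()).discard(r)
    return S, O, P

def create_subject(conf, s_new):
    S, O, P = deepcopy(conf)
    assert s_new not in O            # must be a new symbol
    S.add(s_new); O.add(s_new)       # new row and column start empty
    return S, O, P

def create_object(conf, o_new):
    S, O, P = deepcopy(conf)
    assert o_new not in O
    O.add(o_new)                     # new column starts empty
    return S, O, P

def destroy_subject(conf, s_old):
    S, O, P = deepcopy(conf)
    assert s_old in S
    S.discard(s_old); O.discard(s_old)
    P = {k: v for k, v in P.items() if s_old not in k}
    return S, O, P

def destroy_object(conf, o_old):
    S, O, P = deepcopy(conf)
    assert o_old in O and o_old not in S
    O.discard(o_old)
    P = {k: v for k, v in P.items() if k[1] != o_old}
    return S, O, P

conf = ({"s1"}, {"s1"}, {})          # S ⊆ O, empty matrix
conf = create_object(conf, "o1")
conf = enter(conf, "own", "s1", "o1")
print(conf[2][("s1", "o1")])   # {'own'}
```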

Definition– Let Q = (S, O, P) be a configuration of a protection system containing:

command α(X1, ..., Xk)
  if r1 in (Xs1, Xo1) and
     ...
     rm in (Xsm, Xom)
  then
    op1, ..., opn
end

Then, we say Q ⊢α(x1,...,xk) Q′ where Q′ is a configuration defined as:

1. If α’s conditions are not satisfied, i.e., if there is some 1 ≤ i ≤ m such that ri is not in P[xsi, xoi], then Q = Q′.

2. Otherwise, i.e., if for all 1 ≤ i ≤ m, ri ∈ P[xsi, xoi], then there exist configurations Q0, Q1, ..., Qn such that

   Q = Q0 ⇒op*1 Q1 ⇒op*2 ... ⇒op*n Qn = Q′

   (op*i denotes the primitive operation opi with the actual parameters x1, x2, ..., xk substituted for the formal parameters).

Q ⊢α Q′ if there exist parameters x1, ..., xk such that Q ⊢α(x1,...,xk) Q′.

Q ⊢ Q′ if there exists a command α such that Q ⊢α Q′.

⊢∗ denotes the reflexive and transitive closure of ⊢.

Example:

command α(X, Y, Z)
  enter r1 into (X, X)
  destroy subject X
  enter r2 into (Y, Z)
end

There can never be a pair of different configurations Q and Q′ such that Q ⊢α(x,x,z) Q′.

2.1.3 Safety

Definition– Given a protection system, we say command α(X1, ..., Xk) leaks generic right r from configuration Q = (S, O, P) if α, when run on Q, can execute a primitive operation which enters r into a cell of the access matrix which did not previously contain r.

More formally, there is some assignment of actual parameters x1, ..., xk such that

1. α(x1, ..., xk) has its conditions satisfied in Q, i.e., for each clause “r in (Xi, Xj)” in α’s conditions we have r ∈ P[xi, xj], and

2. if α’s body is op1, ..., opn, then there exists an m, 1 ≤ m ≤ n, and configurations Q = Q0, Q1, ..., Qm−1 = (S′, O′, P′), and Qm = (S″, O″, P″), such that Q0 ⇒op*1 Q1 ⇒op*2 ... Qm−1 ⇒op*m Qm, where op*i denotes opi after substitution of x1, ..., xk for X1, ..., Xk; moreover, there exist some s and o such that r ∉ P′[s, o] but r ∈ P″[s, o]. [Of course, opm must be enter r into (s, o).]

Definition– Given a particular protection system and generic right r, we say that the initial configuration Q0 is unsafe for r (or leaks r) if there is a configuration Q and a command α such that

1. Q0 ⊢∗ Q, and

2. α leaks r from Q.

We say Q0 is safe for r if Q0 is not unsafe for r.

Safety Problem: Is a given protection system and initial configuration unsafe for a given right r or not?

Note that “leaks” are not necessarily bad. Any interesting system will have commands which can enter some rights (i.e., be able to leak those rights). The term assumes its usual negative significance only when applied to some configuration, most likely modified to eliminate “reliable” subjects, and to some right which we hope cannot be passed around.

The safety problem in general is undecidable, but there are special cases for which it is decidable whether a given right is potentially leaked in a given initial configuration or not.


Definition– A protection system is mono-operational if each command’s interpretation (body) is a single primitive operation.

Theorem 1– There is an algorithm which decides whether or not a given mono-operational protection system and initial configuration is unsafe for a given generic right r.

Proof: The proof hinges on two simple observations. First, commands can test for the presence of rights, but not for the absence of rights or objects. This allows delete and destroy commands to be removed from computations leading to a leak. Second, a command can only identify objects by the rights in their row and column of the access matrix. No mono-operational command can both create an object and enter rights, so multiple creates can be removed from computations, leaving the creation of only one subject. This allows the length of the shortest “leaky” computation to be bounded.

Suppose (*) Q0 ⊢C1 Q1 ⊢C2 ... ⊢Cm Qm is a minimal-length computation reaching some configuration Qm for which there is a command α leaking r. Let Qi = (Si, Oi, Pi). Now we claim that every Ci, 2 ≤ i ≤ m, is an enter command, and C1 is either an enter or a create subject command.

Suppose not, and let Cn be the last non-enter command in the sequence (*). Then we could form a shorter computation

Q0 ⊢C1 Q1 ⊢ ... Qn−1 ⊢C′n+1 Q′n+1 ⊢ ... ⊢C′m Q′m

as follows.

(a) If Cn is a delete or destroy command, let C′i = Ci and Q′i = Qi plus the right, subject, or object which would have been deleted or destroyed by Cn. By the first observation above, Ci cannot distinguish Qi−1 from Q′i−1, so Q′i−1 ⊢C′i Q′i holds. Likewise, α leaks r from Q′m since it did so from Qm.

(b) Suppose Cn is a create subject command and ∣Sn−1∣ ≥ 1, or Cn is a create object command. Note that α leaks r from Qm by assumption, so α is an enter command. Further, we must have ∣Sm∣ ≥ 1 and ∣Sm∣ = ∣Sm−1∣ = ... = ∣Sn∣ ≥ 1 (Cm, ..., Cn+1 are enter commands by assumption). Thus ∣Sn−1∣ ≥ 1 even if Cn is a create object command. Let s ∈ Sn−1, and let o be the name of the object created by Cn. Now we can let C′i = Ci with s replacing all occurrences of o, and Q′i = Qi with s and o merged. For example, if o ∈ On − Sn we would have

S′i = Si,  O′i = Oi − {o},

P′i[x, y] = Pi[x, y]               if y ≠ s,
P′i[x, y] = Pi[x, s] ∪ Pi[x, o]    if y = s.

Clearly, Pi[x, o] ⊆ P′i[x, s], so for any condition in Ci satisfied by o, the corresponding condition in C′i is satisfied by s. Likewise for the conditions of α.

Exercise 4: Define Q′i precisely when the command is create subject s′.


(c) Otherwise, we have ∣Sn−1∣ = 0, Cn is a create subject command, and n ≥ 2. The construction in this case is slightly different: the create subject command cannot be deleted (subsequent “enter”s would have no place to enter into). However, the commands preceding Cn can be skipped (provided that the names of objects created by them are replaced), giving

Q0 ⊢Cn Q′n ⊢C′n+1 ... ⊢C′m Q′m

where, if Sn = {s}, C′i is Ci with s replacing the names of all objects in On−1, and Q′i is Qi with s merged with all o ∈ On−1.

Exercise 5: Define Q′i precisely in this case.

In each of these cases we have created a shorter “leaky” computation, contradicting the supposed minimality of (*). Note that no Ci enters a right r into a cell of the access matrix already containing r, else we could get a shorter sequence by deleting Ci. Thus we have an upper bound on m:

m ≤ g(∣S0∣ + 1)(∣O0∣ + 1) + 1

where g is the number of generic rights.
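The bound at the end of the proof is easy to evaluate. A tiny sketch (the helper name is mine): at most one subject is created, every other surviving command is an enter, and no right is entered twice into the same cell, so the number of commands is bounded by the number of distinct (right, cell) entries of the enlarged matrix, plus one create:

```python
def max_commands(num_subjects0, num_objects0, num_rights):
    """Upper bound on the length m of a minimal leaky computation in a
    mono-operational system: m <= g(|S0|+1)(|O0|+1) + 1."""
    return num_rights * (num_subjects0 + 1) * (num_objects0 + 1) + 1

print(max_commands(2, 3, 4))   # 4 * 3 * 4 + 1 = 49
```

The decision procedure can thus enumerate all command sequences up to this length, which makes the problem decidable (though expensive).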

Given a graph and an integer k, produce a protection system whose initial access matrix is the adjacency matrix for the graph and having one command. This command’s conditions test its k parameters to see if they form a k-clique, and its body enters some right r somewhere. The matrix will be unsafe for r in this system if and only if the graph has a k-clique. The above is a polynomial reduction of the known NP-complete clique problem to our problem, so our problem is at best NP-complete.

Review– Each Turing machine T consists of a finite set of states K and a distinct finite set of tape symbols Γ. One of the tape symbols is the blank B, which initially appears on each cell of a tape which is infinite to the right only (that is, the tape cells are numbered 1, 2, ..., i, ...). There is a tape head which is always scanning (located at) some cell of the tape. The moves of T are specified by a function δ from K × Γ to K × Γ × {L,R}. For example, if δ(q,X) = (p, Y,R) for states p and q and tape symbols X and Y, then should the Turing machine T find itself in state q, with its tape head scanning a cell holding symbol X, then T enters state p, erases X, prints Y on the tape cell scanned, and moves its tape head one cell to the right.

Initially, T is in state q0, the initial state, with its head at cell 1, and each tape cell holds the blank. There is a particular state qf, known as the final state, and it is a fact that it is undecidable whether, started as above, an arbitrary Turing machine T will eventually enter state qf (the undecidability of the halting problem).

Theorem 2– It is undecidable whether a given configuration of a given protection system is safe for a given generic right.

Proof: We shall show that safety is undecidable by reducing the halting problem for Turing machines to the safety problem in protection systems. In other words, we shall show that a protection system can simulate the behavior of an arbitrary Turing machine, with leakage of a right corresponding to the Turing machine entering a final state (a condition we know to be undecidable).

Figure 2.3: Representing a tape as an access matrix

The set of generic rights of our protection system will include the states and tape symbols of the Turing machine. At any time, the Turing machine will have some finite initial prefix of its tape cells, say 1, 2, ..., k, which it has ever scanned. This situation will be represented by a sequence of k subjects, s1, s2, ..., sk, such that si “owns” si+1 for 1 ≤ i < k. Thus, we use the ownership relation to order subjects into a linear list representing the tape of the Turing machine. Subject si represents cell i, and the fact that cell i now holds tape symbol X is represented by giving si generic right X to itself. The fact that q is the current state and that the tape head is scanning the j-th cell is represented by giving sj generic right q to itself. Note that we have assumed the states distinct from the tape symbols, so no confusion can result.

There is a special generic right end, which marks the last subject, sk. That is, sk has generic right end to itself, indicating that we have not yet created the subject sk+1 which sk is to own. The generic right own completes the set of generic rights.

The moves of the Turing machine are reflected in commands as follows. First, if δ(q,X) = (p, Y,L), then there is

command Cqx(s, s′)
  if own in (s, s′) and
     q in (s′, s′) and
     X in (s′, s′)
  then
    delete q from (s′, s′)
    delete X from (s′, s′)
    enter p into (s, s)
    enter Y into (s′, s′)
end

If δ(q,X) = (p, Y,R), there is

command Cqx(s, s′)
  if own in (s, s′) and
     q in (s, s) and
     X in (s, s)
  then
    delete q from (s, s)
    delete X from (s, s)
    enter p into (s′, s′)
    enter Y into (s, s)
end

To handle the case where the Turing machine moves into new territory, there is also

command Dqx(s, s′)
  if end in (s, s) and
     q in (s, s) and
     X in (s, s)
  then
    delete q from (s, s)
    delete X from (s, s)
    create subject s′
    enter B into (s′, s′)
    enter p into (s′, s′)
    enter Y into (s, s)
    delete end from (s, s)
    enter end into (s′, s′)
    enter own into (s, s′)
end

In each configuration of the protection system reachable from the initial configuration, there is at most one applicable command. This follows from the fact that the Turing machine has at most one applicable move in any situation, and the fact that Cqx and Dqx can never be simultaneously applicable. The protection system must therefore exactly simulate the Turing machine.
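The tape encoding used in this construction can be sketched in Python (an illustration under my own naming, not from the paper): subject si holds the cell symbol, and possibly the current state and the end marker, as rights on itself, plus own on si+1.

```python
def encode_tape(tape, state, head):
    """Encode a Turing machine situation as an access matrix.
    tape: symbols of cells 1..k; state: current state; head: 1-based scanned cell.
    Subjects are represented here simply by the cell indices 1..k."""
    k = len(tape)
    P = {}
    for i, sym in enumerate(tape, start=1):
        P[(i, i)] = {sym}              # cell i holds symbol sym
        if i < k:
            P[(i, i + 1)] = {"own"}    # s_i owns s_{i+1}: the linear tape order
    P[(head, head)].add(state)         # head position carries the current state
    P[(k, k)].add("end")               # last scanned cell carries the end marker
    return P

P = encode_tape(["X", "Y", "B"], "q", 2)
print(sorted(P[(2, 2)]))   # ['Y', 'q']
```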

If the Turing machine enters state qf, then the protection system can leak generic right qf; otherwise, it is safe for qf. Since it is undecidable whether the Turing machine enters qf, it must be undecidable whether the protection system is safe for qf. ◻

Theorem 3– The safety problem is decidable for protection systems without create commands.


Theorem 4– The safety problem is decidable for protection systems that are both monotonic and monoconditional. A monotonic protection system is a system without destroy and delete commands; a monoconditional system is a system with only one condition in the condition part of each command.

Theorem 5– The safety problem for protection systems with a finite number of subjects is decidable.
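For the simpler setting of Theorem 3 (no create commands), S and O are fixed and each matrix entry is a subset of the finite set R, so the reachable configurations are finite and safety can be decided by exhaustive search. A hedged Python sketch (the command encoding and function names are my own):

```python
from itertools import product

# A command is (arity, conds, body): conds is a list of (right, i, j) over
# parameter positions 0..arity-1; body is a list of ("enter"|"delete", right, i, j).

def apply_cmd(P, cmd, actuals, target):
    """Apply a command under an actual-parameter assignment; return the new
    matrix (or None if the conditions fail) and whether `target` was entered
    into a cell that did not previously contain it (i.e., a leak of target)."""
    arity, conds, body = cmd
    if not all(r in P.get((actuals[i], actuals[j]), set()) for r, i, j in conds):
        return None, False
    P = {k: set(v) for k, v in P.items()}          # copy before modifying
    leaked = False
    for op, r, i, j in body:
        cell = (actuals[i], actuals[j])
        if op == "enter":
            leaked = leaked or (r == target and r not in P.get(cell, set()))
            P.setdefault(cell, set()).add(r)
        else:                                       # "delete"
            P.get(cell, set()).discard(r)
    return P, leaked

def is_unsafe(P0, names, commands, target):
    """Search the (finite) reachable matrices for a command that leaks target."""
    freeze = lambda P: frozenset((k, frozenset(v)) for k, v in P.items() if v)
    seen, stack = {freeze(P0)}, [P0]
    while stack:
        P = stack.pop()
        for cmd in commands:
            for actuals in product(names, repeat=cmd[0]):
                P2, leaked = apply_cmd(P, cmd, actuals, target)
                if leaked:
                    return True
                if P2 is not None and freeze(P2) not in seen:
                    seen.add(freeze(P2))
                    stack.append(P2)
    return False

# CONFER-style command: the owner of a file may grant read on it.
confer = (3, [("own", 0, 2)], [("enter", "read", 1, 2)])
P0 = {("s1", "f"): {"own"}}
print(is_unsafe(P0, ["s1", "s2", "f"], [confer], "read"))   # True
```

With create commands the state space is no longer finite, which is why Theorems 4 and 5 require genuinely different arguments.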

Concluding Remarks– Elisa Bertino says: “The results on the decidability of the safety problem illustrate an important security principle, the principle of economy of mechanisms:

• if one designs complex systems that can only be described by complex models, it becomes difficult to find proofs of security;

• in the worst case (undecidability), there does not exist a universal algorithm that verifies security for all problem instances.”

2.2 Mandatory Security Models

2.2.1 BLP Model (1976)

Reference Paper: D. E. Bell and L. J. La Padula, “Secure Computer System: Unified Exposition and Multics Interpretation”, Technical Report ESD-TR-75-306, Mitre Corporation, Bedford, MA, 1976.

The model has the ability to represent abstractly the elements of computer systems and of security that are relevant to a treatment of classified information stored in a computer system.

A Narrative Description

Subjects (denoted Si individually and S collectively), which are active entities, can have access to objects (denoted Oi individually and O collectively), which are passive entities. No restriction is made regarding entities that may be both subjects and objects.

The modes of access in the model are called access attributes (denoted x andA).

The two effects that an access can have on an object are

• the extraction of information (“observing” the object) and

• the insertion of information (“altering” the object).

There are thus four general types of access imaginable:

• no observation and no alteration (denoted e – execute);

• observation, but no alteration (denoted r – read);

• alteration, but no observation (denoted a – append);

• both observation and alteration (denoted w – write).

A system state is expressed as a set of four components z = (b,M, f, h) where:

• b ∈ B is the current access set; (subject, object, attribute) ∈ b denotes that subject has current attribute access to object in the state.

• h ∈ H is a hierarchy (parent-child structure) imposed on objects. Only directed, rooted trees and isolated points are allowed for object hierarchies (see Figure 2.4).

• M ∈ M is an access permission matrix. Mij ⊆ A, where A is the set of access attributes.

• f ∈ F is a level function, the embodiment of security classifications in the model. A security level is a pair (classification, category set) where

  – classification (or clearance) is such as unclassified, confidential, secret, and top secret, and

  – category set is a set of categories such as Nuclear, NATO, and Crypto.

  (class1, categoryset1) dominates (class2, categoryset2) ⇔ class1 ≥ class2 and categoryset1 ⊇ categoryset2. The dominance ordering (denoted by ⪰) is required to be a partial ordering. The (maximum) security level of a subject Si is denoted formally by fS(Si) and informally by level(Si). Similarly, the security level of an object Oj is denoted formally by fO(Oj) and informally by level(Oj). The current security level of a subject Si is denoted by fC(Si). Thus, f = (fS, fO, fC) ∈ F.

We refer to inputs to the system as requests (Rk and R) and outputs as decisions (Dm and D). The system is all sequences of (request, decision, state) triples with some initial state (z0) which satisfy a relation W on successive states (see Figure 2.5).


A parent-child relation is maintained which allows only directed, rooted trees and isolated points. This particular structure is desired in order to take advantage of the implicit control conventions of, and the wealth of experience with, logical data objects structured in this way. The construct is called a hierarchy (denoted H and H); a hierarchy specifies the progeny of each object, so that structures of the type mentioned are the only possibilities. The next state component involves access permission, which is included in the model in an access matrix M. [Notice that M is a matrix only in the model’s conceptual sphere: any interpretation of M which records all the necessary information is acceptable.]

Figure 2.4: The desired object hierarchies in BLP model.

z0 →R1/D1 z1 →R2/D2 z2 →R3/D3 ... zm

Figure 2.5: System specified in BLP model.

Security Definition

Security is defined by satisfying three properties in BLP model.

1. Simple Security Property (SS-Property):

The ss-property is satisfied if, whenever (subject, object, observe-attribute) is a current access, i.e., whenever subject observes (viz. r or w) object, level(subject) dominates level(object).

The expected interpretation of the model anticipates protection of information containers rather than of the information itself. Hence a malicious program (an interpretation of a subject) might pass classified information along by putting it into an information container labeled at a lower level than the information itself (Figure 2.6).

2. Star Property (*-Property)

The *-property is satisfied if, in any state, whenever a subject has simultaneous observe access to object-1 and alter access to object-2, level(object-1) is dominated by level(object-2).

Under the above restriction, the levels of all objects accessed by a given subject are neatly ordered:

• level(a−accessed−object) dominates level(w−accessed−object);


Figure 2.6: Information flow showing the need for *-property.

• level(w−accessed−object−1) equals level(w−accessed−object−2); and

• level(w−accessed−object) dominates level(r−accessed−object).

Following the *-property, in any state, if (subject, object, attribute) ∈ b is a current access, then:

• level(object) dominates current−level(subject) if attribute is a;

• level(object) equals current−level(subject) if attribute is w; and

• level(object) is dominated by current−level(subject) if attribute is r.

There are two important comments to be made about the *-property.

• First, it does not apply to trusted subjects: a trusted subject is one guaranteed not to consummate a security-breaching information transfer even if it is possible.

• Second, it is important to remember that both the ss-property and the *-property are to be enforced. Neither property by itself ensures the security we desire.

3. Discretionary Security Property (ds-Property)

If (subject-i, object-j, attribute-x) is a current access (is in b), then attribute-x is recorded in the (subject-i, object-j)-component of M (x ∈ Mij).
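The three properties can be checked mechanically on a given state. The following Python sketch (my own encoding of levels as (classification, category set) pairs, with classifications as integers; not from the BLP report) illustrates this:

```python
def dominates(l1, l2):
    """l1 dominates l2: class order and category-set superset."""
    return l1[0] >= l2[0] and l1[1] >= l2[1]     # frozenset >= is the superset test

def ss_property(b, f_S, f_O):
    return all(dominates(f_S[s], f_O[o]) for s, o, x in b if x in ("r", "w"))

def star_property(b, f_O, f_C, untrusted):
    for s, o, x in b:
        if s not in untrusted:
            continue                              # trusted subjects are exempt
        if x == "a" and not dominates(f_O[o], f_C[s]): return False
        if x == "w" and f_O[o] != f_C[s]:              return False
        if x == "r" and not dominates(f_C[s], f_O[o]): return False
    return True

def ds_property(b, M):
    return all(x in M.get((s, o), set()) for s, o, x in b)

def secure(b, M, f_S, f_O, f_C, untrusted):
    return (ss_property(b, f_S, f_O) and
            star_property(b, f_O, f_C, untrusted) and
            ds_property(b, M))

# Classifications: 0=unclassified < 1=confidential < 2=secret < 3=top secret.
f_S = {"s": (2, frozenset({"NATO"}))}
f_C = {"s": (2, frozenset({"NATO"}))}
f_O = {"low": (1, frozenset()), "high": (3, frozenset({"NATO"}))}
b = {("s", "low", "r"), ("s", "high", "a")}       # read down, append up: allowed
M = {("s", "low"): {"r"}, ("s", "high"): {"a"}}
print(secure(b, M, f_S, f_O, f_C, untrusted={"s"}))   # True
```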


Basic Security Theorem

This theorem states that security (as defined) can be guaranteed systemically when each alteration to the current state does not itself cause a breach of security. Thus security can be guaranteed systemically if, whenever (subject, object, attribute) is added to the current access set b,

1. level(subject) dominates level(object) if attribute involves observation(to assure the ss-property);

2. current−level(subject) and level(object) have an appropriate dominancerelation (to assure the *-property); and

3. attribute is contained in the (subject, object) component of the accesspermission matrix M (to assure the ds-property).

The basic security theorem establishes the inductive nature of security, in that it shows that the preservation of security from one state to the next guarantees total system security.

Thus, a framework for constructing general mechanisms within the model is a direct consequence of the basic security theorem. This framework relies on the “rule,” a function specifying a decision (an output) and a next state for every state and every request (an input):

(request, current state) →rule (decision, next state).
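A rule in this sense can be sketched as an ordinary function from (request, state) to (decision, next state). The toy “get read” rule below (my own simplified encoding, ignoring the hierarchy component; levels are (classification, category set) pairs) grants an access only when adding it would preserve the three properties:

```python
def get_read_rule(request, state):
    """Decide a request (s, o) for read access; return (decision, next state)."""
    s, o = request
    b, M, f_S, f_O, f_C = state
    ok = (f_S[s][0] >= f_O[o][0] and f_S[s][1] >= f_O[o][1]      # ss-property
          and f_C[s][0] >= f_O[o][0] and f_C[s][1] >= f_O[o][1]  # *-property for r
          and "r" in M.get((s, o), set()))                       # ds-property
    if ok:
        return "yes", (b | {(s, o, "r")}, M, f_S, f_O, f_C)
    return "no", state                                           # state unchanged

state = (frozenset(), {("s", "doc"): {"r"}},
         {"s": (2, frozenset())}, {"doc": (1, frozenset())}, {"s": (2, frozenset())})
decision, state = get_read_rule(("s", "doc"), state)
print(decision)   # yes
```

By the basic security theorem, a system built only from such property-preserving rules, started in a secure state, is secure.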

Formal Mathematical Model

The elements of the mathematical model are presented in the following, where the notation A^B denotes the set of all functions from B to A.

Elements of The Model

S = {S1, S2, ..., Sn} Set of subjects

O = {O1, O2, ..., Om} Set of objects

C = {C1, C2, ..., Cq}, where C1 > C2 > ... > Cq. Classifications: clearance level of a subject; classification of an object

K = {K1, K2, ..., Kr} Categories: special access privileges

L = {L1, L2, ..., Lp} Security levels; Li = (Ci, Ki) where Ci ∈ C and Ki ⊆ K


⪰ Dominance relation on L, which is defined as follows: Li ⪰ Lj iff Ci ≥ Cj and Ki ⊇ Kj.

(L, ⪰) is a partial order (the proof is straightforward).
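That dominance is a partial order (reflexive, antisymmetric, transitive) can be checked exhaustively on a small set of levels. A quick Python sketch (encoding categories as frozensets is my own choice):

```python
from itertools import product

def dom(l1, l2):
    """Li dominates Lj iff Ci >= Cj and Ki is a superset of Kj."""
    c1, k1 = l1
    c2, k2 = l2
    return c1 >= c2 and k1 >= k2     # frozenset >= is the superset test

levels = [(c, k) for c, k in product(range(2),
          [frozenset(), frozenset({"NATO"}), frozenset({"NATO", "Crypto"})])]

reflexive     = all(dom(a, a) for a in levels)
antisymmetric = all(not (dom(a, b) and dom(b, a)) or a == b
                    for a, b in product(levels, repeat=2))
transitive    = all(not (dom(a, b) and dom(b, c)) or dom(a, c)
                    for a, b, c in product(levels, repeat=3))
print(reflexive and antisymmetric and transitive)   # True
```

Note that dominance is only a partial order: (1, ∅) and (0, {NATO}) are incomparable, since neither category set contains the other.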

A = {r, e, w, a} Access attributes [r: read-only; e: execute (no read, no write); w: write (read and write); a: append (write-only)]

RA = {g, r} Request elements [g: get, give; r: release, rescind]

S ′ ⊆ S Subjects subject to *-property.

ST = S − S′ Trusted subjects: subjects not subject to the *-property but trusted not to violate security with respect to it.

R = ⋃1≤i≤5 R(i) where

R(1) = RA × S × O × A requests for get and release access
R(2) = S × RA × S × O × A requests for give and rescind access
R(3) = RA × S × O × L requests for generation and reclassification of objects
R(4) = S × O requests for destruction of objects
R(5) = S × L requests for changing security level

D = {yes, no, error, ?} Decisions (Dm ∈D)

T = {1,2, ..., t, ...} Indices

F ⊆ L^S × L^O × L^S Security vectors [fS: subject security level function; fO: object security level function; fC: current security level function]. An element f = (fS, fO, fC) ∈ F iff for each Si ∈ S we have fS(Si) ⪰ fC(Si)

X = R^T Request sequences (x ∈ X)

Y = D^T Decision sequences (y ∈ Y)

M = {M1, M2, ..., M2^{4nm}} Access matrices; an element of M, say Mk, is an n × m matrix with entries from P(A); the (i, j)-entry of matrix Mk shows Si’s attributes relative to Oj; the entry is denoted by Mij.

H ⊆ (P(O))^O Hierarchies; a hierarchy is a forest possibly with stumps, i.e., a hierarchy can be represented by a collection of rooted, directed trees and isolated points. A hierarchy H ∈ H iff

(1) Oi ≠ Oj implies H(Oi) ∩ H(Oj) = ∅, and
(2) there is no set {O1, O2, ..., Ow} ⊆ O such that Or+1 ∈ H(Or) for each r, 1 ≤ r ≤ w, with Ow+1 = O1.

B = P(S ×O ×A) Current access set (b ∈ B)

V = B ×M × F ×H States (v ∈ V )

Z = V^T State sequences; if z ∈ Z, then zt ∈ z is the t-th state in the state sequence z.


Definition (System): Suppose that W ⊂ R × D × V × V. The system Σ(R,D,W,z0) ⊂ X × Y × Z is defined by (x, y, z) ∈ Σ(R,D,W,z0) iff (xt, yt, zt, zt−1) ∈ W for each t in T, where z0 is an initial state of the system, usually of the form (∅, M, f, H).

Notation– The following notation is defined:

b(S: x, y, ..., z) = {O ∣ (S,O,x) ∈ b ∨ (S,O,y) ∈ b ∨ ... ∨ (S,O,z) ∈ b}

Simple Security Property: A state v = (b, M, f, H) satisfies the simple security property (ss-property) iff

S ∈ S ⇒ [(O ∈ b(S: r, w)) ⇒ (fS(S) ⪰ fO(O))]

It is convenient also to define: (S, O, x) ∈ b satisfies the simple security condition relative to f (SSC rel f) iff

(i) x = e or a, or

(ii) x = r or w and fS(S) ⪰ fO(O)

Star-Property: Suppose S′ is a subset of S. A state v = (b, M, f, H) satisfies the *-property relative to S′ iff S ∈ S′ implies:

(O ∈ b(S: a)) ⇒ (fO(O) ⪰ fC(S))
(O ∈ b(S: w)) ⇒ (fO(O) = fC(S))
(O ∈ b(S: r)) ⇒ (fC(S) ⪰ fO(O))

An immediate consequence is: if v satisfies the *-property rel S′ and S ∈ S′, then [Oj ∈ b(S: a) and Ok ∈ b(S: r)] ⇒ fO(Oj) ⪰ fO(Ok).

Discretionary-Security Property: A state v = (b, M, f, H) satisfies the discretionary-security property (ds-property) iff

(Si, Oj, x) ∈ b ⇒ x ∈ Mij

Definition (Secure System): A state v is a secure state iff v satisfies the ss-property, the *-property rel S′, and the ds-property. A state sequence z is a secure state sequence iff zt is a secure state for each t ∈ T. Call (x, y, z) ∈ Σ(R,D,W,z0) an appearance of the system. (x, y, z) ∈ Σ(R,D,W,z0) is a secure appearance iff z is a secure sequence. Finally, Σ(R,D,W,z0) is a secure system iff every appearance of Σ(R,D,W,z0) is a secure appearance. Similar definitions pertain for the notions:

(i) the system Σ(R,D,W, z0) satisfies the ss-property,

(ii) the system satisfies *-property rel S ′, and

(iii) the system satisfies the ds-property.


Definition (Rule): A rule is a function ρ: R × V → D × V. A rule therefore associates with each request-state pair (input) a decision-state pair (output).

A rule ρ is secure-state-preserving iff v∗ is a secure state whenever ρ(Rk, v) = (Dm, v∗) and v is a secure state. Similar definitions pertain for the notions

(i) ρ is ss-property-preserving,

(ii) ρ is *-property-preserving, and

(iii) ρ is ds-property-preserving.

Suppose w = {ρ1, ρ2, ..., ρs} is a set of rules. The relation W(w) is defined by (Rk, Dm, v∗, v) ∈ W(w) iff Dm ≠ ? and (Dm, v∗) = ρi(Rk, v) for a unique i, 1 ≤ i ≤ s.

Definition: (Ri, Dj, v∗, v) ∈ R × D × V × V is an action of Σ(R,D,W,z0) iff there is an appearance (x, y, z) of Σ(R,D,W,z0) and some t ∈ T such that (Ri, Dj, v∗, v) = (xt, yt, zt, zt−1).

Theorem 1– Σ(R,D,W,z0) satisfies the ss-property for any initial state z0 which satisfies the ss-property iff W satisfies the following conditions for each action (Ri, Dj, (b∗, M∗, f∗, H∗), (b, M, f, H)):

(i) each (S,O,x) ∈ b∗ − b satisfies the simple security condition relative to f∗ (SSC rel f∗);

(ii) each (S,O,x) ∈ b which does not satisfy SSC rel f∗ is not in b∗.

Proof: (⇐) Suppose z0 = (b, M, f, H) is an initial state which satisfies the ss-property. Pick (x, y, z) ∈ Σ(R,D,W,z0) and write zt = (b(t), M(t), f(t), H(t)) for each t ∈ T.

z1 satisfies the ss-property: (x1, y1, z1, z0) is in W. In order to show that z1 satisfies the ss-property, we need to show that each (S,O,x) ∈ b(1) satisfies SSC rel f(1). Notice that b(1) = (b(1) − b(0)) ∪ (b(0) ∩ b(1)) and (b(1) − b(0)) ∩ (b(1) ∩ b(0)) = ∅. Suppose (S,O,x) ∈ b(1). Then (S,O,x) is either in (b(1) − b(0)) or in (b(1) ∩ b(0)). If (S,O,x) ∈ (b(1) − b(0)), then (S,O,x) satisfies SSC rel f(1) according to (i). If (S,O,x) ∈ (b(1) ∩ b(0)), then (S,O,x) satisfies SSC rel f(1) according to (ii). Therefore z1 satisfies the ss-property.

If zt−1 satisfies the ss-property, then zt satisfies the ss-property: the argument given for z1 applies with t−1 substituted for 0 and t substituted for 1.


By induction, z satisfies the ss-property, so that the appearance (x, y, z) satisfies the ss-property. Since (x, y, z) was arbitrary, Σ(R,D,W,z0) satisfies the ss-property.

(⇒) Suppose Σ(R,D,W,z0) satisfies the ss-property for any initial state z0 which satisfies the ss-property. Argue by contradiction; the contradiction yields the proposition:

“there is some action (xt, yt, zt, zt−1) such that either

(iii) some (S,O,x) in b(t) − b(t−1) does not satisfy SSC rel f(t), or

(iv) some (S,O,x) in b(t−1) which does not satisfy SSC rel f(t) is in b(t), i.e., is in b(t−1) ∩ b(t).”

Suppose (iii). Then there is some (S,O,x) ∈ b(t) which does not satisfy SSC rel f(t). Suppose (iv). Then again there is some (S,O,x) ∈ b(t) which does not satisfy SSC rel f(t). Therefore zt does not satisfy the ss-property, (x, y, z) does not satisfy the ss-property, and so Σ(R,D,W,z0) does not satisfy the ss-property, which contradicts the initial assumption of the argument. ◻

Theorem 2– Σ(R,D,W,z0) satisfies the *-property relative to S′ ⊆ S for any initial state z0 which satisfies the *-property relative to S′ iff W satisfies the following conditions for each action (Ri, Dj, (b∗, M∗, f∗, H∗), (b, M, f, H)):

(i) for each S ∈ S ′,

(a) O ∈ (b∗ − b)(S: a) ⇒ f∗O(O) ⪰ f∗C(S),
(b) O ∈ (b∗ − b)(S: w) ⇒ f∗O(O) = f∗C(S),
(c) O ∈ (b∗ − b)(S: r) ⇒ f∗C(S) ⪰ f∗O(O);

(ii) for each S ∈ S ′,

(a′) [O ∈ b(S: a) and f∗O(O) ⋡ f∗C(S)] ⇒ O ∉ b∗(S: a),

(b′) [O ∈ b(S: w) and f∗O(O) ≠ f∗C(S)] ⇒ O ∉ b∗(S: w), and

(c′) [O ∈ b(S: r) and f∗C(S) ⋡ f∗O(O)] ⇒ O ∉ b∗(S: r).

Proof: As an exercise (similar to the proof of Theorem 1). ◻

Theorem 3– Σ(R,D,W,z0) satisfies the ds-property iff z0 satisfies the ds-property and W satisfies the following conditions for each action (Ri, Dj, (b∗, M∗, f∗, H∗), (b, M, f, H)):

(i) (Sa,Oa′ , x) ∈ b∗ − b⇒ x ∈M∗a,a′ ; and

(ii) (Sa,Oa′ , x) ∈ b and x /∈M∗a,a′ ⇒ (Sa,Oa′ , x) /∈ b∗.


Proof: As an exercise (similar to the proof of Theorem 1). ◻

Corollary 1– Σ(R,D,W, z0) is a secure system iff z0 is a secure state and W satisfies the conditions of Theorems 1 to 3 for each action.

Corollary 2– Suppose w is a set of secure-state-preserving rules and z0 is an initial state which is a secure state. Then Σ(R,D,W (w), z0) is a secure system.

Theorem 5– Let ρ be a rule and ρ(Rk, v) = (Dm, v∗), where v = (b,M, f,H) and v∗ = (b∗,M∗, f∗,H∗).

(i) If b∗ ⊆ b and f∗ = f , then ρ is ss-property-preserving.

(ii) If b∗ ⊆ b and f∗ = f , then ρ is *-property-preserving.

(iii) If b∗ ⊆ b and M∗ij ⊇Mij for all i and j, then ρ is ds-property-preserving.

(iv) If b∗ ⊆ b, f∗ = f , and M∗ij ⊇ Mij for all i and j, then ρ is secure-state-preserving.

Proof: (i) If v satisfies the ss-property, then (S,O,x) ∈ b∗ with x = w or r implies (S,O,x) ∈ b, so that fS(S) ⪰ fO(O) by assumption. Since f∗ = f , we have f∗S(S) ⪰ f∗O(O). Thus v∗ satisfies the ss-property and ρ is ss-property-preserving.

(ii) and (iii) are proved in ways exactly analogous to the proof of (i). Statements (i), (ii), and (iii) together prove (iv). ◻

Definition of Rules

Notation– The symbol ∖ will be used in expressions of the form A ∖ B to mean “proposition A except as modified by proposition B”. Suppose M is a matrix. Then M ∖ Mij ← {a} means the matrix obtained from M by replacing the (i, j)th element by {a}, and M ∖ Mij ∪ {x} means the matrix obtained from M by adding the element x to the (i, j)th set entry.
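The two update notations can be mirrored in a few lines of Python (an illustrative sketch only; modeling the access matrix as a dictionary mapping (i, j) pairs to sets is our own choice, not part of the model):

```python
# Hypothetical sketch of the M \ Mij <- {a} and M \ Mij U {x} notations.
# M is modeled as a dict mapping (i, j) pairs to sets of access attributes.

def replace_entry(M, i, j, new_set):
    """M \\ Mij <- new_set: a copy of M with the (i, j) entry replaced."""
    M2 = {k: set(v) for k, v in M.items()}
    M2[(i, j)] = set(new_set)
    return M2

def add_to_entry(M, i, j, x):
    """M \\ Mij U {x}: a copy of M with x added to the (i, j) set entry."""
    M2 = {k: set(v) for k, v in M.items()}
    M2.setdefault((i, j), set()).add(x)
    return M2

M = {(0, 0): {"r", "w"}, (0, 1): {"r"}}
M_replaced = replace_entry(M, 0, 0, {"a"})   # (0, 0) entry becomes {"a"}
M_added = add_to_entry(M, 0, 1, "w")         # (0, 1) entry becomes {"r", "w"}
```

Both helpers return fresh matrices, matching the notation's reading as “the matrix obtained from M”, so the original M is left unchanged.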

There are 11 rules defined in the BLP model. Some of these rules are presented in the following.

Rule 1 (R1): get−read

Domain of R1: all Rk = (g, Si,Oj , r) in R(1). (Denote the domain of Ri by dom(Ri).)

Semantics: Subject Si requests access to object Oj in read-only mode (r).

*-property function: ∗1(Rk, v) = TRUE ⇔ fC(Si) ⪰ fO(Oj).

The rule:


R1(Rk, v) =

(?, v), if Rk /∈ dom(R1);
(yes, (b ∪ {(Si,Oj , r)},M, f,H)), if [Rk ∈ dom(R1)] & [r ∈ Mij] & [fS(Si) ⪰ fO(Oj)] & [Si ∈ ST or ∗1(Rk, v)];
(no, v), otherwise.

Algorithm for R1:
if Rk /∈ dom(R1)
  then R1(Rk, v) = (?, v);
else if r ∈ Mij and ⟨[Si ∈ S′ and ∗1(Rk, v)] or [Si ∈ ST and fS(Si) ⪰ fO(Oj)]⟩
  then R1(Rk, v) = (yes, (b ∪ {(Si,Oj , r)},M, f,H));
else R1(Rk, v) = (no, v);
end;
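As an illustration, the decision logic of R1 can be sketched in Python. This is a hypothetical model, not BLP's own formulation: security levels are encoded as integers so that the dominance relation ⪰ becomes >=, the hierarchy H is omitted, and the `trusted` set stands in for ST:

```python
# Illustrative sketch of rule R1 (get-read) of the BLP model.
# Levels are integers, so "f dominates g" is modeled as f >= g.
# The state is reduced to (b, M, level functions); H is omitted.

def get_read(request, b, M, f_S, f_C, f_O, trusted):
    kind, S, O, mode = request
    if kind != "g" or mode != "r":          # Rk not in dom(R1)
        return ("?", b)
    star_ok = f_C[S] >= f_O[O]              # the *-property function *1
    if (mode in M.get((S, O), set())        # discretionary check: r in Mij
            and f_S[S] >= f_O[O]            # ss-property check
            and (S in trusted or star_ok)):
        return ("yes", b | {(S, O, "r")})   # grant: record the access in b
    return ("no", b)

b = set()
M = {("s1", "o1"): {"r"}}
f_S = {"s1": 2}; f_C = {"s1": 2}; f_O = {"o1": 1}
decision, b2 = get_read(("g", "s1", "o1", "r"), b, M, f_S, f_C, f_O, trusted=set())
```

Here the request is granted, and the triple (s1, o1, r) appears in the new current-access set; a request outside dom(R1), such as a get-write, yields the "?" answer.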

Similarly, rules R2 ∶ get-append, R3 ∶ get-execute, and R4 ∶ get-write for requests of type R(1) are defined.

Rule 5 (R5) ∶ release-read/execute/write/append

Domain of R5: all Rk = (r, Si,Oj , x) ∈ R(1), x ∈ A.

Semantics: Subject Si signals the release of access to object Oj in mode x, where x is r (read-only), e (execute), w (write), or a (append).

*-property function: ∗5(Rk, v) = TRUE.

The rule:

R5(Rk, v) =

(yes, (b − {(Si,Oj , x)},M, f,H)), if Rk ∈ dom(R5);
(?, v), otherwise.

Algorithm for R5:
if Rk /∈ dom(R5) then R5(Rk, v) = (?, v);
else R5(Rk, v) = (yes, (b − {(Si,Oj , x)},M, f,H));

end;

Rule 6 (R6) ∶ give-read/execute/write/append

Notation– In the following rule, OR denotes the root object in the object hierarchy and OS(j) denotes Oj ’s immediately superior object in the hierarchy. Also, GIVE(Sλ,Oj , v) means Sλ is allowed (has an administrative permission) to give permission to object Oj in the current state v.

Domain of R6: all Rk = (Sλ, g, Si,Oj , x) ∈ R(2), x ∈ A.

Semantics: Subject Sλ gives subject Si access permission to Oj in mode x, where x is r, w, e, or a.


*-property function: ∗6(Rk, v) = TRUE.

The rule:

R6(Rk, v) =

(?, v), if Rk /∈ dom(R6);
(yes, (b,M ∖Mij ∪ {x}, f,H)), if [Rk ∈ dom(R6)] & [⟨[Oj ≠ OR]&[OS(j) ≠ OR]&[OS(j) ∈ b(Sλ ∶ w)]⟩ or ⟨[OS(j) = OR]&[GIVE(Sλ,Oj , v)]⟩ or ⟨[Oj = OR]&[GIVE(Sλ,OR, v)]⟩];
(no, v), otherwise.

Algorithm for R6:
if Rk /∈ dom(R6) then R6(Rk, v) = (?, v);
else if ⟨[Oj ≠ OR] and [OS(j) ≠ OR] and [OS(j) ∈ b(Sλ ∶ w)]⟩ or ⟨[OS(j) = OR] and [GIVE(Sλ,Oj , v)]⟩ or ⟨[Oj = OR] and [GIVE(Sλ,OR, v)]⟩
  then R6(Rk, v) = (yes, (b,M ∖Mij ∪ {x}, f,H));
else R6(Rk, v) = (no, v);

end;

Other rules, including R7 ∶ rescind-read/execute/write/append, R8 ∶ create-object, R9 ∶ delete-object-group, R10 ∶ change-subject-current-security-level, and R11 ∶ change-object-security-level, are defined similarly to the ones specified above.

2.2.2 Denning’s Lattice Model of Secure Information Flow (1976)

Reference: D.E. Denning, “A Lattice Model of Secure Information Flow”, Communications of the ACM, 19(5), pp. 236–243, 1976.

The Model

An information flow model FM is defined by FM = ⟨N,P,SC,⊕,→⟩, where

• N = {a, b, ...} is a set of logical storage objects or information receptacles.

• P = {p, q, ...} is a set of processes. Processes are the active agents responsible for all information flow.

• SC = {A,B, ...} is a set of security classes corresponding to disjoint classes of information.

– Each object a is bound to a security class, denoted by a. There are two methods of binding objects to security classes: static binding, where the security class of an object is constant, and dynamic binding, where the security class of an object varies with its content.


– Users and processes may be bound to security classes. In this case, p (the security class of process p) may be determined by the security clearance of the user owning p or by the history of security classes to which p has had access.

• ⊕ ∶ SC × SC → SC is the class-combining operator, an associative and commutative binary operator that specifies how to label information obtained by combining information from two security classes. The set of security classes is closed under ⊕.

• → ⊆ SC × SC is the can-flow relation, defined on pairs of security classes. For classes A and B, we write A → B if and only if information in class A is permitted to flow into class B. This includes flows along legitimate and storage channels. We shall not be concerned with flows along covert channels (e.g., a process’s effect on the system load).

The security requirement of the model: a flow model FM is secure if and only if execution of a sequence of operations cannot give rise to a flow that violates the relation →.

If a value f(a1, ..., an) flows to an object b that is statically bound to a security class b, then a1 ⊕ ...⊕ an → b must hold. If f(a1, ..., an) flows to a dynamically bound object b, then the class of b must be updated (if necessary) so that the above relation holds.

Example [High-Low Policy]– The high-low policy can be defined by the triple ⟨SC,→,⊕⟩ as follows:

SC = {H,L}

→= {(H,H), (L,L), (L,H)}

H ⊕H =H, H ⊕L =H, L⊕H =H, L⊕L = L
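This policy is small enough to execute directly. In the Python sketch below (an illustration of ours, not part of the model), the → relation is a set of pairs and ⊕ is a function:

```python
# The high-low information flow policy <SC, ->, (+)>.
SC = {"H", "L"}
CAN_FLOW = {("H", "H"), ("L", "L"), ("L", "H")}   # the -> relation

def join(a, b):
    """Class-combining operator (+): H if either operand is H."""
    return "H" if "H" in (a, b) else "L"

def can_flow(a, b):
    return (a, b) in CAN_FLOW

# Information combined from an L and an H operand is labeled H,
# so it may flow to H but not back down to L:
assert join("L", "H") == "H"
assert can_flow(join("L", "H"), "H") and not can_flow(join("L", "H"), "L")
```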

Denning’s Axioms (Derivation of Lattice Structure)

Under certain assumptions, the model components SC, →, and ⊕ form a universally bounded lattice. These assumptions follow from the semantics of information flow.

⟨SC,→,⊕⟩ forms a universally bounded lattice iff

1. ⟨SC,→⟩ is a partially ordered set;

2. SC is finite;

3. SC has a lower bound L such that L→ A for all A ∈ SC;


4. ⊕ is a least upper bound operator.

In assumption (1), reflexivity and transitivity of the flow relation are required for consistency, and antisymmetry follows from the practical assumption of irredundant classes.

Assumption (2), that the set of security classes SC is finite, is a property of any practical system.

Assumption (3), that there exists a lower bound L on SC, acknowledges the existence of public information in the system. All constants (public contents) are candidates to be labeled L, because information from constants should be allowed to flow to any other object.

Assumption (4), that the class-combining operator ⊕ is also a least upper bound operator, is demonstrated by showing that for all A,B,C ∈ SC:

(a) A→ A⊕B and B → A⊕B.

(b) A→ C and B → C ⇒ A⊕B → C.

Without property (a) we would have the semantic absurdity that operands could not flow into the class of a result generated from them. Moreover, it would be inconsistent for an operation such as c ∶= a + b to be permitted whereas c ∶= a is not, since the latter operation can be performed by executing the former with b = 0.

For part (b), consider five objects a, b, c, c1, and c2 such that a → c, b → c, and c = c1 = c2, and consider this program segment:

c1 ∶= a;
c2 ∶= b;
c ∶= c1 ∗ c2.

Execution of this program segment assigns to c information derived from a and b; therefore, the flow a ⊕ b → c is implied semantically. For consistency, we require the flow relation to reflect this fact. Thus for any two classes A and B, A ⊕ B is the least upper bound, also referred to as the join, of A and B.

Notation– If X ⊆ SC is a subset of security classes, then
⊕X = L, if X = ∅;
⊕X = A1 ⊕ ...⊕An, if X = {A1, ...,An}.

Assumptions (1)–(4) imply the existence of a greatest lower bound operator on the security classes, which we denote by ⊗. It can easily be shown that A⊗B = ⊕L(A,B), where L(A,B) = {C ∣ C → A ∧ C → B}.

Also, ⊗X for X ⊆ SC is defined similarly to ⊕X.

Proposition– Ai → B (1 ≤ i ≤ n) if and only if ⊕X → B for X = {A1, ...,An}, i.e., A1 ⊕ ...⊕An → B.
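Both the meet construction A ⊗ B = ⊕L(A,B) and the proposition can be checked mechanically on a small lattice. The four-class “diamond” lattice below is a hypothetical example of ours, with ⊕ computed as the least upper bound of the → order:

```python
# A hypothetical diamond lattice: L -> A -> T and L -> B -> T,
# with A and B incomparable.
SC = {"L", "A", "B", "T"}
ORDER = {("L", "A"), ("L", "B"), ("L", "T"), ("A", "T"), ("B", "T")} | {(x, x) for x in SC}

def leq(a, b):                 # the can-flow relation ->
    return (a, b) in ORDER

def join(a, b):                # A (+) B: least upper bound
    ubs = [c for c in SC if leq(a, c) and leq(b, c)]
    return next(c for c in ubs if all(leq(c, d) for d in ubs))

def meet(a, b):                # A (x) B = (+) over L(A, B), the common lower bounds
    result = "L"               # (+) of the empty set is the bottom class L
    for c in (c for c in SC if leq(c, a) and leq(c, b)):
        result = join(result, c)
    return result
```

For example, join("A", "B") is T while meet("A", "B") is L; and checking every triple of classes confirms that Ai → B for all operands exactly when their join flows to B.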


Enforcement of Security

The primary difficulty with guaranteeing security lies in detecting (and monitoring) all flow-causing operations.

We distinguish between two types of flow:

• Explicit flow to an object b occurs as the result of executing any statement (e.g. assignment or I/O) that directly transfers to b information derived from operands a1, ..., an.

• Implicit flow to b occurs as the result of executing, or not executing, a statement that causes an explicit flow to b, when that statement is conditioned on the value of an expression.

Definition (Program): An abstract program (or statement) S is defined recursively by:

• S is an elementary statement; e.g. assignment or I/O.

• If S1 and S2 are programs (statements), then S = S1;S2 is a program (statement).

• If S1, ..., Sm are programs (statements) and c is an m-valued variable, then S = c ∶ S1, ..., Sm is a program (statement).

The conditional structure is used to represent all conditional (including iterative) statements found in programming languages. For example:

(if c then S1 else S2) ⇒ (c ∶ S1, S2)
(while c do S1) ⇒ (c ∶ S1)
(do case c of S1, ..., Sm) ⇒ (c ∶ S1, ..., Sm)

Definition– The security requirements for any program of the above form are now stated as follows.

• If S is an elementary statement which replaces the contents of an object b with a value derived from objects a1, ..., an (ai = b for some i is possible), then security requires that a1 ⊕ ...⊕ an → b hold after execution of S. If b is dynamically bound to its class, it may be necessary to update b when S is executed.

• S = S1;S2 is secure if both S1 and S2 are individually secure (because of the transitivity of →).

• S = c ∶ S1, ..., Sm is secure if each Sk (1 ≤ k ≤ m) is secure and all implicit flows from c are secure.


Let b1, ..., bn be the objects into which S specifies explicit flows (i.e., for each bi, 1 ≤ i ≤ n, there is an operation in some Sk that causes an explicit flow to bi); then all implicit flow is secure if c → bi (1 ≤ i ≤ n), or equivalently c → b1 ⊗ ...⊗ bn, holds after execution of S.

If bi is dynamically bound to its security class, it may be necessary to update bi by bi ∶= bi ⊕ c.
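These requirements suggest a simple certification check. The sketch below is hypothetical: programs are flattened to (target, operands, guard) triples over the two-class high-low lattice, with the guard variable carrying the implicit flow of the conditional structure:

```python
# Hypothetical certification sketch of the security requirements above,
# over the high-low lattice (H above L). cls maps objects to classes.

def join(classes):
    return "H" if "H" in classes else "L"

def leq(a, b):                     # a -> b in the high-low policy
    return a == b or (a == "L" and b == "H")

def secure(program, cls):
    """program: list of (target, operands, guard-or-None) statements."""
    for target, operands, guard in program:
        # explicit flow: a1 (+) ... (+) an -> b must hold
        if not leq(join(cls[a] for a in operands), cls[target]):
            return False
        # implicit flow: the guard's class c must satisfy c -> b
        if guard is not None and not leq(cls[guard], cls[target]):
            return False
    return True

cls = {"h": "H", "l": "L", "out": "L"}
assert secure([("h", ["l"], None)], cls)          # L -> H is allowed
assert not secure([("out", ["h"], None)], cls)    # explicit H -> L leak
assert not secure([("out", ["l"], "h")], cls)     # implicit leak via guard h
```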

Access Control Mechanism

Each process p has an associated clearance class p specifying the highest class p can read from (observe) and the lowest class p can write into (modify or extend).

Security is enforced by a run-time mechanism that permits p to acquire read access to an object a only if a → p, and write access to an object b only if p → b. Hence, p can read from a1, ..., am and write into b1, ..., bn only if a1 ⊕ ...⊕ am → p → b1 ⊗ ...⊗ bn.

This mechanism automatically guarantees the security of all flows, explicit or implicit, since no flow from an object a to an object b can occur unless a → p → b, which implies a → b.
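The run-time check can be sketched as follows (a hypothetical Python illustration over the high-low lattice; the final assertion exhaustively verifies that any mediated flow a → p → b indeed implies a → b):

```python
# Sketch of the run-time access control mechanism: process p may read a
# only if cls(a) -> cls(p), and write b only if cls(p) -> cls(b).

def leq(a, b):                       # can-flow in the high-low lattice
    return a == b or (a == "L" and b == "H")

def may_read(p_cls, obj_cls):
    return leq(obj_cls, p_cls)

def may_write(p_cls, obj_cls):
    return leq(p_cls, obj_cls)

# An H-cleared process can read both classes but write only H (no
# write-down), so every flow it mediates respects the lattice:
assert may_read("H", "L") and may_read("H", "H")
assert may_write("H", "H") and not may_write("H", "L")
assert all(leq(a, b)
           for p in "HL" for a in "HL" for b in "HL"
           if may_read(p, a) and may_write(p, b))
```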

2.3 Information Flow Control

2.3.1 Noninterference for Deterministic Systems (1986)

Reference: J.A. Goguen, J. Meseguer, “Security Policies and Security Models”,IEEE Symposium on Security and Privacy, pp. 11–20, 1982.

One group of users, using a certain set of commands, is noninterfering with another group of users if what the first group does with those commands has no effect on what the second group of users can see.

In this approach, security verification consists of showing that a given policy (containing the security requirements) is satisfied by a given model of a system.

The Model

Two types of systems are considered:

• Static system: what users are permitted to do does not change over time; thus, their capabilities do not change in such a system.


• Dynamic system: what users are permitted to do can change with time; thus, there are some commands that can change the users’ capabilities.

Static Systems

We may assume that all the information about what users are permitted to do is encoded in a single abstract capability table.

The system will also have information which is not concerned with what is permitted; this includes users’ programs, data, messages, etc. We will call a complete characterization of all such information a state of the system. The system provides commands that change these states.

Definition– A static machine M consists of the following elements:

• U as a set of users (could also be taken to be subjects in a more general setting).

• S as a set of states.

• SC as a set of state commands.

• Out as a set of outputs.

Together with:

• out ∶ S × U → Out; a function which tells what a given user sees when the machine is in a given state, called the output function.

• do ∶ S × U × SC → S; a function which tells how states are updated by commands, called the state transition function.

• s0 ∈ S; a constant that indicates the initial machine state.

Note– U × SC can be considered as the set of inputs.

Capability Systems

We assume that in addition to the state machine features there are also capability commands that can change the capability table.

Definition– A capability system M consists of the following elements:

• U as a set of users;

• S as a set of states;

• SC as a set of state commands;


Figure 2.7: Static and capability commands execution.

• Out as a set of outputs;

• Capt as a set of capability tables;

• CC as a set of capability commands.

Together with the following functions:

• out ∶ S × Capt × U → Out; the output function, which tells what a givenuser sees when the machine, including its capability component, is in agiven state.

• do ∶ S × Capt × U × SC → S; the state transition function, which tells how states are updated by commands.

• cdo ∶ Capt × U × CC → Capt; the capability transition function, which tells how capability tables are updated.

• (t0, s0) ∈ Capt × S as an initial capability table and initial state.

C = SC ∪ CC is the set of all commands. We assume that there are no commands that change both the state and the capability table (see Figure 2.7).

A subset of C is called an ability. Let Ab = P(C) denote the set of all such subsets (abilities). Evidently, Capt = Ab^U (a capability table assigns an ability to each user).

Given a capability system M , we can define a system transition function as follows, which describes the effect of commands on the combined system state space S × Capt.

csdo ∶ S × Capt × U × C → S × Capt

which is defined as

csdo(s, t, u, c) =
(do(s, t, u, c), t), if c ∈ SC;
(s, cdo(t, u, c)), if c ∈ CC.

We can now view a capability system as a state machine, with state space S × Capt, input space (U × C)∗, and output space Out. The extended version of the function csdo can be defined as follows.


csdo ∶ S × Capt × (U × C)∗ → S × Capt

which is defined by

• csdo(s, t,NIL) = (s, t) and

• csdo(s, t,w.(u, c)) = csdo′(csdo(s, t,w), u, c)

where w ∈ (U × C)∗, NIL denotes the empty string, dot denotes concatenation, and csdo′ denotes the single-step definition of csdo given above.

[[w]] = csdo(s0, t0,w) denotes the effect of the input string w on states, starting from the initial state of the whole system.

A state s of a state machine M is reachable iff ∃w ∈ (U × C)∗ such that [[w]] = (s, t) for some t.
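The extended csdo is just a left fold of the single-step transition over the input string. The Python sketch below is hypothetical: a counter state, one state command, and one capability command are invented purely for illustration:

```python
# Sketch of the extended csdo as a left fold over (user, command) pairs.
# State: an integer counter; capability table: dict user -> set of commands.

SC = {"inc"}                 # state commands
CC = {"grant_inc"}           # capability commands

def do(s, t, u, c):          # state transition: "inc" counts if u holds it
    return s + 1 if c == "inc" and c in t.get(u, set()) else s

def cdo(t, u, c):            # capability transition: grant "inc" to u
    t2 = {k: set(v) for k, v in t.items()}
    t2.setdefault(u, set()).add("inc")
    return t2

def csdo_step(s, t, u, c):   # the single-step csdo': SC changes s, CC changes t
    if c in SC:
        return (do(s, t, u, c), t)
    return (s, cdo(t, u, c))

def csdo(s, t, w):
    """Extended csdo: csdo(s, t, NIL) = (s, t); fold csdo_step over w."""
    for (u, c) in w:
        s, t = csdo_step(s, t, u, c)
    return (s, t)

s0, t0 = 0, {"alice": {"inc"}}
state, table = csdo(s0, t0, [("alice", "inc"), ("bob", "inc"),
                             ("bob", "grant_inc"), ("bob", "inc")])
# bob's first "inc" is ignored (no capability); after grant_inc it counts.
```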

Static Policies

A security policy is a set of noninterference assertions. Each noninterference assertion says that

what one group of users does using a certain ability has no effect on what some other group of users sees.

Notation– Let w ∈ (U × C)∗ and u ∈ U . We define [[w]]u to be the output to u after doing w on M , i.e., [[w]]u = out([[w]], u).

Definition– Let G ⊆ U (a group of users), A ⊆ C (an ability), and w ∈ (U × C)∗. Then we let PG(w) denote the subsequence of w obtained by eliminating those pairs (u, c) with u ∈ G. Similarly, PA(w) eliminates pairs with c ∈ A, and PG,A(w) eliminates pairs with u ∈ G and c ∈ A.

Example: G = {u, v}, A = {c1, c2}
PG,A( (u′, c1).(u, c3).(u, c2).(v′, c1) ) = (u′, c1).(u, c3).(v′, c1)
PA( (u′, c1).(u, c3).(u, c2).(v′, c1) ) = (u, c3)
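The purge operators are simple subsequence filters. The following Python sketch (strings standing in for users and commands) replays the example above:

```python
# Purge operators P_G, P_A, and P_{G,A} over strings in (U x C)*.

def purge_G(w, G):
    """Eliminate pairs (u, c) with u in G."""
    return [(u, c) for (u, c) in w if u not in G]

def purge_A(w, A):
    """Eliminate pairs (u, c) with c in A."""
    return [(u, c) for (u, c) in w if c not in A]

def purge_GA(w, G, A):
    """Eliminate pairs (u, c) with u in G and c in A."""
    return [(u, c) for (u, c) in w if not (u in G and c in A)]

G = {"u", "v"}
A = {"c1", "c2"}
w = [("u'", "c1"), ("u", "c3"), ("u", "c2"), ("v'", "c1")]
assert purge_GA(w, G, A) == [("u'", "c1"), ("u", "c3"), ("v'", "c1")]
assert purge_A(w, A) == [("u", "c3")]
```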

Definition– Given a state machine M and sets G and G′ of users, we say that G does not interfere with (or is noninterfering with) G′, written G ∶ ∣G′, iff

∀w ∈ (U ×C)∗,∀u ∈ G′, [[w]]u = [[PG(w)]]u

Similarly, an ability A does not interfere with G′, written A ∶ ∣G′, iff

∀w ∈ (U ×C)∗,∀u ∈ G′, [[w]]u = [[PA(w)]]u

Finally, users in G with ability A do not interfere with users in G′, written A,G ∶ ∣G′, iff

∀w ∈ (U ×C)∗,∀u ∈ G′, [[w]]u = [[PG,A(w)]]u
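For a finite machine, these definitions can be checked by brute force over bounded-length input strings. The two-counter machine below is a hypothetical example of ours: the high user's commands bump a counter the low user cannot see, so {h} ∶ ∣{l} holds, while the converse fails:

```python
from itertools import product

# Hypothetical machine: U = {h, l}, one command each. h's command
# increments a high counter, l's a low counter; the low user's output
# is only the low counter.
U = ["h", "l"]
C = ["cmd"]

def do(state, u, c):
    hi, lo = state
    return (hi + 1, lo) if u == "h" else (hi, lo + 1)

def out(state, u):
    return state if u == "h" else state[1]   # l sees only the low counter

def run(w, s0=(0, 0)):
    """[[w]]: the state reached by input string w from the initial state."""
    s = s0
    for (u, c) in w:
        s = do(s, u, c)
    return s

def purge_G(w, G):
    return [(u, c) for (u, c) in w if u not in G]

def noninterfering(G, Gp, max_len=4):
    """Brute-force check of G :| G' over all input strings up to max_len."""
    events = [(u, c) for u in U for c in C]
    return all(out(run(list(w)), u) == out(run(purge_G(list(w), G)), u)
               for n in range(max_len + 1)
               for w in product(events, repeat=n)
               for u in Gp)

assert noninterfering({"h"}, {"l"})      # h's commands are invisible to l
assert not noninterfering({"l"}, {"h"})  # but l's commands affect h's view
```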


Figure 2.8: An information flow diagram.

Example: A ∶ ∣{u} means that running the commands in A does not have any effect on what user u sees.

Definition– A security policy is a set of noninterference assertions.

Example: (Multilevel security, such as BLP)
level ∶ U → L
U[−∞, x] = {u ∈ U ∣ level(u) ≤ x}
U[x,+∞] = {u ∈ U ∣ level(u) ≥ x}
∀x > x′ ∶ U[x,+∞] ∶ ∣U[−∞, x′] (this specifies both the SS- and *-properties of BLP)

Definition– G is invisible (relative to other users) iff G ∶ ∣ −G.

Now, it is very easy to express MLS using this notion:
∀x ∈ L, U[x,+∞] is invisible, or
∀x ∈ L, U − U[−∞, x] is invisible,
i.e., ∀x ∈ L, U − U[−∞, x] ∶ ∣U[−∞, x]

Example: (Security Officer) The set A consists of exactly those commands that can change the capability table.
Policy: There is just one designated user seco, the security officer, whose use of those commands will have any effect: A,−{seco} ∶ ∣U

Example: (Channel Control) A very general notion of channel is just a set of commands, i.e., an ability A ⊆ C.
Policy: G and G′ can communicate only through the channel A:
−A,G ∶ ∣G′ ∧ −A,G′ ∶ ∣G

Example: (Information Flow) a, b, c, and d are processes, and A1, A2, and A3 are channels. They can communicate (as depicted in Figure 2.8) as follows:
{b, c, d} ∶ ∣{a}
{c, d} ∶ ∣{b}
{c} ∶ ∣{d}
{d} ∶ ∣{c}
−A1,{a} ∶ ∣{b, c, d}
−A2,{b} ∶ ∣{c}
−A3,{b} ∶ ∣{d}


Dynamic Policies

In dynamic policies, whether or not a given user u can interfere with another user v by using an operation (command) c may vary with time.

Definition– Let G and G′ be sets of users, A be a set of commands, and Q be a predicate defined over (U × C)∗, i.e., Q ∶ (U × C)∗ → {0,1}. Then, G using A is noninterfering with G′ under condition Q, written

G,A ∶ ∣G′ if Q

iff
∀u′ ∈ G′,∀w ∈ (U × C)∗, [[w]]u′ = [[P (w)]]u′

where P is defined by
P (λ) = λ, where λ is the empty string, and
P (o1...on) = o′1...o′n, where
o′i = λ, if Q(o′1...o′i−1) ∧ oi = (u, a) with u ∈ G and a ∈ A;
o′i = oi, otherwise.

Example: (Discretionary Access) We assume the existence of a function CHECK(w,u, c), which looks at the capability table in state [[w]] to see whether or not u is authorized to do command c; it returns true if he is, and false if not.
CHECK ∶ (U × C)∗ × U × C → {0,1}, or equivalently CHECK(u, c) ∶ (U × C)∗ → {0,1}.
The general policy that we wish to enforce for all users u and all commands c is

{u},{c} ∶ ∣U if ¬CHECK(u, c)

We can define such a policy in another way:
pass(u, c) is a command which gives a capability to a user;
unpass(u, c) is a command which takes a capability from a user.

w ∈ (U ×C)∗ ∧w = w′.o⇒ previous(w) = w′, last(w) = o

Policy:
{u},{c} ∶ ∣U if [¬CHECK(previous, u, c) ∧ (CHECK(previous, u′, pass(u, c)) → ¬(last = (u′, pass(u, c))))]
This says that u using c cannot interfere if in the previous state he did not have the capability to use c, unless some user u′ who had, in the previous state, the capability to pass u the ability to use c in fact did so.

The corresponding assertion for the revocation operation, which we denote unpass(u, c), is
{u},{c} ∶ ∣U if [CHECK(previous, u′, unpass(u, c)) ∧ last = (u′, unpass(u, c))]


2.3.2 Noninterference for Nondeterministic Systems

In nondeterministic systems, for each input we may have different outputs.

We need a framework for describing nondeterministic systems. In this framework, out is taken to be a relation instead of a function, i.e., the same input is allowed to generate different outputs.

To catch channels, we will include outputs in the history itself. The resulting traces represent acceptable input/output behaviors, and a system is a set of acceptable traces.
Example– A = {⟨⟩, ⟨in1⟩, ⟨in1, out1⟩, ⟨in1, in2, out1⟩, ...}
We can show the above set with the following notation as well:
A = {⟨⟩, in1, in1.out1, in1.in2.out1, ...}

Example– A system in which a user can give as input either 0 or 1 and immediately receives that input as output is specified by the following set of traces:
A = {⟨⟩, in(0), in(1), in(0).out(0), in(1).out(1), in(0).out(0).in(1), ...}

For simplicity, we assume that any prefix of an acceptable trace must also bean acceptable trace and that a user can give input at any time.

The obvious way to generalize noninterference is to require that the purge of an acceptable trace be an acceptable trace, where the purge of a trace is formed by removing all high-level inputs from the trace.

Example– In the previous example, assume that all inputs and outputs are high-level. Since the system generates no low-level output, it is trivially secure. Now

• T = highin(0).highout(0) is an acceptable trace,

• P (T ) = highout(0), but this purged trace is not acceptable (since it contains unsolicited output), so the system is not secure by the provided definition.

Thus, the provided definition is not appropriate. An obvious fix is to refine the purge operator so that it removes not simply all high-level input, but all high-level output as well.

Example*– The system specified by the following set of traces satisfies the refined property and is secure:
A = {⟨⟩, highin(0), highin(1), lowout(0), lowout(1), highin(0).lowout(0), highin(1).lowout(1)}

The above approach has some problems:


1. It is too strong, in that it rules out any system where low-level input must generate high-level output. For example, a system that secretly monitors low-level usage and sends its audit records as high-level output to some other system for analysis is nonsecure.

2. In the previous example (labeled by *), consider a scenario where a Trojan horse acting on behalf of a high-level user can pass information to a low-level user using such a system. If the Trojan horse wants to send a 0 or 1 to the low-level user, it simply gives the appropriate bit as input before the next low-level output is generated.

To tackle the second problem, the definition would also have to regard the traces highin(0).lowout(1) and highin(1).lowout(0) as acceptable, which would close the nonsecure channel:
A = {⟨⟩, highin(0), highin(1), lowout(0), lowout(1), highin(0).lowout(0), highin(1).lowout(1), highin(0).lowout(1), highin(1).lowout(0)}

Of course, it would be too strong to require that any arbitrary insertion of high-level events into an acceptable trace must be acceptable. A lighter version would be enough, and it is considered in the definition of nondeducibility.

2.3.3 Nondeducibility (1986)

Definition– A system is nondeducibility secure iff for any two acceptable traces T and S, there is an acceptable trace R consisting of T ’s low-level events (in their respective order), S’s high-level inputs (in their respective order), and possibly some other events that are neither low-level events in T nor high-level inputs from S.

Intuitively, whatever the low-level user sees is compatible with any acceptable high-level input.

Nondeducibility has some problems:

1. Nondeducibility is weak,

2. Nondeducibility is not composable.

Nondeducibility is weak

For example, consider a system where a high-level user H gives arbitrary high-level input (presumably secret messages of some sort) and some low-level user L gives the low-level input look.

When L issues look, he or she receives as low-level output the encryption of H’s input up to that time, if there is any, or else a randomly generated string (see Figure 2.9).


Figure 2.9: An example of the weakness of nondeducibility.

Such a system models an encryption system where low-level users can observe encrypted messages leaving the system but, to prevent traffic analysis, random strings are generated when there is no encrypted output.

This system satisfies nondeducibility, since low-level users can learn nothing about high-level input. A sample of acceptable traces of the system is as follows.

T = highin(m1).lowin(look).lowout(E(m1)).lowin(look).lowout(random)
S = highin(m2).lowin(look).lowout(E(m2))
For nondeducibility:
R = lowin(look).lowout(E(m1)).lowin(look).lowout(random).highin(m2)
(E(m1) seems random here)

The problem arises when we realize that the system would still satisfy nondeducibility even if we removed the encryption requirement. For example:

S = highin(attack at dawn)
T = lowin(look).lowout(xxx)
R = lowin(look).lowout(xxx).highin(attack at dawn)

Similarly,

S = ⟨⟩
T = highin(attack at dawn).lowin(look).lowout(attack at dawn)
R = highin(attack at dawn).lowin(look).lowout(attack at dawn)

The system is nondeducibility secure, but intuitively it is not secure.

Nondeducibility is not composable

The system A has the following traces: each trace starts with some number of high-level inputs or outputs, followed by the low-level output STOP, followed by the low-level output ODD (if there has been an odd number of high-level events prior to STOP) or EVEN otherwise. The high-level outputs and the output STOP leave via the right channel, and the events ODD and EVEN leave via the left channel (see Figure 2.10).

The system B behaves exactly like A (see Figure 2.10), except that

• its high-level outputs leave it via left channel,

• its EVEN and ODD outputs leave it via right channel, and


• STOP is an input to its left channel.

Figure 2.10: An example of the non-composability of nondeducibility.

Figure 2.11: Hook-up composition of two sample systems.

Both systems A and B are nondeducibility secure.

Composition by hook-up: A and B are connected so that the left channel of B is connected to the right channel of A (see Figure 2.11).

Since the number of shared high-level signals is the same for A and B, the fact that A says ODD while B says EVEN (or vice versa) means that there has been at least one high-level input from outside. Therefore, the composition of A and B by hook-up is not nondeducibility secure.

Referring back to the definition of nondeducibility, we see that the cause of these problems is that it allows us too much freedom in constructing an acceptable trace R from the low-level events of an acceptable trace T and the high-level inputs of an acceptable trace S.

2.3.4 Generalized Noninterference (GNI)

Given an acceptable system trace T and an alteration T1 formed by inserting or deleting a high-level input to or from T , there is an acceptable trace T2 formed


by inserting or deleting high-level outputs to or from T1 after the occurrence of the alteration in T made to form T1.

For example, in the previous example, a possible trace is lowin(look).lowout(xxx). If we alter this trace to obtain highin(attack at dawn).lowin(look).lowout(xxx), we are left with an unacceptable trace that cannot be made acceptable by inserting or deleting high-level outputs after the occurrence of the inserted high-level input. Hence, the system fails to satisfy GNI.

The problem is that, again, GNI is not composable.

2.3.5 Restrictiveness

To create a composable security property, we must be even more restrictive. We require that a high-level input may not change the low-level state of the system. Therefore, the system should respond the same to a low-level input whether or not a high-level input was made immediately before.

State Machine

Definition– A state machine consists of

1. a set of possible states,

2. a set of possible events, which might be the inputs, outputs, and internal signals of the system,

3. a set of possible transitions;

4. an initial state (named start).

σ0 --e--> σ1 is a transition, where σ0 is the state of the machine before the transition, e is the accompanying event for the transition, and σ1 is the state of the machine after the transition.

σ0 --[e1,...,en]--> σn is a sequence of transitions starting in σ0 and ending in σn, involving the events e1, ..., en.

σ0 can accept event e if for some state σ1, σ0 --e--> σ1.

Definition– The traces of a state machine are all sequences of events γ such that for some state σ1, start --γ--> σ1, where start is the initial state.

Definition– A state machine is said to be input total if in every state it can accept any input.

In an input total state machine, one can learn about its state only by watching its outputs; no information is conveyed to the user by the acceptance of inputs.


Input totality is a condition for a state machine to be restrictive, but this is not intended to imply that only such machines are secure.

Security for State Machine

Definition– If σ1 and σ2 are two states, then we say σ1 ≈ σ2 if the states differ only in their high-level information, or in other words, if the values of all low-level variables are the same in the two states.

Definition– If γ1 and γ2 are two sequences of events, then we say that γ1 ≈ γ2 if the two sequences agree on low-level events.

Example– Let a be high-level and b low-level. Then [a, b, b, a] ≈ [b, a, b, a] ≈ [b, b].
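This equivalence amounts to comparing the low-level projections of the two traces. A minimal sketch (the event-to-level tagging is an illustrative assumption):

```python
# Events tagged with a security level (the tagging is an assumption);
# two sequences are ≈-equivalent iff their low-level projections agree.
LEVEL = {'a': 'high', 'b': 'low'}

def low_view(trace):
    """Project a trace onto its low-level events."""
    return [e for e in trace if LEVEL[e] == 'low']

def equivalent(t1, t2):
    return low_view(t1) == low_view(t2)

assert equivalent(['a', 'b', 'b', 'a'], ['b', 'a', 'b', 'a'])
assert equivalent(['a', 'b', 'b', 'a'], ['b', 'b'])
assert not equivalent(['b', 'b'], ['b'])
```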

Definition– A state machine is defined to be restrictive for the view determined by ≈ if:

1. It is input total.

2. Inputs affect equivalent states equivalently. Formally, for any states σ1, σ′1, and σ2, and for any two input sequences β1 and β2,

[σ1 --β1--> σ′1 ∧ σ2 ≈ σ1 ∧ β1 ≈ β2] ⇒ ∃σ′2 [σ2 --β2--> σ′2 ∧ σ′2 ≈ σ′1]

3. Equivalent states produce equivalent outputs, which lead again to equivalent states. Formally, for any states σ1, σ′1, and σ2, and for any output sequence γ1,

[σ1 --γ1--> σ′1 ∧ σ2 ≈ σ1] ⇒ ∃σ′2, ∃γ2 [σ2 --γ2--> σ′2 ∧ σ′2 ≈ σ′1 ∧ γ2 ≈ γ1]

Exercise 6– Prove by induction that it is enough to consider cases in which γ1 (but not necessarily γ2) consists of a single event.

Hooking Up Machines

Assume A and B are two state machines. Then, hooking them up means that some outputs of A are sent to B and vice versa.

The common events will then be communication events.

The states of the combined machine are pairs ⟨σ, ν⟩, where σ is a state of A and ν is a state of B.

An event of the composite machine is any event from either component machine. For any sequence of events γ of the composite machine, let γ↑EA be the subsequence of events engaged in by machine A. Similarly for γ↑EB.


⟨σ, ν⟩ --γ--> ⟨σ′, ν′⟩ is a valid transition of the composite machine if σ --γ↑EA--> σ′ and ν --γ↑EB--> ν′ are valid transitions of A and B, respectively.

⟨σ, ν⟩ ≈ ⟨σ′, ν′⟩ ⇔ σ ≈ σ′ ∧ ν ≈ ν′

γ ≈ γ′ ⇔ γ↑EA ≈ γ′↑EA ∧ γ↑EB ≈ γ′↑EB

Theorem– If state machines A and B are restrictive, then the composite machine formed by hooking them up is restrictive.

Proof: (1) The composite machine is input total. If β is any sequence of inputs for the composite machine and ⟨σ, ν⟩ is any starting state, then β↑EA and β↑EB are sequences of inputs for A and B respectively. Since A and B are input total, there are states σ′ and ν′ such that σ --β↑EA--> σ′ and ν --β↑EB--> ν′. Therefore ⟨σ, ν⟩ --β--> ⟨σ′, ν′⟩.

(2) Suppose ⟨σ1, ν1⟩, ⟨σ′1, ν′1⟩, and ⟨σ2, ν2⟩ are states and β1 and β2 are input sequences, with ⟨σ1, ν1⟩ --β1--> ⟨σ′1, ν′1⟩, ⟨σ2, ν2⟩ ≈ ⟨σ1, ν1⟩, and β1 ≈ β2.

(I) Since A is restrictive, ∃σ′2 [σ2 --β2↑EA--> σ′2 ∧ σ′2 ≈ σ′1].

(II) Since B is restrictive, ∃ν′2 [ν2 --β2↑EB--> ν′2 ∧ ν′2 ≈ ν′1].

Together, ⟨σ2, ν2⟩ --β2--> ⟨σ′2, ν′2⟩ and ⟨σ′2, ν′2⟩ ≈ ⟨σ′1, ν′1⟩.

(3) As stated earlier, it is sufficient to consider outputs consisting of a single event (γ1 = [e]). Suppose ⟨σ1, ν1⟩ --e--> ⟨σ′1, ν′1⟩ and ⟨σ1, ν1⟩ ≈ ⟨σ2, ν2⟩.

Assume e is an output from A. Since A is restrictive, σ1 --e--> σ′1, and σ1 ≈ σ2, then

∃σ′2, ∃γ [σ2 --γ--> σ′2 ∧ σ′2 ≈ σ′1 ∧ γ ≈ [e]].

Since the sequence γ is an output sequence of A, any of its events shared by both A and B must be inputs to B. Since γ ≈ [e], it follows that γ↑EB ≈ [e]↑EB. Therefore, because B is restrictive, there exists ν′2 such that ν′2 ≈ ν′1 and ν2 --γ↑EB--> ν′2.

Thus, there exists a state ⟨σ′2, ν′2⟩ such that ⟨σ2, ν2⟩ --γ--> ⟨σ′2, ν′2⟩, ⟨σ′2, ν′2⟩ ≈ ⟨σ′1, ν′1⟩, and γ ≈ [e].

Shortcomings of Restrictiveness

Restrictiveness is not preserved by many standard notions of refinement.

Restrictiveness addresses only noise-free channels.

Example– Possible traces, each with probability 0.0001:

A = {lowout(0), lowout(1), highin(0).lowout(1), highin(1).lowout(0)}


Figure 2.12: RBAC reference models (RBAC0 at the base; RBAC1 adds role hierarchies, RBAC2 adds constraints, and RBAC3 includes both).

2.4 Role Based Access Control Models

The basic concept of RBAC is that users are assigned to roles, permissions are assigned to roles, and users acquire permissions by being members of roles.

Example– The roles existing in a university are Student, Professor, Staff, etc.

A role is a job function or job title within the organization with some associated semantics regarding the authority and responsibility conferred on a member of the role. It can be thought of as a set of transactions that a user or set of users can perform within the context of an organization.

For example, an Instructor can present a course, enter the grades, and publish his/her lecture notes.

A user is assigned to a role that allows him or her to perform only what is required for that role.

A permission is an approval to perform an operation on one or more objects in the system, and an operation is an executable image of a program.

Permissions are positive; denial of access is modeled as constraints rather than as negative permissions.

RBAC is a set of reference models, presented in Figure 2.12.

2.4.1 Core RBAC (RBAC0)

Definition– RBAC0 (as shown in Figure 2.13) has the following components:

• U, R, S, OPS, and OBS (users, roles, sessions, operations, and objects, respectively)


Figure 2.13: The components of RBAC models (the sets USERS, ROLES, PERMISSIONS, and SESSIONS, connected by the user assignment UA, the permission assignment PA, and the role hierarchy RH, with SOD constraints on the assignments and on session role activation).

• UA ⊆ U ×R (user-to-role assignment relation)

• assigned-users ∶ R → P(U) (the mapping of role r onto a set of users. Formally: assigned-users(r) = {u ∈ U ∣ ⟨u, r⟩ ∈ UA}.)

• P = P(OPS ×OBS) (permissions)

• PA ⊆ P ×R (permission-to-role assignment relation)

• assigned-permissions ∶ R → P(P ) (the mapping of role r onto a set of permissions. Formally: assigned-permissions(r) = {p ∈ P ∣ ⟨p, r⟩ ∈ PA}.)

• user-sessions ∶ U → P(S) (the mapping of user u onto a set of sessions)

• session-user ∶ S → U (determines the user of a given session. In other words, session-user(s) = u iff s ∈ user-sessions(u).)

• session-roles ∶ S → P(R) (a function mapping each session si to a set of roles. Formally: session-roles(si) ⊆ {r ∣ ⟨session-user(si), r⟩ ∈ UA})

• avail-session-perms(si) = ⋃r∈session-roles(si) assigned-permissions(r) (the permissions available in session si)

Note– Assume that only a single security officer can change these components.
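A minimal executable sketch of these RBAC0 components (the users, roles, and permissions are illustrative, and a permission is simplified to a single ⟨operation, object⟩ pair):

```python
# Minimal RBAC0 sketch. Users, roles, and permissions are illustrative;
# a permission is simplified to a single (operation, object) pair.
UA = {('alice', 'student'), ('bob', 'professor'), ('bob', 'student')}
PA = {(('read', 'notes'), 'student'), (('write', 'grades'), 'professor')}

def assigned_users(r):
    """assigned-users(r) = {u | <u, r> in UA}"""
    return {u for (u, r2) in UA if r2 == r}

def assigned_permissions(r):
    """assigned-permissions(r) = {p | <p, r> in PA}"""
    return {p for (p, r2) in PA if r2 == r}

# A session activates a subset of its user's assigned roles.
session_roles = {'s1': {'student'}}   # one of bob's sessions

def avail_session_perms(s):
    """Union of the permissions of the roles activated in session s."""
    return set().union(*(assigned_permissions(r) for r in session_roles[s]))

assert assigned_users('student') == {'alice', 'bob'}
assert avail_session_perms('s1') == {('read', 'notes')}
```

Note that a session carries only the permissions of its activated roles: bob's session s1 does not expose his professor permissions.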


2.4.2 Hierarchical RBAC (RBAC1)

RBAC1 adds role hierarchies to RBAC0. Role hierarchies define an inheritance relation among roles. Inheritance has been described in terms of permissions; that is, r1 inherits role r2 if all privileges of r2 are also privileges of r1. Note that user membership is inherited top-down, and role permissions are inherited bottom-up.

This standard recognizes two different hierarchies.

• General role hierarchies provide support for an arbitrary partial order to serve as the role hierarchy, including the concept of multiple inheritance of permissions and user membership among roles.

• Limited role hierarchies impose restrictions resulting in a simpler tree structure (i.e., a role may have one or more immediate ascendants, but is restricted to a single immediate descendant).

Note that an inverted tree is also possible. Examples of possible hierarchical role structures are shown in Figure 2.14.

Definition– General Role Hierarchies:

• RH ⊆ R × R is a partial order on R called the inheritance relation, written as ⪰, where r1 ⪰ r2 only if all permissions of r2 are also permissions of r1, and all users of r1 are also users of r2. Formally: r1 ⪰ r2 ⇒ authorized-permissions(r2) ⊆ authorized-permissions(r1) ∧ authorized-users(r1) ⊆ authorized-users(r2).

• authorized-users ∶ R → P(U), the mapping of role r onto a set of users in the presence of a role hierarchy. Formally: authorized-users(r) = {u ∈ U ∣ ∃r′, r′ ⪰ r ∧ ⟨u, r′⟩ ∈ UA}.

• authorized-permissions ∶ R → P(P ), the mapping of role r onto a set of permissions in the presence of a role hierarchy. Formally: authorized-permissions(r) = {p ∈ P ∣ ∃r′, r ⪰ r′ ∧ ⟨p, r′⟩ ∈ PA}.
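The two hierarchy-aware mappings can be sketched by closing an immediate-inheritance relation under reflexivity and transitivity (toy roles; RH pairs are ⟨senior, junior⟩):

```python
# Toy hierarchy: dept_head ⪰ professor ⪰ staff (RH holds immediate pairs).
RH = {('dept_head', 'professor'), ('professor', 'staff')}
ROLES = {'dept_head', 'professor', 'staff'}
UA = {('carol', 'dept_head'), ('dave', 'staff')}
PA = {(('sign', 'forms'), 'staff'), (('grade', 'exams'), 'professor')}

def seniors(r):
    """All r' with r' ⪰ r (reflexive-transitive closure, upward)."""
    result, frontier = {r}, {r}
    while frontier:
        frontier = {s for (s, j) in RH if j in frontier} - result
        result |= frontier
    return result

def authorized_users(r):
    """Users assigned to r or to any role senior to r."""
    return {u for (u, r2) in UA if r2 in seniors(r)}

def authorized_permissions(r):
    """Permissions of r and of every role junior to r."""
    juniors = {j for j in ROLES if r in seniors(j)}
    return {p for (p, r2) in PA if r2 in juniors}

assert authorized_users('staff') == {'carol', 'dave'}   # membership flows down
assert authorized_permissions('dept_head') == {('sign', 'forms'),
                                               ('grade', 'exams')}
```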

Notation– r1 ≫ r2 (r1 is an immediate ascendant of r2) iff r1 ⪰ r2 ∧ ¬(∃r3, r3 ≠ r1 ∧ r3 ≠ r2 ∧ r1 ⪰ r3 ⪰ r2)

Definition (Limited Role Hierarchies)– The previous definition with the following limitation: ∀r, r1, r2 ∈ R, r ≫ r1 ∧ r ≫ r2 ⇒ r1 = r2.


Figure 2.14: Different types of role hierarchies: (a) tree; (b) inverted tree; (c) lattice.


2.4.3 Constrained RBAC (RBAC2)

Definition– RBAC2 is unchanged from RBAC0 except for requiring that there be a collection of constraints that determine whether or not the values of the various components of RBAC0 are acceptable.

The constraint specified in the NIST standard is Separation of Duties (SOD). SOD enforces conflict-of-interest policies employed to prevent users from exceeding a reasonable level of authority for their position.

There are two types of SOD:

• Static SOD (based on user-role assignment),

• Dynamic SOD (based on role activation).

Definition (Static Separation of Duties)– No user is assigned to n or more roles from the same role set, where n or more roles conflict with each other.

SSD ⊆ P(R) ×N

∀⟨rs, n⟩ ∈ SSD, [n ≥ 2 ∧ ∣rs∣ ≥ n]

∀⟨rs, n⟩ ∈ SSD, ∀t ⊆ rs, [∣t∣ ≥ n ⇒ ⋂r∈t assigned-users(r) = ∅]

In the presence of role hierarchies, we should ensure that inheritance does not undermine SSD policies:

∀⟨rs, n⟩ ∈ SSD, ∀t ⊆ rs, [∣t∣ ≥ n ⇒ ⋂r∈t authorized-users(r) = ∅]

Definition (Dynamic Separation of Duties)– These constraints limit the number of roles a user can activate in a single session.

DSD ⊆ P(R) ×N

∀⟨rs, n⟩ ∈DSD, [n ≥ 2 ∧ ∣rs∣ ≥ n]

∀s ∈ S, ∀rs ∈ P(R), ∀rs′ ∈ P(R), ∀n ∈ N, [⟨rs, n⟩ ∈ DSD ∧ rs′ ⊆ rs ∧ rs′ ⊆ session-roles(s) ⇒ ∣rs′∣ < n]
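The SSD condition can be checked directly against a user-role assignment. A sketch with toy data; checking only the size-n subsets of rs suffices, because intersections can only shrink as t grows:

```python
from itertools import combinations

# Toy user-role assignment for checking a static SOD constraint <rs, n>.
UA = {('alice', 'cashier'), ('alice', 'clerk'), ('bob', 'auditor')}

def assigned_users(r):
    return {u for (u, r2) in UA if r2 == r}

def ssd_holds(rs, n):
    """True iff no n roles from rs share a common assigned user."""
    return all(
        not set.intersection(*(assigned_users(r) for r in t))
        for t in combinations(rs, n)
    )

assert ssd_holds({'cashier', 'auditor'}, 2)     # nobody holds both roles
assert not ssd_holds({'cashier', 'clerk'}, 2)   # alice holds both
```

In the presence of a hierarchy, the same check would use authorized-users instead of assigned-users.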

2.4.4 RBAC3 Model

RBAC3 combines RBAC1 and RBAC2 to provide both role hierarchies and constraints.


2.5 Logics for Access Control

2.5.1 Abadi’s Calculus for Access Control

At least three ingredients are essential for security in computing systems:

1. A trusted computing base: the hardware and systems software should be capable of preserving the secrecy and integrity of data.

2. Authentication: it should be possible to determine who made a statement; for example, a user should be able to request that his files be deleted and to prove that the command is his, and not that of an intruder.

3. Authorization, or access control: access control consists in deciding whether the agent that makes a statement is trusted on this statement; for example, a user may be trusted (hence obeyed) when he says that his files should be deleted.

These ingredients are fairly well understood in centralized systems. However, distributed systems pose new problems, due to the difficulties with scale, communication, booting, loading, authentication, and authorization.

The basic questions of authentication and access control are, always,

• who is speaking?

• who is trusted?

Typically the answer is the name of a simple principal.

Main features of this work: it accounts for how a principal may come to believe that another principal is making a request, either on his own behalf or on someone else's. It also provides a logical language for access control lists (ACLs).

Principals:

• Users and machines

• Channels

• Conjunctions of principals (A ∧B)

• Groups

• Principals in roles (A as R)


• Principals on behalf of principals (B for A or B∣A).

Composite principals play a central role in reasoning about distributed systems. For composite principals, ∧ and ∣ are primitive operations. Other operations are defined based on the primitive operations.

Composite Principals:

• A ∧ B: A and B as cosigners. A request from A ∧ B is a request that both A and B make.

• A ∨ B: the group of which A and B are the sole members. Disjunction is often replaced with implication, in particular in dealing with groups. "A is a member of the group G" can be written A ⇒ G. Here, A is at least as powerful as G.

• A as R: the principal A in role R.

• B∣A (B quoting A): the principal obtained when B speaks on behalf of A, not necessarily with a proof that A has delegated authority to B.

• B for A: the principal obtained when B speaks on behalf of A, with appropriate delegation certificates.

In order to define the rights of these composite principals, we develop an algebraic calculus. In this calculus, one can express equations such as

(B ∧ C) for A = (B for A) ∧ (C for A)

and then examine their consequences.

Since ∧ is the standard meet in a semilattice, we are dealing with an ordered algebra, and we can use a partial order ⇒ among principals: A ⇒ B stands for A = A ∧ B and means that A is at least as powerful as B; we pronounce this "A implies B" or "A speaks for B".

A modal logic extends the algebra of principals. In this logic, A says s represents the informal statement that the principal A says s. Here s may function as an imperative ("the file should be deleted") or not ("C's public key is K"); imperative modalities are not explicit in the formalism.

The logic also underlies a theory of ACLs. We write ⊃ for the usual logical implication connective and A controls s as an abbreviation for (A says s) ⊃ s, which expresses trust in A on the truth of s.

ACL: an ACL for a formula s is a list of assertions of the form A controls s. When s is clear from context, the ACL for s may simply be presented as the list of principals trusted on s.

If A ⇒ B and B controls s, then A controls s as well. Thus, when B is listed in an ACL, access should be granted to any member A of group B.


Premises: B controls s (i.e., (B says s) ⊃ s) and A = A ∧ B.

Sentence: A controls s.

Proof: A says s ≡ (A ∧ B) says s ≡ (A says s) ∧ (B says s), which yields B says s, and hence s. Thus (A says s) ⊃ s, i.e., A controls s.

2.5.2 A Calculus of Principals

Principals form a semilattice under the operation of conjunction and obey the usual semilattice axioms:

• ∧ is associative [i.e., (A ∧ B) ∧ C = A ∧ (B ∧ C)], commutative [i.e., A ∧ B = B ∧ A], and idempotent [i.e., A ∧ A = A].

The principals form a semigroup under ∣:

• ∣ is associative.

The final axiom is the multiplicativity of ∣ in both of its arguments:

• ∣ distributes over ∧ [i.e., A∣(B ∧ C) = A∣B ∧ A∣C and (A ∧ B)∣C = A∣C ∧ B∣C].

In short, the axioms given for principals are those of structures known as multiplicative semilattice semigroups. A common example of a multiplicative semilattice semigroup is an algebra of binary relations over a set, with the operations of union and composition.
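The relation-algebra example can be checked mechanically: reading ∧ as union and ∣ as composition, the axioms hold for any small relations, such as these arbitrary ones:

```python
# Binary relations over a set as a multiplicative semilattice semigroup:
# conjunction is union, | is composition.
def compose(R, S):
    """R ∘ S = {(x, z) | exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

A = {(0, 1), (1, 2)}
B = {(1, 1), (2, 0)}
C = {(0, 0), (2, 2)}

# Union is associative, commutative, and idempotent (semilattice axioms):
assert (A | B) | C == A | (B | C) and A | B == B | A and A | A == A
# Composition is associative (semigroup axiom):
assert compose(compose(A, B), C) == compose(A, compose(B, C))
# Composition distributes over union in both arguments (multiplicativity):
assert compose(A, B | C) == compose(A, B) | compose(A, C)
assert compose(A | B, C) == compose(A, C) | compose(B, C)
```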

2.5.3 A Logic of Principals and Their Statements

Syntax: The formulas are defined inductively, as follows:

• a countable supply of primitive propositions p0, p1, p2, ... are formulas;

• if s and s′ are formulas then so are ¬s and s ∧ s′;

• if A and B are principal expressions then A⇒ B is a formula;

• if A is a principal expression and s is a formula then A says s is a formula.

We use the usual abbreviations for boolean connectives, such as ⊃, and we also treat equality between principals (=) as an abbreviation. In addition, A controls s stands for (A says s) ⊃ s.


Axioms: The basic axioms are those for normal modal logics:

• if s is an instance of a propositional-logic tautology then ⊢ s;

• if ⊢ s and ⊢ (s ⊃ s′) then ⊢ s′;

• ⊢ A says (s ⊃ s′) ⊃ (A says s ⊃ A says s′);

• if ⊢ s then ⊢ A says s, for every A.

The calculus of principals is included:

• if s is a valid formula of the calculus of principals then ⊢ s.

Other axioms connect the calculus of principals to the modal logic:

• ⊢ (A ∧B) says s ≡ (A says s) ∧ (B says s);

• ⊢ (B∣A) says s ≡ B says A says s;

• ⊢ (A⇒ B) ⊃ ((A says s) ⊃ (B says s)).

The last axiom is equivalent to (A = B) ⊃ ((A says s) ≡ (B says s)), a substitutivity property.

Semantics: The semantics is provided by a Kripke structure M = ⟨W, w0, I, J⟩, where

• W is a set (as usual, a set of possible worlds);

• w0 ∈W is a distinguished element of W ;

• I ∶ Propositions → P(W ) is an interpretation function that maps each proposition symbol to a subset of W (the set of worlds where the proposition symbol is true);

• J ∶ Principals → P(W × W ) is an interpretation function that maps each principal symbol to a binary relation over W (the accessibility relation for the principal symbol).

The meaning function R extends J, mapping each principal expression to a relation:

R(Ai) = J(Ai)
R(A ∧ B) = R(A) ∪ R(B)
R(B∣A) = R(A) ○ R(B)

The meaning function E maps each formula to its extension, that is, to the set of worlds where it is true:


E(pi) = I(pi)
E(¬s) = W − E(s)
E(s ∧ s′) = E(s) ∩ E(s′)
E(A says s) = {w ∣ R(A)(w) ⊆ E(s)}
E(A ⇒ B) = W if R(B) ⊆ R(A), and ∅ otherwise

where R(C)(w) = {w′ ∣ w R(C) w′}.

A formula s holds in M at a world w if w ∈ E(s), and it holds in M if it holds at w0. In the latter case, we write M ⊧ s and say that M satisfies s. Moreover, s is valid if it holds in all models; we write this ⊧ s.
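The E(A says s) clause can be sketched over a toy Kripke structure (the worlds, proposition, and accessibility relations are invented for illustration):

```python
# Toy Kripke structure: worlds W, interpretation I for one proposition,
# and J giving each principal an accessibility relation (invented data).
W = {0, 1, 2}
I = {'s': {0, 1}}                       # worlds where s is true
J = {'A': {(0, 0), (0, 1)},             # from world 0, A can reach 0 and 1
     'B': {(0, 2)}}                     # from world 0, B can reach only 2

def reach(R, w):
    """R(w) = the set of worlds reachable from w via R."""
    return {v for (u, v) in R if u == w}

def says(p, prop):
    """E(p says s) = {w | R(p)(w) ⊆ E(s)}."""
    return {w for w in W if reach(J[p], w) <= I[prop]}

assert says('A', 's') == {0, 1, 2}      # A reaches nothing from 1 and 2
assert says('B', 's') == {1, 2}         # from 0, B reaches 2, where s fails
```

Note the vacuous case: at a world with no accessible worlds, a principal "says" everything, including false.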

Example– [Figure: a Kripke model with worlds w0, w1, ..., labeled by propositions p, b, and l and their negations. Legend: the pork is spoiled / the pork is fresh; the bananas are green / the bananas are yellow; agent is in meat department / agent is in produce department.]

Soundness and Completeness: The axioms are sound, in the sense that if ⊢ s then ⊧ s. Although useful for our application, the axioms are not complete. For example, the formula

(C says (A ⇒ B)) ≡ ((A ⇒ B) ∨ (C says false))

is valid but not provable.

Exercise 7– Prove the validity of the above equivalence using the presented semantics.

On Idempotence

The idempotence of ∣ is intuitively needed:


• A∣A = A: A says A says s and A says s are equal.

• Suppose that G represents a collection of nodes, that B and C represent members of G, and that an ACL includes G∣A. By idempotence, the principal C∣B∣A obtains access. This means that multiple hops within a collection of nodes do not reduce rights and should not reduce security. In particular, by idempotence there is no need to postulate that G∣G ⇒ G, or to make sure that G∣G∣A appears in the ACL explicitly.

However, adding idempotence to the logic has some problems:

• Idempotence imposes more complexity; e.g., it yields (A ∧ B) ⇒ (B∣A) and (A ∧ B) ⇒ (A∣B) (since (A ∧ B) = (A ∧ B)∣(A ∧ B)). On a request from A ∧ B we would need to check both A∣B and B∣A.

• The authors were unable to find a sensible condition on binary relations that would force idempotence and would be preserved by union and composition.

Corollary: The authors preferred to do without idempotence and to rely on assumptions of the form G∣G ⇒ G.

Roles

There are many situations in which a principal may wish to reduce his powers. A principal may wish to respect the principle of least privilege, according to which a principal should have only the privileges it needs to accomplish its task.

These situations can be handled by the use of roles. A principal A may adopt a role R and act with the identity A as R when he wants to diminish his powers. For example, define the roles Ruser and Radmin, representing a person acting as a user and as an administrator, respectively. Suppose the ACLs in the system include A∣Radmin controls s1 and A∣Ruser controls s2. In her daily work, Alice may step into her role as user by quoting Ruser; when she needs to perform administrative tasks, Alice can explicitly quote Radmin to gain access to objects such as s1 that mention her administrative role.

Axioms of Roles:

For all roles, the following axioms hold:

• R∣R = R (idempotence),

• R∣R′ = R′∣R (commutativity),

• A ⇒ (A as R).


These yield the following:

• A as R as R = A as R

• A as R as R′ = A as R′ as R

Roles and Groups: Roles may be related to groups; e.g., Grole is related to group G. A as Grole means that A acts in the role of a member of G. We allow roles related to groups, but this relation is not formalized.

Semantics of Roles:

Definition (identity)– A special principal 1, the identity, believes everything that is true and nothing that is not: R(1)(w) = {w}, ∀w ∈ W.

Definition (Role)– In the binary relation model, roles are subsets of the identity relation (R(R) ⊆ R(1)), i.e., 1 ⇒ R.

A principal A in role R is written A as R, which is defined to be equal to A∣R.

Roles reduce privileges: R(R) ○R(A) ⊆R(A)
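This containment can be checked on a toy relation model (the role and principal relations are illustrative):

```python
# Roles as subsets of the identity relation: composing a principal's
# relation with a role relation can only shrink it.
def compose(R, S):
    """R ∘ S = {(x, z) | exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

W = {0, 1, 2}
identity = {(w, w) for w in W}
role = {(0, 0), (1, 1)}                 # a role: subset of the identity
A = {(0, 1), (1, 2), (2, 0)}            # an arbitrary principal relation

assert role <= identity
assert compose(role, A) <= A            # R(R) ∘ R(A) ⊆ R(A)
assert compose(role, A) == {(0, 1), (1, 2)}   # world 2's edge is dropped
```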

[Figure: an arbitrary principal relation R(A), composed with a role relation R(R), gives a new relation that is always a subset of R(A).]

Figure 2.15: The semantics of roles.

Access Control Decision

A general access control problem. The problem of making access control decisions is computationally complex. It is therefore important to understand the precise form of its instances. The parts of an instance are:

• An expression P in the calculus of principals represents the principal that is making the request. In particular, all appropriate delegations are taken into account in constructing this expression. The various relevant certificates are presented for checking.


• A statement s represents what is being requested or asserted. The precise nature of s is ignored; it is treated as an uninterpreted proposition symbol.

• Assumptions state implications among principals; these typically represent assumptions about group memberships. They have the form Pi ⇒ Gi, where Pi is an arbitrary expression in the calculus of principals and Gi is an atom. Note that this syntax is liberal enough to write G∣G ⇒ G for every appropriate G of interest, obtaining some of the benefit of the idempotence axiom.

• Certain atomic symbols R0, ...,Ri, ... are known to denote roles.

• An ACL is a list of expressions E0, ..., Ei, ... in the calculus of principals; these represent the principals that are trusted on s.

The basic problem of access control is deciding whether ⋀i (Pi ⇒ Gi), derived from the assumptions, and ⋀i (Ei controls s), derived from the ACL, imply P controls s, given the special properties of roles and of the delegation server D.

There is a proof that the problem of making access control decisions is equivalent to the acceptance problem for alternating pushdown automata, and hence requires exponential time.


Chapter 3

Exercise Answers

Exercise 1: Since ⪯ is a partial order and ≤ is a total order, ⟨L × T, ⊑⟩ is a partially ordered set. Precisely:

• ⟨a, b⟩ ⊑ ⟨a, b⟩ because (a ⪯ a) and (b ≤ b)

• If ⟨a1, b1⟩ ⊑ ⟨a2, b2⟩ and ⟨a2, b2⟩ ⊑ ⟨a3, b3⟩, then

  – a1 ⪯ a2 and a2 ⪯ a3, thus a1 ⪯ a3
  – b1 ≤ b2 and b2 ≤ b3, thus b1 ≤ b3

  Hence, ⟨a1, b1⟩ ⊑ ⟨a3, b3⟩.

• If ⟨a, b⟩ ⊑ ⟨c, d⟩ and ⟨c, d⟩ ⊑ ⟨a, b⟩, then

  – a ⪯ c and c ⪯ a, thus a = c
  – b ≤ d and d ≤ b, thus b = d

  Hence, ⟨a, b⟩ = ⟨c, d⟩.

Also, every two elements of L × T have a supremum and an infimum, equal to the following:

• GLB(⟨a, b⟩, ⟨c, d⟩) = ⟨GLB(a, c), min(b, d)⟩

• LUB(⟨a, b⟩, ⟨c, d⟩) = ⟨LUB(a, c), max(b, d)⟩

The above claim can be easily proven.
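A quick numeric check of the componentwise construction, over a toy two-element lattice 'lo' ⪯ 'hi' for L and the usual ≤ on integers for T (names illustrative):

```python
# Componentwise GLB/LUB on L × T: lattice meet/join on the first
# component, min/max on the totally ordered second component.
glb = {('lo', 'lo'): 'lo', ('lo', 'hi'): 'lo',
       ('hi', 'lo'): 'lo', ('hi', 'hi'): 'hi'}
lub = {('lo', 'lo'): 'lo', ('lo', 'hi'): 'hi',
       ('hi', 'lo'): 'hi', ('hi', 'hi'): 'hi'}

def GLB(p, q):
    return (glb[(p[0], q[0])], min(p[1], q[1]))

def LUB(p, q):
    return (lub[(p[0], q[0])], max(p[1], q[1]))

assert GLB(('hi', 3), ('lo', 5)) == ('lo', 3)
assert LUB(('hi', 3), ('lo', 5)) == ('hi', 5)
```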

Exercise 2:


In the following, ⟨X′, D′, A′⟩ is the next state of ⟨X, D, A⟩ after execution of a command.

Add access attribute r to cell Ad,x:
X′ = X, D′ = D

A′[a, b] = A[a, b] ∪ R,  if a = d, b = x, R ⊂ {r, r∗}, R ≠ ∅
A′[a, b] = A[a, b],      otherwise

Remove access attribute r from cell Ad,x:
X′ = X, D′ = D

A′[a, b] = A[a, b] − {r, r∗},  if a = d, b = x
A′[a, b] = A[a, b],            otherwise

Copy access attribute r (or r∗) from cell Ad,x to Ad′,x:
X′ = X, D′ = D

A′[a, b] = A[a, b] ∪ R,  if a = d′, b = x, r∗ ∈ A[d, x], R ⊂ {r, r∗}, R ≠ ∅
A′[a, b] = A[a, b],      otherwise

Exercise 3:

For each right r in Lampson's model, we should have rules of the following types.

Command Rule1r(d, d′, x)
if control in (d, d′) and r in (d′, x)
then
  delete r from (d′, x)
end;

Command Rule2-1r(d, d′, x)
if r∗ in (d, x)
then
  enter r into (d′, x)
end;

Command Rule2-2r(d, d′, x)
if r∗ in (d, x)
then
  enter r∗ into (d′, x)
end;

Command Rule3-1r(d, d′, x)
if own in (d, x)
then
  enter r into (d′, x)
end;


Command Rule3-2r(d, d′, x)
if own in (d, x)
then
  enter r∗ into (d′, x)
end;

Rule 4 cannot be specified in terms of HRU commands, because we would need to check the non-existence of the protected right. To solve the problem, we can replace the protected right with its negation, i.e., a not-protected right, and add such a right to all cells of the access matrix by default. In this new model, granting the protected right becomes removing the not-protected right. Thus, we need to rewrite all of the previous rules in this new model (which is easy) and express the fourth rule of Lampson's model as follows.

Command Rule4r(d, d′, x)
if own in (d, x) and not-protected in (d′, x) and r in (d, x)
then
  delete r from (d′, x)
end;

Exercise 4:

S′i = Si − {s′}, O′i = Oi − {s′}

P′i[x, y] =
  Pi[x, y],                                       if x ≠ s, y ≠ s
  Pi[s, y] ∪ Pi[s′, y],                           if x = s, y ≠ s
  Pi[x, s] ∪ Pi[x, s′],                           if x ≠ s, y = s
  Pi[s, s] ∪ Pi[s′, s′] ∪ Pi[s, s′] ∪ Pi[s′, s],  if x = s, y = s

Exercise 5:

Q′i = S′i = {s}

P′i[s, s] = Pi[s, s] ∪ ⋃o∈On−1 Pi[s, o]

Exercise 6: Suppose that we have

[σ1 --[e]--> σ′1 ∧ σ2 ≈ σ1] ⇒ ∃σ′2, ∃γ2 [σ2 --γ2--> σ′2 ∧ σ′2 ≈ σ′1 ∧ γ2 ≈ [e]].

We prove by induction that if the above property holds for any γ1 with ∣γ1∣ = n, then it also holds for γ′1 = γ1.e, where ∣γ′1∣ = n + 1.

[σ1 --γ′1--> σ′1 ∧ σ2 ≈ σ1] ⇒ ∃σ3 [σ1 --γ1--> σ3 --[e]--> σ′1 ∧ σ2 ≈ σ1]

⇒ ∃σ′3, ∃γ2 [σ2 --γ2--> σ′3 ∧ σ′3 ≈ σ3 ∧ γ2 ≈ γ1 ∧ σ3 --[e]--> σ′1] (I)

From (I) ⇒ ∃σ′3, ∃γ2 [σ2 --γ2--> σ′3 ∧ γ2 ≈ γ1] (II)

From (I), and since the property holds for single events, ⇒ ∃σ′2, ∃γ3 [γ3 ≈ [e] ∧ σ′2 ≈ σ′1 ∧ σ′3 --γ3--> σ′2] (III)

From (II) and (III) ⇒ ∃σ′2, ∃γ4 = γ2.γ3 [σ2 --γ4--> σ′2 ∧ γ4 ≈ γ′1 ∧ σ′2 ≈ σ′1]

Thus, the property holds for γ′1 with ∣γ′1∣ = n + 1.

Exercise 7: We should prove that for every model M = ⟨W, w0, I, J⟩ we have M ⊧ (C says (A ⇒ B)) ≡ ((A ⇒ B) ∨ (C says false)). Thus, we should prove that E(C says (A ⇒ B)) = E((A ⇒ B) ∨ (C says false)).

Regarding the semantics of A ⇒ B, we have E(A ⇒ B) = W or ∅.

Suppose that E(A ⇒ B) = W. Then
E((A ⇒ B) ∨ (C says false)) = E(A ⇒ B) ∪ E(C says false) = W ∪ E(C says false) = W.

Also,
E(C says (A ⇒ B)) = {w ∣ R(C)(w) ⊆ E(A ⇒ B)} = {w ∣ R(C)(w) ⊆ W} = W.

(I) Thus, in this case the equivalence holds.

Suppose that E(A ⇒ B) = ∅. Then
E((A ⇒ B) ∨ (C says false)) = E(A ⇒ B) ∪ E(C says false) = E(C says false) = {w ∣ R(C)(w) ⊆ E(false) = ∅} = {w ∣ R(C)(w) = ∅}.

Also, E(C says (A ⇒ B)) = {w ∣ R(C)(w) ⊆ E(A ⇒ B) = ∅} = {w ∣ R(C)(w) = ∅}.

(II) Thus, in this case the equivalence holds as well.

From (I) and (II), we conclude that E(C says (A ⇒ B)) = E((A ⇒ B) ∨ (C says false)).
