
1

James L. Massey

Prof.-em. ETH Zürich, Adjunct Prof., Lund Univ.,Sweden, and Tech. Univ. of Denmark

Trondhjemsgade 3, 2.th., DK-2100 Copenhagen East

JamesMassey@compuserve.com

Lecture 1: Shannon’s Theory of Secrecy

and its Extension to Authenticity

EWSCS’06, Palmse, Estonia, 5-10 March 2006

2

Cryptology (“hidden word”)

Cryptography (code making): the “good guys”
Cryptanalysis (code breaking): the “bad guys”

3

Goals of cryptography

Secrecy and Authenticity

Xuejia Lai has given a useful razor for deciding whether something is a matter of secrecy or a matter of authenticity.

4

Secrecy - concerned with who has access to (or can read) a legitimate message.

Secrecy deals with safeguarding the future by ensuring that only authorized recipients will be able to gain access to (or read) a legitimate message.

5

Authenticity - concerned with who can create (or write) a legitimate message.

Authenticity deals with protecting the past by
• ensuring that the creator (or author) was entitled to create (or write) the message
• ensuring that the contents of the message have not been altered

6

A secrecy scheme: The Caesar Cipher

[Figure: the letters A, B, C, …, Z arranged on a circle and numbered 0, 1, 2, …, 25.]

Arithmetic on a CIRCLE (modulo 26 arithmetic)

Encrypt = Add 3 (move clockwise 3 places)
Decrypt = Subtract 3 (move counterclockwise 3 places)

SECRET KEY = 3

plaintext:  C A E S A R
ciphertext: F D H V D U

plaintext:  M A S S E Y
ciphertext: P D V V H B
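The slide’s modulo-26 arithmetic can be sketched in a few lines of Python (an illustration, not part of the original lecture; the function name `caesar` is ours):

```python
# A minimal sketch of the Caesar cipher as modulo-26 arithmetic on A-Z.
def caesar(text: str, key: int) -> str:
    # Encrypt = add the key modulo 26; decrypt = call again with -key.
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in text)

print(caesar("CAESAR", 3))   # FDHVDU
print(caesar("MASSEY", 3))   # PDVVHB
print(caesar("FDHVDU", -3))  # CAESAR
```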

7

[Figure: a circle with just the two points 0 and 1.]

Today we use a SMALLER CIRCLE!

Arithmetic on this CIRCLE (modulo 2 arithmetic)

Encrypt = Add (move clockwise)
Decrypt = Subtract (move counterclockwise) = Add (move clockwise)

and a LONGER SECRET KEY!

plaintext:  1 0 0 1 1 1 0 1
secret key: 0 1 1 0 0 1 1 1
ciphertext: 1 1 1 1 1 0 1 0

⇒ Decrypt = Encrypt
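The mod-2 example above can be checked directly (a sketch we added; XOR is addition on the 2-element circle):

```python
# Modulo-2 arithmetic: adding and subtracting are the same operation,
# so decryption is literally the same function as encryption.
def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

plaintext  = [1, 0, 0, 1, 1, 1, 0, 1]
key        = [0, 1, 1, 0, 0, 1, 1, 1]
ciphertext = xor_bits(plaintext, key)
print(ciphertext)                              # [1, 1, 1, 1, 1, 0, 1, 0]
print(xor_bits(ciphertext, key) == plaintext)  # True: Decrypt = Encrypt
```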

8

Everybody likes to make secret codes!

9

Photograph of Shannon at home in 1962. (From the New York Times Magazine, 30 December 2001.)

10

“As a first step in the mathematical analysis of cryptography, it is necessary to idealize the situation suitably, and to define in a mathematically acceptable way what we shall mean by a secrecy system.”
C. E. Shannon, “Communication Theory of Secrecy Systems”, Bell System Tech. J., vol. 28, pp. 656-715, Oct. 1949.

This was a radical departure from previous papers in cryptography where (as in Steen and Stoffer) conjecture and imprecision reigned.

Just how did Shannon define a secrecy system in “a mathematically acceptable way”?

11

He drew a picture!

(Shannon, 1949)

12

Claude Elwood Shannon (1916-2001)(photographed 17 April 1961 by Göran Einarsson)

13

Shannon’s Fig. 1—Schematic of a general secrecy system makes the following assumptions crystal clear:

• The message M and the key K are independent random variables.

• The sender and receiver both know the key.

• The attacker knows only the cryptogram E (i.e., a ciphertext-only attack is assumed).

• The receiver is able to recover the message M from knowledge of the cryptogram E and key K.

• No assumption is made about who generates the key.

You don’t need a lot of words and/or equations to make yourself mathematically precise!

14

Kerckhoffs’ Principle

This principle was first stated in 1883 by the Dutchman Auguste Kerckhoffs (1835-1903).

When evaluating security, one assumes that the enemy cryptanalyst knows everything (including the source statistics and key statistics) except the secret key.

A cipher should be secure when the enemy cryptanalyst knows all details of the enciphering process and deciphering process except for the value of the secret key.

15

Shannon’s 1949 definition: A cipher provides perfect secrecy against a ciphertext-only attack if the plaintext and the ciphertext, considered as random variables, are independent.

What does “unbreakable” mean?

To Shannon, a cipher is unbreakable in a ciphertext-only attack if it provides unconditional security, i.e., no matter how hard or how long the attacker works, he/she can do no better than to guess the plaintext by the best guessing rule that he/she would use without having seen the ciphertext.

16

Vernam’s 1926 Cipher:

[Figure: a binary plaintext source emits M; the encrypter adds the key sequence R modulo 2 to form the cryptogram E; R is produced by a Binary Symmetric Source (BSS) and reaches the decrypter over a secure channel; the destination recovers M; the enemy cryptanalyst sees only E in a ciphertext-only attack.]

R is the secret key, a “totally random” sequence.

Vernam claimed that his cipher was unbreakable!

17

Vernam not only claimed that his cipher was unbreakable, but also stated that he had confirmed this in “field trials with the U. S. Army Signal Corps”.

Was Vernam right? Was his cipher the first unbreakable cipher in the many thousands of years of cryptographic history?

18

The Binary Symmetric Source (BSS) of information theory is a monkey with a fair binary coin (0 on one side and 1 on the other).

19

Cryptographic property of the BSS:

The modulo-two sum of a BSS output and an arbitrary random sequence is another BSS output that is INDEPENDENT of the arbitrary random sequence.

Example:

BSS output:     0 1 0 0 1 0 1 0 1 1 1 0 1 . . .
Arb. ran. seq.: 1 1 1 1 1 1 1 1 1 1 1 1 1 . . .
Modulo-2 sum:   1 0 1 1 0 1 0 1 0 0 0 1 0 . . .
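This cryptographic property of the BSS can be checked empirically (our sketch; `random.randint` stands in for the monkey’s fair coin):

```python
# Empirical check: XORing fair coin flips with a fixed "arbitrary" bit
# yields bits that are again uniformly distributed -- another BSS output.
import random

random.seed(1)
arbitrary_bit = 1                        # a bit from the all-ones sequence
counts = {0: 0, 1: 0}
for _ in range(10000):
    bss_bit = random.randint(0, 1)       # fair coin flip from the BSS
    counts[bss_bit ^ arbitrary_bit] += 1
# Both outcomes occur about equally often, as they must for a BSS output.
print(abs(counts[0] - counts[1]) < 500)
```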

20

Vernam’s cipher provides perfect secrecy against a ciphertext-only attack!

The cryptogram E that the enemy cryptanalyst sees is independent of the plaintext message M. This simple proof of unbreakability of Vernam’s 1926 cipher was first given by Shannon in 1949!

[Figure: the Vernam cipher diagram repeated from slide 16.]

21

Vernam’s cipher is usually today called the “one-time pad” to emphasize that the key is to be used for only one message. It was used by spies on both sides in World War II and is still the cipher of choice for extremely important secret communications.

(What Shannon called the plaintext is the total data that will be encrypted before the key is changed, i.e., Shannon specified a “one-time key” in his theory of secrecy systems.)

22

Vernam’s cipher needs as many binary digits of secret key as there are bits of plaintext to be encrypted.

Vernam was right about his cipher being unbreakable, but does an unbreakable cipher really need this huge amount of secret key???

23

Shannon’s 1949 Lower Bound on Key Length:

For perfect secrecy, the number of different keys must be AT LEAST AS GREAT as the number of different plaintexts.

Proof:
• For any fixed key k, the number of different ciphertexts e equals the number of different plaintexts m.
• Perfect secrecy ⇒ for all possible e and any fixed m, P(E=e|M=m) = P(E=e) ≠ 0.
• ⇒ For a fixed m, the number of different ciphertexts e must be at least the number of different plaintexts m.
• But all keys taking a fixed m to different e’s must be different.

24

Shannon later gave the following proof of a slightly weaker lower bound on key length, namely

H(K) ≥ H(M).

Perfect secrecy ⇒ H(M) = H(M|E)
                       ≤ H(MK|E)
                       = H(K|E) + H(M|EK)    (H(M|EK) = 0, since the receiver recovers M from E and K)
                       = H(K|E)
                       ≤ H(K).

Thus, if the cipher is to give perfect secrecy regardless of the source statistics, it must also give perfect secrecy for the BSS for which H(M) = N bits. Thus H(K) ≥ N so that the key must be at least N binary digits long.

25

The number of different plaintext messages is about 2^H(M) where H(M) is the entropy of the plaintext message. Equivalently, one says that H(M) is the number of bits of information in the plaintext message. An ideal data compressor will compress M to about H(M) binary digits. Consider the system:

[Figure: M → Ideal Data Compressor → Vernam’s Cipher (key K) → E]

This system achieves perfect secrecy, and the number of binary digits of the key K is H(M), which satisfies Shannon’s lower bound with equality.
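The compress-then-pad idea can be sketched as follows (our illustration only: `zlib` stands in for the ideal compressor, which no real compressor is, and `os.urandom` stands in for the BSS key source):

```python
# Sketch: compress first, then apply Vernam's cipher (XOR with a fresh
# random key). The key need only be as long as the COMPRESSED message.
import os
import zlib

message = b"AAAAAAAA" * 100        # highly redundant, so low entropy
compressed = zlib.compress(message)
key = os.urandom(len(compressed))  # one key byte per compressed byte
cipher = bytes(c ^ k for c, k in zip(compressed, key))

# Decrypt: XOR with the same key, then decompress.
recovered = zlib.decompress(bytes(c ^ k for c, k in zip(cipher, key)))
print(recovered == message, len(compressed) < len(message))  # True True
```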

26

Simmons’ Model of a Substitution Attack on an Authenticity System

[Figure: the message source produces M; the encipherer T_K forms the cryptogram E; the enemy cryptanalyst sees E and may substitute E'; the decipherer T_K^-1 either outputs M' (ACCEPTED) or declares E' RECOGNIZED UNAUTHENTIC; the key K from the key source reaches both encipherer and decipherer over a secure channel.]

E' can be the legitimate cryptogram E or a phony cryptogram E' (E' ≠ E) inserted by the attacker.

E' is accepted if and only if it is a valid cryptogram for the key K.

27

In an impersonation attack, the attacker forms E' without seeing a legitimate cryptogram E and wins if his cryptogram is accepted.

PI = probability of successful impersonation when the attacker uses an optimum attack.

PS = probability of successful substitution when the attacker uses an optimum attack.

Pd = probability of deception = max(PI, PS)

Simmons’ 1984 bound on the probability of deception:

Pd ≥ 2^(-I(E; K))

where I(E; K) = H(K) - H(K|E) is the mutual information between E and K.

The only way to get unconditionally secure authenticity is to let the cryptogram give information about the key!

28

Example of an authenticity system meeting Simmons’ lower bound on PI with equality:

Plaintext is sent in the clear and the key is added as a signature: E = [M : K]

If the key has length n binary digits, then

PI = 2^(-n)

because the attacker can only make a random guess at the secret key in an impersonation attack. I(E; K) = n bits, so Simmons’ bound on PI holds with equality!

This authenticity system gives no secrecy!

In a substitution attack, the attacker can achieve PS = 1.
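A sketch of this append-the-key scheme (our illustration; the names `sign` and `accept` are ours, and `secrets.randbits` stands in for the BSS key source):

```python
# The sign-by-appending-the-key scheme E = [M : K]. With an n-bit key,
# a blind impersonator succeeds with probability 2^-n, but a substitution
# attacker who has seen one legitimate E learns K and achieves P_S = 1.
import secrets

n = 16
key = secrets.randbits(n)

def sign(m: bytes, k: int) -> tuple:
    return (m, k)                 # the cryptogram E = [M : K] -- no secrecy!

def accept(e: tuple, k: int) -> bool:
    return e[1] == k              # valid iff the appended key matches

# Impersonation: the attacker must blindly guess the n-bit key.
forged = sign(b"pay me", secrets.randbits(n))
print(accept(forged, key))        # almost surely False (succeeds w.p. 2^-16)

# Substitution: one observed cryptogram reveals K entirely.
m, k = sign(b"hello", key)
print(accept(sign(b"attack", k), key))  # True: P_S = 1
```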

29

Example of an authenticity system meeting Simmons’ lower bound on PS and Pd with equality:

1-bit messages with individual signatures. K = (K1, K2, . . . , Kν, Kν+1, . . . , K2ν) [an n = 2ν-bit key], assumed generated by a BSS. M is 0 or 1.

M = 0 ⇒ E = (0, K1, K2, . . . , Kν)
M = 1 ⇒ E = (1, Kν+1, Kν+2, . . . , K2ν)

Note that again there is no secrecy!

Whether the attacker observes E or not, he must guess ν bits of key to produce a cryptogram E' that will be accepted as authentic.

⇒ PI = PS = Pd = 2^(-ν).

But I(E; K) = ν bits, so Simmons’ bound on Pd holds with equality!
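The 1-bit-message scheme can be sketched as follows (our illustration; `nu`, `encode`, and `accept` are names we chose):

```python
# 1-bit messages with individual signatures: the key is 2*nu BSS bits;
# message 0 is signed with the first nu bits, message 1 with the last nu.
# Any forgery (even after seeing one E) must guess nu unseen key bits.
import secrets

nu = 8
K = [secrets.randbits(1) for _ in range(2 * nu)]   # the 2*nu-bit key

def encode(m: int) -> list:
    tag = K[:nu] if m == 0 else K[nu:]
    return [m] + tag              # E = (m, its individual signature)

def accept(e: list) -> bool:
    return e == encode(e[0])      # valid iff the signature matches the key

print(accept(encode(1)))          # True: a legitimate cryptogram is accepted
guess = [0] + [secrets.randbits(1) for _ in range(nu)]
print(accept(guess))              # accepted only with probability 2^-8
```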

30

This example shows that we can have unconditionally secure authenticity with no secrecy.

Vernam’s cipher gives perfect secrecy against a ciphertext-only attack but no protection against an impersonation attack, i.e., PI = 1.

The important conclusion to make is that secrecy and authenticity are independent attributes of a cryptographic system.

31

The informational divergence (or the “Kullback-Leibler distance” or the “relative entropy” or the “discrimination”) from P to Q, two probability distributions on the same alphabet, is the quantity

D(P||Q) = Σ_{x ∈ supp(P)} P(x) log [P(x)/Q(x)].

Fundamental property of informational divergence:

D(P||Q) ≥ 0 with equality if and only if P = Q.
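A sketch of the definition in Python (our illustration; base-2 logs give the divergence in bits, and the sum is restricted to supp(P) as in the definition):

```python
# Informational divergence D(P||Q) between two distributions on the
# same alphabet, with the sum taken over the support of P.
from math import log2

def divergence(P: dict, Q: dict) -> float:
    return sum(p * log2(p / Q[x]) for x, p in P.items() if p > 0)

P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.75, "b": 0.25}
print(divergence(P, P))                # 0.0 -- equality iff P = Q
print(round(divergence(P, Q), 4))      # positive whenever P differs from Q
```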

Let H0 and H1 be the two possible hypotheses and let Y be the observation used to determine which hypothesis is true. Let D0 and D1 be the regions of Y values in which one decides for H0 or H1, respectively. Let α or β be the error probabilities when H0 or H1 is true, respectively.

32

Let V (0 or 1) be the decision as to which hypothesis is true, so that

α = P(V=1|H0) and β = P(V=0|H1).

Information-theoretic bound for hypothesis testing:

D(P_Y|H0 || P_Y|H1) ≥ α log [α/(1-β)] + (1-α) log [(1-α)/β]

with equality if and only if P_Y|H0(y)/P_Y|H1(y) has the same value for all y ∈ D0 and has the same value for all y ∈ D1.

Direct calculation gives

D(P_V|H0 || P_V|H1) = α log [α/(1-β)] + (1-α) log [(1-α)/β].
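A numeric sanity check of the bound (our sketch; the distributions for Y under H0 and H1 are hypothetical numbers, and with the one-to-one decision V = Y the bound holds with equality):

```python
# Check: for binary Y with the deterministic decision V = Y, the
# divergence of the decision variable equals
#   alpha*log[alpha/(1-beta)] + (1-alpha)*log[(1-alpha)/beta].
from math import log2

def d(p, q):
    # divergence D(P||Q) in bits between two finite distributions
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Y under H0: P(Y=1) = 0.1 ; under H1: P(Y=1) = 0.8. Decide V = Y.
alpha = 0.1                    # P(V=1|H0), the error when H0 is true
beta = 1 - 0.8                 # P(V=0|H1) = 0.2, the error when H1 is true
lhs = d([0.9, 0.1], [0.2, 0.8])   # D(P_Y|H0 || P_Y|H1)
rhs = alpha * log2(alpha / (1 - beta)) + (1 - alpha) * log2((1 - alpha) / beta)
print(lhs >= rhs - 1e-12)      # True; equality here since V = Y is one-to-one
```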

33

For the important special case where α = 0, i.e., where we never err when H0 is true, the previous bound gives

β ≥ 2^(-D(P_Y|H0 || P_Y|H1)).

Now suppose that H0 is the hypothesis that the observation Y = E' is the legitimate cryptogram E for the key K = k, i.e.,

P_Y|H0(y) = P_E|K=k(y),

and that H1 is the hypothesis that Y = E' is formed by the attacker according to

P_Y|H1(y) = P_E(y) = Σ_k P_K(k) P_E|K=k(y),

which may not be the optimum attacking strategy. Let β_k be the error probability when K = k, so that

β_k ≥ 2^(-D(P_E|K=k || P_E)).

34

Moreover, β is just the probability PI of successful impersonation, so the information-theoretic bound becomes

PI ≥ 2^(-I(K; E)).

Here

D(P_E|K=k || P_E) = Σ_y P_E|K=k(y) log [P_E|K=k(y)/P_E(y)]

and

PI = Σ_k P_K(k) β_k ≥ Σ_k P_K(k) 2^(-D(P_E|K=k || P_E)) ≥ 2^(-Σ_k P_K(k) D(P_E|K=k || P_E)),

where we have used Jensen’s inequality. But

Σ_k P_K(k) D(P_E|K=k || P_E) = H(E) - H(E|K) = I(K; E).

This completes the proof of Simmons’ lower bound.

35

Simmons’ proof of his bound on the probability of deception (or impersonation) appears in

G. J. Simmons, “Authentication Theory/Coding Theory”, pp. 411-431 in Advances in Cryptology - CRYPTO ’84 (Eds. G. R. Blakley and D. Chaum), Lecture Notes in Computer Science No. 196. Heidelberg and New York: Springer, 1985.

Several simplifications of his derivation have since been given. The most insightful one, which we have followed, is by Maurer, cf.

U. M. Maurer, “Information Theoretic Bounds in Authentication Theory”, p. 12 in Proc. IEEE Int. Symp. Info. Th., Whistler, Canada, Sept. 17-22, 1995.

U. M. Maurer, “A Unified and Generalized Treatment of Authentication Theory”, pp. 387-398 in Proc. 13th Symp. on Theoretical Aspects of Computer Science (STACS ’96), Lecture Notes in Computer Science No. 1046, New York: Springer, 1996.

Maurer based his treatment on Blahut’s information-theoretic approach to hypothesis testing, cf.

R. E. Blahut, “Hypothesis testing and information theory”, IEEE Trans. Inform. Theory, vol. IT-20, pp. 405-417, July 1974.
