Sunday, 7 June 2015

Cryptography

Cryptography is about communication in the presence of an adversary. It encompasses many problems (encryption, authentication, and key distribution, to name a few). The field of modern cryptography provides a theoretical foundation from which we may understand what exactly these problems are, how to evaluate protocols that purport to solve them, and how to build protocols in whose security we can have confidence. We introduce the basic issues by discussing the problem of encryption.

1.1 Encryption: Historical Glance

The most ancient and basic problem of cryptography is secure communication over an insecure channel. Party A wants to send party B a secret message over a communication line which may be tapped by an adversary. The traditional solution to this problem is called private key encryption. In private key encryption, A and B hold a meeting before the remote transmission takes place and agree on a pair of encryption and decryption algorithms E and D, and an additional piece of information S to be kept secret. We shall refer to S as the common secret key. The adversary may know the encryption and decryption algorithms E and D which are being used, but does not know S.

After the initial meeting, when A wants to send B the cleartext or plaintext message m over the insecure communication line, A encrypts m by computing the ciphertext c = E(S, m) and sends c to B. Upon receipt, B decrypts c by computing m = D(S, c). The line-tapper (or adversary), who does not know S, should not be able to compute m from c.

Let us illustrate this general and informal setup with an example familiar to most of us from childhood, the substitution cipher. In this method A and B meet and agree on some secret permutation f: Σ → Σ (where Σ is the alphabet of the messages to be sent). To encrypt the message m = m1 ... mn, where mi ∈ Σ, A computes E(f, m) = f(m1) ... f(mn). To decrypt c = c1 ... cn, where ci ∈ Σ, B computes D(f, c) = f⁻¹(c1) ... f⁻¹(cn) = m1 ... mn = m. In this example the common secret key is the permutation f. The encryption and decryption algorithms E and D are as specified, and are known to the adversary. We note that the substitution cipher is easy to break by an adversary who sees a moderate (as a function of the size of the alphabet Σ) number of ciphertexts.

A rigorous theory of perfect secrecy based on information theory was developed by Shannon [192] in 1943. In this theory, the adversary is assumed to have unlimited computational resources. Shannon showed that a secure (properly defined) encryption system can exist only if the size of the secret information S that A and B agree on prior to remote transmission is as large as the total number of secret bits ever to be exchanged remotely using the encryption system.
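To make the substitution cipher concrete, here is a minimal Python sketch. The lowercase alphabet, the helper names, and the use of random.shuffle to pick the secret permutation f are illustrative choices of ours, not something prescribed by the notes.

import random
import string

ALPHABET = string.ascii_lowercase  # Sigma, the message alphabet (illustrative choice)

def keygen():
    # Secret permutation f: Sigma -> Sigma, agreed on by A and B at their meeting.
    # random.shuffle is used only for illustration; it is not a cryptographically
    # secure source of randomness.
    shuffled = list(ALPHABET)
    random.shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encrypt(f, m):
    # E(f, m) = f(m1) ... f(mn)
    return "".join(f[ch] for ch in m)

def decrypt(f, c):
    # D(f, c) = f^-1(c1) ... f^-1(cn)
    f_inv = {v: k for k, v in f.items()}
    return "".join(f_inv[ch] for ch in c)

f = keygen()
c = encrypt(f, "attackatdawn")
assert decrypt(f, c) == "attackatdawn"

The weakness mentioned above is visible here: since each plaintext letter always maps to the same ciphertext letter, letter-frequency statistics over a moderate amount of ciphertext reveal f.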

An example of a private key encryption method which is secure even in the presence of a computationally unbounded adversary is the one-time pad. A and B agree on a secret bit string pad = b1 b2 ... bn, where each bi ∈ {0, 1} is chosen uniformly at random (i.e., pad is chosen from {0,1}^n with uniform probability). This is the common secret key. To encrypt a message m = m1 m2 ... mn where mi ∈ {0, 1}, A computes E(pad, m) = m ⊕ pad (bitwise exclusive or). To decrypt a ciphertext c ∈ {0,1}^n, B computes D(pad, c) = pad ⊕ c = pad ⊕ (m ⊕ pad) = m. It is easy to verify that for all m and c, Pr_pad[E(pad, m) = c] = 1/2^n. From this, it can be argued that seeing c gives "no information" about what has been sent, in the sense that the adversary's a posteriori probability of predicting m given c is no better than her a priori probability of predicting m without being given c.

Now, suppose A wants to send B an additional message m′. If A were to simply send c = E(pad, m′), then the sum of the lengths of messages m and m′ would exceed the length of the secret key pad, and thus by Shannon's theory the system cannot be secure. Indeed, the adversary can compute E(pad, m) ⊕ E(pad, m′) = m ⊕ m′, which gives information about m and m′ (e.g., it tells which bits of m and m′ are equal and which are different). To fix this, the length of the pad agreed upon a priori should be the sum of the lengths of all messages ever to be exchanged over the insecure communication line.
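A minimal sketch of the one-time pad over byte strings, together with the pad-reuse leak just described. The function names and the use of Python's secrets module for randomness are our own illustrative assumptions.

import secrets

def keygen(n):
    # Common secret key: n uniformly random bytes.
    return secrets.token_bytes(n)

def otp(pad, msg):
    # E(pad, m) = m XOR pad; since XOR is its own inverse, the same
    # operation also computes D(pad, c) = pad XOR c.
    assert len(pad) >= len(msg)
    return bytes(p ^ b for p, b in zip(pad, msg))

m1 = b"attack at dawn"
pad = keygen(len(m1))
c1 = otp(pad, m1)
assert otp(pad, c1) == m1   # decryption recovers m

# Reusing the pad for a second message m2 hands the adversary m1 XOR m2:
m2 = b"retreat at ten"
c2 = otp(pad, m2)
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))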

1.2 Modern Encryption: A Computational Complexity Based Theory

Modern cryptography abandons the assumption that the adversary has available infinite computing resources, and assumes instead that the adversary's computation is resource bounded in some reasonable way. In particular, in these notes we will assume that the adversary is a probabilistic algorithm which runs in polynomial time. Similarly, the encryption and decryption algorithms designed are probabilistic and run in polynomial time.

The running times of the encryption, decryption, and adversary algorithms are all measured as a function of a security parameter k, which is fixed at the time the cryptosystem is set up. Thus, when we say that the adversary algorithm runs in polynomial time, we mean time bounded by some polynomial function in k. Accordingly, in modern cryptography we speak of the infeasibility of breaking the encryption system and computing information about exchanged messages, whereas historically one spoke of the impossibility of breaking the encryption system and finding information about exchanged messages.

We note that the encryption systems which we will describe and claim "secure" with respect to the new adversary are not "secure" with respect to a computationally unbounded adversary in the way that the one-time pad system was secure against an unbounded adversary. But, on the other hand, it is no longer necessarily true that the size of the secret key that A and B meet and agree on before remote transmission must be as long as the total number of secret bits ever to be exchanged securely remotely. In fact, at the time of the initial meeting, A and B do not need to know in advance how many secret bits they intend to send in the future. We will show how to construct such encryption systems, for which the number of messages to be exchanged securely can be a polynomial in the length of the common secret key.

How we construct them brings us to another fundamental issue, namely that of cryptographic, or complexity, assumptions. As modern cryptography is based on a gap between efficient algorithms for encryption for the legitimate users versus the computational infeasibility of decryption for the adversary, it requires that one have available primitives with certain special kinds of computational hardness properties. Of these, perhaps the most basic is a one-way function. Informally, a function is one-way if it is easy to compute but hard to invert. Other primitives include pseudorandom number generators and pseudorandom function families, which we will define and discuss later. From such primitives, it is possible to build secure encryption schemes.

Thus, a central issue is where these primitives come from. Although one-way functions are widely believed to exist, and there are several conjectured candidate one-way functions which are widely used, we currently do not know how to mathematically prove that they actually exist. We shall thus design cryptographic schemes assuming we are given a one-way function. We will use the conjectured candidate one-way functions for our working examples throughout these notes. We will be explicit about what exactly can and cannot be proved and is thus assumed, attempting to keep the latter to a bare minimum.
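As one concrete illustration of a conjectured candidate one-way function, here is a sketch of modular exponentiation, whose inversion is the discrete logarithm problem. The particular prime and base below are toy choices of ours for illustration only; they are not prescribed by the notes and are not a recommendation for real use.

# Candidate one-way function: f(x) = g^x mod p.
# The forward direction is easy (fast modular exponentiation); recovering x
# from f(x) is the discrete logarithm problem, conjectured to be hard for
# suitably chosen large primes p. In practice the size of p would grow with
# the security parameter k.

import secrets

p = 2**127 - 1           # a Mersenne prime, chosen here purely for illustration
g = 3                    # a fixed base (illustrative choice)

def f(x):
    return pow(g, x, p)  # easy direction: O(log x) multiplications mod p

x = secrets.randbelow(p - 1)
y = f(x)                 # easy to compute; recovering x from y is believed infeasible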
