Lecture log

Here I will be giving a short description of the material covered in each lecture. Please report any mistakes/omissions to me.

07/11: We discussed two protocols for password-based authenticated key agreement, SPEKE and SRP. Our discussion was rather superficial, the main aim being to get an idea of two real-life implementations rather than a rigorous security analysis.
We ended the lecture with a reference to the all-important side-channel attacks, i.e. attacks that target the hardware/software implementations of cryptosystems and deduce the secret information by observing, for example, the time needed for a computation, sounds, electromagnetic emissions, power consumption etc.
Today was the last proper lecture of the course, hence this is the final entry in this Log. The remaining four lectures will be spent on revision.

05/11: We looked at a concrete blind signature construction based on the RSA signature scheme. The idea is to "blind" the message to be signed by adding some randomness in a way that it can be removed once the signature is received (i.e. the signature can be "un-blinded"). This can be achieved by multiplying the hash of the message to be signed by \(r^e\) for some random element \(r\) and asking for a plain RSA signature on this.
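A small Python sketch of the blinding trick (toy parameters and helper names of my own choosing; real RSA moduli are of course much larger):

    import hashlib, math, secrets

    # Toy RSA key for the signer (the bank).
    p, q = 1009, 1013
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def H(m):                            # hash of the message, reduced mod n
        return int.from_bytes(hashlib.sha256(m).digest(), 'big') % n

    m = b"digital coin, value 10"
    while True:                          # random invertible blinding factor r
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = H(m) * pow(r, e, n) % n    # blind: multiply the hash by r^e

    s_blind = pow(blinded, d, n)         # plain RSA signature on the blinded value:
                                         # = H(m)^d * r mod n, since r^(ed) = r
    s = s_blind * pow(r, -1, n) % n      # un-blind: divide out r
    assert pow(s, e, n) == H(m)          # an ordinary RSA signature on m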
We then went on to discuss coin flipping over the phone. We first recalled one method of achieving this based on hash functions that we had already seen. Then, we discussed an alternative based on square roots in modular arithmetic. It takes advantage of the fact that we can easily compute square roots modulo a prime \(p \equiv 3 \mod 4\), and that we can combine several such roots using the C.R.T. to obtain square roots modulo composites. However, knowing only the composite (but not its factorisation), it is hard to compute square roots, while knowing two "essentially different" square roots of the same element modulo the composite, it is easy to factor it.
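A sketch of these facts in Python (toy primes of my own choosing):

    from math import gcd

    p, q = 1019, 1031                  # both are 3 mod 4
    n = p * q

    def sqrt_3mod4(a, p):              # square root of a Q.R. a modulo p = 3 mod 4
        return pow(a, (p + 1) // 4, p)

    x = 123456 % n
    a = x * x % n                      # a square modulo n

    # Knowing p and q, the C.R.T. combines roots mod p and mod q into roots mod n.
    rp, rq = sqrt_3mod4(a, p), sqrt_3mod4(a, q)
    for sq in (rq, q - rq):            # the two sign choices modulo q
        r = (rp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n
        assert r * r % n == a
        if r not in (x, n - x):        # an "essentially different" root...
            print(gcd(r - x, n))       # ...reveals a factor of n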
Finally, we argued that any card game can be played over the telephone using cryptography. In particular, we described how to deal the deck of cards in a two-player game, where each card is encoded as an element of \(\mathbb Z_p\) (in which DLog is hard). The thing to watch out for is Quadratic Residues (Q.R.s), i.e. elements of \(\mathbb Z_p\) that are squares: since these can be recognised even after encryption, an "honest" deck should be encoded as either all Q.R.s or all non-Q.R.s.
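The leak is easy to check with Euler's criterion if, say, encryption is done by raising cards to a secret exponent (the kind of commutative encryption such a dealing protocol typically uses; the concrete numbers below are mine):

    p = 1019
    k = 7                            # secret exponent, coprime to p-1 (hence odd)

    def is_qr(a, p):                 # Euler's criterion: a^((p-1)/2) = 1 iff a is a Q.R.
        return pow(a, (p - 1) // 2, p) == 1

    for card in range(2, 54):        # quadratic residuosity survives encryption
        assert is_qr(pow(card, k, p), p) == is_qr(card, p)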

31/10: We introduced the notion of blind signatures, where the signer doesn't know what he is signing. This might sound absurd at first, so we had to motivate the need for such signatures by discussing remote voting and digital cash. Digital coins are validated by the bank's signature, and to preserve the user's anonymity when they are spent (just like bank notes), the bank shouldn't be able to see the content of a coin, only its value. We saw how this can be achieved using blind signatures and what is needed in addition to prevent fraud (double-spending of coins).

29/10: We spent the first half of the lecture on two concrete MAC constructions, namely CBC-MAC based on the CBC block cipher mode of operation (no initial value, take only the last ciphertext block) and HMAC based on any secure (iterated) hash function (concatenate message with key xored with some constant, hash, concatenate with key xored with some other constant and hash again). This completed the third part of the course.
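A sketch of the HMAC construction in Python, checked against the standard library's own implementation:

    import hashlib, hmac

    def my_hmac(key, msg, hashfn=hashlib.sha256, block=64):
        if len(key) > block:                   # long keys are first hashed
            key = hashfn(key).digest()
        key = key.ljust(block, b'\x00')        # pad the key to one hash block
        ipad = bytes(b ^ 0x36 for b in key)    # key xored with one constant
        opad = bytes(b ^ 0x5c for b in key)    # key xored with another constant
        return hashfn(opad + hashfn(ipad + msg).digest()).digest()

    k, m = b'secret key', b'a message to authenticate'
    assert my_hmac(k, m) == hmac.new(k, m, hashlib.sha256).digest()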
In the second half of the lecture we entered the fourth and final part of the course on applications of cryptography. First, we looked at a simple secret sharing scheme, where the holder of a secret distributes "shares" of it to different people so that later on, all together can reconstruct it. This protects the secret from theft, but if a share is lost, so is the secret. In a \((t,m)\)-secret sharing scheme that we saw next, \(m\) "shares" are distributed but only \(t\) of them are needed to reconstruct the secret. This is easily done by constructing a "random" polynomial \(f(x) \in \mathbb Z_p[x]\) of degree \(t-1\) whose constant term is the secret encoded as an element of \(\mathbb Z_p\), where \(p\) is a prime. Using Lagrange's interpolation, any set of \(t\) or more "shareholders" can reconstruct the secret, but fewer cannot.
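A sketch of such a \((t,m)\)-scheme in Python (parameter names are mine):

    import secrets

    p = 2**127 - 1                        # a prime; secrets and shares live in Z_p

    def make_shares(secret, t, m):
        # random polynomial of degree t-1 with constant term = the secret
        coeffs = [secret] + [secrets.randbelow(p) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
                for i in range(1, m + 1)] # share i is the point (i, f(i))

    def reconstruct(shares):              # Lagrange interpolation at x = 0
        s = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % p
                    den = den * (xi - xj) % p
            s = (s + yi * num * pow(den, -1, p)) % p
        return s

    shares = make_shares(secret=42, t=3, m=5)
    assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice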

24/10: At the beginning of the lecture we looked at how the output length of a hash function affects its collision resistance: If the output is \(l\)-bits long then by the birthday paradox, taking \(\mathcal O \left ( 2^{l/2} \right )\) random hashes we would, with probability \(\frac{1}{2}\), find a collision. Thus \(l\) should be large enough to prevent such "brute force" attacks.
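The birthday attack is easy to demonstrate on a truncated hash; with \(l = 24\) bits a collision shows up after roughly \(2^{12}\) attempts (a sketch, with a truncation of my own choosing):

    import hashlib, itertools

    def h(m, bits=24):                    # SHA-256 truncated to l = 24 bits
        return hashlib.sha256(m).digest()[:bits // 8]

    seen = {}
    for i in itertools.count():
        m = str(i).encode()
        t = h(m)
        if t in seen:                     # expected after ~2^(l/2) = ~4096 tries
            print(seen[t], m, t.hex())
            break
        seen[t] = m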
We then saw a hash function based on discrete logs that is collision resistant provided DLog is hard (finding second pre-images leads to finding collisions so the former is not easier than the latter). However, this function is by far too inefficient to be used for practical purposes.
Next, we had a very brief overview of the latest hash function standard SHA-3 and its pre-predecessor SHA-1, which completed our study of the topic.
Finally, we started discussing the last cryptographic primitives we'll be looking at in the course: Message Authentication Codes (MACs). These are symmetric key primitives used for data integrity that, unlike digital signatures, do not bind messages to their authors. We just had time to motivate the need for MACs: in certain scenarios it might be desired and even advantageous to use a modified version of DH-KAP where one party remains anonymous by using a MAC instead of a signature.

22/10: Continuing to identify the properties that the O-W "compression" function used in digital signatures should have, we mentioned collision resistance: finding two messages with the same image (and consequently with the same signature) should be hard. Finally, if the domain of the function consists of messages of arbitrary length while the range is of a fixed (short) length, then an arbitrarily long message will have a much shorter signature instead of one of equal length.
Next, we remarked that these properties actually define cryptographic hash functions, and the hash-then-sign approach can be used with any signature scheme. Before moving on to the study of these functions, we first had to finish with digital signatures. To begin with, we mentioned that in the case of RSA, the hash image size should be equal to the size of the RSA modulus used (i.e. we must employ Full Domain Hash) otherwise an index calculus type of attack exploiting the hash function's short range can determine the signing exponent.
We then described how DH-KAP can be authenticated using signatures and had a brief look at the ElGamal signature scheme, which doesn't have as straightforward a relation to its PKC version as in the RSA case. Finally, we just mentioned the existence of the digital signature algorithm (DSA), the signature standard that is a variant of and improves upon ElGamal by removing one of the three exponentiations needed for verification. However, both break entirely if the same randomness is used in signing more than one message.
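A sketch of ElGamal signing and verification in Python (the toy parameters \(p = 467\), \(g = 2\) are a standard textbook choice; \(2\) is primitive modulo \(467\)):

    import secrets
    from math import gcd

    p, g = 467, 2
    a = secrets.randbelow(p - 2) + 1          # secret signing key
    y = pow(g, a, p)                          # public verification key

    def sign(hm):                             # hm = hash of the message, mod p-1
        while True:
            k = secrets.randbelow(p - 2) + 1  # fresh randomness - never reuse it!
            if gcd(k, p - 1) == 1:
                break
        r = pow(g, k, p)
        s = (hm - a * r) * pow(k, -1, p - 1) % (p - 1)
        return r, s

    def verify(hm, r, s):
        return 0 < r < p and pow(y, r, p) * pow(r, s, p) % p == pow(g, hm, p)

    r, s = sign(100)
    assert verify(100, r, s)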
We concluded the lecture by giving a couple more examples of other uses of cryptographic hash functions (e.g. password authentication and coin flipping over the phone) and mentioning that the winner of the international competition for a hash function standard was announced 20 days ago to be Keccak, now named SHA-3 (secure hash algorithm).

17/10: We explained that if Bob uses the same randomness in more than one ElGamal encryption to Alice then the security of the corresponding ciphertexts is interconnected: if one of the plaintexts becomes known to Eve then she can recover all of them.
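In symbols: the two ciphertexts are \((g^r, x\cdot y^r)\) and \((g^r, x'\cdot y^r)\), so \(x' = x \cdot c_2'/c_2\). A tiny sanity check (toy numbers of my own):

    p, y = 467, 132                         # y = Alice's public key g^alpha
    r = 59                                  # Bob reuses this randomness
    c2  = 10 * pow(y, r, p) % p             # second component for plaintext x  = 10
    c2p = 25 * pow(y, r, p) % p             # second component for plaintext x' = 25

    # Eve learns x = 10 and recovers x' with no keys at all:
    assert c2p * pow(c2, -1, p) * 10 % p == 25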
We finished our study of ElGamal by mentioning (without proof) that we need safe primes (primes of the form \(p=2q+1\) where \(q\) is also prime; such a \(q\) is called a Sophie Germain prime) in order to have semantic security. Taking advantage of this opportunity, we briefly described (again without the mathematical reasoning) how to construct such primes more efficiently than by simply running the M-R primality test twice, where safe primes get their name from (\(p-1\) is not smooth, so attacks like Pollard's \(p-1\) fail), why they are particularly useful (it is easy to find primitive elements modulo \(p\)) and also how to find appropriate RSA primes.
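A sketch of the naive double-test search for safe primes (a toy Miller-Rabin test like the one from the 24/09 entry is inlined to keep it self-contained; the smarter construction discussed in lecture is not shown):

    import secrets

    def is_probable_prime(n, rounds=20):       # Miller-Rabin
        if n in (2, 3): return True
        if n < 2 or n % 2 == 0: return False
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2; s += 1
        for _ in range(rounds):
            a = secrets.randbelow(n - 3) + 2
            x = pow(a, d, n)
            if x in (1, n - 1): continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1: break
            else:
                return False
        return True

    def safe_prime(bits):
        while True:
            q = secrets.randbits(bits) | 1 | (1 << (bits - 1))
            if is_probable_prime(q) and is_probable_prime(2 * q + 1):
                return 2 * q + 1, q            # p = 2q + 1 with q Sophie Germain

    p, q = safe_prime(48)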
This concluded the second part of the course on public key cryptography and we moved on to the next one on message integrity and authentication. We began with digital signatures which are the equivalent of hand-written signatures in the digital world. After formally defining them and their desired properties (secret signing algorithms, public verification algorithms, no forgeries) we saw an example based on RSA. Knowing the internal workings of RSA-PKC, it was easy to make it into a signature scheme with the same setup: a signer signs messages by "decrypting" them using her secret exponent \(d\) and a verifier verifies signatures by "encrypting" them using the signer's public exponent \(e\) and comparing the result to the message.
This version is insecure: for example, forgeries on (probably meaningless) messages can easily be produced. Moreover, signatures are of the size of the message, which is not very efficient. As a possible remedy, we started investigating the effect of adding a O-W "compression" function to the public parameters of the scheme and signing the images of messages under this function instead of the actual messages. This way, the signatures will be smaller than the messages. Security-wise, one-wayness is a necessary property but not a sufficient one: we also want the function to be such that, given a message and its signature, it's hard to find another message that has the same image, because it would then have the same signature as well (i.e. second pre-images, though they exist due to the "compression" property, should be hard to find). More on this next week.

15/10: As promised, we opened the lecture with RSA-OAEP, which offers ciphertext indistinguishability, a stronger form of semantic security: given two plaintexts \( x_1\) and \( x_2\), and an encryption \(y\) of one of the two, an adversary cannot say, without guessing, which of \(x_1\) or \(x_2\) was encrypted to give \(y\). This is achieved by adding randomness that expands the message to be encrypted enough to make any ciphertext-distinguishing algorithm infeasible to run, as follows: first we apply some O-W function to a random integer \(r\) and xor the result with the secret \(x\) to obtain \(m_1\), we then apply some other O-W function to \(m_1\) and xor it with \(r\) to obtain \(m_2\), and finally we encrypt the concatenation of \(m_1\) and \(m_2\) using RSA. We also demonstrated that this process is invertible and mentioned that we will study suitable O-W functions later.
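A sketch of the padding and its inversion (simplified relative to the real RSA-OAEP standard; the hash-based mask generator G is an assumption of mine):

    import hashlib, secrets

    def G(data, outlen):                       # a hash-based O-W mask generator
        out, c = b'', 0
        while len(out) < outlen:
            out += hashlib.sha256(data + c.to_bytes(4, 'big')).digest()
            c += 1
        return out[:outlen]

    def xor(a, b):
        return bytes(s ^ t for s, t in zip(a, b))

    def oaep_pad(x, k=32):                     # k = byte length of each half
        r = secrets.token_bytes(k)             # fresh randomness
        m1 = xor(x.ljust(k, b'\x00'), G(r, k)) # x xored with G(r)
        m2 = xor(r, G(m1, k))                  # r xored with the other O-W function
        return m1 + m2                         # this concatenation gets RSA-encrypted

    def oaep_unpad(padded, k=32):
        m1, m2 = padded[:k], padded[k:]
        r = xor(m2, G(m1, k))                  # first recover the randomness...
        return xor(m1, G(r, k)).rstrip(b'\x00')   # ...then the plaintext

    assert oaep_unpad(oaep_pad(b'hello')) == b'hello'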
Next we discussed the ElGamal PKC which is based on the hardness of the DLog problem. For a prime \(p\) such that DLog is hard in \(\mathbb Z_p^*\), \(g\) primitive modulo \(p\) and exponent \(\alpha \in \mathbb Z_p^*\), the public key is \( (p,g,g^\alpha \mod p)\) and the secret key \((p,\alpha)\). Encryption consists of hiding the plaintext \(x\) via multiplication with \( (g^\alpha)^r\) modulo \(p\), for some random exponent \( r \in \mathbb Z_p^*\). To allow decryption it is necessary to provide \( g^r \mod p\) as additional information on the introduced randomness, in which case the plaintext can be recovered by multiplying its "masked" version with \( (g^r)^{-\alpha}\) modulo \( p\).
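A sketch of ElGamal encryption and decryption in Python (toy parameters; \(2\) is primitive modulo \(467\)):

    import secrets

    p, g = 467, 2
    alpha = secrets.randbelow(p - 2) + 1
    pub = pow(g, alpha, p)                  # public key: (p, g, g^alpha)

    def encrypt(x):
        r = secrets.randbelow(p - 2) + 1    # fresh randomness per message
        return pow(g, r, p), x * pow(pub, r, p) % p

    def decrypt(c1, c2):                    # multiply by (g^r)^(-alpha)
        return c2 * pow(pow(c1, alpha, p), -1, p) % p

    c1, c2 = encrypt(123)
    assert decrypt(c1, c2) == 123
    # malleability (see below): doubling the second part doubles the plaintext
    assert decrypt(c1, c2 * 2 % p) == 246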
We mentioned that the semantic security of ElGamal is equivalent to the intractability of the DDH problem and that ciphertext decryption without knowledge of the secret key is equivalent to solving the CDH problem. We proved only one part of this and left the rest as an exercise.
Finally, we explained why ElGamal PKC is malleable: Eve can multiply the second part of the ciphertext Bob is sending to Alice by any \(x'\), so that Alice decrypts it to \(xx'\) instead of the originally encrypted \(x\).

10/10: We began with an example of Pollard's \(p-1\) method, and mentioned that its running time is \(\mathcal O(B\log B\log^2 n + \log^3 n)\). After that, we had a look at the Random Squares algorithm, whose approach is a bit similar to that of the Index Calculus method for DLog (the Quadratic Sieve and the state-of-the-art Number Field Sieve are improvements of it). The idea is to find \(x\) and \(y\) such that \(x^2 \equiv y^2 \mod n\) but with \(x \not \equiv \pm y \mod n\), in which case \(\gcd(n,x\pm y)\) will give a factor of \(n\). To do so, we first choose several primes \(p_1,p_2,\ldots,p_k\) (called the factor base) and then try to find \( z \in \mathbb Z_n\) such that \(z^2\) factors over the factor base modulo \(n\). If we obtain enough such \(z\)'s (we mentioned a couple of ways of doing this) we can find a linear dependence that will give us \((\underbrace{\prod z_i}_{x} ) ^2 \equiv ( \underbrace{\prod_{j=1}^{k} p_j^{e_j}}_{y})^2 \mod n\). Such linear dependencies can be found by forming a matrix of the exponents of the factor base primes for each \( z^2\) and using Gaussian elimination modulo \(2\). In the case of RSA moduli, each linear dependence has probability \(\frac{1}{2}\) of yielding inappropriate \(x\) and \(y\) (i.e. satisfying \( x \equiv \pm y \mod n\)), and having a little more than \(k+1\) suitable \(z\)'s guarantees that we get more than one dependence. Hence the tradeoff: the larger the size \(k\) of the factor base, the easier it is to find suitable \(z\)'s, but the more of them are needed.
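A toy end-to-end version of Random Squares (the helper layout and the bitmask-based Gaussian elimination are my own; real sieving methods are far more refined):

    from math import gcd
    import random

    def random_squares(n, base=(2, 3, 5, 7, 11, 13)):
        k, rels = len(base), []
        while len(rels) < k + 5:                   # a little more than k relations
            z = random.randrange(2, n)
            v, rest = [0] * k, z * z % n
            for i, p in enumerate(base):           # trial-divide z^2 mod n over the base
                while rest % p == 0:
                    rest //= p; v[i] += 1
            if rest == 1:                          # z^2 is smooth: keep the relation
                rels.append((z, v))
        # Gaussian elimination mod 2 on the exponent vectors (rows as bitmasks):
        basis = {}
        rows = [sum((e & 1) << i for i, e in enumerate(v)) for _, v in rels]
        for i, r in enumerate(rows):
            c = 1 << i                             # tracks which relations were combined
            while r:
                piv = r.bit_length() - 1
                if piv not in basis:
                    basis[piv] = (r, c); break
                br, bc = basis[piv]; r ^= br; c ^= bc
            if r == 0:                             # a linear dependence: try it out
                idx = [j for j in range(len(rels)) if (c >> j) & 1]
                x = 1
                for j in idx: x = x * rels[j][0] % n
                y = 1
                for t, p in enumerate(base):
                    e = sum(rels[j][1][t] for j in idx) // 2   # even by construction
                    y = y * pow(p, e, n) % n
                d = gcd(x - y, n)
                if 1 < d < n:                      # fails with probability ~1/2 each time
                    return d
        return None                                # unlucky: rerun for fresh relations

    print(random_squares(84923, base=(2, 3, 5, 7)))   # 84923 = 163 * 521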
Happy that now we (hopefully) know how the primes \(p\) and \(q\) should be chosen to prevent an adversary from factoring \(n\) and obtaining the decryption exponent \(d\), we discussed other ways things can go wrong in RSA: naively encrypting a secret, we obtain a ciphertext that can leak information about it (chosen plaintext and meet-in-the-middle attacks). We instead want PKCs for which this doesn't happen, i.e. ones that are semantically secure. We mentioned two ways of turning RSA into such a PKC by introducing randomness to it:
RSA-KEM can be used for the secure transmission of a secret key for symmetric crypto. It simply encrypts a random \( r\in \mathbb Z_n\) using RSA and applies some O-W function to \(r\) (not to the ciphertext, which is public!) to obtain the secret key. Unfortunately we didn't have time for the second method (RSA-OAEP) and we left it for the next lecture.
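A sketch of the encapsulation/decapsulation flow (toy parameters; SHA-256 stands in for the O-W function):

    import hashlib, secrets

    p, q = 61, 53                   # toy RSA key
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))

    # Sender: encapsulate
    r = secrets.randbelow(n - 2) + 2
    c = pow(r, e, n)                                           # transmitted
    key_snd = hashlib.sha256(r.to_bytes(2, 'big')).digest()    # symmetric key = H(r)

    # Receiver: decapsulate
    r2 = pow(c, d, n)
    key_rcv = hashlib.sha256(r2.to_bytes(2, 'big')).digest()
    assert key_snd == key_rcv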

08/10: We started the lecture with an example on last lecture's final result, and then went into the cryptanalysis of RSA. An adversary knowing the public key will not be able to decrypt ciphertexts unless she obtains the decryption exponent \(d\). If she can factorise \(n\) to obtain \(p\) and \(q\), then she can compute \(\phi(n)=(p-1)(q-1)\) and therefore also obtain \(d \equiv e^{-1} \mod{\phi(n)}\). We showed that learning \(\phi(n)\) or \(d\) in some other way each leads to the factorisation of \(n\), and thus doing so is at least as hard as factoring \(n\). For this reason we agreed that it is sufficient to focus on factoring algorithms.
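For instance, from \(\phi(n)\) we get \(p+q = n - \phi(n) + 1\), and then \(p\) and \(q\) as the roots of a quadratic (a sketch):

    from math import isqrt

    p, q = 1009, 1013
    n, phi = p * q, (p - 1) * (q - 1)     # suppose Eve learned phi somehow

    s = n - phi + 1                       # = p + q
    t = isqrt(s * s - 4 * n)              # = |p - q|
    assert t * t == s * s - 4 * n
    print((s + t) // 2, (s - t) // 2)     # 1013 1009: n is factored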
Undoubtedly, naive trial division is inefficient for large \(n\). Fermat's method of expressing \(n\) as a difference of two squares \(n=x^2-y^2\), by looking for an integer \(k\) such that \((\underbrace{\left [\sqrt{n} \right ]+k}_{x})^2-n\) is a perfect square (i.e. it equals \(y^2\)), is efficient only if the two factors of \(n\) (in this case \(x+y\) and \(x-y\)) are close to each other. We deduced that \(p\) and \(q\) should be randomly and independently generated and of slightly different sizes.
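A sketch of Fermat's method (toy numbers of my own):

    from math import isqrt

    def fermat(n):
        x = isqrt(n)
        if x * x < n:
            x += 1                        # start at ceil(sqrt(n))
        while True:
            y2 = x * x - n
            y = isqrt(y2)
            if y * y == y2:               # x^2 - n is a perfect square y^2
                return x - y, x + y       # n = (x - y)(x + y)
            x += 1

    print(fermat(1009 * 1013))            # close factors: found after very few steps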
Further conditions on the choice of \(p\) and \(q\) for RSA are imposed by Pollard's \(p-1\) method: If \(p\) is a prime factor of \(n\) and \(p-1\) has small distinct prime power factors, say smaller than a bound \(B\), then \(p-1\) divides \(B!\) and so \(\gcd \left(\left (2^{B!}-1 \mod n \right),n\right)\) gives a factor of \(n\). The larger \(B\) has to be before the divisibility condition is satisfied, the longer the running time of the algorithm will be, hence we should choose \(p\) and \(q\) so that \(p-1\) and \(q-1\) have at least one large prime divisor (i.e. they are not smooth).
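A sketch of Pollard's \(p-1\) (building \(2^{B!}\) up one factor of \(B!\) at a time; the toy primes are mine, with \(421 - 1 = 2^2\cdot3\cdot5\cdot7\) smooth but \(1223 - 1 = 2\cdot13\cdot47\) not):

    from math import gcd

    def pollard_p_minus_1(n, B):
        a = 2
        for j in range(2, B + 1):
            a = pow(a, j, n)              # after the loop, a = 2^(B!) mod n
        d = gcd(a - 1, n)
        return d if 1 < d < n else None   # None: increase B (or give up)

    print(pollard_p_minus_1(421 * 1223, B=10))    # prints 421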

03/10: We introduced the notion of public key cryptography (PKC) with which Alice and Bob can exchange encrypted messages without having to be available at the same time, as is the case with DH-KAP. Each of them has a personal public key that other people can use to encrypt messages addressed to them, and a different private key with which she/he (and she/he alone) can decrypt received ciphertexts (hence the alternative name asymmetric cryptography for PKC). To realise PKC we need trapdoor one-way (T-O-W) functions, that is functions that are easy to compute but hard to invert, unless additional information (the trapdoor) is known. Although the existence of O-W functions is not proven, modular exponentiation and multiplication of large primes are, at the moment, empirically considered to be such and can be used as ingredients for T-O-W functions.
We then defined the RSA PKC based on the difficulty of factoring: Let \( p,q\) be primes, \( n=pq\) and \( ed \equiv 1 \mod{\phi(n)}\). Encryption is raising the plaintext to the power \( e\) modulo \(n\) (hence the public key is \( (n,e)\)) and decryption is raising the ciphertext to the power \( d\) modulo \( n\) (hence the private key is \( (p,q,d)\)). Decryption works due to Euler's theorem, and the fact that to compute \(d\) we need \(\phi(n)=(p-1)(q-1)\) which is only available to whoever knows \(p\) and \(q\), works as the trapdoor.
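The whole scheme in a few lines of Python (the classic toy example \(n = 61\cdot53\); real moduli are thousands of bits long):

    p, q = 61, 53
    n = p * q                       # 3233
    phi = (p - 1) * (q - 1)
    e = 17                          # public exponent, coprime to phi
    d = pow(e, -1, phi)             # private exponent, via the trapdoor phi
                                    # (pow with exponent -1 needs Python 3.8+)
    x = 65                          # a plaintext in Z_n
    y = pow(x, e, n)                # encryption: x^e mod n
    assert pow(y, d, n) == x        # decryption: y^d mod n, by Euler's theorem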
We also argued that RSA can be efficiently implemented, but is still "expensive", and therefore is often only used for transmission of secret keys. In the end, we demonstrated why it is not a good idea to use a fixed encryption exponent \(e\) (such as a small one that would speed up encryption) and to just vary the modulus \(n\): In case the same message is encrypted with several moduli, applying the C.R.T. and taking the \(e^\text{th}\) root of the (unreduced) result recovers the message!
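A sketch of this C.R.T. attack for \(e = 3\) and three moduli (toy numbers of mine; the integer cube root is done by bisection):

    m, e = 999331, 3                       # the same message sent to three people
    mods = [1009 * 1013, 1019 * 1021, 1031 * 1033]
    cts = [pow(m, e, n) for n in mods]     # what Eve intercepts

    N = mods[0] * mods[1] * mods[2]        # m^3 < N, so C.R.T. recovers m^3 exactly
    x = 0
    for n_i, c_i in zip(mods, cts):
        Ni = N // n_i
        x = (x + c_i * Ni * pow(Ni, -1, n_i)) % N

    def icbrt(v):                          # integer cube root
        lo, hi = 0, 1 << (v.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** 3 < v: lo = mid + 1
            else: hi = mid
        return lo

    assert icbrt(x) == m                   # the unreduced e-th root is the message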

01/10: Pohlig and Hellman's algorithm reduces the DL problem solving running time further to about \( \displaystyle \mathcal O \left (\sum_{i=1}^{k} e_iq_i \right)\), where \(p-1=\displaystyle \prod_{i=1}^{k}q_i^{e_i}\), by computing \( x \mod q_i^{e_i}\) for each \(i\) and then combining the congruences using the C.R.T. to obtain the required \(x \mod p-1\). To get to this, for each \(q_i^{e_i}\) we need to compute a discrete log for each of the \(e_i\) \(q_i\)-ary digits of \(x \mod q_i^{e_i}\) by exhaustive search. If instead we use Shanks's algorithm, the running time is reduced even further to \(\displaystyle \mathcal O \left (\sum_{i=1}^{k} e_i\sqrt{q_i} \right)\). Therefore, for the DL problem to be intractable by the P-H algorithm, \(p-1\) must have at least one large prime factor (\(\approx 10^{20}\)).
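A sketch using exhaustive search for the digits and the C.R.T. at the end (the example \(p=8101\), \(g=6\) is a standard textbook one, with \(p-1 = 2^2 \cdot 3^4 \cdot 5^2\)):

    def crt(residues, moduli):
        M = 1
        for m in moduli: M *= m
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x = (x + r * Mi * pow(Mi, -1, m)) % M
        return x

    def pohlig_hellman(g, h, p, factors):      # factors: {q: e}, p-1 = prod q^e
        res, mods = [], []
        for q, e in factors.items():
            x = 0                              # builds x mod q^e digit by digit
            gamma = pow(g, (p - 1) // q, p)    # an element of order q
            for k in range(e):
                # strip what is known and project into the order-q subgroup:
                hk = pow(h * pow(g, -x, p) % p, (p - 1) // q ** (k + 1), p)
                d = next(d for d in range(q) if pow(gamma, d, p) == hk)
                x += d * q ** k                # the k-th q-ary digit, by search
            res.append(x); mods.append(q ** e)
        return crt(res, mods)

    p, g = 8101, 6
    assert pohlig_hellman(g, pow(g, 7531, p), p, {2: 2, 3: 4, 5: 2}) == 7531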
We ended the lecture with one last method for computing discrete logs, the Index Calculus, which works as follows in \(\mathbb Z_p^*\): First we choose several primes \(p_1,p_2,\ldots,p_k\) (called the factor base) and then try to find random exponents \(y\) such that \(g^{y}\) factors over the factor base modulo \(p\). In that case, \(g^{y}\equiv \displaystyle \prod_{j=1}^k p_j^{e_j} \mod p\), which implies \(y \equiv\ \displaystyle \sum_{j=1}^k e_j\mbox{dlog}_g{p_j}\mod{p-1}\). If we obtain enough such linear congruence equations (a little more than \(k\) ) we can solve the system to obtain the discrete logarithm of every element in the factor base (i.e. \(\mbox{dlog}_g{p_j}\) for \(1\leq j \leq k\) ). At this point we noted that the factor base should be chosen carefully: the larger it is, the easier it is to find suitable exponents, but the more of them are needed. The final step is to look for a random exponent \(r\) such that \(hg^r\) also factors over the factor base modulo \(p\), i.e. \(hg^{r}\equiv \displaystyle \prod_{j=1}^k p_j^{e'_j} \mod p\), so that \( x+r \equiv\ \displaystyle \sum_{j=1}^k e'_j\mbox{dlog}_g{p_j}\mod{p-1}\), which can be solved for \(x\) as required. I.C. can work for primes \(p\) of up to \(300\) digits, making it the most efficient of the methods we have seen.

26/09: We looked at the DL problem in \(\mathbb Z_p^*\) for prime \(p\) and \(g\) a primitive element modulo \(p\): given an element \( h \in \mathbb Z_p^* \), find the unique exponent \(x \in \{0,1,\ldots,p-2\}\) such that \( g^x \equiv h \mod p\). Our goal is to find conditions on \(p\) such that doing this is difficult. Clearly, \(p\) has to be large so that it is infeasible to find \(x\) by exhaustive search (which requires time of about \(\mathcal O (p-1) \)).
Shanks's baby-step giant-step algorithm reduces this to about \(\mathcal O (\sqrt{p-1})\) at the expense of using memory of similar order: by writing \(x=q \left \lceil \sqrt{p-1} \right \rceil + r=qm+r\) with \(0 \leq q,r \leq m-1\), we can use exhaustive search over the product of \(h\) with the \(m\) different powers of \(g^{-1}\) to find a match with the powers of \(g^m\). A match determines \(q\) and \(r\) and thus we can reconstruct \(x\). This works for primes up to about \(10^{20}\).
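A sketch of the algorithm (the time-memory trade-off is visible in the dictionary of \(m\) baby steps kept in memory):

    from math import isqrt

    def bsgs(g, h, p):                    # solve g^x = h mod p
        m = isqrt(p - 1) + 1              # m ~ ceil(sqrt(p-1))
        baby = {h * pow(g, -r, p) % p: r for r in range(m)}  # h * g^(-r), r = 0..m-1
        giant, cur = pow(g, m, p), 1
        for q in range(m):
            if cur in baby:               # (g^m)^q = h * g^(-r)  =>  x = q*m + r
                return q * m + baby[cur]
            cur = cur * giant % p
        return None

    p, g = 8101, 6
    assert bsgs(g, pow(g, 7531, p), p) == 7531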
In the remaining time, we discussed how to combine two congruence equations with coprime moduli \(m_1\) and \(m_2\) to one congruence modulo their product \(m_1m_2\) using the Chinese Remainder Theorem (C.R.T.). This, and its generalised version applicable to an arbitrary number of pairwise coprime moduli, will be very useful for us in several future occasions.
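The two-moduli case in one line of arithmetic (a sketch; Python's pow computes the needed inverses):

    m1, m2 = 5, 7                 # coprime moduli
    r1, r2 = 3, 4                 # want x = 3 mod 5 and x = 4 mod 7

    x = (r1 * m2 * pow(m2, -1, m1) + r2 * m1 * pow(m1, -1, m2)) % (m1 * m2)
    assert x % m1 == r1 and x % m2 == r2     # x = 18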

24/09: We tried to convince ourselves that DH-KAP can be implemented efficiently. A slightly optimised version of the square and multiply algorithm that we have already mentioned before can compute \(a^n \mod m\) in time \(\mathcal O \left (\log n\log^2 m \right )\) and so is efficient even with large exponents and moduli. It remains to determine a way to find big primes.
Out of historical interest, we skimmed through the sieve of Eratosthenes, which finds all prime numbers in a given range, but is quite impractical for our purposes. The probability that a random number is prime in a given range is reasonable (by the prime number theorem) so we can pick odd integers at random and test them for primality until we succeed.
First we tried using Fermat's Little Theorem: for the number \(n\) in question and some \(a, \ 2\leq a \leq n-1\), compute \(a^{n-1}\mod n\). If it is not \(1\), we know for sure that \(n\) is composite. Otherwise, \(n\) might be prime, or a composite that passes the test, called a pseudoprime for the base \(a\) (composites that pass for every admissible base are called Carmichael numbers). Therefore we needed to refine Fermat's test so that such composites can be distinguished from primes. This we did by looking at successive square roots of \(a^{n-1}\mod n\): if \(n\) is prime they should all be \(1\) until, possibly, we reach a \(-1\) (the first square root that differs from \(1\) must be \(-1\)). The Miller-Rabin primality test follows the reverse procedure (i.e. exponentiation instead of taking square roots) and has running time \(\mathcal O \left (k\log^3 n \right )\). Composites that pass this test (called strong pseudoprimes) do so for at most one fourth of the bases, so if the test is repeated enough times (say \(k\)), a strong pseudoprime has low (\(\approx 4^{-k}\)) probability of surviving to the end.
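A sketch of the test and of the resulting prime generation loop:

    import secrets

    def miller_rabin(n, k=40):
        if n in (2, 3): return True
        if n < 2 or n % 2 == 0: return False
        d, s = n - 1, 0
        while d % 2 == 0:                  # write n - 1 = 2^s * d with d odd
            d //= 2; s += 1
        for _ in range(k):                 # k rounds: error probability <= 4^(-k)
            a = secrets.randbelow(n - 3) + 2
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):         # successive squarings towards a^(n-1)
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False               # definitely composite
        return True                        # prime with high probability

    def random_prime(bits):
        while True:                        # by the P.N.T., ~bits*ln(2)/2 odd tries
            n = secrets.randbits(bits) | 1 | (1 << (bits - 1))
            if miller_rabin(n):
                return n

    print(random_prime(256))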
Once more, we spent quite some of the lecturing time to prove the necessary Number Theory and Modular Arithmetic results that we needed.

19/09: We studied powers of elements in \(\mathbb{Z}_m\) and realised that if the element is invertible then it will eventually reach \(1\) for some exponent. We defined the smallest such exponent as the order of the element, and saw that since (by Euler's Theorem) this order has to divide the number of invertible elements, namely \(\phi(m)\), by choosing a prime modulus \(p\), we can find primitive elements, that is elements of order \(p-1\). This means that their powers will cover all of \(\mathbb{Z}_p\) except \(0\), before repeating. As a result, we could define the final version of DH-KAP: Alice and Bob choose a large prime as modulus and a primitive element as the base. This is sufficient to prevent exhaustive-search attacks for the secret exponents, but additional conditions on the choice of \(p\) will be imposed by the other methods for solving DLog that we will study.
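Checking primitivity is easy once the factorisation of \(p-1\) is known: \(g\) is primitive iff \(g^{(p-1)/q} \not\equiv 1 \mod p\) for every prime \(q \mid p-1\). A sketch:

    def is_primitive(g, p, prime_factors):
        return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors)

    p = 467                       # p - 1 = 2 * 233, both prime
    g = next(g for g in range(2, p) if is_primitive(g, p, (2, 233)))
    print(g)                      # 2: the smallest primitive element mod 467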
We then saw how an "active" Eve can fool Alice and Bob into thinking that they are talking to each other although they are talking to her, thus compromising the security of DH-KAP. This is due to the absence of authentication, which we will deal with later on when we introduce digital signatures.
One of the important modular arithmetic results we came across today was that if we are working modulo \(m\), the exponents should be reduced modulo \(\phi(m)\) (for bases coprime to \(m\)).

17/09: We completed the presentation of AES by mentioning that the best known attack against it is only a few times faster than brute force, meaning that it too is infeasible and hence, for the time being, AES is safe.
Next, we briefly talked about how to use block ciphers: naively encrypting the plaintext block by block can leak significant information about it, as identical blocks encrypt to the same block. Instead, other modes of operation should be used, some of which effectively turn a block cipher into a stream cipher, and which offer various levels of error propagation. In particular we saw the CBC, CFB, OFB and CTR modes.
We concluded the first part of the course (on symmetric cryptography) by considering double encryption, in an attempt to secure a cryptosystem whose keyspace \(\mathcal K\) is small enough to be brute-forced. As it turned out, if we possess enough memory to store \(|\mathcal K|\) ciphertexts, the new keyspace \(\mathcal K^2\) can also be brute-forced, with just \(2|\mathcal K|\) encryptions/decryptions plus comparisons. Hence the effective keyspace is not really what we had originally thought it would be.
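A sketch of the meet-in-the-middle attack on a deliberately weak 16-bit toy cipher (the cipher itself is an arbitrary invertible construction of mine, there only to be attacked):

    import secrets
    from collections import defaultdict

    M = 1 << 16
    A = 40503                                  # odd, hence invertible mod 2^16
    A_inv = pow(A, -1, M)

    def enc(k, x): return ((x + k) * A % M) ^ k
    def dec(k, y): return (((y ^ k) * A_inv) - k) % M

    k1, k2 = secrets.randbelow(M), secrets.randbelow(M)
    P = 0x1234
    C = enc(k2, enc(k1, P))                    # double encryption, 32-bit key (k1, k2)

    # ~2 * 2^16 operations and a 2^16-entry table, instead of 2^32 trials:
    middle = defaultdict(list)
    for k in range(M):
        middle[enc(k, P)].append(k)            # all "half-way" values going forward
    cands = [(ka, kb) for kb in range(M) for ka in middle.get(dec(kb, C), [])]
    assert (k1, k2) in cands                   # a second (P, C) pair would pin it down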
We then moved on to the second part of the course which is about asymmetric or public key cryptography. The setting now is that Alice and Bob have chosen a good cryptosystem to use but they cannot meet to agree on a common secret key for it. A solution is given by the Diffie-Hellman key agreement protocol (KAP) which is based on modular exponentiation. We started with some modified versions of it in order to understand better how it works:
Each of the parties creates a secret exponent (say \(a\) and \(b\) respectively), and they send each other \(n^a \mod m\) and \(n^b \mod m\), where \(m \in \mathbb Z\) and \(n \in \mathbb Z_m\) are parameters they chose in the clear. Their agreed key is then \(n^{ab} \mod m\).
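In Python (a sketch, with toy parameters of my choice):

    import secrets

    m, n = 8101, 6                        # public parameters, sent in the clear
    a = secrets.randbelow(m - 2) + 1      # Alice's secret exponent
    b = secrets.randbelow(m - 2) + 1      # Bob's secret exponent

    A, B = pow(n, a, m), pow(n, b, m)     # the exchanged messages
    assert pow(B, a, m) == pow(A, b, m)   # both arrive at n^(ab) mod m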
We showed how \(n^a \mod m\) can be efficiently computed with the square and multiply algorithm, which by repeated squaring requires far fewer steps than multiplying \(n\) by itself \(a\) times would, and reduces the result after each step so we don't have to work with huge numbers, even if the exponent \(a\) and modulus \(m\) are large.
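A sketch of the (right-to-left) square and multiply algorithm:

    def power_mod(a, n, m):
        result, a = 1, a % m
        while n:
            if n & 1:                 # multiply in a^(2^i) when bit i of n is set
                result = result * a % m
            a = a * a % m             # square, reducing mod m at every step
            n >>= 1
        return result

    assert power_mod(7, 560, 561) == pow(7, 560, 561)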
Finally, we discussed what Eve can do by defining three related problems: the Discrete Logarithm (DL) - "can an observer extract the secret exponents?", the Computational Diffie-Hellman (CDH) - "can an observer also create the established key?", and the Decisional Diffie-Hellman (DDH) - "can an observer distinguish the established key from a random element of \(\mathbb Z_m\)?"

12/09: We saw how important the concept of linear complexity is for the security of LFSR-generated key streams. A periodic sequence \(s\) has linear complexity \(l\) if and only if \(l\) is the length of the shortest LFSR that can generate the sequence. This means that \(2l\) bits of \(s\) are enough to reproduce it entirely, something we demonstrated through an example where we constructed and solved matrix equations (as we did for the Hill cipher); a sketch follows. We did remark, though, that what one should actually use for this is the much more efficient Berlekamp-Massey algorithm. We concluded that FSR-based stream ciphers should never use the output of LFSRs directly as keystream, but should instead first apply some nonlinear operation to it. Non-linear FSRs themselves, however, are still not fully understood.
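A sketch of recovering an LFSR of length \(l\) from \(2l\) keystream bits by solving the linear system (naive Gaussian elimination mod 2, not Berlekamp-Massey; the example sequence is mine):

    def solve_mod2(A, b):                 # Gaussian elimination over GF(2)
        n = len(A)
        M = [row[:] + [bb] for row, bb in zip(A, b)]
        for col in range(n):
            piv = next(r for r in range(col, n) if M[r][col])
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col and M[r][col]:
                    M[r] = [x ^ y for x, y in zip(M[r], M[col])]
        return [M[r][-1] for r in range(n)]

    l = 4
    s = [0, 1, 1, 0, 1, 0, 1, 1]          # 2l = 8 observed keystream bits
    # the recursion s[i+l] = c_0 s[i] + ... + c_{l-1} s[i+l-1] gives l equations:
    taps = solve_mod2([s[i:i + l] for i in range(l)], [s[i + l] for i in range(l)])

    t = s[:l]                             # the recovered LFSR regenerates the stream
    for i in range(l, len(s)):
        t.append(sum(c * t[i - l + j] for j, c in enumerate(taps)) % 2)
    assert t == s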
Next we talked about Block Ciphers, which break the plaintext into blocks of a given length (the block length) before encryption; encryption is based on substituting and permuting the bits in each block a given number of times (the rounds). We had a look at AES, the current block cipher standard, and its internal workings. These are designed to provide diffusion and confusion.

10/09: We introduced the notion of Stream Cipher, where a relatively short secret key can be used to generate a long keystream for bit-wise encryption and decryption. We saw how recursions, easily implemented by Feedback Shift Registers (FSRs), can be used as keystream generators, and had a closer look at linear ones. We ended the lecture remarking that an attack on linear FSR (LFSR) based stream ciphers should be possible in the same way and for the same reason as it was for the Hill cipher: linearity!
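A sketch of an LFSR keystream generator and its use for bit-wise encryption:

    def lfsr(state, taps, nbits):
        # state: initial register contents; taps: coefficients of the recursion
        out, s = [], list(state)
        for _ in range(nbits):
            out.append(s[0])
            s = s[1:] + [sum(c * b for c, b in zip(taps, s)) % 2]
        return out

    keystream = lfsr([0, 1, 1, 0], [1, 1, 0, 0], 8)    # s[i+4] = s[i] + s[i+1]
    plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
    ciphertext = [x ^ k for x, k in zip(plaintext, keystream)]  # bit-wise xor
    assert [y ^ k for y, k in zip(ciphertext, keystream)] == plaintext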

05/09: We spent the entire lecture on the Hill cipher, which is an example of a polygraphic or block cipher, i.e. a cipher that substitutes blocks of letters for blocks of letters rather than letters for letters. In particular, the Hill cipher breaks the plaintext into blocks (or vectors) of length \(n\) and multiplies them with a secret \(n\times n\) matrix \(k\) over \(\mathbb Z_{26}\) to obtain the ciphertext. We observed that for decryption to be possible, \(k\) has to be invertible, i.e. have an invertible determinant, which means one coprime to \(26\).
We then argued that the key-space is too large to brute-force even for fairly small \(n\), and that statistical attacks range from quite hard to practically impossible (frequency distributions of digrams and trigrams are too close to each other, hence lots of ciphertext is required for a successful attack, and for larger \(n\) they are much harder to compile in the first place). However, a known plaintext attack can yield the key by exploiting the cipher's linearity (which we hence agreed is a bad property to have on its own): knowing the decryption of at least \(n\) ciphertext blocks, we can form a matrix equation which, if we are lucky, is solvable for the secret key. If it is not solvable, i.e. the plaintext matrix is not invertible, we have to form a different equation using different blocks. If all fail, then probably our guess for \(n\) was wrong and we try again with \(n+1\). A sketch of this attack follows the next paragraph.
On the way we reviewed (from linear algebra) how to perform matrix multiplication and invert matrices, and applied it to two concrete examples for \(n=2\) and \(3\), the first one as part of the aforementioned cryptanalysis. This concluded our study of classical ciphers.
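A sketch of the known-plaintext attack for \(n=2\) (row-vector convention, i.e. ciphertext block = plaintext block \(\times\) \(k\); the concrete blocks are mine):

    def inv2x2_mod26(M):                       # inverse of a 2x2 matrix over Z_26
        (a, b), (c, d) = M
        det_inv = pow((a * d - b * c) % 26, -1, 26)   # needs gcd(det, 26) = 1
        return [[d * det_inv % 26, -b * det_inv % 26],
                [-c * det_inv % 26, a * det_inv % 26]]

    def mat_mul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(2)) % 26 for j in range(2)]
                for i in range(2)]

    K = [[3, 3], [2, 5]]                       # the secret key (det = 9, invertible)
    P = [[7, 4], [11, 15]]                     # two known plaintext blocks, as rows
    C = mat_mul(P, K)                          # the corresponding ciphertext blocks

    assert mat_mul(inv2x2_mod26(P), C) == K    # Eve solves P * K = C for K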

03/09: We continued with the Vigenère cipher's cryptanalysis by describing a statistical ciphertext-only attack, but without going through a concrete example. There are two unknowns here: the key-length \(l\) and the actual key. We noticed that (since this is a polyalphabetic cipher) the ciphertext as a whole doesn't have the frequency distribution of the English language. However, each ciphertext word formed by taking every \(l^\text{th}\) ciphertext letter will retain this distribution (after all, this cipher is simply an application of the shift cipher to plaintext words formed by taking every \(l^\text{th}\) plaintext letter) and we can exploit this to find the key used in the creation of each ciphertext word \(W\): We first compute the frequency distribution of the letters in \(W\) as \(q_0,q_1,\ldots,q_{25}\). This should be a cyclically shifted version of the distribution \(p_0,\ldots,p_{25}\) of letters in English texts. Then either we look at the highest value among the \(q_i\)s and guess that its index \(i\) corresponds to 4 (i.e. "e"), thus deducing the key (exactly as we did for the substitution cipher), or alternatively compute \(\displaystyle \sum_{i=0}^{25}q_ip_{i+k}\) for each of the possible keys \(k\) and pick the one that gives the highest value (brief explanations of why this works were given).
To find the key-length \(l\) we start guessing values (from 2 upwards). For each guess we form ciphertext words as above, find their distributions and compute \(\displaystyle \sum_{i=0}^{25}q_i^2\). The guess for which this sum is close to \(0.066\) should be the correct length. As before, long ciphertexts, trial and error and a bit of luck are all necessary ingredients for success; a sketch of the whole attack is given below. With this we concluded that repeating the key of a polyalphabetic cipher is not a good idea.
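A sketch of the attack (the frequency table holds the usual approximate English letter frequencies; the ciphertext is assumed to be uppercase A-Z only, and needs to be reasonably long for the statistics to work):

    # approximate English letter frequencies p_0..p_25 for 'a'..'z'
    P = [.082, .015, .028, .043, .127, .022, .020, .061, .070, .002, .008,
         .040, .024, .067, .075, .019, .001, .060, .063, .091, .028, .010,
         .023, .001, .020, .001]

    def freqs(word):
        q = [0] * 26
        for ch in word: q[ch] += 1
        return [c / len(word) for c in q]

    def vigenere_key(ct, max_len=10):
        ct = [ord(c) - 65 for c in ct]
        # 1. key length: the l whose "words" have sum q_i^2 closest to 0.066
        #    (multiples of the true length also score well - prefer small l)
        def ioc(l):
            return sum(sum(q * q for q in freqs(ct[i::l])) for i in range(l)) / l
        l = min(range(1, max_len + 1), key=lambda l: abs(ioc(l) - 0.066))
        # 2. per-word key letter: shift k maximising correlation with English
        key = []
        for w in (ct[i::l] for i in range(l)):
            q = freqs(w)
            k = max(range(26),
                    key=lambda k: sum(P[i] * q[(i + k) % 26] for i in range(26)))
            key.append(k)
        return ''.join(chr(k + 65) for k in key)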
Remedying this vulnerability, i.e. taking the key to be the same length as the plaintext, gives us the One Time Pad (O.T.P.), which we saw is unconditionally secure provided that the key is used only once (hence the name) and is chosen completely at random (we saw the importance of these conditions via a small "artificial" example). This is because any intercepted ciphertext can be decrypted into every possible meaningful plaintext using some key, and we have no other criteria to help us choose the right plaintext. Unfortunately, the Perfect Secrecy that the O.T.P. provides comes at the expense of efficiency (key management is costly), and in the coming lectures we will be looking at some trade-offs.

29/08: We started the lecture off with the cryptanalysis of the Affine cipher. Although its key-space is small, it was more worthwhile (and educational, modular arithmetic-wise) to use frequency analysis in a ciphertext-only attack than to brute-force it: By guessing two letters, we could form two equations which we could solve for the two unknowns, i.e. the key-pair. During this procedure, we obtained a linear congruence equation (l.c.e.). A unique solution to this meant that our guesses of letters were correct and we could find the key. Multiple solutions meant that we had to pick out the one that was the key by ruling out the others. A congruence with no solutions meant that at least one of the letter guesses was wrong!
In general (and as a generalisation of what we learned last lecture about l.c.e.s), the l.c.e. \(ax\equiv b \mod m\) has \(d=\gcd(a,m)\) solutions, provided that \(d\) divides \(b\). To find these we first cancel out \(d\) from both sides and the modulus (as we learned in the last lecture) and find the solution of the resulting l.c.e.. This is also a solution to the original l.c.e., and the rest of the solutions are found by adding to this multiples of \(\frac{m}{d}\) that are smaller than \(m\). With this we were done with monoalphabetic ciphers.
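The general l.c.e. recipe in a few lines (a sketch):

    from math import gcd

    def solve_lce(a, b, m):                  # all solutions of ax = b mod m
        d = gcd(a, m)
        if b % d:
            return []                        # no solutions
        a, b, m2 = a // d, b // d, m // d    # cancel d, shrinking the modulus
        x0 = b * pow(a, -1, m2) % m2         # the unique solution mod m/d
        return [x0 + i * m2 for i in range(d)]   # all d solutions mod m

    print(solve_lce(6, 4, 26))               # 6x = 4 mod 26: gcd 2, solutions [5, 18]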
Next, we had a look at polyalphabetic ones, where different occurrences of the same letter might be substituted by different letters. The first such cipher we are going to study is the Vigenère cipher, where the key is a key-word of fixed length which is repeated until it reaches the length of the plaintext, and then added to it to obtain the ciphertext. We saw that the key-space is large even for small key-lengths, so in the next lecture we'll have to cryptanalyse it another way.

27/08: We continued from where we ended last lecture and gave an example of how to compute the greatest common divisor (gcd) of two integers using the Euclidean Algorithm (E.A.). We then recalled that the possible keys in the affine cipher are pairs \((a,b)\) such that \(\gcd(a,26)=1\) (i.e. \(a\) and 26 are coprime), and explained why this is an important condition: in order to decrypt ciphertexts we need the multiplicative inverse of \(a\), denoted \(a^{-1}\), which has the property that \(aa^{-1} \equiv 1 \mod 26\) and exists if and only if \(a\) and the modulus are coprime (this is in contrast to regular arithmetic, where the multiplicative inverse of a non-zero number always exists!). We also learned that we can solve a linear congruence equation \(ax \equiv b \mod m\) by multiplying both sides by \(a^{-1}\) if it exists, and that we can cancel out a common factor \(c\) in a congruence only if we also change the modulus \(m\) by dividing it by its gcd with \(c\) (i.e. \(ac \equiv bc \mod m \Rightarrow \) \(a \equiv b \mod \frac{m}{\gcd(m,c)}\)).
In the end we saw an example of how to encrypt and decrypt with the affine cipher: encryption is straightforward, and to find inverses modulo \(m\) as needed for decryption, we use the Extended Euclidean Algorithm (E.E.A.). This uses the output at each step of the E.A. during the computation of \(\gcd(a,m)\) and works backwards in order to give an expression of the form \(as+tm=\gcd(a,m)\). Suitable integers \(s\) and \(t\) always exist by Bézout's lemma, and \(s\) is the required inverse.
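A sketch of the E.E.A. and the resulting modular inverse:

    def egcd(a, b):                 # returns (g, s, t) with a*s + b*t = g = gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, s, t = egcd(b, a % b)
        return g, t, s - (a // b) * t

    def inverse_mod(a, m):
        g, s, _ = egcd(a, m)
        if g != 1:
            raise ValueError("no inverse: a and m are not coprime")
        return s % m                # the s from Bezout's lemma is the inverse

    assert 7 * inverse_mod(7, 26) % 26 == 1    # 7^(-1) = 15 mod 26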

22/08: We started looking at classical monoalphabetic ciphers that replace each letter of the alphabet by a single letter of the alphabet.
We began with the shift cipher which simply uses a shifted alphabet to write the messages in, and is easily cryptanalysed in a ciphertext-only attack by exhaustive search over all possible keys (25 effective).
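The exhaustive search in a few lines (a toy ciphertext of mine):

    ct = "WKLV LV D VHFUHW"
    for k in range(1, 26):                 # the 25 effective keys
        pt = ''.join(' ' if c == ' ' else chr((ord(c) - 65 - k) % 26 + 65)
                     for c in ct)
        print(k, pt)                       # k = 3 reads THIS IS A SECRET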
The shift cipher is a special case of the substitution cipher, which is what we saw next. It uses a randomly permuted alphabet to write messages in. Though its key-space is enormous, it can still be cryptanalysed in a ciphertext-only attack using frequency analysis: each letter of the English language appears with its own characteristic frequency. If we compare these with the frequency distribution of the ciphertext letters and use a bit of trial-and-error plus common sense, we can deduce the permutation used (though we didn't really apply this to an example).
The shift cipher is also a special case of the next cipher we looked at, the affine cipher, which in turn is a special case of the substitution cipher: here we first permute the alphabet (in a structured, non-random way) and then shift it. More on this next week.
On the way we encountered some notions that were new to some people, so we mentioned them very briefly, with the aim of going into more depth as we go. In particular we introduced \(\mathbb Z_m=\{0,1,\ldots,m-1\}\), i.e. "the ring of integers modulo (the positive integer) \(m\)", and a bit of modular arithmetic: for integers \(a, b\) and \(m\), \(a \mod m\) is equal to the remainder after dividing \(a\) by \(m\), and \(a \equiv b \mod m\) (read "\(a\) is congruent to \(b\) modulo \(m\)") means that \(m\) divides \(b-a\). In fact \(\equiv\) is an equivalence relation. We also gave some examples of permutations of \(n\) objects to explain what the symmetric group \(S_n\), which contains them all, is. Finally, we discussed divisibility and what the greatest common divisor of two integers is. How to compute it was left for the next lecture.

20/08: We gave an overview of the course - what it is about and what we will be covering. Cryptography is basically (but not only) about achieving secure communication between Alice and Bob in the presence of Eve, an eavesdropper. We'll mainly be dealing with confidentiality (who can read the message?), authentication (who sent the message?) and message integrity (has the message been tampered with?).
We finished this first introductory lecture with a formal definition of a cryptosystem and the various attack models we can consider when attacking one (i.e in cryptanalysis).

2012-11-07, petrides