Lecture log

16/11: We began the lecture with a simple secret sharing scheme, where the holder of a secret distributes "shares" of it to different people so that later on, all of them together can reconstruct it. This protects the secret from theft, but if a single share is lost, so is the secret. In a \((t,m)\)-secret sharing scheme, \(m\) "shares" are distributed but only \(t\) of them are needed to reconstruct the secret. This is easily done by constructing a "random" polynomial \(f(x) \in \mathbb Z_p[x]\) of degree \(t-1\) whose constant term is the secret encoded as an element of \(\mathbb Z_p\), where \(p\) is a prime. Using Lagrange interpolation, any group of \(t\) or more "shareholders" can reconstruct the secret, but any fewer cannot.
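For concreteness, a minimal Python sketch of such a scheme (the prime \(p\) and the parameters \(t,m\) are illustrative choices):

    import random

    p = 2**31 - 1  # a prime large enough to encode the secret

    def share(secret, t, m):
        # Distribute m shares, any t of which reconstruct the secret.
        coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
        # Share i is the point (i, f(i)) on the random polynomial f of degree t-1.
        return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
                for i in range(1, m + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for x_i, y_i in shares:
            num = den = 1
            for x_j, _ in shares:
                if x_j != x_i:
                    num = num * -x_j % p
                    den = den * (x_i - x_j) % p
            secret = (secret + y_i * num * pow(den, -1, p)) % p
        return secret

    shares = share(123456789, t=3, m=5)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice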
Afterwards, we discussed how any card game can be played over the telephone using cryptography. In particular, we described how to deal the deck of cards in a two-player game, where each card is encoded as an element of \(\mathbb Z_p\) (in which DLog is hard).
Finally, we briefly mentioned some examples of side channel attacks, i.e. attacks that target the hardware/software implementations of cryptosystems and deduce the secret information by observing, for example, the time needed for a computation, sounds, electromagnetic emissions, power consumption etc.

14/11: We introduced the notion of blind signatures, where the signer doesn't know what he is signing. This might sound absurd at first, so we had to motivate the need for such signatures by discussing digital cash. Digital coins are validated by the bank's signature, and to preserve the user's anonymity when they are spent (just like bank notes), the bank shouldn't be able to see the content of a coin, only to control its value. We saw how this can be achieved using blind signatures and what is needed in addition to prevent fraud (double-spending, i.e. re-using coins).
We then looked at two concrete constructions based on RSA and Schnorr signatures. The idea is to "blind" the message to be signed by adding some randomness in a way that it can be removed once the signature is received (i.e. the signature can be "unblinded"). For example, in the RSA case this can be achieved by multiplying the hash of the message to be signed by \(r^e\) for some random element \(r\) and asking for a plain RSA signature on this.
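A sketch of the RSA variant in Python (toy primes; h() stands in for a full-domain hash into \(\mathbb Z_n\)):

    import hashlib, random

    p, q, e = 1009, 1013, 65537            # toy primes; real moduli are much larger
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # the signer's secret exponent

    def h(m):                              # toy hash into Z_n
        return int.from_bytes(hashlib.sha256(m).digest(), 'big') % n

    m = b"a digital coin"
    r = random.randrange(2, n)             # blinding factor (assumed coprime to n)
    blinded = h(m) * pow(r, e, n) % n      # the user blinds h(m) with r^e
    s_blind = pow(blinded, d, n)           # the signer signs without seeing h(m)
    s = s_blind * pow(r, -1, n) % n        # unblinding: s = h(m)^d mod n
    assert pow(s, e, n) == h(m)            # the signature verifies as usual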

09/11: We saw one final MAC construction based on polynomials in finite fields. The idea is to encode messages \(m\) as tuples \((m_1,m_2,\ldots,m_L) \in \mathbb F_q^L\) and define polynomials \(f_m(x)=\displaystyle \sum_{i=1}^L m_ix^i \in \mathbb F_q[x]\). Then for a secret key \(k=(k_1,k_2) \in \mathbb F_q^2\) we can define MAC\((k,m)=f_m(k_1)+k_2\). We noted that given a MAC value \(t\) on a message \(m\), for each \(k_1\in \mathbb F_q\) we can always compute \(k_2=t-f_m(k_1)\) such that MAC\(((k_1,k_2),m)=t\). This means that for each observed MAC value on a message \(m\), there are \(q\) possibilities for \(k\).
We then concentrated on the case where each key \(k\) is used only once. After observing MAC\((k,m)=t\) for some secret \(k\), one can forge the MAC value of a message \(m' \neq m\) by using one of the \(q\) possible \(k\)'s and hoping it is the right one. This works with probability \(\frac{1}{q}\). Another approach is to make a guess of the MAC value, say MAC\((k,m')=t'\), and consider \(t-t'=f_m(k_1)-f_{m'}(k_1)\), which implies \(f_{m-m'}(k_1)+t'-t=0\). Hence, if the guess was a possible one, \(k_1\) must be one of the (at most \(L\)) roots in \(\mathbb F_q\) of the polynomial \(f_{m-m'}(x)+t'-t \in \mathbb F_q[x]\). We concluded that the probability of a forgery, even for an adversary with infinite computing power, is at most \(\frac{L}{q}\).
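For concreteness, a Python sketch of the MAC over \(\mathbb F_q\) with \(q\) prime, so that \(\mathbb F_q = \mathbb Z_q\) (parameters illustrative):

    import random

    q, L = 2**61 - 1, 4                    # a prime field and the message length

    def mac(k, m):
        k1, k2 = k
        # f_m(k1) + k2, where f_m(x) = m_1 x + m_2 x^2 + ... + m_L x^L
        return (sum(m_i * pow(k1, i + 1, q) for i, m_i in enumerate(m)) + k2) % q

    key = (random.randrange(q), random.randrange(q))
    msg = [random.randrange(q) for _ in range(L)]
    tag = mac(key, msg)                    # a one-time tag; forging succeeds w.p. <= L/q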

07/11: We discussed Message Authentication Codes (MACs) which are symmetric key primitives used for data integrity. They are different from digital signatures in terms of, e.g., efficiency (higher) and non-repudiation. As an application, we saw a modified version of the Diffie-Hellman KAP where one party remains anonymous using a MAC instead of a signature, and argued about the advantage of this approach in a certain scenario. We then started talking about concrete MAC constructions, namely CBC-MAC based on the CBC block cipher mode of operation (no initial value, take only the last ciphertext block) and HMAC based on any secure (iterated) hash function (concatenate message with key xored with some constant, hash, concatenate with key xored with some other constant and hash again).
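A sketch of the HMAC recipe in Python, instantiated with SHA-256 (assuming the key is at most one block long; longer keys are first hashed in the real construction). It agrees with Python's built-in hmac module:

    import hashlib, hmac

    def my_hmac(key, msg):
        B = 64                                  # SHA-256 block size in bytes
        key = key.ljust(B, b'\x00')             # pad the key to a full block
        ipad = bytes(k ^ 0x36 for k in key)     # key xored with one constant
        opad = bytes(k ^ 0x5c for k in key)     # key xored with another constant
        inner = hashlib.sha256(ipad + msg).digest()    # hash
        return hashlib.sha256(opad + inner).digest()   # hash again

    assert my_hmac(b'key', b'msg') == hmac.new(b'key', b'msg', hashlib.sha256).digest()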

02/11: We described Schnorr's signature scheme, which is motivated by Schnorr's identification protocol. In the latter, a prover proves to a verifier that she's in possession of a secret \(x\) in a three step process, without revealing the actual secret. One of these steps involves the verifier, who sends a random "challenge" to the prover in order to ensure the "freshness" of the process. We discussed the security of this protocol (a cheating prover who doesn't know \(x\) can succeed in convincing the verifier only if she can guess the "challenge") and how it can be transformed into a signature scheme by replacing the verifier's "challenge" with a hash value of the message to be signed concatenated with some prover-generated randomness.
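A Python sketch of the resulting scheme in a toy group (the order-\(q\) subgroup of \(\mathbb Z_p^*\) with \(q=1019\), \(p=2q+1=2039\) and generator \(g=4\); real parameters are far larger):

    import hashlib, random

    p, q, g = 2039, 1019, 4

    def H(r, m):                               # hash of the commitment and the message
        return int.from_bytes(hashlib.sha256(str(r).encode() + m).digest(), 'big') % q

    x = random.randrange(1, q)                 # the prover's secret
    y = pow(g, x, p)                           # her public key

    def sign(m):
        k = random.randrange(1, q)             # prover-generated randomness
        r = pow(g, k, p)                       # the commitment
        e = H(r, m)                            # the hash replaces the verifier's challenge
        return e, (k + x * e) % q              # the response s

    def verify(m, sig):
        e, s = sig
        r = pow(g, s, p) * pow(y, -e, p) % p   # g^s y^{-e} = g^k for a valid signature
        return H(r, m) == e

    assert verify(b"hello", sign(b"hello"))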

31/10: We studied the RSA signature scheme. The setup is identical to that of the RSA PKC: a signer signs messages by "decrypting" them using her secret exponent \(d\), and a verifier verifies signatures by "encrypting" them using the signer's public exponent \(e\) and comparing the result to the message. This version is insecure (e.g. forgeries can easily be found using the multiplicative property of RSA, among other problems). To remedy this, we add a hash function to the public parameters of the scheme and sign hashes of messages instead. If the hash function is secure (i.e. collision, preimage and 2nd preimage resistant), forgeries are hard to find.
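The multiplicative property in action, as a toy Python example of forging with "textbook" (unhashed) RSA:

    p, q, e = 1009, 1013, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))

    m1, m2 = 42, 1337
    s1, s2 = pow(m1, d, n), pow(m2, d, n)   # two legitimately obtained signatures
    forged = s1 * s2 % n                    # a valid signature on m1 * m2 mod n
    assert pow(forged, e, n) == m1 * m2 % n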
An additional advantage of using hash functions is that the signature size is reduced: an arbitrary length message now has a signature of much shorter length instead of one of equal length. However, the hash outputs must be padded to the size of the RSA modulus used (i.e. we must employ Full Domain Hash) otherwise an index calculus type of attack exploiting the hash function's short range is applicable:
First we choose a factor base and then look for messages whose hash values factor over this base. If we collect more such messages than the size of the base, using Gaussian elimination we can express one of these hash values as a product of powers of the others, i.e. \(h(x)\equiv\displaystyle \prod_i h(x_i)^{e_i} \mod n\). As a result, by asking the signer to sign all the \(x_i\) messages, we can forge a signature on the message \(x\) by computing \(h(x)^d\equiv\displaystyle \prod_i \left (h(x_i)^d\right )^{e_i} \mod n\).

26/10: We proved last lecture's statement, i.e. that a Merkle-Damgård hash function constructed using a collision resistant hash function is also collision resistant. We also made a correction to the proof on discrete log based hash functions from last lecture and defined families of keyed hash functions, which are mainly used as message authentication codes that we will study later. As a toy application we considered coin flipping over the phone and how it can be implemented using hash functions. We will see a proper and secure version of it at the end of the course.
Finally we started discussing digital signatures, the equivalent of paper signatures in the digital world. Digital signatures have secret signing algorithms and public verification algorithms.

24/10: We defined hash functions and described their desirable properties: easy computation, preimage resistance (given a hash value it should be difficult to find a preimage \(x\) s.t. its image is equal to the hash value), second preimage resistance (given any \(x\) it should be difficult to find another \(x'\) s.t. their images are equal) and collision resistance (it should be difficult to find a pair \((x,x')\) s.t. \(x \neq x' \) and with equal images). Finding second preimages leads to finding collisions, so the former is not easier than the latter.
We saw a hash function based on discrete logs that is collision resistant provided DLog is hard, and started discussing the Merkle-Damgård iterated hash function construction. This uses a hash function mapping fixed length inputs to shorter fixed length outputs (a.k.a. a compression function) to create one which maps arbitrarily large inputs to fixed length outputs. We will prove that if the building-block hash function is collision resistant then so is the constructed one.
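A Python sketch of the iteration (compress() is a stand-in compression function, and the padding omits the usual length encoding, i.e. Merkle-Damgård strengthening, for brevity):

    import hashlib

    BLOCK = 64                                  # block length in bytes

    def compress(state, block):                 # toy compression function:
        return hashlib.sha256(state + block).digest()   # 32+64 bytes -> 32 bytes

    def md_hash(msg, iv=bytes(32)):
        msg += b'\x80' + bytes(-(len(msg) + 1) % BLOCK)   # pad to a multiple of BLOCK
        state = iv
        for i in range(0, len(msg), BLOCK):
            state = compress(state, msg[i:i + BLOCK])     # iterate over the blocks
        return state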

17/10, 19/10: First we studied what elliptic curves are: in our case, smooth curves of the form \(y^2=x^3+Ax+B\). We discussed the intersection of straight lines with elliptic curves and motivated the introduction of an extra point \(\mathcal{O}\) "at infinity". We defined a binary operation on the points on the curve and argued that this made the points on the curve into an abelian group. We also derived explicit formulae for the group operation.
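For reference, the formulae as a Python sketch over a prime field \(\mathbb F_p\) (the point \(\mathcal{O}\) is encoded as None; \(A\) and \(p\) are assumed to define a smooth curve):

    def ec_add(P, Q, A, p):
        if P is None: return Q                     # O is the identity element
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:        # P + (-P) = O
            return None
        if P == Q:                                 # tangent line (point doubling)
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
        else:                                      # chord through P and Q
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return x3, (lam * (x1 - x3) - y1) % p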
Having defined the group, we studied the number of points on the curve. We proved that \(y^2=x^3+1\) defined over the finite field with \(q\) elements has \(q+1\) points when \(3\) does not divide \(q-1\), and argued that Hasse's theorem is reasonable. Then we discussed the group structure, arguing that the group is isomorphic to \(\mathbb{Z}_m^+ \times \mathbb{Z}_n^+\) for some \(m\) dividing both \(n\) and \(q-1\).

12/10: We discussed the ElGamal PKC, based on the hardness of the DLog problem. For a prime \(p\) such that DLog is hard in \(\mathbb Z_p^*\), \(g\) primitive modulo \(p\) and an exponent \(a, 1\leq a \leq p-2\), the public key is \((p,g,g^a \mod p)\) and the secret key \((p,a)\). Encryption consists of hiding the plaintext \(x\) via multiplication with \((g^a)^r\) modulo \(p\), for some random exponent \(r, 1 \leq r \leq p-2\). To allow decryption, providing \(g^r \mod p\) as additional information on the introduced randomness is necessary, in which case the plaintext can be recovered by multiplying its "masked" version with \((g^r)^{-a}\) modulo \(p\).
We saw that the semantic security of ElGamal is equivalent to the intractability of the DDH problem and that ciphertext decryption without knowledge of the secret key is equivalent to solving the CDH problem. We then explained why the given implementation of ElGamal is not semantically secure and that we should instead use \(\left <g^2 \right >\), the group of QRs modulo a safe prime \(p=2q+1\) (i.e. one where \(q\) is also prime).
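A toy Python version of the scheme as first described (using all of \(\mathbb Z_p^*\), hence, per the remark above, not semantically secure; \(g=7\) is primitive modulo the toy prime \(2039\)):

    import random

    p, g = 2039, 7
    a = random.randrange(1, p - 1)                  # the secret exponent
    ga = pow(g, a, p)                               # public key: (p, g, g^a)

    def encrypt(x):
        r = random.randrange(1, p - 1)              # fresh randomness per message
        return pow(g, r, p), x * pow(ga, r, p) % p  # (g^r, x * (g^a)^r)

    def decrypt(c):
        gr, masked = c
        return masked * pow(gr, -a, p) % p          # unmask with (g^r)^{-a}

    assert decrypt(encrypt(1234)) == 1234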

10/10: Continuing the study of Dixon's random squares, we saw that in order to find linear dependencies we can form a matrix of the exponents of the factor base primes modulo \(2\) for each \(z^2\) and use Gaussian elimination. In trying to factor an RSA modulus, each linear dependence has probability \(\frac{1}{2}\) of yielding inappropriate \(x\) and \(y\) (i.e. satisfying \(x \equiv \pm y \mod n\)), and having \(c > m + 1\) suitable \( z\)'s guarantees that we get more than one linear dependence. Hence the tradeoff: the larger the size \(m\) of the factor base is, the easier it is to find suitable \(z\)'s but the more are needed.
Now (hopefully) knowing how the primes \(p\) and \(q\) should be chosen to prevent an adversary from factoring \(n\) and obtaining the decryption exponent \(d\), we discussed other ways things can go wrong in RSA: by naively encrypting a secret, we obtain a ciphertext that can leak information about it. We instead want PKCs offering ciphertext indistinguishability: given two plaintexts \(x_1\) and \(x_2\), and an encryption \(y\) of one of the two, one cannot say, without guessing, which of \(x_1\) or \(x_2\) was encrypted to give \(y\). This does not hold for RSA as we described it, unless we introduce some randomness to it. We presented two ways of doing this.
RSA-OAEP expands the message to be encrypted by a factor that makes any ciphertext distinguishing algorithm infeasible to run. This is achieved by applying some O-W function to some random integer \(r\) and xor-ing the result with the secret \(x\) to obtain \(x_1\), applying some other O-W function to \(x_1\) and xor-ing it with \(r\) to obtain \(x_2\), and finally encrypting the concatenation of \(x_1\) and \(x_2\) using RSA.
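A sketch of the padding step alone in Python (the two O-W functions are modeled with SHA-256; the sizes and formatting are illustrative, not the exact PKCS#1 OAEP):

    import hashlib, os

    def G(r): return hashlib.sha256(b'G' + r).digest()
    def F(x): return hashlib.sha256(b'F' + x).digest()

    def oaep_pad(x):                           # x: a 32-byte secret
        r = os.urandom(32)                     # the random integer r
        x1 = bytes(a ^ b for a, b in zip(x, G(r)))
        x2 = bytes(a ^ b for a, b in zip(r, F(x1)))
        return x1 + x2                         # this concatenation is RSA-encrypted

    def oaep_unpad(padded):
        x1, x2 = padded[:32], padded[32:]
        r = bytes(a ^ b for a, b in zip(x2, F(x1)))
        return bytes(a ^ b for a, b in zip(x1, G(r)))

    secret = os.urandom(32)
    assert oaep_unpad(oaep_pad(secret)) == secret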
RSA-KEM can be used for secure transmission of a secret key for symmetric crypto. It simply encrypts a random \(r\in \mathbb Z_n\) using RSA and applies some O-W function to \(r\) (not to the ciphertext, which is public) to obtain the secret key.

05/10: We had a look at various factoring algorithms that find a non-trivial factor of an odd composite integer \(n\). Firstly, we agreed that naive trial division is inefficient for large \(n\).
Fermat's method expresses \(n\) as a difference of two squares \(n=x^2-y^2\) by looking for an integer \(k\) such that \((\underbrace{\left [\sqrt{n} \right ]+k}_{x})^2-n\) is a perfect square (i.e. it equals \(y^2\)); it is efficient only if the two factors of \(n\) (in this case \(x+y\) and \(x-y\)) are close to each other.
Further conditions on the choice of \(p\) and \(q\) for RSA are imposed by Pollard's \(p-1\) method: If \(p\) is a prime factor of \(n\) and \(p-1\) has small distinct prime power factors, say smaller than a bound \(B\), then \(p-1\) divides \(B!\) and so \(\gcd \left(\left (2^{B!}-1 \mod n \right),n\right )\) gives a factor of \(n\). The larger \(B\) has to be before the divisibility condition is satisfied, the longer the running time of the algorithm will be, hence we should choose \(p\) and \(q\) so that \(p-1\) and \(q-1\) have at least one large prime divisor (i.e. they are not smooth). This can be done by testing \(2kp'+1\) for primality for different integers \(k\) and a prime \(p'\).
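A Python sketch (the exponent \(B!\) is built up incrementally; the toy factor \(1009\) is found quickly because \(1008=2^4\cdot 3^2\cdot 7\) is smooth):

    from math import gcd

    def pollard_p_minus_1(n, B):
        a = 2
        for k in range(2, B + 1):
            a = pow(a, k, n)              # after the loop, a = 2^{B!} mod n
        return gcd(a - 1, n)              # a factor whenever p - 1 divides B!

    assert pollard_p_minus_1(1009 * 10007, 7) % 1009 == 0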
Next we saw how Pollard's \(\rho\) method can be applied to factoring as well: choose a "random looking" function \(f:\mathbb Z_n \rightarrow \mathbb Z_n\) (say \(f(x)=x^2+1 \mod n\)) and an element \(x_0 \in \mathbb Z_n\), and iterate using \(x_i=f(x_{i-1}), \ i \geq 1,\) until \(\gcd(x_i-x_{2i},n)\neq 1\). What we are really doing is looking for a congruence \(x_i \equiv x_{2i} \mod p\), for the smallest prime divisor \(p\) of \(n\), which would then imply that \(p\) divides this \(\gcd\), i.e. the \(\gcd\) is \(>1\) and thus a factor of \(n\). By the birthday paradox, we expect this to happen after \(\mathcal O(n^{1/4})\) iterations with probability \(\frac{1}{2}\).
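A Python sketch with Floyd's cycle finding, on the classic example \(8051=83\cdot 97\):

    from math import gcd

    def pollard_rho(n, x0=2):
        f = lambda x: (x * x + 1) % n
        x, y, d = x0, x0, 1
        while d == 1:
            x, y = f(x), f(f(y))          # y advances twice as fast: y = x_{2i}
            d = gcd(x - y, n)
        return d                          # a non-trivial factor (or n on failure)

    assert pollard_rho(8051) == 97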
Finally, we began looking at Dixon's Random Squares algorithm, which is an extension of Fermat's method to modular arithmetic and is a bit similar to the Index Calculus method for DLog (the Quadratic Sieve and state-of-the-art Number Field Sieve improve on this). The idea is to find \(x\) and \(y\) such that \(x^2 \equiv y^2 \mod n\) but with \(x \not \equiv \pm y \mod n\), in which case \(\gcd(n,(x\pm y))\) will give a factor of \(n\). To do so, we first choose several primes \(p_1,p_2,\ldots,p_m\) (called the factor base) and then try to find \(z \in \mathbb Z_n\) such that \(z^2\) factors over the factor base modulo \(n\). If we obtain enough such \(z\)'s (a little bit more than \(m\)) we can find a linear dependence that will give us \(\displaystyle \left (\prod z_i\right )^2 \equiv \left( \prod_{j=1}^{m} p_j^{e_j}\right)^2 \mod n\). We will conclude the description of this method in the next lecture.

03/10: We introduced the notion of public key cryptography, with which two parties can exchange encrypted messages without having to be available at the same time, as is the case with DH-KAP. Each party has a public key that other parties can use to encrypt messages addressed to her, and a different private key with which she (and she alone) can decrypt received ciphertexts (hence the alternative name asymmetric cryptography for PKC). To realise PKC we need trapdoor one-way functions, that is functions that are easy to compute but hard to invert unless additional information (the trapdoor) is known. Although the existence of O-W functions is not proven, modular exponentiation and multiplication of large primes are, at the moment, empirically considered to be such.
We then defined the RSA PKC based on the difficulty of factoring: Let \(p,q\) be primes, \(n=pq\) and \(ed \equiv 1 \mod{\phi(n)}\). Encryption is raising the plaintext to the power \(e\) modulo \(n\) (hence the public key is \((n,e)\)) and decryption is raising the ciphertext to the power \(d\) modulo \(n\) (hence the private key is \((p,q,d)\)). An adversary knowing the public key will not be able to decrypt ciphertexts unless she obtains the decryption exponent \(d\). If she can factorise \(n\) to obtain \(p\) and \(q\), then she can compute \(\phi(n) =(p-1)(q-1)\) and therefore also obtain \(d \equiv e^{-1} \mod{\phi(n)}\). We showed that learning \(\phi(n)\) or \(d\) in some other way also leads to the factorisation of \(n\), and thus doing so is at least as hard as factoring \(n\). For this reason we will focus on factoring algorithms starting from next lecture.
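The whole scheme, as a toy Python sketch (real primes are hundreds of digits long):

    p, q, e = 1009, 1013, 65537
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                  # ed = 1 mod phi(n)

    x = 123456                           # a plaintext in Z_n
    y = pow(x, e, n)                     # encrypt with the public key (n, e)
    assert pow(y, d, n) == x             # decrypt with the secret exponent d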

28/09: We looked at one last method for computing discrete logs, the index calculus, which works as follows in \(\mathbb Z_p^*\): First we choose several primes \(p_1,p_2,\ldots,p_m\) (called the factor base) and then try to find random exponents \(x\) such that \(g^{x}\) factors over the factor base modulo \(p\). In that case, \(g^{x}\equiv \displaystyle \prod_{j=1}^m p_j^{e_j} \mod p\), which implies \(x \equiv \displaystyle \sum_{j=1}^m e_j\log_g{p_j}\mod{p-1}\). If we obtain enough such linear congruence equations (a little bit more than \(m\)) we can solve the system to obtain the discrete logarithm of every element in the factor base (i.e. \(\log_g{p_j}\) for \( 1\leq j \leq m\)). At this point we noted that the factor base should be chosen carefully: the larger it is, the easier it would be to find suitable exponents, but the more of them would be needed. The final step is to look for a random exponent \(r\) such that \(hg^r\) also factors over the factor base modulo \(p\), i.e. \(hg^{r}\equiv \displaystyle \prod_{j=1}^m p_j^{e'_j} \mod p\), so that \(a+r \equiv \displaystyle \sum_{j=1}^m e'_j\log_g{p_j}\mod{p-1}\), which can be solved for \(a\) as required.
We remarked that this method can also be used in \(\mathbb F_{p^n}^*\) with irreducible polynomials in the role of primes, and that it can work for primes \(p\) of up to 300 digits, making it the most efficient of those we have seen. The reason for this is that it is specific to the groups we mentioned. In other words, unlike Shanks', Pollard's and Pohlig-Hellman's algorithms, index calculus is not a generic algorithm. We discussed generic algorithms briefly: they work for any cyclic group since the group representation is not important, but the best they can do with respect to efficiency is the same as Shanks' and Pollard's methods.

26/09: We studied Pollard's \(\rho\) method. The idea is that given a congruence \(g^{ay+z} \equiv g^{ay'+z'} \mod p\), we can obtain \(a\) as a solution to \((y-y')a \equiv z'-z \mod{p-1}\) (if it exists), so all we need to do is find such a congruence. First we noticed that \(g^{ay+z}\equiv h^yg^z \equiv x \mod{p}\) can be represented as a triple \((x,y,z) \in \mathbb Z^*_p \times \mathbb Z_{p-1} \times \mathbb Z_{p-1}\), and if we have a random-looking function \(f\) that maps such triples to triples of the same form, we can iterate until we get a first coordinate collision (iterate = start with \((x_0,y_0,z_0)\) and obtain \((x_i,y_i,z_i)=f(x_{i-1},y_{i-1},z_{i-1})\)). By the birthday paradox, we expect a collision after \(\mathcal O\left (\sqrt{p-1}\right )\) iterations, and by using Floyd's cycle finding algorithm (i.e. by looking for triples \((x_i,y_i,z_i)\) and \((x_{2i},y_{2i},z_{2i})\) with \(x_i \equiv x_{2i} \mod p\)) we can eliminate any memory requirements. We finished by defining a suitable function \(f\), which requires partitioning the group into three "equal" subsets.

21/09: We began looking at ways to solve the DL problem. Exhaustive search, which we already excluded as inefficient by choosing large \(p\), requires time of about \(\mathcal O (p-1) \). Shanks' baby-step giant-step algorithm reduces this to about \(\mathcal O (\sqrt{p-1}) \) at the expense of using memory of similar order: by writing \(a=q \left \lceil \sqrt{p-1} \right \rceil + r=qm+r\) with \(0 \leq q,r \leq m-1\), we can use exhaustive search over the product of \(h\) with the \(m\) different powers of \(g^{-1}\) to find a match with the powers of \(g^m\). A match determines \(q\) and \(r\) and thus we can reconstruct \(a\). This works for primes up to about \(10^{20}\).
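A Python sketch of the algorithm in \(\mathbb Z_p^*\) (toy prime \(p=2039\) with primitive element \(g=7\)):

    from math import isqrt

    def bsgs(g, h, p):
        m = isqrt(p - 1) + 1
        baby = {h * pow(g, -r, p) % p: r for r in range(m)}   # the h * g^{-r} table
        gm, x = pow(g, m, p), 1
        for q in range(m):                                    # powers of g^m
            if x in baby:
                return q * m + baby[x]                        # a = qm + r
            x = x * gm % p
        return None

    p, g = 2039, 7
    a = bsgs(g, 1234, p)
    assert pow(g, a, p) == 1234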
Pohlig and Hellman's algorithm reduces the running time further to about \(\displaystyle \mathcal O \left (\sum_{i=1}^{k} e_iq_i \right)\), where \(p-1=\displaystyle \prod_{i=1}^{k}q_i^{e_i}\), by computing \(a \mod q_i^{e_i}\) for each \(i\) and then combining the congruences using the Chinese Remainder Theorem to obtain the required \(a \mod p-1\). To get to this, for each \(q_i^{e_i}\) we need to compute a discrete log for each of the \(e_i\) \(q_i\)-adic digits of \(a \mod q_i^{e_i}\) by exhaustive search. If instead we use Shanks' algorithm, the running time is reduced even further to \(\displaystyle \mathcal O \left (\sum_{i=1}^{k} e_i\sqrt{q_i} \right)\). Therefore, for the DL problem to be intractable by the P-H algorithm, \(p-1\) must have at least one large prime factor (\(\approx 10^{20}\)).

19/09: We generalised the Legendre symbol to odd composite integers by defining the Jacobi symbol \(\left(\frac{a}{n}\right)\) as the product \(\displaystyle \prod_{i=1}^k \left(\frac{a}{p_i}\right)^{e_i}\) of the Legendre symbols of the prime factors of \(n=\displaystyle \prod_{i=1}^k p_i^{e_i}\). Since for prime \(p\), the Legendre and Jacobi symbols are the same and, as we showed, \(\left(\frac{a}{p}\right) \equiv a^{(p-1)/2} \mod p\), we can use this as a criterion for a primality test (called the Solovay-Strassen test). We saw that the Jacobi symbol can be computed efficiently using certain properties without factorising, and that the test is passed by composites (called Euler-pseudoprimes) for at most half the bases \(a\). Thus, the S-S test is efficient and if repeated enough times (say \(k\), between \(50\) and \(100\)), an Euler-pseudoprime has low (\(\approx 2^{-k}\)) probability of surviving to the end.
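A Python sketch of both ingredients (the Jacobi symbol computed via its properties, without factorising, and the test built on it; toy primes):

    import random

    def jacobi(a, n):                      # n odd and positive
        a %= n
        result = 1
        while a:
            while a % 2 == 0:              # pull out factors of 2
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a                    # quadratic reciprocity
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def solovay_strassen(n, k=50):
        for _ in range(k):
            a = random.randrange(2, n - 1)
            if jacobi(a, n) % n != pow(a, (n - 1) // 2, n):
                return False               # definitely composite
        return True                        # a composite survives w.p. at most 2^-k

    assert solovay_strassen(10007) and not solovay_strassen(10007 * 10009)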
We finished the lecture looking at the Miller-Rabin primality test, which is better than S-S in the sense that composites that pass it (called strong pseudoprimes) do so for at most one fourth of the bases, hence its error probability is lower.
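A Python sketch: write \(n-1=2^st\) with \(t\) odd and examine the sequence \(a^t, a^{2t}, a^{4t}, \ldots \mod n\):

    import random

    def miller_rabin(n, k=50):
        s, t = 0, n - 1
        while t % 2 == 0:
            s, t = s + 1, t // 2
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, t, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False               # a witnesses that n is composite
        return True                        # error probability at most 4^-k

    assert miller_rabin(10007) and not miller_rabin(561)   # 561 is a Carmichael number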

14/09: Before discussing the difficulty of the DL problem, we saw how an active adversary can compromise the security of DH-KAP by tricking the two parties into communicating with her while thinking they are communicating with each other. To overcome this we need digital signatures, which we will study towards the end of the course, to authenticate the origin of the transmitted group elements.
Next we looked at the DL problem when \(G=\mathbb Z_p^*\) for prime \(p\) and \(g\) is a primitive element modulo \(p\): given an element \(h \in G\), find the unique exponent \(a, 0 \leq a \leq p-2\), such that \(g^a \equiv h \mod p\). Our goal is to find conditions on \(G\) such that doing this is difficult. Clearly, \(p\) has to be large so that it is infeasible to find \(a\) by exhaustive search. Before looking for further conditions, we need to convince ourselves that DH-KAP can still be implemented efficiently. The square and multiply algorithm can be used for efficient modular exponentiation even with large exponents and moduli. It remains to determine a way to find big primes.
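For reference, a Python sketch of square and multiply (the built-in pow(g, a, p) does exactly this):

    def power_mod(g, a, p):
        result = 1
        while a:
            if a & 1:                  # multiply when the current bit of a is set
                result = result * g % p
            g = g * g % p              # square once per bit
            a >>= 1
        return result

    assert power_mod(3, 10**18 + 9, 10**9 + 7) == pow(3, 10**18 + 9, 10**9 + 7)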
Out of historical interest, we skimmed through the sieve of Eratosthenes, which finds all prime numbers in a given range, but is quite impractical for our purposes. As was the case with irreducible polynomials, the probability that a random number in a given range is prime is reasonable (by the prime number theorem), so we can pick numbers at random and test them for primality until we succeed. The first thing we tried was to use Fermat's Little Theorem: for the number \(n\) in question and some \(a, \ 2\leq a \leq n-1\), compute \(a^{n-1}\mod n\). If it is not \(1\), we know for sure that \(n\) is composite. Otherwise, \(n\) might be prime, but some composites also pass this test; the worst offenders, called Carmichael numbers, pass it for every base \(a\) coprime to them. Therefore we need to refine our test to distinguish Carmichael numbers from primes.
We ended the lecture by defining quadratic residues (i.e. integers that are squares) modulo a prime and the Legendre symbol \(\left(\frac{a}{p}\right)\) that equals 0,1 or -1 depending on whether \(a\) is a multiple of \(p\), a QR or a QNR.

12/09: We completed the presentation of AES and briefly talked about how to use block ciphers: naively encrypting the plaintext block by block can leak significant information about it as, for example, identical blocks encrypt to the same block. Instead, other modes of operation should be used, some of which transform a block-cipher into a stream cipher.
We then described the Diffie-Hellman key agreement protocol, with which two parties can establish a common key to use for block or stream cipher encrypted communication. Each of the parties creates a secret exponent (say \(a\) and \(b\)) and they send each other \(g^a\) and \(g^b\). Their agreed key is then \(g^{ab}\) (here \(G=\left <g \right>\) is a cyclic multiplicative group). We began analysing its security by defining three related problems: the Computational Diffie-Hellman (CDH) - "can an observer also create the established key?", the Discrete Logarithm (DL) - "can an observer extract the secret exponents?", and the Decisional Diffie-Hellman (DDH) - "can an observer distinguish the established key from a random group element?".
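A toy Python sketch of the exchange (\(g=7\) is primitive modulo \(2039\); real parameters are much larger):

    import random

    p, g = 2039, 7
    a = random.randrange(1, p - 1)        # Alice's secret exponent
    b = random.randrange(1, p - 1)        # Bob's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)     # the values exchanged in public
    assert pow(B, a, p) == pow(A, b, p)   # both parties compute the key g^{ab}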

09/09: We saw how important the concept of linear complexity is for the security of LFSR generated keystreams. Given a periodic sequence \(s\) with linear complexity \(L(s)\), \(2L(s)\) digits are enough to reproduce \(s\) entirely using the Berlekamp-Massey algorithm. We concluded that keystreams should have long periods and large linear complexities, and that m-sequences are inappropriate for direct cryptographic use since they have low linear complexity.
Next we talked about Block Ciphers which break the plaintext into blocks of given length (the block length) before encryption, which is based on substituting and permuting the bits in each block a given number of times (the rounds). We are focusing on AES, the new block cipher standard, and its internal workings.

07/09: We introduced the notion of Stream Cipher, where a secret key can be used to generate a long keystream for bit-wise encryption and decryption. We saw how Feedback Shift Registers can be used as keystream generators and discussed several properties, in particular that by using primitive polynomials of degree \(n\) in \(\mathbb Z_2[x]\) we can obtain sequences of period \(2^n-1\) (the keystream) from sequences of length \(n\) (the secret key). Such sequences are called maximal or m-sequences.
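A Python sketch using the primitive polynomial \(x^4+x+1 \in \mathbb Z_2[x]\), i.e. the recurrence \(s_{i+4}=s_{i+1}+s_i\): a \(4\)-bit seed yields a keystream of period \(2^4-1=15\):

    def lfsr(seed, taps=(0, 1)):
        state = list(seed)                              # the secret key
        while True:
            yield state[0]
            state = state[1:] + [sum(state[i] for i in taps) % 2]

    gen = lfsr([1, 0, 0, 1])
    stream = [next(gen) for _ in range(30)]
    assert stream[:15] == stream[15:]                   # maximal period 15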

02/09: We reviewed the construction of Finite Fields. We saw the important role irreducible polynomials play in this, and discussed how to find such polynomials of a given degree. We ended the lecture by defining primitive polynomials. These results will be useful for us from the next lecture onwards.

31/08: After discussing polyalphabetic (one-time pad) and transposition ciphers, we were done with classical ciphers.

26/08: We finished with monoalphabetic ciphers (affine) and began studying polyalphabetic ones (Vigenère).

24/08: We gave an overview of the course and started looking at classical monoalphabetic ciphers (shift and substitution).

2011-11-16, petrides