**Question 1**: In Problem Set 2, Problem 8, how can we understand the fact that the morphism f from R/I to R/J given by f(r+I)=r+J is surjective?

**Answer**: In this problem we have that I⊆J. This means that in general I is a smaller ideal than J, and hence R/I will generally have more equivalence classes than R/J. Indeed, if one picks two arbitrary elements a and b in R such that a-b is in I (so a and b are in the same equivalence class mod I), then a-b is also in J, as I is included in J. Hence a and b are in the same equivalence class mod J. However, it can happen in general that a-b is in J but not in I (since J is larger than I, the difference of two elements has a better chance to be in J than in I).

For a concrete example, let R=ℤ/(8), let I=(4)={0,4} and let J=(2)={0,2,4,6}. Then I⊆J. For an element k in R, let us denote by k_{I} its equivalence class in R/I and by k_{J} its equivalence class in R/J. Then we have for example that 1_{I}=5_{I} since the difference 5-1=4 is in I. Then we compute that 0_{I}=4_{I},1_{I}=5_{I},2_{I}=6_{I},3_{I}=7_{I} and so R/I={0_{I},1_{I},2_{I},3_{I}}. Similarly, in R/J we compute that 0_{J}=2_{J}=4_{J}=6_{J} and 1_{J}=3_{J}=5_{J}=7_{J} and so R/J={0_{J},1_{J}}. The morphism in this problem sends the equivalence class r+I to the equivalence class r+J and so here we have f(0_{I})=0_{J}, f(1_{I})=1_{J}, f(2_{I})=2_{J}=0_{J}, f(3_{I})=3_{J}=1_{J}. Hence we see that it is surjective, as expected.
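
The computation above can also be checked mechanically. Here is a small Python sketch (the helper names are ours) that lists the equivalence classes and verifies that f is well defined and reaches every class of R/J:

```python
# R = Z/8, I = (4) = {0,4}, J = (2) = {0,2,4,6}, with I ⊆ J
R = list(range(8))
I = {0, 4}
J = {0, 2, 4, 6}

def cls(k, ideal):
    # equivalence class of k modulo the ideal: all r in R with r - k in the ideal
    return frozenset(r for r in R if (r - k) % 8 in ideal)

R_mod_I = {cls(k, I) for k in R}
R_mod_J = {cls(k, J) for k in R}
assert len(R_mod_I) == 4 and len(R_mod_J) == 2

# f is well defined: if a_I = b_I then a_J = b_J (this uses I ⊆ J)
for a in R:
    for b in R:
        if cls(a, I) == cls(b, I):
            assert cls(a, J) == cls(b, J)

# f sends the class of k mod I to the class of k mod J; its image hits all of R/J
image = {cls(k, J) for k in R}
assert image == R_mod_J
```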

**Question 2**: In statement 3 of Lemma 8.4 in the notes what we want to express is: p is irreducible if and only if pu is irreducible and p is prime if and only if pu is prime. Is this correct?

**Answer**: Yes, that is correct.

**Question 3**: In the book, for the definition of UFD it is said that the irreducible elements must be prime, but this condition is not mentioned in the class notes. Is this necessary?

**Answer**: Let R be a commutative unital integral domain. According to the book, R is a UFD if

- (a) every nonzero nonunit in R can be written as a product of irreducible elements, and
- (b) irreducible elements in R are prime.

Our definition (Definition 8.5 in the notes) of a UFD is:

- (a) same as (a) above, and
- (b') the factorization in (a) is unique up to reordering and multiplying by units.

It can be shown that the two definitions are equivalent. Indeed, that (a)+(b) implies (a)+(b') is shown in Theorem 11.1.3 in the book. Let us show that (a)+(b') implies (a)+(b), that is, let us show that an irreducible element is always prime when (a)+(b') hold. Let p in R be irreducible and suppose that p divides rs, so that rs=pt for some t in R. We need to show that p is prime, that is, that p divides r or p divides s. If r is a unit, then s=p(tr^{-1}) and so p divides s. Also, if r is 0 then clearly p divides r, since 0=r=p·0. Hence we may assume that r is nonzero and nonunit, and similarly that s is nonzero and nonunit. Then by (a) we can write r and s as products of irreducibles. Looking at rs=pt, the left hand side is a product of at least two irreducibles (since r and s are nonunits), while on the right hand side p is irreducible by assumption. If t were a unit, then the right hand side would be a product of a single irreducible (namely pt), contradicting the uniqueness in (b'); hence t is a nonunit and can be written as a product of irreducibles by (a). Then rs=pt is written as a product of irreducibles in an essentially unique way by (b'). Hence p appears on the left hand side (up to a unit), as it is irreducible and appears on the right hand side. So p is, up to a unit, one of the irreducible factors of r or of s. Say it is one of the factors of r, so that r=(up)p_2…p_k where u is a unit and p_2,…,p_k are irreducible. Then p divides r, as required.
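
As a down-to-earth illustration in the familiar UFD ℤ: the positive irreducibles are exactly the prime numbers, and a brute-force check confirms on small cases that an irreducible dividing a product divides one of the factors (plain Python, helper names are ours):

```python
def is_irreducible(n):
    # in Z, the positive irreducibles are exactly the prime numbers
    return n > 1 and all(n % d != 0 for d in range(2, n))

# for every irreducible p and every product rs that p divides, p divides r or s
for p in [n for n in range(2, 30) if is_irreducible(n)]:
    for r in range(1, 30):
        for s in range(1, 30):
            if (r * s) % p == 0:
                assert r % p == 0 or s % p == 0
```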

The reason we had this different definition in class is that I feel it reflects the naming of UFD much better: it is a factorization domain where the factorization is unique.

**Question 4**: In the book, for the definition of a simple R-module we have an extra condition: RM is not equal to (0), but this condition is not mentioned in the class notes. Is this necessary?

**Answer**: The definition of simple modules in the book needs that condition because the ring R is not assumed to be unital. If the ring is unital, as is our case, then RM=M, and so that condition is replaced by M is not equal to (0), which is included in our definition of simple modules (Definition 11.1 in the notes).

**Question 5**: In the solution of problem 2 from Problem Set 2, shouldn't "e_{jk}" be "e_{kj}"?

**Answer**: Yes, that is correct. The solution has been corrected.

**Question 6**: Is the solution of Problem 14.(b) from Problem Set 3 missing the proof that M is isomorphic to the direct sum of N and S?

**Answer**: Yes, it was missing. The solution has been corrected, together with a typo in the question of the problem (the fact that S is assumed to be simple was missing).

**Question 7**: There seems to be a mistake in the solution of Problem 5(b); the matrix multiplication does not work.

**Answer**: Yes, the answer was wrong. The map f in the solution should have been the transpose map. The solution is now corrected.

**Question 8**: In Proposition 14.8 the set S_p is mentioned. How do we define it?

**Answer**: The set S_p is the group of permutations of the set {1,2,…,p}, in other words the set of bijections from {1,2,…,p} to {1,2,…,p} (with group operation given by composition of functions).
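
For a small concrete case, the elements of S_3 and their composition can be listed in Python (representing a permutation s as a tuple whose (i-1)-th entry is s(i); the helper names are ours):

```python
from itertools import permutations

p = 3
S_p = list(permutations(range(1, p + 1)))
assert len(S_p) == 6  # |S_p| = p! = 3! = 6

def compose(s, t):
    # (s ∘ t)(i) = s(t(i)); s[i-1] encodes s(i)
    return tuple(s[t[i - 1] - 1] for i in range(1, p + 1))

s = (2, 1, 3)  # the transposition swapping 1 and 2
assert compose(s, s) == (1, 2, 3)  # a transposition is its own inverse
```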

**Question 9**: In Definition 17.1 we define a torsion element with the condition that m is nonzero. However, the book only uses this restriction in the definition of a torsion-free element. Is the condition also necessary in the definition of a torsion element?

**Answer**: Yes, we need 0 to be a torsion element so that TorM is always a submodule. This has been corrected in the notes.

**Question 10**: In Example 17.3.1 we use the notation ord(g). What does it mean?

**Answer**: For an element g in a group G, ord(g) denotes the order of the element g, that is the smallest positive integer m such that g^m=e (the identity of G). If no such positive integer exists, then we say that g has infinite order.
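
For instance, in the additive group ℤ/8 (where g^m means m·g in additive notation), the order can be computed by brute force; a small Python sketch with a hypothetical helper ord_g:

```python
def ord_g(g, n):
    # order of g in the additive group Z/n: smallest m >= 1 with m*g ≡ 0 (mod n)
    m = 1
    while (m * g) % n != 0:
        m += 1
    return m

assert ord_g(2, 8) == 4  # 2, 4, 6, 0
assert ord_g(3, 8) == 8  # 3 generates all of Z/8
```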

**Question 11**: In theorem 17.6 the book adds to our statement that the decomposition is a direct sum of cyclic modules. What is the reason for stating that?

**Answer**: That is a more or less obvious statement. The module R is cyclic (it is generated by 1) and a module R/Ra is also cyclic (it is generated by 1+Ra, the equivalence class of 1 in R/Ra).

**Question 12**: In Theorem 18.1 the book adds to our statement that the a_i's are nonzero nonunits in ℤ. Should we add that to the statement that we have in the lecture notes?

**Answer**: In the lecture notes we had that a_i>0. This means that a_i is nonzero, so that part is covered. But it is correct that we have not stated that a_i is a nonunit. Indeed, to fix this we need to change `a_i>0` to `a_i>1`, as 1 is the only positive integer which is a unit. Notice that the only thing this affects is the uniqueness, since one could add as many copies of Z_1 as they wanted (since Z_1=0). This has been corrected in the notes.

**Question 13**: During the explanation of Theorem 19.7, a way to compute the rational canonical form of a matrix based on the Smith normal form of A - XI was mentioned. However, I did not get how we can do it.

**Answer**: We didn't have time to give an example, but we will give one next time. The idea is the following: compute first the Smith normal form of A-XI. Make it so that all polynomials in the diagonal are monic, by multiplying each row with the inverse of the coefficient of the highest degree term. Now consider the non-constant polynomials which appear in the diagonal, let us say that they are p_1(X),…,p_u(X). Then the rational canonical form of A is the block matrix which has the matrices C_{p_1(X)},…,C_{p_u(X)} in the diagonal and zeros everywhere else (where C_{p(X)} is the companion matrix of p(X), defined in Proposition 19.4). This follows by Theorem 19.7(2).
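
To make the recipe concrete, here is a small sketch in plain Python (the helper names are ours, and we use one common companion-matrix convention, with ones on the subdiagonal and negated coefficients in the last column; Proposition 19.4 may use the transposed convention):

```python
def companion(coeffs):
    # companion matrix C_p of the monic p(X) = X^d + c_{d-1} X^{d-1} + ... + c_0,
    # given as coeffs = [c_0, ..., c_{d-1}]
    d = len(coeffs)
    C = [[0] * d for _ in range(d)]
    for i in range(1, d):
        C[i][i - 1] = 1           # ones on the subdiagonal
    for i in range(d):
        C[i][d - 1] = -coeffs[i]  # last column holds -c_0, ..., -c_{d-1}
    return C

def block_diag(blocks):
    # block-diagonal matrix with the given square blocks on the diagonal
    n = sum(len(b) for b in blocks)
    M = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, x in enumerate(row):
                M[off + i][off + j] = x
        off += len(b)
    return M

# say the non-constant diagonal entries are p_1 = X - 2 and p_2 = X^2 - 3X + 2
rcf = block_diag([companion([-2]), companion([2, -3])])
assert rcf == [[2, 0, 0], [0, 0, -2], [0, 1, 3]]
```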

**Question 14**: In Theorem 19.7 I do not understand why the determinant of A - XI cannot be zero. The Smith normal form could have zeros in the diagonal. During the lecture it was said that this situation is not possible because of the proof of the second statement, i.e., K[X]^(n-r) should be zero, but I do not understand the reason.

**Answer**: Any two matrices over a PID are equivalent if and only if their rank is the same. This follows from Problem 8 in Problem Set 6, and one direction has also been mentioned in Remark 15.3(3). Since the matrix A-XI is equivalent to its Smith normal form S, it follows that the two matrices have the same rank. Now consider the matrix A-XI. This is an n x n matrix over K[X]. In each column of A-XI there is exactly one appearance of X, that is there is an X only in position 1 in column 1, there is an X only in position 2 in column 2, etc. Hence the columns of A-XI, viewed as vectors of K[X]^n, are linearly independent (this is an easy exercise). Since the columns of A-XI generate the column module of A-XI, it follows that they are a basis of the column module of A-XI. Hence the rank of A-XI is n (the number of columns). Since A-XI and S have the same rank, S has rank n as well. But S is a diagonal matrix, and so the rank of S is equal to the number of nonzero elements in the diagonal. It follows that all elements in the diagonal are nonzero. The point is that, although a general matrix can of course have zeros in the diagonal of its Smith normal form, the matrix A-XI has a very special shape which removes this possibility.
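
One can also see directly why det(A-XI) is nonzero: expanded as a polynomial in X it is always, up to sign, monic of degree n, so it cannot be the zero polynomial even when A itself is singular. A small sketch of the 2×2 case (plain Python, our own helper name):

```python
def char_det_2x2(A):
    # det(A - XI) = (a - X)(d - X) - bc = X^2 - (a + d)X + (ad - bc),
    # returned as the coefficient list [c_0, c_1, c_2]
    a, b = A[0]
    c, d = A[1]
    return [a * d - b * c, -(a + d), 1]

A = [[0, 1], [0, 0]]    # singular: det(A) = 0
p = char_det_2x2(A)
assert p == [0, 0, 1]   # det(A - XI) = X^2, nonzero as a polynomial
assert p[-1] == 1       # the leading coefficient is 1 regardless of A
```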

**Question 15**: If I compute the Smith normal form of a matrix is it unique? Say I compute the Smith normal form over the integers for a matrix and I find 1,1,10 in the diagonal and zeros everywhere else, why can't I multiply by 2 to get for example 2,2,20?

**Answer**: The Smith normal form of a matrix is unique up to multiplication of rows/columns by units. 2 is not a unit in the integers and so we cannot multiply by 2 the rows. We could however multiply some or all rows by -1, and so, for example, obtain a matrix with -1,1,-10 in the diagonal, and this would still be a Smith normal form for the same matrix (it's just that we prefer to have positive integers so that the Smith normal form is really unique.) The reason we are not allowed to multiply rows by things that are not units is simple: if we multiply by something that is not a unit, there is no way to go back to the previous form of the matrix! In the example, multiplying a row by 2 would require us to multiply a row by 1/2 to go back to the previous row, but 1/2 is not an integer so we cannot do that. If however we wanted to find the Smith normal form over the real numbers, we would be allowed to do that as well and then we could make every matrix equivalent to the identity matrix, which would not be very interesting (that is why the exercises are over the integers or the polynomial ring K[X] over a field K).
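
The unit condition can also be phrased with determinants: a row operation over ℤ amounts to left-multiplying by an integer matrix that is invertible over ℤ, and such matrices are exactly those with determinant ±1 (a unit in ℤ). A minimal illustration (plain Python, helper names are ours):

```python
def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

scale_row_by_minus1 = [[-1, 0], [0, 1]]
scale_row_by_2 = [[2, 0], [0, 1]]
assert det2(scale_row_by_minus1) in (1, -1)   # invertible over Z: allowed
assert det2(scale_row_by_2) not in (1, -1)    # 2 is not a unit in Z: not allowed
```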

**Question 16**: Why do we use the words normal/canonical to describe the Smith/rational/Jordan forms?

**Answer**: The main idea of the use of the words "normal/canonical" in mathematics has to do with answering a question in a way that is easy, standard or unique. So the question we are faced is: given a set of matrices which are equivalent or similar, which matrix should we choose to represent this class? The standard answer for the equivalence class is the Smith normal form and the standard answer for the similarity class is the rational canonical form or the Jordan canonical form. The idea is that these matrices are in a sense unique (although one may multiply rows of the Smith normal form by units, or one may rearrange the blocks in the Jordan canonical form, this doesn't change the matrix in a serious way), and they are also very easy to work with. Hence they are the standard choice in this situation.

**Question 17**: In some places in the notes, for example Theorem 4.5, a 1:1 correspondence is mentioned. Is this the same as a bijection?

**Answer**: Yes, the two notions are exactly the same. Sometimes we use the term 1:1 correspondence when we want to make clear that there is some "structural" way to connect the two sets, as a bijection between two sets only says that the sets have the same number of elements, which is not necessarily exciting.

**Question 18**: In the chapter about modules most definitions are defined for left modules, with the remark that right and two-sided modules are defined similarly. In the definition of a submodule N you have to show that rn is in N, does that show that it is a left submodule? In other words, for a two sided submodule, do you need to show that both rn and nr is in N?

**Answer**: Whenever the word "submodule" is used, it is used with regard to a pre-existing module M of some sort (left or right). Therefore, if M is a left module, then a submodule of M is a subgroup N such that rn is in N for all r in R and all n in N (which is a left module in its own right, and we can call it a *left submodule* of M). If M is a right module, then a submodule of M is a subgroup N such that nr is in N for all r in R and all n in N (which would now be a right module, and we can call it a *right submodule* of M).

Note that in this course there are no two-sided modules. What happens is that a left R-module is the same as a right R^{op}-module, and in the case where R is commutative, the two module structures coincide. In this case we usually say module without mentioning left and right.

Of course, one could have a set M which is both a left R-module and a right R-module for different operations. The prime example of this is the ring R itself, which is both a left and a right R-module, but the two structures are different (unless R is commutative). In this case, if someone says "a submodule of M", this is not well-defined; one should explicitly state which module structure is considered. For example, one could say "a submodule of M when M is viewed as a left R-module" or equivalently "a left submodule of M", and similarly if one wants to consider M as a right R-module.