Chapter 1
• Given an integral domain $D$, its field of fractions is the field of all the equivalence classes of the form $[a/b]$ where $a, b \in D$, $b \neq 0$ (often in this course we will be informal and confuse these equivalence classes with some representative of them);
• A gcd domain is an integral domain such that any pair of elements have a gcd;
• A Bézout domain is an integral domain in which any pair of elements $a, b$ has a gcd $g$ that can be written as
$$g = xa + yb$$
for some $x, y \in D$ (see the sketch after this list);
• An ideal $I$ is prime if $ab \in I$ implies either $a \in I$ or $b \in I$; it is maximal if $I$ and $D$ are the only ideals containing $I$; it is finitely generated if there exist generators $g_1, \dots, g_k \in I$ such that for any $a \in I$ we have $a = \sum_j c_j g_j$ with $c_j \in D$; it is principal if it is generated by one generator;
• One can prove that a ring is a Bézout domain if and only if it is an
integral domain and any finitely generated ideal is principal;
• A principal ideal domain (PID) is an integral domain $D$ such that every ideal $I \subseteq D$ is principal; as a consequence, any PID is Bézout;
• A Euclidean domain (ED) is an integral domain $D$ such that there is a function $f : D \setminus \{0\} \to \mathbb{N}$ such that given any $a \in D$ and $0 \neq b \in D$ there exist $q, r \in D$ satisfying $a = bq + r$ and either $r = 0$ or $f(r) < f(b)$.
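In the prototypical Euclidean domain $\mathbb{Z}$ (with $f(x) = |x|$), repeatedly applying the division step makes the Bézout identity $g = xa + yb$ effective. A minimal sketch (function name and example values are ours, chosen for illustration):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with g = gcd(a, b) = x*a + y*b."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r                  # Euclidean step: old_r = q*r + remainder
        old_r, r = r, old_r - q * r     # remainder strictly decreases in |.|
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
assert g == x * 240 + y * 46            # Bezout identity: g = x*a + y*b
print(g, x, y)                          # 2 -9 47
```

Running it on $a = 240$, $b = 46$ returns $g = 2 = (-9) \cdot 240 + 47 \cdot 46$.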
Proposition 1.1. Any ED is a PID.
Proof. Let $I$ be a nonzero ideal of the ED $D$, equipped with the Euclidean function $f$. Let $0 \neq b \in I$ attain the minimum of $f$ on $I \setminus \{0\}$, and suppose $a \in I$. Then $a = bq + r$, and $r = a - bq \in I$. Hence $r = 0$, for otherwise $f(r) < f(b)$, contradicting minimality. It follows that $I = \langle b \rangle$. (It is clear that the zero ideal is also principal.)
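To make the argument concrete, here is a small numeric check over $D = \mathbb{Z}$ (a sketch, with our choice of example): sampling a finite window of the ideal $I = \langle a, b \rangle$, the minimizer of $f = |\cdot|$ on $I \setminus \{0\}$ is $\gcd(a, b)$, and it divides every sampled element, exactly as the proof predicts.

```python
from math import gcd

a, b = 12, 30
# Sample a finite window of the ideal <a, b> = {x*a + y*b}.
elements = {x * a + y * b for x in range(-20, 21) for y in range(-20, 21)}
m = min(abs(e) for e in elements if e != 0)    # argument minimum of f = |.|
assert m == gcd(a, b)                          # the minimizer is gcd(a, b) = 6
assert all(e % m == 0 for e in elements)       # every sampled element lies in <m>
```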
However, one familiar equivalence that fades away is that of an invertible
and a nonsingular matrix. We have that:
Proposition 1.2. A square matrix $A \in R^{n \times n}$, where $R$ is a commutative ring, is invertible (over $R$) if and only if $\det A \in R$ has an inverse (in $R$).
Proof. If $A$ is invertible, then taking determinants in $AA^{-1} = I$ yields $\det(A)\det(A^{-1}) = 1_R$, so $\det(A)$ is a unit of $R$. Conversely, if $\det(A)$ is invertible, then $(\det A)^{-1}\operatorname{adj}(A) \in R^{n \times n}$ is an inverse.
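As a sanity check of Proposition 1.2, the following sketch (our choice of ring $R = \mathbb{Z}/10\mathbb{Z}$ and of matrix, using sympy) builds the inverse $(\det A)^{-1}\operatorname{adj}(A)$ explicitly:

```python
from sympy import Matrix, eye, mod_inverse

m = 10
A = Matrix([[3, 1], [4, 1]])                 # det A = -1, i.e. 9 (mod 10): a unit
d_inv = mod_inverse(A.det() % m, m)          # raises ValueError if det A is not a unit mod m
A_inv = (d_inv * A.adjugate()).applyfunc(lambda x: x % m)
assert (A * A_inv).applyfunc(lambda x: x % m) == eye(2)
print(A_inv)                                 # Matrix([[9, 1], [4, 7]])
```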
We have to agree on what a singular square matrix is, and we will take it
to mean “a square matrix such that there is a nonzero vector in its kernel”.
The proof that we give of the next proposition is due to W. Brown.
Proposition 1.3 (McCoy). A square matrix $A \in R^{n \times n}$, where $R$ is a commutative ring, is singular if and only if $\det A$ is a zero divisor in $R$.
Proof. Suppose that there is a nonzero $w \in R^n$ such that $Aw = 0$. Without loss of generality we may assume $w_n \neq 0_R$ (else we can simply permute the columns of $A$, and the entries of $w$ accordingly) and partition $w = \begin{bmatrix} (w')^T & w_n \end{bmatrix}^T$. Let
$$M = \begin{bmatrix} I_{n-1} & w' \\ 0 & w_n \end{bmatrix} \implies AM = \begin{bmatrix} \star & 0 \end{bmatrix},$$
since the last column of $AM$ is precisely $Aw = 0$. Taking determinants, $0 = \det(AM) = \det(A)\det(M) = \det(A)\, w_n$, and since $w_n \neq 0$ this exhibits $\det A$ as a zero divisor.
For the converse, suppose that $\alpha \det A = 0$ for some $0 \neq \alpha \in R$. If $\alpha A = 0$, then $A(\alpha e_1) = 0$ and we are done, so assume $\alpha A \neq 0$ and let $k$ be the largest integer such that $\alpha$ times some $k \times k$ minor of $A$ is nonzero; then $1 \leq k \leq n-1$, since $\alpha \det A = 0$. Up to permuting rows and columns, we may assume that this minor is the determinant of the leading $k \times k$ submatrix of $A$. Let $B$ be the leading $(k+1) \times (k+1)$ submatrix of $A$, and let $Y \in R^{(n-k-1) \times (k+1)}$ collect the last $n-k-1$ rows of the first $k+1$ columns of $A$. The $(k+1, k+1)$ entry of $\operatorname{adj}(B)$ is precisely the leading $k \times k$ minor, so $\alpha \operatorname{adj}(B) \neq 0$ and we may pick a column $w = \operatorname{adj}(B)e_j$ with $\alpha w \neq 0$; moreover, $B(\alpha w) = \alpha \det(B) e_j = 0$, because $\det B$ is a $(k+1) \times (k+1)$ minor of $A$. For $a, b \in \{1, \dots, k+1\}$, denote by $\beta_{ab}$ the determinant of the $k \times k$ matrix obtained from $B$
by deleting the $a$-th row and $b$-th column. Since $w = \operatorname{adj}(B)e_j$, it holds for all $i = 1, \dots, n-k-1$ that
$$(Y \alpha w)_i = \alpha \sum_{\ell=1}^{k+1} Y_{i\ell} (\operatorname{adj} B)_{\ell j} = \alpha \sum_{\ell=1}^{k+1} (-1)^{j+\ell} Y_{i\ell}\, \beta_{j\ell} = \alpha \det C,$$
where $C$ is the matrix obtained from $B$ by replacing the $j$th row of $B$ with the $i$th row of $Y$. Hence, $\det C$ is, up to sign, a $(k+1) \times (k+1)$ minor of $A$, and therefore $\alpha \det C = 0$. It follows that $Y \alpha w = 0$ and therefore
$$A \begin{bmatrix} \alpha w \\ 0 \end{bmatrix} = 0.$$
Since $\alpha w \neq 0$, the matrix $A$ is singular.
Propositions 1.2 and 1.3 imply that square matrices over a general com-
mutative ring cannot be both invertible and singular, but there may exist
matrices that are neither.
Example 1.4. Let $R = \mathbb{Z}$. Then $2$ is a ($1 \times 1$) square matrix over $\mathbb{Z}$ which is neither invertible nor singular.
Example 1.5. Let $R$ be the two-dimensional real vector space $\{\alpha A + \beta B : \alpha, \beta \in \mathbb{R}\}$ where $A$ and $B$ are nonzero elements that satisfy $A + B = 1$, $A^2 = A$, $B^2 = B$ and $AB = BA$. Note that these relations imply that $C \in R$ is a zero divisor if and only if $C = \alpha A$ or $C = \beta B$ for some $\alpha, \beta \in \mathbb{R}$.
Consider
$$M = \begin{bmatrix} A & B \\ 1 & B \end{bmatrix} \in R^{2 \times 2} \implies \det M = -B.$$
By Proposition 1.3, $M$ is singular. One can indeed check that $\ker M = \operatorname{span} \begin{bmatrix} 0 & A \end{bmatrix}^T$.
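This example admits a concrete model (one possible realization, of our choosing): the relations force $AB = A(1 - A) = 0$, so we may represent $A = \operatorname{diag}(1, 0)$ and $B = \operatorname{diag}(0, 1)$, making $R$ the ring of $2 \times 2$ real diagonal matrices and $M$ a $4 \times 4$ real block matrix. The following sketch verifies $\det M = -B$ and the kernel element:

```python
import numpy as np

A = np.diag([1.0, 0.0])
B = np.diag([0.0, 1.0])
I = np.eye(2)
Z = np.zeros((2, 2))

M = np.block([[A, B], [I, B]])       # M = [[A, B], [1, B]] over R
detM = A @ B - B @ I                 # determinant computed in R: AB - B = -B
assert np.allclose(detM, -B)
assert np.allclose(detM @ A, Z)      # det M is a zero divisor: (-B) * A = 0

v = np.vstack([Z, A])                # the kernel element (0, A)^T, as a block vector
assert np.allclose(M @ v, np.zeros((4, 2)))
```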
All this is a little less wild over an integral domain. First, in an integral
domain the only zero divisor is 0. Thus, a matrix is singular if and only if its
determinant is 0, just as over a field. Moreover, any integral domain can be
extended into its field of fractions, and therefore most of the linear algebra
over a field can be inherited by embedding.
We will call a matrix invertible over an integral domain $D$ a unit (as these are effectively the units of $D^{n \times n}$), or more often a unimodular matrix (meaning a matrix whose determinant is a unit of $D$), while we use the adjective invertible to mean that it is invertible over the field of fractions of $D$.
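To illustrate the distinction over $D = \mathbb{Z}$ (example matrices are ours): a determinant that is a unit of $\mathbb{Z}$ yields an inverse with integer entries, while any other nonzero determinant only yields an inverse over $\mathbb{Q}$:

```python
from sympy import Matrix

U = Matrix([[2, 1], [1, 1]])   # det U = 1: unimodular over Z
V = Matrix([[2, 0], [0, 1]])   # det V = 2: invertible over Q only
print(U.det(), U.inv())        # 1, Matrix([[1, -1], [-1, 2]])  -- integer entries
print(V.det(), V.inv())        # 2, Matrix([[1/2, 0], [0, 1]])  -- lives in Q^{2x2}
```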
We conclude this chapter with a purely algebraic proof of a famous the-
orem, valid over any commutative ring.
Note that $R[t]$ is a commutative ring whenever $R$ is: hence, the characteristic polynomial $p_A(t) = \det(tI - A) \in R[t]$ is well defined. To state the next theorem, we will consider the expansion $p_A(t) = t^n + \sum_{i=0}^{n-1} p_i t^i$ and thus define a matrix-valued function
$$p_A(X) : R^{n \times n} \to R^{n \times n}, \qquad X \mapsto X^n + \sum_{i=0}^{n-1} p_i X^i.$$
Theorem 1.6 (Cayley–Hamilton). Let $R$ be a commutative ring and $A \in R^{n \times n}$. Then $p_A(A) = 0$.
Proof. Write $\operatorname{adj}(tI - A) = \sum_{i=0}^{n-1} M_i t^i$ with $M_i \in R^{n \times n}$; this is possible because each entry of $\operatorname{adj}(tI - A)$ is a polynomial of degree at most $n - 1$ in $t$. From $\operatorname{adj}(tI - A)(tI - A) = \det(tI - A)\, I = p_A(t) I$, comparing the coefficients of each power of $t$ yields
$$I = M_{n-1}, \qquad p_i I = M_{i-1} - M_i A \ \text{ for all } i = 1, \dots, n-1, \qquad p_0 I = -M_0 A.$$
Hence,
$$p_A(A) = A^n + \left( \sum_{i=1}^{n-1} (M_{i-1} - M_i A) A^i \right) - M_0 A = \sum_{i=0}^{n-1} \left( M_i A^{i+1} - M_i A\, A^i \right) = 0.$$
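As a quick numeric sanity check of the theorem (our example matrix, using sympy), one can substitute $A$ into its own characteristic polynomial and verify that the result is the zero matrix:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2], [3, 4]])
p = A.charpoly()                # p_A(t) = t^2 - 5*t - 2
coeffs = p.all_coeffs()         # [1, -5, -2], highest degree first
n = len(coeffs) - 1
pA = zeros(2, 2)
for k, c in enumerate(coeffs):
    pA += c * A**(n - k)        # accumulate c_k * A^(n-k), with A^0 = I
assert pA == zeros(2, 2)        # Cayley-Hamilton: p_A(A) = 0
```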