
Chapter 1


1 Matrices over a commutative ring

1.1 Review of basic ring theory


Let us recall that:

• A commutative ring is a ring in which multiplication is commutative;

• An integral domain is a commutative ring where $ab = 0$ implies that either $a = 0$ or $b = 0$ (or both). If $ab = 0$, but $b \neq 0$, then $a$ is called a zero divisor;

• A field is a commutative ring where every nonzero element has a multiplicative inverse (in particular, any field is an integral domain);

• Given an integral domain $D$, its field of fractions is the field of all the equivalence classes of the form $[a/b]$ where $a, b \in D$, $b \neq 0$ (often in this course we will be informal and confuse these equivalence classes with some representative of them);

• If $D$ is an integral domain, $g \in D$ is called a gcd of $a_1, \dots, a_n \in D$ if (1) $g \mid a_i$ for all $i$, and (2) whenever $g' \mid a_i$ for all $i$, then $g' \mid g$. Note that a gcd is only defined up to multiplication by units of $D$, so that technically any $n$-tuple of elements has a whole equivalence class of gcd's. In this course we will be informal and happily confuse these equivalence classes with any representative, so that for example we will write $\gcd(a, b) = 1$ to mean that any unit of $D$ is a gcd of $a, b \in D$;

• A gcd domain is an integral domain such that any pair of elements has a gcd;

• A Bézout domain is a gcd domain $D$ such that for any pair $a, b \in D$ there is a gcd $g$ satisfying the Bézout identity
$$g = xa + yb$$
for some $x, y \in D$ (a worked instance is given after this list);

• An ideal of an integral domain $D$ is a subset closed under addition and under multiplication by elements of $D$;

• An ideal $I$ is prime if $ab \in I$ implies either $a \in I$ or $b \in I$; it is maximal if $I$ and $D$ are the only ideals containing $I$; it is finitely generated if there exist generators $g_1, \dots, g_k \in I$ such that any $a \in I$ can be written as $a = \sum_j c_j g_j$ with $c_j \in D$; it is principal if it is generated by one generator;
• One can prove that a ring is a Bézout domain if and only if it is an
integral domain and any finitely generated ideal is principal;
• A principal ideal domain (PID) is an integral domain $D$ such that every ideal $I \subseteq D$ is principal; as a consequence, any PID is Bézout;
• A Euclidean domain (ED) is an integral domain $D$ such that there is a function $f : D \setminus \{0\} \to \mathbb{N}$ such that given any $a \in D$ and $0 \neq b \in D$ there exist $q, r \in D$ satisfying $a = bq + r$ and either $r = 0$ or $f(r) < f(b)$.
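For example, $\mathbb{Z}$ is a Euclidean domain with $f(a) = |a|$, hence (by Proposition 1.1 below) a PID and in particular a Bézout domain; the extended Euclidean algorithm makes the Bézout identity explicit. For instance,
$$\gcd(35, 6) = 1 = (-1) \cdot 35 + 6 \cdot 6.$$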
Proposition 1.1. Any ED is a PID.
Proof. Let $I$ be a nonzero ideal of the ED $D$, equipped with the Euclidean function $f$. Let $0 \neq b \in I$ be any minimizer of $f$ on $I \setminus \{0\}$, and suppose $a \in I$. Then $a = bq + r$, implying $r = a - bq \in I$. Hence, $r = 0$, for otherwise $f(r) < f(b)$, contradicting minimality. It follows that $I = \langle b \rangle$. (It is clear that the zero ideal is also principal.)
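Another standard illustration: for a field $F$, the polynomial ring $F[x]$ is an ED with $f(p) = \deg p$; by Proposition 1.1 it is therefore a PID, i.e., every ideal of $F[x]$ is generated by a single polynomial.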

1.2 Matrices over commutative rings


Every mathematician, or more generally anyone having attended a basic linear algebra course, is familiar with doing linear algebra and matrix theory over a field (often of characteristic zero). Although many will mostly have focused on some subfield of the complex numbers, algebraically not much changes when considering another field. When the underlying algebraic structure is weakened, though, some of the familiar concepts can fade away.
In this first chapter, we will deal with matrices over commutative rings. Commutativity allows us to keep useful functions such as the determinant, which moreover remains multiplicative. Another function that remains well defined is the adjugate matrix (indeed, it is constructed from minors of the starting matrix). A relation that still holds by construction, coming from the very definition of the determinant (which implies that the Laplace expansion holds), is that for any square matrix $M$ with elements in a commutative ring we must have $M \operatorname{adj}(M) = \operatorname{adj}(M) M = \det(M) I$.
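For instance, in the $2 \times 2$ case the identity can be checked directly: for any commutative ring $R$ and
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in R^{2 \times 2}, \qquad \operatorname{adj}(M) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},$$
a direct computation gives $M \operatorname{adj}(M) = \operatorname{adj}(M) M = (ad - bc) I = \det(M) I$.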

However, one familiar equivalence that fades away is the one between invertible and nonsingular matrices. We have that:
Proposition 1.2. A square matrix $A \in R^{n \times n}$, where $R$ is a commutative ring, is invertible (over $R$) if and only if $\det A \in R$ has an inverse (in $R$).
Proof. If $A$ is invertible, then taking determinants in $AA^{-1} = I$ yields $\det(A) \det(A^{-1}) = 1_R$, so $\det(A)$ is a unit of $R$. Conversely, if $\det(A)$ is invertible, then $(\det A)^{-1} \operatorname{adj}(A) \in R^{n \times n}$ is an inverse.
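As an illustration over $R = \mathbb{Z}$: the matrix
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$$
has $\det A = 1$, a unit of $\mathbb{Z}$, and indeed $A^{-1} = (\det A)^{-1} \operatorname{adj}(A) = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$ has integer entries. By contrast, $\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$ has determinant $2$, which is not a unit of $\mathbb{Z}$, so it is invertible over $\mathbb{Q}$ but not over $\mathbb{Z}$.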
We have to agree on what a singular square matrix is, and we will take it
to mean “a square matrix such that there is a nonzero vector in its kernel”.
The proof that we give of the next proposition is due to W. Brown.
Proposition 1.3 (McCoy). A square matrix $A \in R^{n \times n}$, where $R$ is a commutative ring, is singular if and only if $\det A$ is a zero divisor in $R$.
Proof. Suppose that there is a nonzero $w \in R^n$ such that $Aw = 0$. Without loss of generality we may assume $w_n \neq 0_R$ (else we can simply permute the columns of $A$) and partition $w = \begin{bmatrix} (w')^T & w_n \end{bmatrix}^T$. Let
$$M = \begin{pmatrix} I_{n-1} & w' \\ 0 & w_n \end{pmatrix} \;\Rightarrow\; AM = \begin{bmatrix} \star & 0 \end{bmatrix}.$$
Taking determinants, it follows that $w_n \det(A) = \det(A)\det(M) = \det(AM) = 0$, and since $w_n \neq 0$, $\det(A)$ is a zero divisor.

Conversely, suppose $\alpha \det A = 0$ for some $\alpha \neq 0$, and let $k \leq n-1$ be the largest nonnegative integer such that the minors of $A$ of order $k$ do not admit a common nonzero annihilator (with the convention that the unique minor of order $0$ is $1$, such a $k$ is well defined, and $k \leq n-1$ because $\alpha$ annihilates the only minor of order $n$). If $k = 0$ then there is $\alpha \neq 0$ in $R$ such that $\alpha A_{ij} = 0$ for all $i, j$, and hence, for example, $\alpha e_1 \in \ker A$. If $k = n-1$, then observe that the entries of $\operatorname{adj} A$ do not admit a common nonzero annihilator, and let $\alpha \neq 0$ satisfy $\alpha \det A = 0$. Then $\alpha \operatorname{adj} A \neq 0$, so there exists a column of the adjugate, say $v = \operatorname{adj}(A) e_j$, such that $\alpha v \neq 0$. Hence $A(\alpha v) = \alpha \det(A) e_j = 0$. If $0 < k < n-1$ suppose, without loss of generality up to permutations of rows and columns, that the top-left $(k+1) \times (k+1)$ submatrix of $A$, say $B$, has the property that there is $\alpha \in R$ such that $\alpha$ is a common annihilator of all minors of size $k+1$ of $A$, in particular $\alpha \det(B) = 0$, but $\alpha \operatorname{adj} B \neq 0$. Partition
$$A = \begin{pmatrix} B & X \\ Y & Z \end{pmatrix}.$$
Applying the same argument as above to $B$ yields a nonzero vector $w$ such that $\alpha w \neq 0$ and $B(\alpha w) = 0$. Furthermore, denote by $\beta_{ab}$ the minor of $B$ obtained by deleting the $a$-th row and $b$-th column. Since $w = \operatorname{adj}(B) e_j$ for some $j$, it holds for all $i = 1, \dots, n-k-1$ that
$$(Y \alpha w)_i = \alpha \sum_{\ell=1}^{k+1} Y_{i\ell} (\operatorname{adj} B)_{\ell j} = \alpha \sum_{\ell=1}^{k+1} (-1)^{j+\ell} Y_{i\ell} \beta_{j\ell} = \alpha \det C,$$
where $C$ is the matrix obtained from $B$ by replacing the $j$-th row of $B$ with the $i$-th row of $Y$. Hence, $\det C$ is, up to sign, a $(k+1) \times (k+1)$ minor of $A$, and therefore $\alpha \det C = 0$. It follows that $Y(\alpha w) = 0$ and therefore
$$A \begin{bmatrix} \alpha w \\ 0 \end{bmatrix} = 0,$$
with $\alpha w \neq 0$, so $A$ is singular.
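To illustrate the proposition, take $R = \mathbb{Z}_6$ and
$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \in \mathbb{Z}_6^{2 \times 2}.$$
Then $\det A = 6 = 0$ in $\mathbb{Z}_6$, a zero divisor, and indeed $A$ is singular: $A \begin{bmatrix} 3 & 2 \end{bmatrix}^T = \begin{bmatrix} 6 & 6 \end{bmatrix}^T = 0$ with $\begin{bmatrix} 3 & 2 \end{bmatrix}^T \neq 0$.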

Propositions 1.2 and 1.3 imply that square matrices over a general com-
mutative ring cannot be both invertible and singular, but there may exist
matrices that are neither.
Example 1.4. Let $R = \mathbb{Z}$. Then $2$ is a ($1 \times 1$) square matrix over $\mathbb{Z}$ which is neither invertible nor singular.
Example 1.5. Let $R$ be the two-dimensional real vector space $\{\alpha A + \beta B : \alpha, \beta \in \mathbb{R}\}$ where $A$ and $B$ are nonzero elements that satisfy $A + B = 1$, $A^2 = A$, $B^2 = B$ and $AB = BA$. Note that these relations imply that $C \in R$ is a zero divisor if and only if $C = \alpha A$ or $C = \beta B$ for some $\alpha, \beta \in \mathbb{R}$.
Consider
$$M = \begin{pmatrix} A & B \\ 1 & B \end{pmatrix} \in R^{2 \times 2} \;\Rightarrow\; \det M = AB - B = -B,$$
since $AB = A(1 - A) = A - A^2 = 0$. By Proposition 1.3, $M$ is singular. One can indeed check that $\ker M = \operatorname{span} \begin{bmatrix} 0 & A \end{bmatrix}^T$.
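A concrete model for this example (one possible realization, not the only one): take $R = \mathbb{R} \times \mathbb{R}$ with componentwise addition and multiplication, $A = (1, 0)$ and $B = (0, 1)$. All the relations above are immediate, and the zero divisors are exactly the elements with at least one zero component, i.e., those of the form $\alpha A$ or $\beta B$.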
All this is a little less wild over an integral domain. First, in an integral
domain the only zero divisor is 0. Thus, a matrix is singular if and only if its
determinant is 0, just as over a field. Moreover, any integral domain can be
extended into its field of fractions, and therefore most of the linear algebra
over a field can be inherited by embedding.
We will call a matrix that is invertible over an integral domain $D$ a unit (as these are effectively the units of $D^{n \times n}$), or more often a unimodular matrix (meaning a matrix whose determinant is a unit of $D$), while we use the adjective invertible to mean that it is invertible over the field of fractions of $D$.
We conclude this chapter with a purely algebraic proof of a famous the-
orem, valid over any commutative ring.

Definition 1.1. Let $R$ be a commutative ring and $A \in R^{n \times n}$. The characteristic polynomial of $A$ is $p_A(t) = \det(tI - A) \in R[t]$.

Note that $R[t]$ is a commutative ring whenever $R$ is: hence, the characteristic polynomial is well defined. To state the next theorem, we will consider the expansion $p_A(t) = t^n + \sum_{i=0}^{n-1} p_i t^i$ and thus define a matrix-valued function
$$p_A(X) : R^{n \times n} \to R^{n \times n}, \qquad X \mapsto X^n + \sum_{i=0}^{n-1} p_i X^i.$$

Theorem 1.6 (Cayley-Hamilton). Let $R$ be a commutative ring and let $A \in R^{n \times n}$ have characteristic polynomial $p_A(t)$. Then $p_A(A) = 0$.

Proof. Let $M_A(t) = \operatorname{adj}(tI - A) \in R[t]^{n \times n}$, and expand it as
$$M_A(t) = t^{n-1} I + \sum_{i=0}^{n-2} M_i t^i, \qquad M_i \in R^{n \times n}.$$
From the identity $p_A(t) I = M_A(t)(tI - A)$ we readily obtain
$$t^n I + \sum_{i=0}^{n-1} p_i I t^i = t^n M_{n-1} + \sum_{i=1}^{n-1} (M_{i-1} - M_i A) t^i - M_0 A.$$
Since this is an equality of two polynomial matrices, the terms in $t^i$ from each side must also be equal. In particular, we have
$$I = M_{n-1}, \qquad p_i I = M_{i-1} - M_i A \quad \text{for all } i = 1, \dots, n-1, \qquad p_0 I = -M_0 A.$$
Hence,
$$p_A(A) = A^n + \sum_{i=1}^{n-1} (M_{i-1} - M_i A) A^i - M_0 A = \sum_{i=0}^{n-1} \left( M_i A^{i+1} - M_i A \, A^i \right) = 0.$$
