Determinants, part II
Math 130 Linear Algebra

D Joyce, Fall 2012

So far we've only defined determinants of 2 × 2 and 3 × 3 matrices. The 2 × 2 determinants had 2 terms, while the 3 × 3 determinants had 6 terms. There are many ways that general n × n determinants can be defined. We'll first define a determinant function in terms of characterizing properties that we want it to have. Then we'll construct a determinant following the method given in the text.

Definition 1. A determinant function assigns to each square matrix A a scalar associated to the matrix, denoted det(A) or |A|, such that

1. The determinant of an n × n identity matrix I is 1: |I| = 1.

2. If the matrix B is identical to the matrix A except that the entries in one of the rows of B are each equal to the corresponding entries of A multiplied by the same scalar c, then |B| = c|A|.

3. If the matrices A, B, and C are identical except for the entries in one row, and for that row each entry in A is found by adding the corresponding entries in B and C, then |A| = |B| + |C|.

4. If the matrix B is the result of exchanging two rows of A, then the determinant of B is the negation of the determinant of A.

The second and third conditions together say that determinants are linear in each row; usually that's phrased by saying determinants are multilinear in their rows. The last condition says the determinant is an alternating function. These conditions are enough to characterize the determinant, but they don't show that such a determinant function exists or is unique. We'll show both existence and uniqueness, starting with uniqueness. First, we'll note a couple of properties that determinant functions have that follow from the definition.

Theorem 2. A determinant function has the following two properties.

(a). The determinant of any matrix with an entire row of 0s is 0.

(b). The determinant of any matrix with two identical rows is 0.

Proof. Property (a) follows from the second statement in the definition. If A has a whole row of 0s, take that row and c = 0 in the second statement of the definition; then B = A, so |A| = 0|A|. Therefore, |A| = 0.

Property (b) follows from the fourth statement in the definition. If you exchange the two identical rows, the result is the original matrix, but its determinant is negated. The only scalar which is its own negation is 0. Therefore, the determinant of the matrix is 0. q.e.d.
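The familiar determinant satisfies all four conditions. As a quick numerical sanity check (a hypothetical sketch, assuming numpy is available, using its built-in determinant and a randomly chosen 4 × 4 matrix):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    det = np.linalg.det

    # 1. The identity matrix has determinant 1.
    assert np.isclose(det(np.eye(4)), 1.0)

    # 2. Scaling one row by c scales the determinant by c.
    c = 2.5
    B = A.copy(); B[1] *= c
    assert np.isclose(det(B), c * det(A))

    # 3. Additivity in a single row: split row 2 of A into two pieces.
    B, C = A.copy(), A.copy()
    B[2] = rng.standard_normal(4)
    C[2] = A[2] - B[2]
    assert np.isclose(det(A), det(B) + det(C))

    # 4. Exchanging two rows negates the determinant.
    B = A.copy(); B[[0, 3]] = B[[3, 0]]
    assert np.isclose(det(B), -det(A))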

Now we can show the uniqueness of determinant functions.

Theorem 3. There is at most one determinant function.

Proof. The four properties that determinants have are enough to find the value of the determinant of any matrix. Suppose a matrix A has more than one nonzero entry in some row. Then using the sum property of multilinearity, we know |A| = |A_1| + |A_2| + ... + |A_n|, where A_j is the matrix that looks just like A except that in that row all the entries are 0 except the jth one, which is the jth entry of that row in A. That means we can reduce the question of evaluating determinants of general matrices to evaluating determinants of matrices that have at most one nonzero entry in each row.

By property (a) in the preceding theorem, if the matrix has a row of all 0s, its determinant is 0. Thus, we only need to consider matrices that have exactly one nonzero entry in each row. Using the scalar property of multilinearity, we can further assume that the nonzero entry in each row is 1. Now we're down to evaluating determinants that only have entries of 0s and 1s with exactly one 1 in each row. If two of those rows have their 1s in the same column, then by property (b) of the preceding theorem, that matrix has determinant 0.

Now the only matrices left to consider are those that only have entries of 0s and 1s with exactly one 1 in each row and column. These are called permutation matrices. Using alternation, the fourth condition, the rows can be interchanged until the 1s lie only on the main diagonal, and each interchange we make negates the value of the determinant. Finally, we're left with the identity matrix, whose determinant is 1 by the first condition in the definition. Thus, the value of the determinant of every matrix is determined by the definition. There can be only one determinant function. q.e.d.

Although that argument shows that there's at most one function with those four properties, it doesn't show that there is such a function. In other words, we haven't shown the properties are consistent. We need some way to construct a function with those properties, and we'll do that with a cofactor construction.

In the process of proving that theorem, we found out what the determinant of a permutation matrix has to be. Let's state that as a corollary since it's an important result in its own right.

Corollary 4. A permutation matrix is a square matrix that only has 0s and 1s as its entries, with exactly one 1 in each row and column. The determinant of a permutation matrix will have to be either 1 or -1, depending on whether it takes an even number or an odd number of row interchanges to convert it to the identity matrix.
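Corollary 4 translates directly into a procedure: sort the rows into the identity while counting interchanges. Here is a minimal hypothetical sketch (the name and the representation of a permutation matrix as a list of 0/1 rows are mine):

    def permutation_det(P):
        """Determinant of a permutation matrix: (-1)**(number of row
        interchanges needed to reach the identity)."""
        perm = [row.index(1) for row in P]   # column of the 1 in each row
        swaps = 0
        for i in range(len(perm)):
            while perm[i] != i:
                j = perm[i]
                perm[i], perm[j] = perm[j], perm[i]   # one row interchange
                swaps += 1
        return (-1) ** swaps

    # A 3-cycle takes two interchanges, so its determinant is +1.
    assert permutation_det([[0, 1, 0], [0, 0, 1], [1, 0, 0]]) == 1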

The cofactor construction. Although there are many ways to construct the determinant function, we'll use the cofactor construction. Another common construction uses permutations.

Example 5. Probably the best way to understand cofactor expansion is to use it in an example. In this example, we'll do cofactor expansion across the first row of a matrix. Let's start with a typical 3 × 3 matrix

        [  5   2  -1 ]
    A = [ -4   2  -3 ]
        [  6   1   4 ]

For each of the three entries across the first row, we'll cross out the row and column of that entry to get a 2 × 2 matrix called a cofactor matrix. Then we'll multiply each entry in the first row by the determinant of its cofactor matrix, and then add or subtract the resulting products to get the determinant:

    |A| = 5 |  2  -3 |  -  2 | -4  -3 |  +  (-1) | -4   2 |
            |  1   4 |       |  6   4 |          |  6   1 |

Thus, we've reduced the problem of finding one 3 × 3 determinant to finding three 2 × 2 determinants. The rest of the computation goes like this:

    |A| = 5(11) - 2(2) + (-1)(-16) = 55 - 4 + 16 = 67

Now, notice how we alternately added and subtracted the three terms to get the determinant. When you expand across the first row, the signs you use are + - +. If you instead expand across the second row, you'll use the signs - + -. The sign you use for the ijth entry is (-1)^{i+j}. This corresponds to this checkerboard pattern for the entries of the matrix

    + - +
    - + -
    + - +

which starts with + in the upper left corner.

We'll construct a determinant function using expansion across the first row of the matrix. We'll call this function we're constructing a determinant and use the standard notation for it, but we have to show that it has the four properties required of a determinant function. This cofactor construction is a recursive definition (also called an inductive definition) since the determinant of an n × n matrix is defined in terms of determinants of (n-1) × (n-1) matrices. Like all recursive definitions, we need a base case, and that will be when n = 1. We'll define the determinant of a 1 × 1 matrix to be its entry. We need to define cofactor matrices before we can get to the construction.

Definition 6. If A is an n × n matrix, i is a row number, and j is a column number, then the ijth cofactor matrix, denoted Ã_{ij}, is the matrix that results from deleting the ith row and the jth column from A. The cofactor matrix Ã_{ij} is an (n-1) × (n-1) matrix.
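Definition 6 is straightforward to express in code. The following is a minimal hypothetical sketch (0-indexed, with a matrix as a list of rows; the function name is mine), which the determinant sketch below will reuse:

    def cofactor_matrix(A, i, j):
        """Delete row i and column j from A (rows and columns 0-indexed)."""
        return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

    # Crossing out row 0 and column 0 of the matrix from Example 5:
    # cofactor_matrix([[5, 2, -1], [-4, 2, -3], [6, 1, 4]], 0, 0)
    # returns [[2, -3], [1, 4]].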

Our recursive construction for determinants expands the matrix along the first row. We define |A| as an alternating sum of the determinants of its cofactor matrices:

    |A| = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} |Ã_{1j}|
        = A_{11}|Ã_{11}| - A_{12}|Ã_{12}| + A_{13}|Ã_{13}| - ... ± A_{1n}|Ã_{1n}|
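Continuing the hypothetical sketch from above (the name det is mine; this is a direct transcription of the definition, not an efficient algorithm), the recursion and its base case look like this:

    def det(A):
        """Cofactor expansion along the first row; a 1 x 1 matrix is its entry."""
        if len(A) == 1:
            return A[0][0]
        # With 0-indexed j, (-1)**j matches the text's (-1)**(1+j).
        return sum((-1) ** j * A[0][j] * det(cofactor_matrix(A, 0, j))
                   for j in range(len(A)))

For the matrix of Example 5, det([[5, 2, -1], [-4, 2, -3], [6, 1, 4]]) returns 67, agreeing with the hand computation. Since each call expands into n smaller determinants, the work grows like n!, one reason this construction is rarely used for actual computation.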

This definition gives the same values for 2 × 2 and 3 × 3 determinants that we saw before.

The following theorem is formally proven in steps in the text. The proof here is complete except for one step, which will be illustrated in the case n = 3.

Theorem 7. The four characterizing properties of determinants listed above are satisfied by the cofactor definition of determinants.

Proof. We'll check the four properties of a determinant function one at a time.

1. The determinant of an n × n identity matrix I is 1: |I| = 1.

It's easy to check that with this construction, the determinant of the identity matrix is 1.

2. If the matrix B is identical to the matrix A except that the entries in one of the rows of B are each equal to the corresponding entries of A multiplied by the same scalar c, then |B| = c|A|.

We'll have to consider two cases, one for the first row, the other for any other row. For the first row, the entries of the first row of B are c times the entries of the first row of A, that is, B_{1j} = cA_{1j}, while the cofactor matrices B̃_{1j} are the same as the cofactor matrices Ã_{1j}. Therefore,

    |B| = Σ_{j=1}^{n} (-1)^{1+j} B_{1j} |B̃_{1j}| = Σ_{j=1}^{n} (-1)^{1+j} cA_{1j} |Ã_{1j}| = c|A|.

Suppose now that the entries of the rth row are multiplied by c, where r is some row number greater than 1. This time the first row of B is the same as the first row of A, so B_{1j} = A_{1j}, but the matrices B̃_{1j} each have a row, the (r-1)st row, which is c times the same row in Ã_{1j}. By induction we may assume their determinants are each multiplied by c, that is, |B̃_{1j}| = c|Ã_{1j}|. Thus

    |B| = Σ_{j=1}^{n} (-1)^{1+j} B_{1j} |B̃_{1j}| = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} c|Ã_{1j}| = c|A|.
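Both cases can be spot-checked numerically with the hypothetical det sketch from above (the matrix is the one from Example 5, and c = 5 is an arbitrary choice):

    A0 = [[5, 2, -1], [-4, 2, -3], [6, 1, 4]]
    for r in range(3):                 # row 0 (first-row case) and rows r > 0 alike
        B = [row[:] for row in A0]
        B[r] = [5 * x for x in B[r]]   # multiply row r by c = 5
        assert det(B) == 5 * det(A0)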

3. If the matrices A, B, and C are identical except for the entries in one row, and for that row each entry in A is found by adding the corresponding entries in B and C, then |A| = |B| + |C|.

Again, we'll have to consider two cases, one for the first row, the other for any other row.

For the first row, the entries in the first row of A are the sums of the corresponding entries in the first rows of B and C, that is, A_{1j} = B_{1j} + C_{1j}, but all the rest of the rows are the same in A, B, and C. Then the cofactor matrices for A, B, and C are the same: Ã_{1j} = B̃_{1j} = C̃_{1j}. Therefore,

    |A| = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} |Ã_{1j}|
        = Σ_{j=1}^{n} (-1)^{1+j} (B_{1j} + C_{1j}) |Ã_{1j}|
        = Σ_{j=1}^{n} (-1)^{1+j} B_{1j} |Ã_{1j}| + Σ_{j=1}^{n} (-1)^{1+j} C_{1j} |Ã_{1j}|
        = Σ_{j=1}^{n} (-1)^{1+j} B_{1j} |B̃_{1j}| + Σ_{j=1}^{n} (-1)^{1+j} C_{1j} |C̃_{1j}|
        = |B| + |C|.

Suppose now for r > 1 that the entries of the rth row of A are the sums of the corresponding entries in the rth rows of B and C, that is, A_{rj} = B_{rj} + C_{rj}, but all the rest of the rows are the same in A, B, and C. Then the entries in the first row are the same, A_{1j} = B_{1j} = C_{1j}, and the cofactor matrix Ã_{1j} will be the same as the cofactor matrices B̃_{1j} and C̃_{1j} except that the (r-1)st row of Ã_{1j} will be the sum of the (r-1)st rows of B̃_{1j} and C̃_{1j}. By induction, we may assume |Ã_{1j}| = |B̃_{1j}| + |C̃_{1j}|. Therefore,

    |A| = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} |Ã_{1j}|
        = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} (|B̃_{1j}| + |C̃_{1j}|)
        = Σ_{j=1}^{n} (-1)^{1+j} A_{1j} |B̃_{1j}| + Σ_{j=1}^{n} (-1)^{1+j} A_{1j} |C̃_{1j}|
        = Σ_{j=1}^{n} (-1)^{1+j} B_{1j} |B̃_{1j}| + Σ_{j=1}^{n} (-1)^{1+j} C_{1j} |C̃_{1j}|
        = |B| + |C|.
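This case, too, can be spot-checked with the hypothetical det sketch from above; here the second row of the Example 5 matrix is split into two rows that add up to it:

    A0 = [[5, 2, -1], [-4, 2, -3], [6, 1, 4]]
    B = [row[:] for row in A0]
    C = [row[:] for row in A0]
    B[1], C[1] = [1, 0, 2], [-5, 2, -5]   # [1,0,2] + [-5,2,-5] = [-4,2,-3]
    assert det(A0) == det(B) + det(C)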

4. If the matrix B is the result of exchanging two rows of A, then the determinant of B is the negation of the determinant of A.

One more time, we'll have to consider two cases. In one case, neither of the two rows being exchanged is the first row. In that case, by induction, since the determinants of all the cofactor matrices Ã_{1j} are negated, the determinant of A is negated.

That leaves the case when one of the two rows being exchanged is the first. We can reduce this case to the subcase when the other row is the second, since you can exchange the first and the rth rows by three exchanges: first exchange the second and the rth, then the first and the second, then the second and the rth again.

Now we're down to the subcase where we exchange the first and the second rows of A. We'll show that exchange negates the determinant. We'll expand A twice, first along the first row of A, then along the first row of each Ã_{1j}, which, of course, comes from the second row of A. This gets difficult to write formally for general n, so instead we'll take n = 3, which may be large enough to see how it works for larger n.

    |A| = | A_{11}  A_{12}  A_{13} |
          | A_{21}  A_{22}  A_{23} |
          | A_{31}  A_{32}  A_{33} |

        = A_{11} | A_{22}  A_{23} | - A_{12} | A_{21}  A_{23} | + A_{13} | A_{21}  A_{22} |
                 | A_{32}  A_{33} |          | A_{31}  A_{33} |          | A_{31}  A_{32} |

        = A_{11}(A_{22}|A_{33}| - A_{23}|A_{32}|) - A_{12}(A_{21}|A_{33}| - A_{23}|A_{31}|) + A_{13}(A_{21}|A_{32}| - A_{22}|A_{31}|)

        = (A_{11}A_{22} - A_{12}A_{21})|A_{33}| + (A_{12}A_{23} - A_{13}A_{22})|A_{31}| + (A_{13}A_{21} - A_{11}A_{23})|A_{32}|

(Here |A_{33}| denotes the determinant of the 1 × 1 matrix [A_{33}], which is just A_{33}; for larger n these factors would be determinants of smaller cofactor matrices.)

Now, to determine what |B| is, all we have to do is interchange all the 1s and 2s appearing as first subscripts in that last computation. (3s, and any larger subscripts for general n, won't change.) We'll find

    |B| = (A_{21}A_{12} - A_{22}A_{11})|A_{33}| + (A_{22}A_{13} - A_{23}A_{12})|A_{31}| + (A_{23}A_{11} - A_{21}A_{13})|A_{32}|
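This subcase can also be spot-checked with the hypothetical det sketch from above, exchanging the first and second rows of the Example 5 matrix:

    A0 = [[5, 2, -1], [-4, 2, -3], [6, 1, 4]]
    B = [A0[1], A0[0], A0[2]]   # exchange the first and second rows
    assert det(B) == -det(A0)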

But that's the negation of the last expression in the computation of |A|, so |B| = -|A|. Thus, this cofactor construction satisfies all four properties of a determinant function. q.e.d.

The real purpose of the preceding theorem is just to show the existence of a determinant function. We usually won't use the construction to actually compute determinants because there are much better ways. There are some special matrices, though, where this construction gives a quick method of evaluation, namely diagonal, upper triangular, and lower triangular matrices.

Theorem 8. The determinant of a diagonal matrix or a triangular matrix is the product of its diagonal entries.

Proof. In each of these three kinds of matrices, if you use cofactor expansion along the first row, the only term in the expansion that's not zero is the first, and that term is the product of the first entry and the determinant of its cofactor matrix, which by induction will be the product of its diagonal entries. (For diagonal and lower triangular matrices the other terms vanish because the other first-row entries are 0; for an upper triangular matrix they vanish because the other cofactor matrices are upper triangular with a 0 on the diagonal, so by induction their determinants are 0.) q.e.d.

Math 130 Home Page at http://math.clarku.edu/~djoyce/ma130/
