• Properties of linearly dependent and linearly independent matrix columns. Linear dependence and independence of matrix rows

    Suppose that one of the rows of the matrix A, say the row e_s, is linearly expressed through the remaining rows:

    e_s = λ_1 e_1 + … + λ_(s−1) e_(s−1) + λ_(s+1) e_(s+1) + … + λ_m e_m,   (3.3.1)

    where λ_1, …, λ_m are some numbers (some of these numbers, or even all of them, may be equal to zero). This means that the following equalities hold between the elements of the rows:

    a_sj = λ_1 a_1j + … + λ_(s−1) a_(s−1)j + λ_(s+1) a_(s+1)j + … + λ_m a_mj,   j = 1, 2, …, n.

    From (3.3.1) it follows that

    λ_1 e_1 + … + λ_(s−1) e_(s−1) + (−1) e_s + λ_(s+1) e_(s+1) + … + λ_m e_m = o,   (3.3.2)

    where o is the zero row.

    Definition. The rows of matrix A are linearly dependent if there are numbers λ_1, λ_2, …, λ_m, not all equal to zero at the same time, such that

    λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = o.   (3.3.3)

    If equality (3.3.3) holds only when λ_1 = λ_2 = … = λ_m = 0, then the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed through the others, then the rows are linearly dependent.

    It is easy to see the converse: if the rows are linearly dependent, then one of the rows is a linear combination of the other rows.

    Let, for example, λ_s ≠ 0 in (3.3.3); then

    e_s = −(λ_1/λ_s) e_1 − … − (λ_(s−1)/λ_s) e_(s−1) − (λ_(s+1)/λ_s) e_(s+1) − … − (λ_m/λ_s) e_m,

    i.e. the row e_s is a linear combination of the remaining rows.
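
    As a small numerical illustration of relation (3.3.1) (a NumPy sketch; the matrix and the coefficient values are invented for the example), linear dependence of the rows shows up as rank(A) < m, and the coefficients can be recovered by a least-squares solve:

```python
import numpy as np

# A hypothetical 3 x 4 matrix whose third row is e_3 = 2*e_1 - e_2.
A = np.array([[ 1.,  0., 2., 1.],
              [ 3.,  1., 0., 2.],
              [-1., -1., 4., 0.]])

# The rows are linearly dependent exactly when rank(A) < number of rows.
print(np.linalg.matrix_rank(A) < A.shape[0])       # True

# Recover the coefficients of e_3 = l1*e_1 + l2*e_2 by a least-squares solve.
coeffs, *_ = np.linalg.lstsq(A[:2].T, A[2], rcond=None)
print(np.round(coeffs, 10))                        # [ 2. -1.]
```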

    Definition. Let a certain minor M of order r be selected in matrix A, and let a minor M' of order (r + 1) of the same matrix entirely contain the minor M. We say in this case that the minor M' borders the minor M (or is a bordering minor for M).

    Now we will prove an important lemma.

    Lemma on bordering minors. If a minor M of order r of the matrix A = (a_ij) is different from zero, and all minors bordering it are equal to zero, then any row (column) of matrix A is a linear combination of the rows (columns) that make up M.

    Proof. Without loss of generality we will assume that the nonzero minor M of order r is located in the upper left corner of the matrix A = (a_ij), i.e. it is composed of the first r rows and the first r columns:

    M = | a_11 a_12 … a_1r |
        | a_21 a_22 … a_2r |
        |  …    …  …   …   |
        | a_r1 a_r2 … a_rr | ≠ 0.

    For the first r rows of matrix A the statement of the lemma is obvious: it suffices to include in the linear combination the row itself with a coefficient equal to one, and all the other rows with coefficients equal to zero.

    Let us now prove that the remaining rows of matrix A are linearly expressed through the first r rows. To do this we construct a minor of order (r + 1) by adding to the minor M the k-th row (k > r) and the l-th column (1 ≤ l ≤ n) of the matrix A:

    D = | a_11 … a_1r  a_1l |
        |  …   …   …    …   |
        | a_r1 … a_rr  a_rl |
        | a_k1 … a_kr  a_kl |.

    The resulting minor D is equal to zero for all k and l. If l ≤ r, it is equal to zero as a determinant containing two identical columns. If l > r, the resulting minor is a bordering minor for M and therefore is equal to zero by the conditions of the lemma.

    Let us expand the minor D along the elements of the last (l-th) column:

    a_1l A_1 + a_2l A_2 + … + a_rl A_r + a_kl A_(r+1) = 0,   (3.3.4)

    where A_1, …, A_r, A_(r+1) are the algebraic complements of the elements of the last column. The algebraic complement A_(r+1) of the element a_kl is the minor M of the matrix A, therefore A_(r+1) = M ≠ 0. Dividing (3.3.4) by M and expressing a_kl through the remaining terms, we obtain

    a_kl = c_1 a_1l + c_2 a_2l + … + c_r a_rl,   (3.3.5)

    where c_j = −A_j / M, j = 1, 2, …, r; the coefficients c_j do not depend on l.

    Setting l = 1, 2, …, n, we get

    e_k = c_1 e_1 + c_2 e_2 + … + c_r e_r.   (3.3.6)

    Expression (3.3.6) means that the k-th row of matrix A is linearly expressed through the first r rows.

    Since transposition of a matrix does not change the values of its minors (by the corresponding property of determinants), everything proved above is also true for columns. The lemma is proved.

    Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is nonzero, and all minors bordering it are equal to zero.

    Corollary II. A determinant of order n is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of rows (columns) for the determinant to equal zero was proved earlier as a property of determinants.

    Let us prove the necessity. Suppose a square matrix of order n is given whose only minor of order n (the determinant of the matrix itself) is equal to zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.
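
    Corollary II is easy to check numerically; a minimal NumPy sketch with invented matrices:

```python
import numpy as np

# The third row is the sum of the first two, so the rows are linearly dependent.
singular = np.array([[1., 2., 3.],
                     [0., 1., 4.],
                     [1., 3., 7.]])
print(np.isclose(np.linalg.det(singular), 0.0))   # True: dependent rows give det = 0

# The rows of the identity matrix are independent, so the determinant is nonzero.
regular = np.eye(3)
print(np.isclose(np.linalg.det(regular), 0.0))    # False: det = 1
```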

    Let us prove another theorem about the rank of the matrix.

    Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

    Proof. Let the rank of the matrix A = (a_ij) be equal to r. Then its r basis rows are linearly independent, otherwise the basis minor would be equal to zero. On the other hand, any r + 1 or more rows are linearly dependent: assuming the contrary, by Corollary II of the previous lemma we could find a nonzero minor of order greater than r, which contradicts the fact that the maximum order of nonzero minors equals r. Everything proved for rows is also true for columns. The theorem is proved.
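
    The equality of the maximum numbers of independent rows and columns can be observed numerically; a short NumPy sketch (the rectangular matrix below is made up for the example):

```python
import numpy as np

# A 3 x 5 matrix whose third row is the sum of the first two.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 2., 1.],
              [1., 3., 1., 3., 4.]])

rank_rows = np.linalg.matrix_rank(A)     # maximum number of independent rows
rank_cols = np.linalg.matrix_rank(A.T)   # maximum number of independent columns
print(rank_rows, rank_cols)              # 2 2 -- the two numbers coincide
```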

    In conclusion, we will outline another method for finding the rank of a matrix. The rank of a matrix can be determined by finding a minor of maximum order that is different from zero.

    At first glance, this requires the calculation of a finite, but perhaps very large number of minors of this matrix.

    The following theorem, however, allows this search to be substantially simplified.

    Theorem. If a minor M of order r of the matrix A is nonzero and all minors bordering it are equal to zero, then the rank of the matrix is equal to r.

    Proof. It suffices to show that, under the conditions of the theorem, any subsystem of S > r rows of the matrix is linearly dependent (from this it will follow that r is the maximum number of linearly independent rows of the matrix, and hence that all its minors of order greater than r are equal to zero).

    Assume the contrary: let the rows l_1, l_2, …, l_S be linearly independent. By the lemma on bordering minors, each of them is linearly expressed through the rows containing the minor M (denote these rows by e_1, e_2, …, e_r), which, since M ≠ 0, are themselves linearly independent:

    l_t = c_t1 e_1 + c_t2 e_2 + … + c_tr e_r,   t = 1, 2, …, S.   (3.3.7)

    Consider the matrix K composed of the coefficients of the linear expressions (3.3.7):

    K = | c_11 c_12 … c_1r |
        | c_21 c_22 … c_2r |
        |  …    …  …   …   |
        | c_S1 c_S2 … c_Sr |.

    Denote the rows of this matrix by k_1, k_2, …, k_S. They are linearly dependent, since the rank of the matrix K, i.e. the maximum number of its linearly independent rows, does not exceed r < S. Therefore there exist numbers λ_1, λ_2, …, λ_S, not all equal to zero, such that

    λ_1 k_1 + λ_2 k_2 + … + λ_S k_S = o.

    Passing to the equality of components, we obtain

    λ_1 c_1j + λ_2 c_2j + … + λ_S c_Sj = 0,   j = 1, 2, …, r.   (3.3.8)

    Now consider the linear combination λ_1 l_1 + λ_2 l_2 + … + λ_S l_S. Substituting the expressions (3.3.7) and regrouping by the rows e_1, …, e_r, we obtain, in view of (3.3.8),

    λ_1 l_1 + … + λ_S l_S = (λ_1 c_11 + … + λ_S c_S1) e_1 + … + (λ_1 c_1r + … + λ_S c_Sr) e_r = o,

    i.e. a nontrivial linear combination of the rows l_1, …, l_S is equal to the zero row, which contradicts their assumed linear independence. The theorem is proved.

    Consider the columns A_1, A_2, …, A_n of a matrix, all of the same dimension. A linear combination of matrix columns is the column matrix λ_1 A_1 + λ_2 A_2 + … + λ_n A_n, where λ_1, …, λ_n are some real or complex numbers called the coefficients of the linear combination. If in a linear combination we take all the coefficients equal to zero, then the linear combination is equal to the zero column matrix.

    The columns of the matrix are called linearly independent if their linear combination is equal to zero only when all the coefficients of the linear combination are equal to zero. The columns of the matrix are called linearly dependent if there is a set of numbers, among which at least one is nonzero, such that the linear combination of the columns with these coefficients is equal to zero.

    The definitions of linear dependence and linear independence of matrix rows are given similarly. In what follows, all theorems are formulated for the columns of a matrix.

    Theorem 5

    If there is a zero among the matrix columns, then the matrix columns are linearly dependent.

    Proof. Consider the linear combination in which the coefficient of every nonzero column is zero and the coefficient of the zero column is one. It is equal to zero, and among the coefficients of this linear combination there is a nonzero one. Therefore, the columns of the matrix are linearly dependent.

    Theorem 6

    If some of the columns of a matrix are linearly dependent, then all the columns of the matrix are linearly dependent.

    Proof. For definiteness, assume that the first j columns of the matrix, A_1, A_2, …, A_j, are linearly dependent. Then, by the definition of linear dependence, there is a set of numbers λ_1, …, λ_j, among which at least one is nonzero, such that the linear combination of these columns with these coefficients is equal to zero:

    λ_1 A_1 + λ_2 A_2 + … + λ_j A_j = 0.

    Let us form a linear combination of all the columns of the matrix, including the remaining columns with zero coefficients:

    λ_1 A_1 + … + λ_j A_j + 0·A_(j+1) + … + 0·A_n.

    But this combination is equal to zero, and among its coefficients at least one is nonzero. Therefore, all columns of the matrix are linearly dependent.

    Corollary. Any part of a system of linearly independent matrix columns is itself linearly independent. (This statement is easily proved by contradiction.)

    Theorem 7

    In order for the columns of a matrix to be linearly dependent, it is necessary and sufficient that at least one column of the matrix be a linear combination of the others.

    Proof.

    Necessity. Let the columns of the matrix be linearly dependent, that is, there is a set of numbers λ_1, λ_2, …, λ_n, among which at least one is nonzero, such that the linear combination of the columns with these coefficients is equal to zero:

    λ_1 A_1 + λ_2 A_2 + … + λ_n A_n = 0.

    Assume for definiteness that λ_1 ≠ 0. Then

    A_1 = −(λ_2/λ_1) A_2 − … − (λ_n/λ_1) A_n,

    that is, the first column is a linear combination of the rest.

    Sufficiency. Let at least one column of the matrix be a linear combination of the others, for example

    A_1 = c_2 A_2 + … + c_n A_n,

    where c_2, …, c_n are some numbers.

    Then

    (−1) A_1 + c_2 A_2 + … + c_n A_n = 0,

    that is, a linear combination of the columns is equal to zero, and among the coefficients of this linear combination at least one (the coefficient −1 of A_1) is different from zero.

    Let the rank of the matrix be r. Any nonzero minor of order r is called a basis minor. The rows and columns at whose intersection a basis minor stands are called basis rows and basis columns.

    A system of vectors of the same order is called linearly dependent if a zero vector can be obtained from these vectors through an appropriate linear combination. (It is not allowed that all coefficients of a linear combination be equal to zero, since this would be trivial.) Otherwise, the vectors are called linearly independent. For example, the following three vectors:

    are linearly dependent, as is easy to check. In the case of a linear dependence, at least one of the vectors can always be expressed as a linear combination of the other vectors; in our example this is easy to verify by the corresponding calculations. From this follows the definition: a vector is linearly independent of other vectors if it cannot be represented as a linear combination of these vectors.

    Let us consider a system of vectors without specifying in advance whether it is linearly dependent or linearly independent. For each system of column vectors one can identify the maximum possible number of linearly independent vectors. This number, denoted by the letter r, is the rank of this system of vectors. Since every matrix can be viewed as a system of column vectors, the rank of a matrix is defined as the maximum number of linearly independent column vectors it contains. Row vectors can also be used to determine the rank of a matrix; both approaches give the same result for the same matrix, and the rank cannot exceed the smaller of the number of rows m and the number of columns n. The rank of a square matrix of order n ranges from 0 to n. If all vectors are zero, then the rank of such a matrix is zero; if all vectors are linearly independent of each other, then the rank of the matrix is equal to their number. If we form a matrix from the three vectors above, then the rank of this matrix is 2: since one of the vectors can be obtained from the other two by a linear combination, the rank is less than 3, and since any two of these vectors are linearly independent, the rank is equal to 2.

    A square matrix is called singular (degenerate) if its column vectors or row vectors are linearly dependent. The determinant of such a matrix is equal to zero and its inverse matrix does not exist, as noted above; these statements are equivalent to each other. Accordingly, a square matrix is called non-singular (non-degenerate) if its column vectors (equivalently, its row vectors) are linearly independent. The determinant of such a matrix is not equal to zero and its inverse matrix exists (compare with p. 43).

    The rank of the matrix has a quite obvious geometric interpretation. If the rank of the matrix is equal to n, the n-dimensional space is said to be spanned by its vectors. If the rank is r < n, then the vectors lie in an r-dimensional subspace that contains all of them. So, the rank of the matrix corresponds to the minimum required dimension of the space "which contains all the vectors"; an r-dimensional subspace of an n-dimensional space is called an r-dimensional hyperplane. The rank of the matrix corresponds to the smallest dimension of the hyperplane in which all the vectors still lie.

    Orthogonality. Two vectors a and b are said to be mutually orthogonal if their scalar product is equal to zero. If for a matrix A the equality AᵀA = D holds, where D is a diagonal matrix, then the column vectors of the matrix A are pairwise mutually orthogonal. If these column vectors are in addition normalized, that is, reduced to a length equal to 1, then AᵀA = E holds and we speak of orthonormal vectors. If B is a square matrix and the equality BᵀB = BBᵀ = E holds, then the matrix B is called orthogonal. In this case it follows from formula (1.22) that an orthogonal matrix is always non-singular. Hence, the orthogonality of a matrix implies the linear independence of its row vectors and column vectors. The converse statement is not true: the linear independence of a system of vectors does not imply the pairwise orthogonality of these vectors.
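
    A brief NumPy sketch of these notions (the matrices are chosen only for illustration): AᵀA is diagonal for pairwise orthogonal columns, becomes the identity after normalization, and BᵀB = E characterizes an orthogonal matrix.

```python
import numpy as np

# Columns of A are pairwise orthogonal but not normalized: A^T A is diagonal.
A = np.array([[1.,  1.],
              [1., -1.],
              [2.,  0.]])
print(np.round(A.T @ A, 10))    # [[6. 0.], [0. 2.]] -- a diagonal matrix D

# Normalizing each column to length 1 gives orthonormal columns: Q^T Q = E.
Q = A / np.linalg.norm(A, axis=0)
print(np.round(Q.T @ Q, 10))    # the identity matrix

# A square orthogonal matrix B: B^T B = B B^T = E, so B is non-singular.
B = np.array([[0., -1.],
              [1.,  0.]])       # rotation by 90 degrees
print(np.allclose(B.T @ B, np.eye(2)), np.linalg.det(B))   # True 1.0
```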

    Let k rows and k columns (k ≤ min(m, n)) be chosen arbitrarily in a matrix A of size m × n. The elements located at the intersection of the selected rows and columns form a square matrix of order k, whose determinant is called a minor M_k of order k, or a k-th order minor of the matrix A.

    The rank of a matrix is the maximum order r of the nonzero minors of the matrix A, and any nonzero minor of order r is a basis minor. Notation: rank A = r. If rank A = rank B and the sizes of the matrices A and B are the same, then the matrices A and B are called equivalent. Notation: A ~ B.

    The main methods for calculating the rank of a matrix are the method of bordering minors and the method of elementary transformations.

    Bordering minor method

    The essence of the bordering minors method is as follows. Suppose a nonzero minor of order k has already been found in the matrix. Then we consider only those minors of order k + 1 that contain (i.e. border) this nonzero minor of order k. If all of them are equal to zero, the rank of the matrix is equal to k; otherwise, among the bordering minors of order (k + 1) there is a nonzero one, and the whole procedure is repeated.
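
    The procedure can be sketched in code. Below is a minimal Python rendering of the bordering-minors idea, written from the description above (the function names are mine, and a naive cofactor determinant is used for the small minors); it is an illustration, not an optimized routine.

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for small minors)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(sub)
    return total

def minor(A, rows, cols):
    """Submatrix of A formed by the given row and column indices."""
    return [[A[i][j] for j in cols] for i in rows]

def rank_by_bordering_minors(A):
    m, n = len(A), len(A[0])
    # Step 1: find a nonzero minor of order 1, i.e. a nonzero element.
    start = next(((i, j) for i in range(m) for j in range(n) if A[i][j] != 0), None)
    if start is None:
        return 0                       # the zero matrix has rank 0
    rows, cols = (start[0],), (start[1],)
    k = 1
    while k < min(m, n):
        # Step 2: look only at minors of order k+1 that border the current nonzero minor.
        grown = False
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                if det(minor(A, rows + (i,), cols + (j,))) != 0:
                    rows, cols, k = rows + (i,), cols + (j,), k + 1
                    grown = True
                    break
            if grown:
                break
        if not grown:
            return k                   # every bordering minor of order k+1 is zero
    return k

A = [[1, 2, 3],
     [2, 4, 6],
     [0, 1, 1]]
print(rank_by_bordering_minors(A))     # 2 (the second row is twice the first)
```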

    Linear independence of rows (columns) of a matrix

    The concept of matrix rank is closely related to the concept of linear independence of its rows (columns).

    The rows e_1, e_2, …, e_k of matrix A are called linearly dependent if there are numbers λ_1, λ_2, …, λ_k, not all equal to zero, such that the equality

    λ_1 e_1 + λ_2 e_2 + … + λ_k e_k = 0

    is true. The rows of matrix A are called linearly independent if this equality is possible only in the case when all the numbers λ_1 = λ_2 = … = λ_k = 0.

    The linear dependence and independence of the columns of matrix A are determined in a similar way.

    If some row (a_l) of matrix A (where (a_l) = (a_l1, a_l2, …, a_ln)) can be represented as (a_l) = λ_1 (a_1) + λ_2 (a_2) + … + λ_k (a_k) for certain other rows and numbers λ_1, …, λ_k, then the row (a_l) is said to be a linear combination of these rows.

    The concept of a linear combination of columns is defined in a similar way. The following theorem about the basis minor is valid.

    The basis rows and basis columns are linearly independent. Any row (or column) of matrix A is a linear combination of the basis rows (columns), i.e. of the rows (columns) passing through the basis minor. Thus the rank of matrix A, rank A = r, is equal to the maximum number of linearly independent rows (columns) of matrix A.

    That is, the rank of a matrix is the order of the largest square submatrix of the given matrix whose determinant is nonzero. If the original matrix is not square, or if it is square but its determinant is zero, the square submatrices of lower order are formed from arbitrarily chosen rows and columns.

    Besides determinants, the rank of a matrix can be computed as the number of its linearly independent rows or columns. The maximum number of linearly independent rows always coincides with the maximum number of linearly independent columns and equals the rank; in particular, the rank cannot exceed the smaller of the numbers of rows and columns. For example, a matrix with 3 rows and 5 columns has rank at most 3.

    Examples of finding the rank of a matrix

    Using the method of bordering minors, find the rank of the matrix

    Solution. In the matrix there is a nonzero second-order minor M_2; the third-order minor M_3 bordering M_2 is also nonzero. However, both fourth-order minors bordering M_3 are equal to zero. Therefore, the rank of matrix A is 3, and the basis minor is, for example, the minor M_3.

    The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, one can bring the matrix to a form in which all its elements except a_11, a_22, …, a_rr (r ≤ min(m, n)) are equal to zero, which obviously means that rank A = r. Note that if a matrix of order n has the form of an upper triangular matrix, that is, a matrix in which all elements below the main diagonal are equal to zero, then its determinant is equal to the product of the elements on the main diagonal. This property can be used when calculating the rank of a matrix by the method of elementary transformations: one reduces the matrix to triangular form and then, examining the corresponding determinant, finds that the rank of the matrix is equal to the number of nonzero elements on the main diagonal.
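
    The idea can be sketched as a short Gaussian-elimination routine (my own minimal implementation of the elementary-transformation method described above; in practice a library routine such as numpy.linalg.matrix_rank would be used):

```python
def rank_by_elimination(A, eps=1e-12):
    """Rank via elementary row transformations (Gaussian elimination)."""
    M = [list(map(float, row)) for row in A]     # work on a copy
    m, n = len(M), len(M[0])
    rank, col = 0, 0
    while rank < m and col < n:
        # Find a row with a nonzero element in the current column (a pivot).
        pivot = next((r for r in range(rank, m) if abs(M[r][col]) > eps), None)
        if pivot is None:
            col += 1                             # the column is already zeroed out
            continue
        M[rank], M[pivot] = M[pivot], M[rank]    # interchange rows
        # Add a multiple of the pivot row to zero out the column below it.
        for r in range(rank + 1, m):
            factor = M[r][col] / M[rank][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
        rank += 1
        col += 1
    return rank                                  # number of nonzero rows of the triangular form

A = [[2, 1, 3, 0],
     [4, 2, 6, 0],
     [1, 0, 1, 1]]
print(rank_by_elimination(A))                    # 2 (the second row is twice the first)
```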

    Using the method of elementary transformations, find the rank of the matrix

    Solution. Let us denote the i-th row of matrix A by the symbol α_i. At the first stage we perform the elementary transformations

    At the second stage, we perform the transformations

    Matrix – a rectangular table of arbitrary numbers arranged in a certain order, of size m×n (m rows by n columns). The elements of the matrix are denoted a_ij, where i is the row number and j is the column number.

    Addition (subtraction) of matrices is defined only for matrices of the same size. The sum (difference) of matrices is the matrix whose elements are, respectively, the sums (differences) of the elements of the original matrices.

    Multiplication (division) by a number – multiplication (division) of each element of the matrix by this number.

    Matrix multiplication is defined only for matrices in which the number of columns of the first is equal to the number of rows of the second.

    The product of matrices is the matrix whose elements are given by the formula: c_ij = a_i1 b_1j + a_i2 b_2j + … + a_in b_nj.

    Matrix transposition – the matrix B whose rows (columns) are the columns (rows) of the original matrix A. Denoted Aᵀ.

    Inverse matrix

    Matrix equations – equations of the form A·X = B involving a product of matrices; the solution of such an equation is the matrix X, which is found using the rules: X = A⁻¹·B for the equation A·X = B, and X = B·A⁻¹ for the equation X·A = B.
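
    A brief NumPy illustration of the operations listed above and of solving the matrix equation A·X = B (the concrete matrices are invented; numpy.linalg.solve is used rather than forming A⁻¹ explicitly):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([[1., 0.],
              [4., 5.]])

print(A + B)        # element-wise sum (defined only for matrices of the same size)
print(3 * A)        # multiplication of every element by a number
print(A @ B)        # matrix product: c_ij = sum over k of a_ik * b_kj
print(A.T)          # transpose: rows and columns interchanged

# The matrix equation A*X = B is solved as X = A^(-1)*B.
X = np.linalg.solve(A, B)
print(np.allclose(A @ X, B))   # True
```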

    1. Linear dependence and independence of matrix columns (rows). Linear dependence criterion, sufficient conditions for linear dependence of matrix columns (rows).

    A system of columns (rows) A_1, …, A_n is called linearly independent if its linear combination a_1 A_1 + … + a_n A_n equals zero only in the trivial case (the equality holds only for a_1 = … = a_n = 0), where A_1, …, A_n are the columns (rows) and a_1, …, a_n are the expansion coefficients.

    Criterion: in order for a system of vectors to be linearly dependent, it is necessary and sufficient that at least one of the vectors of the system is linearly expressed through the remaining vectors of the system.

    Sufficient condition: if the system of columns (rows) contains the zero column (row), or if some part of the system is linearly dependent, then the whole system is linearly dependent (see Theorems 5 and 6 above).

    1. Matrix determinants and their properties

    Matrix determinant (determinant) – a number that, for a square matrix A, can be calculated from the elements of the matrix by the formula

    det A = a_11 M_11 − a_12 M_12 + … + (−1)^(1+n) a_1n M_1n,

    where M_1j is the complementary minor of the element a_1j (the determinant obtained by deleting the first row and the j-th column).
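
    A direct, unoptimized rendering of this cofactor expansion in Python (a sketch suitable only for small matrices; in practice one would call numpy.linalg.det):

```python
def determinant(m):
    """det A = a_11*M_11 - a_12*M_12 + ... (expansion along the first row),
    where M_1j is the complementary minor of the element a_1j."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Complementary minor M_1j: delete the first row and the j-th column.
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * determinant(sub)
    return total

print(determinant([[2, 1], [1, 3]]))                   # 5
print(determinant([[1, 2, 3], [0, 1, 4], [5, 6, 0]]))  # 1
```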

    Properties:

    1. Inverse matrix, algorithm for calculating the inverse matrix.

    The inverse matrix is a square matrix X that, together with the square matrix A of the same order, satisfies the condition A·X = X·A = E, where E is the identity matrix of the same order as A. Any square matrix with a determinant not equal to zero has exactly one inverse matrix. It can be found by the method of elementary transformations or by the formula: A⁻¹ = (1/det A)·Cᵀ, where C is the matrix of algebraic complements (cofactors) of the elements of A.
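
    A sketch of the cofactor formula for the inverse in NumPy (the helper name is mine and the code is purely illustrative; numpy.linalg.inv is the practical choice):

```python
import numpy as np

def inverse_via_cofactors(A):
    """A^(-1) = (1 / det A) * C^T, where C is the matrix of algebraic complements."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("the determinant is zero, the inverse matrix does not exist")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Algebraic complement: signed determinant of the complementary minor.
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T / d

A = np.array([[2., 1.],
              [1., 3.]])
print(np.allclose(inverse_via_cofactors(A) @ A, np.eye(2)))   # True
```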

      The concept of matrix rank. The theorem on the basis minor. Criterion for the determinant of a matrix to be equal to zero. Elementary transformations of matrices. Rank calculations using the method of elementary transformations. Calculation of the inverse matrix using the method of elementary transformations.

    Matrix rank – order of basis minor (rg A)

    Basis minor – a nonzero minor of order r such that all minors of order r + 1 and higher are equal to zero or do not exist.

    The basis minor theorem - In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.

    Proof. Let the basis minor of a matrix A of size m×n be located in its first r rows and first r columns. Consider the determinant D obtained by adjoining to the basis minor the corresponding elements of the s-th row and the k-th column of the matrix A.

    Note that for any s and k this determinant is equal to zero. If s ≤ r or k ≤ r, then the determinant D contains two identical rows or two identical columns. If s > r and k > r, then the determinant D is equal to zero as well, since it is a minor of order (r + 1) of the matrix A. Expanding the determinant D along the last row, we obtain

    a_s1 A_1 + a_s2 A_2 + … + a_sr A_r + a_sk A_sk = 0,

    where A_1, …, A_r, A_sk are the algebraic complements of the elements of the last row. Note that A_sk ≠ 0, since it is the basis minor. Therefore

    a_sk = λ_1 a_s1 + λ_2 a_s2 + … + λ_r a_sr,   where λ_j = −A_j / A_sk, j = 1, …, r,

    and the coefficients λ_j do not depend on s. Writing the last equality for s = 1, 2, …, m, we obtain that the k-th column (for any k) is a linear combination of the columns of the basis minor, which is what we needed to prove.
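
    The theorem can also be illustrated numerically: once a basis minor is located, the coefficients that express any other column through the basis columns are obtained by solving a small linear system. A NumPy sketch with an invented rank-2 matrix:

```python
import numpy as np

# A 3 x 4 matrix of rank 2; a basis minor sits in rows 1-2 and columns 1-2.
A = np.array([[1., 2., 3., 4.],
              [0., 1., 1., 2.],
              [1., 3., 4., 6.]])
basis_rows, basis_cols = [0, 1], [0, 1]
M = A[np.ix_(basis_rows, basis_cols)]
assert not np.isclose(np.linalg.det(M), 0.0)        # the chosen minor is nonzero

# Express column k through the basis columns: solve M * c = (column k restricted to the basis rows).
k = 2
c = np.linalg.solve(M, A[basis_rows, k])
print(c)                                            # coefficients of the linear combination
print(np.allclose(A[:, k], A[:, basis_cols] @ c))   # True: the whole column matches
```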

    Criterion detA=0– A determinant is equal to zero if and only if its rows (columns) are linearly dependent.

    Elementary transformations:

    1) multiplying a row by a number other than zero;

    2) adding the elements of another row to the elements of one row;

    3) interchanging rows;

    4) deleting one of two identical rows (columns);

    5) transposition.

    Rank calculation – From the basis minor theorem it follows that the rank of matrix A is equal to the maximum number of linearly independent rows (columns) of the matrix; therefore the task of the elementary transformations is to find all linearly independent rows (columns).

    Calculating the inverse matrix – the transformations can be implemented by multiplying matrix A on the left by some matrix T, which is the product of the corresponding elementary matrices: TA = E.

    This equation means that the transformation matrix T is the inverse of the matrix A: T = A⁻¹. Therefore, applying the same chain of elementary transformations to the identity matrix E, we obtain T·E = T = A⁻¹, i.e. the inverse matrix.
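
    A minimal Gauss–Jordan sketch of this idea (my own illustrative implementation): the same row transformations that turn A into E turn E into T = A⁻¹.

```python
import numpy as np

def inverse_by_row_ops(A):
    """Apply elementary row operations to [A | E]; when the left block becomes E,
    the right block is T = A^(-1), since T A = E."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])              # the augmented matrix [A | E]
    for i in range(n):
        # Bring a row with a nonzero entry in column i into the pivot position.
        p = i + int(np.argmax(np.abs(aug[i:, i])))
        if np.isclose(aug[p, i], 0.0):
            raise ValueError("the matrix is singular, the inverse does not exist")
        aug[[i, p]] = aug[[p, i]]
        aug[i] /= aug[i, i]                      # scale the pivot row so the pivot equals 1
        for r in range(n):
            if r != i:
                aug[r] -= aug[r, i] * aug[i]     # zero out column i in the other rows
    return aug[:, n:]

A = np.array([[2., 1.],
              [1., 3.]])
print(np.allclose(inverse_by_row_ops(A) @ A, np.eye(2)))   # True
```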