• Linear combination of columns. §4.8. Linear dependence of the rows and columns of a matrix. Matrices, operations with matrices, inverse matrix. Matrix equations and their solutions

    The rows and columns of a matrix can be regarded as row matrices and column matrices, respectively. Therefore, linear operations can be performed on them, as on any other matrices. The only restriction on the addition operation is that the rows (columns) must have the same length (height), but this condition is always satisfied for the rows (columns) of one matrix.

    Linear operations on rows (columns) make it possible to form expressions of the type α1 a1 + ... + αs as, where a1, ..., as is an arbitrary set of rows (columns) of the same length (height) and α1, ..., αs are real numbers. Such expressions are called linear combinations of rows (columns).

    Definition 12.3. Rows (columns) a1, ..., as are called linearly independent if the equality

    α1 a1 + ... + αs as = 0, (12.1)

    where 0 on the right-hand side is the zero row (column), is possible only when α1 = ... = αs = 0. Otherwise, when there exist real numbers α1, ..., αs, not all equal to zero, such that equality (12.1) holds, these rows (columns) are called linearly dependent.

    The following statement is known as the linear dependence test.

    Theorem 12.3. Rows (columns) a1, ..., as, s > 1, are linearly dependent if and only if at least one of them is a linear combination of the others.

    ◄ We will carry out the proof for rows, and for columns it is similar.

    Necessity. If the rows a1, ..., as are linearly dependent, then, according to Definition 12.3, there exist real numbers α1, ..., αs, not all equal to zero, such that α1 a1 + ... + αs as = 0. Let us choose a non-zero coefficient αi; to be specific, let it be α1. Then α1 a1 = (−α2)a2 + ... + (−αs)as and, therefore, a1 = (−α2/α1)a2 + ... + (−αs/α1)as, i.e. the row a1 is represented as a linear combination of the remaining rows.

    Sufficiency. Let, for example, a1 = λ2 a2 + ... + λs as. Then 1·a1 + (−λ2)a2 + ... + (−λs)as = 0. The first coefficient of this linear combination is equal to one, i.e. it is non-zero. According to Definition 12.3, the rows a1, ..., as are linearly dependent.
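    As a numerical illustration (not from the original text; a sketch assuming NumPy and an arbitrarily chosen set of rows), linear dependence of concrete rows can be checked by comparing the rank of the matrix they form with the number of rows:

```python
import numpy as np

# Three rows; the third is a1 + 2*a2, so the set is linearly dependent.
a1 = np.array([1.0, 2.0, 3.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + 2 * a2

A = np.vstack([a1, a2, a3])
# Rows are linearly dependent  <=>  rank of the stacked matrix < number of rows.
print(np.linalg.matrix_rank(A) < A.shape[0])  # True
```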

    Theorem 12.4. Let the rows (columns) a1, ..., as be linearly independent, and let at least one of the rows (columns) b1, ..., bl be their linear combination. Then all the rows (columns) a1, ..., as, b1, ..., bl are linearly dependent.

    ◄ Let, for example, b1 be a linear combination of a1, ..., as, i.e. b1 = α1 a1 + ... + αs as, αi ∈ R, i = 1, ..., s. To this linear combination we add the rows (columns) b2, ..., bl (for l > 1) with zero coefficients: b1 = α1 a1 + ... + αs as + 0·b2 + ... + 0·bl. According to Theorem 12.3, the rows (columns) a1, ..., as, b1, ..., bl are linearly dependent.

    Linear independence of matrix rows

    Let A = (aij) be a matrix of size m × n.

    Denote the rows of the matrix as follows:

    e1 = (a11, a12, …, a1n), e2 = (a21, a22, …, a2n), …, em = (am1, am2, …, amn).

    Two rows are called equal if their corresponding elements are equal.

    Let us introduce the operations of multiplying a row by a number and adding rows as operations carried out element by element:

    λek = (λak1, λak2, …, λakn); ek + es = (ak1 + as1, ak2 + as2, …, akn + asn).

    Definition. A row e is called a linear combination of rows of the matrix if it is equal to the sum of the products of these rows by arbitrary real numbers:

    e = λ1 e1 + λ2 e2 + … + λk ek.

    Definition. The rows of the matrix are called linearly dependent if there exist numbers λ1, λ2, …, λm, not all equal to zero, such that a linear combination of the rows of the matrix is equal to the zero row:

    λ1 e1 + λ2 e2 + … + λm em = 0, where 0 = (0, 0, …, 0). (1.1)

    Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the rest.

    Definition. If the linear combination of rows (1.1) is equal to the zero row if and only if all the coefficients are zero (λ1 = λ2 = … = λm = 0), then the rows are called linearly independent.

    Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), through which all its other rows (columns) can be linearly expressed.

    The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.
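    For example (a sketch only, assuming NumPy and an arbitrarily chosen test matrix), the rank, and hence the maximal number of linearly independent rows, can be computed directly:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],   # = 2 * row 1, so it adds nothing new
              [1, 0, 1]])

print(np.linalg.matrix_rank(A))  # 2: only two rows are linearly independent
```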

    6, 13, 14, 15, 16. Vectors. Operations on vectors (addition, subtraction, multiplication by a number), n-dimensional vector. The concept of vector space and its basis.

    A vector is a directed segment with a starting point A and an end point B (which can be moved parallel to itself).

    Vectors can be designated either by two capital letters (AB) or by a single lowercase letter with a bar or an arrow over it.

    The length (or modulus) of a vector is the number equal to the length of the segment AB representing the vector.

    Vectors lying on the same line or on parallel lines are called collinear .

    If the beginning and end of a vector coincide (A = B), then such a vector is called the zero vector and is denoted 0. The length of the zero vector is zero: |0| = 0.

    1) Product of a vector by a number. The product of a vector a by a number λ is a vector λa whose length equals |λ|·|a| and whose direction coincides with the direction of a if λ > 0 and is opposite to it if λ < 0.

    2) The opposite vector −a is the product of the vector a and the number (−1), i.e. −a = (−1)·a.

    3) The sum of two vectors a and b is the vector a + b whose beginning coincides with the beginning of the vector a and whose end coincides with the end of the vector b, provided that the beginning of b coincides with the end of a (triangle rule). The sum of several vectors is defined similarly.

    4) The difference of two vectors a and b is the sum of the vector a and the vector −b opposite to b: a − b = a + (−b).

    Dot product

    Definition: The scalar product of two vectors a and b is the number equal to the product of the lengths of these vectors and the cosine of the angle between them: (a, b) = |a|·|b|·cos φ.
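    A small numerical check (a sketch, assuming NumPy and two made-up plane vectors) that the component-wise dot product and the formula |a|·|b|·cos φ give the same number:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])

dot = np.dot(a, b)                                   # scalar product, 24.0
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))
# The same number recovered as |a| * |b| * cos(phi):
print(dot, np.linalg.norm(a) * np.linalg.norm(b) * cos_phi)  # 24.0 24.0
```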

    n-dimensional vector and vector space

    Definition. An n-dimensional vector is an ordered collection of n real numbers written in the form x = (x1, x2, …, xn), where xi is the i-th component of the vector x.

    The concept of an n-dimensional vector is widely used in economics: for example, a certain set of goods can be characterized by a vector x = (x1, x2, …, xn), and the corresponding prices by a vector y = (y1, y2, …, yn).

    - Two n-dimensional vectors are equal if and only if their corresponding components are equal, i.e. x = y if xi = yi, i = 1, 2, …, n.

    - The sum of two vectors of the same dimension n is the vector z = x + y whose components are equal to the sums of the corresponding components of the summands: zi = xi + yi, i = 1, 2, …, n.

    - The product of a vector x and a real number λ is the vector λx whose components are equal to the products of λ and the corresponding components of x: (λx)i = λxi, i = 1, 2, …, n.
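    As an illustration of these component-wise operations (a sketch with NumPy; the goods-and-prices vectors below are made up, echoing the economics example above):

```python
import numpy as np

# Hypothetical quantities of goods and their prices (n = 3 components).
x = np.array([2.0, 5.0, 1.0])   # quantities
y = np.array([10.0, 3.0, 7.0])  # prices

print(x + y)         # component-wise sum: z_i = x_i + y_i
print(2.5 * x)       # multiplication by a number: (λx)_i = λ * x_i
print(np.dot(x, y))  # total cost of the goods bundle: Σ x_i * y_i = 42.0
```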

    Linear operations on any vectors satisfy the following properties:



    1) x + y = y + x (commutative property of the sum);

    2) (x + y) + z = x + (y + z) (associative property of the sum);

    3) α(βx) = (αβ)x (associative property with respect to a numerical factor);

    4) α(x + y) = αx + αy (distributive property with respect to the sum of vectors);

    5) (α + β)x = αx + βx (distributive property with respect to the sum of numerical factors);

    6) there exists a zero vector 0 such that x + 0 = x for any vector x (the special role of the zero vector);

    7) for any vector x there exists an opposite vector −x such that x + (−x) = 0;

    8) 1·x = x for any vector x (the special role of the numerical factor 1).

    Definition. The set of vectors with real components, in which the operations of adding vectors and multiplying a vector by a number satisfying the above eight properties (taken as axioms) are defined, is called a vector space.

    Dimension and basis of vector space

    Definition. A linear space is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are already linearly dependent. In other words, the dimension of a space is the maximum number of linearly independent vectors it contains. The number n is called the dimension of the space and is denoted by dim.

    A set of n linearly independent vectors of an n-dimensional space is called a basis of that space.

    7. Eigenvectors and eigenvalues of a matrix. Characteristic equation of a matrix.

    Definition. A non-zero vector x is called an eigenvector of the linear operator A if there exists a number λ such that A(x) = λx.

    The number λ is called an eigenvalue of the operator (of the matrix A) corresponding to the eigenvector x.

    This can be written in matrix form: AX = λX,

    where X is the column matrix of the coordinates of the vector x; or in expanded form:

    a11 x1 + a12 x2 + … + a1n xn = λx1,
    a21 x1 + a22 x2 + … + a2n xn = λx2,
    …
    an1 x1 + an2 x2 + … + ann xn = λxn.

    Let us rewrite the system so that there are zeros on the right-hand sides:

    (a11 − λ)x1 + a12 x2 + … + a1n xn = 0,
    a21 x1 + (a22 − λ)x2 + … + a2n xn = 0,
    …
    an1 x1 + an2 x2 + … + (ann − λ)xn = 0,

    or in matrix form: (A − λE)X = 0. The resulting homogeneous system always has the zero solution. For a non-zero solution to exist it is necessary and sufficient that the determinant of the system be equal to zero: |A − λE| = 0.

    The determinant |A − λE| is a polynomial of degree n in λ. This polynomial is called the characteristic polynomial of the operator (or of the matrix A), and the equation |A − λE| = 0 is called the characteristic equation of the operator (or of the matrix A).

    Example:

    Find the eigenvalues ​​and eigenvectors of the linear operator given by the matrix.

    Solution: We compose the characteristic equation or , whence the eigenvalue of the linear operator .

    We find the eigenvector corresponding to the eigenvalue. To do this, we solve the matrix equation:

    Or , or , from where we find: , or

    Or .

    Let us assume that , we obtain that the vectors , for any, are eigenvectors of a linear operator with eigenvalue .

    Likewise, vector .
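    Since the numerical data of the example above were lost in the source, here is a minimal sketch (assuming NumPy and an arbitrarily chosen 2 × 2 matrix) of how eigenvalues and eigenvectors arising from the characteristic equation det(A − λE) = 0 can be found and checked:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues are the roots of the characteristic polynomial det(A - λE) = 0.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)            # e.g. [5. 2.] (order may vary)

# Check that A x = λ x for the first eigenpair.
lam, x = eigenvalues[0], eigenvectors[:, 0]
print(np.allclose(A @ x, lam * x))  # True
```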

    8. A system of n linear equations with n variables (general form). Matrix form of such a system. Solution of the system (definition). Consistent and inconsistent, definite and indefinite systems of linear equations.

    Solving a system of linear equations with n unknowns

    Systems of linear equations are widely used in economics.

    A system of m linear equations with n variables has the form:

    a11 x1 + a12 x2 + … + a1n xn = b1,
    a21 x1 + a22 x2 + … + a2n xn = b2,
    …
    am1 x1 + am2 x2 + … + amn xn = bm,

    where aij and bi (i = 1, 2, …, m; j = 1, 2, …, n) are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations.

    Brief form: Σj aij xj = bi (i = 1, 2, …, m).

    Definition. A solution of the system is a set of values of the variables x1 = k1, x2 = k2, …, xn = kn whose substitution turns each equation of the system into a true equality.

    1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

    2) A consistent system of equations is called definite if it has a unique solution, and indefinite if it has more than one solution.

    3) Two systems of equations are called equivalent if they have the same set of solutions.

    Let us write the system in matrix form: AX = B, where

    A is the matrix of the coefficients of the variables (the matrix of the system), X is the column matrix of the variables, and B is the column matrix of the free terms.

    Since the number of columns of the matrix A is equal to the number of rows of the matrix X, their product AX is defined and is a column matrix. The elements of the resulting matrix are the left-hand sides of the original system. By the definition of equality of matrices, the original system can be written in the form AX = B.

    Cramer's theorem. Let Δ be the determinant of the matrix of the system, and let Δj be the determinant of the matrix obtained from it by replacing the j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution determined by the formulas

    xj = Δj / Δ, j = 1, 2, …, n (Cramer's formulas).

    Example. Solve a system of equations using Cramer's formulas

    Solution. The determinant of the matrix of the system is non-zero; therefore, the system has a unique solution. We calculate Δ1, Δ2, Δ3, obtained from Δ by replacing the first, second and third columns, respectively, with the column of free terms.

    According to Cramer's formulas:
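    The numbers of the example were lost in the source, so the following sketch (NumPy, with a made-up 3 × 3 system) applies Cramer's formulas xj = Δj / Δ directly and compares the result with the standard solver:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

delta = np.linalg.det(A)
assert delta != 0  # the system has a unique solution

x = []
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b               # replace the j-th column with the free terms
    x.append(np.linalg.det(Aj) / delta)

print(x)                        # solution by Cramer's formulas
print(np.linalg.solve(A, b))    # same result from the standard solver
```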

    9. The Gauss method for solving a system of n linear equations with n variables. The concept of the Jordan–Gauss method.

    The Gauss method is the method of successive elimination of variables.

    The Gauss method consists in reducing the system of equations, by means of elementary row transformations and column permutations, to an equivalent system of step (or triangular) form, from which all the variables are found successively, starting with the last (by number).

    It is convenient to carry out the Gaussian transformations not with the equations themselves but with the augmented matrix of their coefficients, obtained by appending the column of free terms to the matrix of the system.

    It should be noted that the Gauss method can be used to solve any system of equations of the form AX = B.

    Example. Use the Gaussian method to solve the system:

    Let us write down the augmented matrix of the system.

    Step 1. Let us swap the first and second rows so that the leading element a11 becomes equal to 1.

    Step 2. Multiply the elements of the first row by (−2) and (−1) and add them to the elements of the second and third rows, respectively, so that zeros appear below the leading element in the first column.

    For consistent systems of linear equations the following theorems hold:

    Theorem 1. If the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system has a unique solution.

    Theorem 2. If the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has an infinite number of solutions.

    Definition. A basis minor of a matrix is any non-zero minor whose order is equal to the rank of the matrix.

    Definition. The unknowns whose coefficients enter the basis minor are called basic, and the remaining unknowns are called free (non-basic).

    Solving the system of equations in the case r < n means expressing the basic unknowns (those whose coefficients form a non-zero determinant) in terms of the free unknowns.

    Let us express the basic variables in terms of free ones.

    From the second row of the resulting matrix we express the variable:

    From the first line we express: ,

    General solution of the system of equations: , .
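    Since the numerical data of this example were lost in the source, the following sketch (NumPy, with a made-up system of three equations in three unknowns) shows the two phases of the Gauss method: forward elimination to step form and back substitution:

```python
import numpy as np

# Augmented matrix [A | b] of a made-up system (no pivoting; non-zero pivots assumed).
M = np.array([[1.0, 1.0,  1.0,  6.0],
              [2.0, 3.0,  1.0, 11.0],
              [1.0, 2.0, -1.0,  2.0]])

n = 3
# Forward elimination: create zeros below each pivot.
for k in range(n):
    for i in range(k + 1, n):
        factor = M[i, k] / M[k, k]
        M[i, k:] -= factor * M[k, k:]

# Back substitution, starting from the last variable.
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

print(x)   # [1. 2. 3.]
```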

    Each row of the matrix A is denoted by ei = (ai1, ai2, …, ain) (for example, e1 = (a11, a12, …, a1n), e2 = (a21, a22, …, a2n), etc.). Each of them is a row matrix that can be multiplied by a number or added to another row according to the general rules for operations with matrices.

    A linear combination of the rows e1, e2, …, ek is the sum of the products of these rows by arbitrary real numbers:
    e = λ1 e1 + λ2 e2 + … + λk ek, where λ1, λ2, …, λk are arbitrary numbers (the coefficients of the linear combination).

    The rows of the matrix e1, e2, …, em are called linearly dependent if there exist numbers λ1, λ2, …, λm, not all equal to zero, such that the linear combination of the rows of the matrix is equal to the zero row:
    λ1 e1 + λ2 e2 + … + λm em = 0, where 0 = (0 0 … 0).

    A linear dependence between the rows of a matrix means that at least one row of the matrix is a linear combination of the others. Indeed, for definiteness let the last coefficient λm ≠ 0. Then, dividing both sides of the equality by λm, we obtain an expression for the last row as a linear combination of the remaining rows:
    em = (−λ1/λm)e1 + (−λ2/λm)e2 + … + (−λm−1/λm)em−1.

    If a linear combination of rows is equal to the zero row if and only if all the coefficients are equal to zero, i.e. λ1 e1 + λ2 e2 + … + λm em = 0 ⇔ λk = 0 for all k, then the rows are called linearly independent.

    Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows or columns, through which all its other rows or columns can be linearly expressed.

    Let us prove this theorem. Let the matrix A of size m × n have rank r (r(A) ≤ min(m; n)). Consequently, there exists a non-zero minor of order r. Every such minor will be called a basis minor. For definiteness, let it be the minor formed by the first r rows and the first r columns.

    The rows entering this minor will also be called basic.

    Let us prove that the rows e1, e2, …, er of the matrix are then linearly independent. Assume the opposite, i.e. that one of these rows, say the r-th, is a linear combination of the others: er = λ1 e1 + λ2 e2 + … + λr−1 er−1. Then, if we subtract from the elements of the r-th row the elements of the 1st row multiplied by λ1, the elements of the 2nd row multiplied by λ2, and so on, finally the elements of the (r−1)-th row multiplied by λr−1, the r-th row becomes zero. By the properties of the determinant, the determinant above should not change under these transformations, yet at the same time it must then be equal to zero. This is a contradiction, and the linear independence of the basis rows is proved.

    Now we prove that any (r + 1) rows of the matrix are linearly dependent, i.e. that any row can be expressed in terms of the basis rows.

    Let us supplement the previously considered minor with one more row (the i-th) and one more column (the j-th). As a result we obtain a minor of order (r + 1), which is equal to zero by the definition of the rank.

    Note that the rows and columns of a matrix can be considered as arithmetic vectors of dimensions n and m, respectively. Thus, a matrix of size m × n can be interpreted as a set of m n-dimensional row vectors or of n m-dimensional column vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

    4.8.1. Definition. A row b is called a linear combination of the rows a1, a2, …, as with coefficients α1, α2, …, αs if for all elements of this row the following equality holds:

    bj = α1 a1j + α2 a2j + … + αs asj, j = 1, 2, …, n.

    4.8.2. Definition. The rows a1, a2, …, as are called linearly dependent if there is a non-trivial linear combination of them equal to the zero row, i.e. there exist numbers α1, α2, …, αs, not all equal to zero, such that

    α1 a1 + α2 a2 + … + αs as = 0.

    4.8.3. Definition. The rows a1, a2, …, as are called linearly independent if only their trivial linear combination is equal to the zero row, i.e.

    α1 a1 + α2 a2 + … + αs as = 0 only when α1 = α2 = … = αs = 0.

    4.8.4. Theorem. (Criterion for linear dependence of matrix rows)

    In order for the rows a1, a2, …, as to be linearly dependent, it is necessary and sufficient that at least one of them be a linear combination of the others.

    Proof:

    Necessity. Let the rows a1, a2, …, as be linearly dependent; then there exists a non-trivial linear combination of them equal to the zero row:

    α1 a1 + α2 a2 + … + αs as = 0.

    Without loss of generality, assume that the first coefficient of the linear combination is non-zero (otherwise the rows can be renumbered). Dividing this relation by α1, we get

    a1 = (−α2/α1)a2 + … + (−αs/α1)as,

    that is, the first row is a linear combination of the others.

    Sufficiency. Let one of the rows, for example a1, be a linear combination of the others: a1 = λ2 a2 + … + λs as. Then

    (−1)a1 + λ2 a2 + … + λs as = 0,

    that is, there is a non-trivial linear combination of the rows a1, a2, …, as equal to the zero row (the coefficient of a1 is −1 ≠ 0), which means the rows a1, a2, …, as are linearly dependent, which is what needed to be proved.

    Comment.

    Similar definitions and statements can be formulated for the columns of the matrix.

    §4.9. Matrix rank.

    4.9.1. Definition. A minor of order k of a matrix of size m × n is a determinant of order k with elements located at the intersection of some k of its rows and k of its columns.

    4.9.2. Definition. A non-zero minor of order r of a matrix of size m × n is called a basis minor if all minors of the matrix of order r + 1 and higher are equal to zero.

    Comment. A matrix can have several basis minors; obviously, they are all of the same order. It is also possible that for a matrix of size m × n a minor of order min(m, n) is different from zero while minors of order min(m, n) + 1 do not exist at all.

    4.9.3. Definition. The rows (columns) that form the basis minor are called basic rows (columns).

    4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted by r(A) or rg A.

    Comment.

    Note that, since a determinant does not change under transposition (rows and columns of a determinant are interchangeable in this sense), the rank of a matrix does not change when the matrix is transposed.

    4.9.5. Theorem. (Invariance of matrix rank under elementary transformations)

    The rank of a matrix does not change during its elementary transformations.

    No proof.

    4.9.6. Theorem. (On the basis minor)

    The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

    Proof:

    Let us carry out the proof for rows. The proof of the statement for columns is analogous.

    Let the rank of a matrix A of size m × n be equal to r, and let M be its basis minor. Without loss of generality, we assume that the basis minor is located in the upper left corner of the matrix (otherwise the matrix can be brought to this form by elementary transformations).

    Let us first prove the linear independence of the basis rows. We carry out the proof by contradiction. Assume that the basis rows are linearly dependent. Then, according to Theorem 4.8.4, one of them can be represented as a linear combination of the remaining basis rows. Therefore, if we subtract this linear combination from that row, we obtain a zero row, which means that the minor M is equal to zero, contradicting the definition of a basis minor. Thus we have obtained a contradiction; the linear independence of the basis rows is proved.

    Let us now prove that every row of the matrix can be represented as a linear combination of the basis rows. If the number k of the row in question is from 1 to r, then it can obviously be represented as a linear combination with coefficient 1 for the row itself and zero coefficients for the remaining basis rows. Let us now show that if the row number k is from r + 1 to m, the row can also be represented as a linear combination of the basis rows. Consider the minor M′ of the matrix obtained from the basis minor M by adding the k-th row and an arbitrary j-th column.

    Let us show that this minor M′ is equal to zero for any row number k from r + 1 to m and for any column number j from 1 to n.

    Indeed, if the column number j is from 1 to r, then we have a determinant with two identical columns, which is obviously equal to zero. If the column number j is from r + 1 to n and the row number k is from r + 1 to m, then M′ is a minor of the original matrix of higher order than the basis minor, which means that it is equal to zero by the definition of a basis minor. Thus it is proved that the minor M′ is zero for any row number k from r + 1 to m and for any column number j from 1 to n. Expanding it over the last column, we get:

    a1j A1j + a2j A2j + … + arj Arj + akj Akj = 0.

    Here A1j, A2j, …, Arj, Akj are the corresponding algebraic complements. Note that Akj ≠ 0, since it coincides with the basis minor M. Note also that these complements are composed only of elements of the first r columns and therefore do not depend on the column number j. Hence the elements of the k-th row can be represented as a linear combination of the corresponding elements of the basis rows with coefficients independent of the column number j:

    akj = λ1 a1j + λ2 a2j + … + λr arj, where λi = −Aij / Akj, j = 1, 2, …, n.

    Thus, we have proven that an arbitrary row of the matrix can be represented as a linear combination of its basis rows. The theorem is proved.

    Lecture 13

    4.9.7. Theorem. (On the rank of a non-singular square matrix)

    In order for a square matrix to be non-singular, it is necessary and sufficient that the rank of the matrix be equal to the size of this matrix.

    Proof:

    Necessity. Let a square matrix of size n be non-singular; then det A ≠ 0, so the determinant of the matrix is a basis minor, i.e. r(A) = n.

    Sufficiency. Let r(A) = n; then the order of the basis minor is equal to the size of the matrix, therefore the basis minor is the determinant of the matrix A, i.e. det A ≠ 0 by the definition of a basis minor.

    Consequence.

    In order for a square matrix to be non-singular, it is necessary and sufficient that its rows be linearly independent.

    Proof:

    Necessity. Since the square matrix is non-singular, its rank is equal to the size of the matrix, r(A) = n; that is, the determinant of the matrix is a basis minor. Therefore, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

    Sufficiency. Since all n rows of the matrix are linearly independent, its rank is not less than the size of the matrix, which means r(A) = n; therefore, by the previous Theorem 4.9.7, the matrix is non-singular.

    4.9.8. The method of bordering minors for finding the rank of a matrix.

    Note that part of this method has already been implicitly described in the proof of the basis minor theorem.

    4.9.8.1. Definition. A minor M′ is called a bordering minor with respect to the minor M if it is obtained from the minor M by adding one new row and one new column of the original matrix.

    4.9.8.2. The procedure for finding the rank of a matrix using the bordering minors method.

      We find any current minor of the matrix that is different from zero.

      We calculate all minors bordering it.

      If they are all equal to zero, then the current minor is a basis one, and the rank of the matrix is ​​equal to the order of the current minor.

      If among the bordering minors there is at least one non-zero, then it is considered current and the procedure continues.

    Using the method of bordering minors, we find the rank of the matrix

    .

    It is easy to specify the current non-zero second order minor, e.g.

    .

    We calculate the minors bordering it:




    Consequently, since all the bordering minors of the third order are equal to zero, the chosen second-order minor is a basis minor, that is, r(A) = 2.
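    A small sketch of the same procedure in code (NumPy; the 3 × 4 matrix below is made up, since the matrix of the example was lost): fix a non-zero second-order minor and check that every minor bordering it vanishes:

```python
import numpy as np
from itertools import product

# A made-up 3 x 4 matrix of rank 2 (the third row is the sum of the first two).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0],
              [1.0, 3.0, 1.0, 3.0]])

rows, cols = (0, 1), (0, 1)          # current non-zero 2nd-order minor
current = np.linalg.det(A[np.ix_(rows, cols)])
print(current)                        # 1.0, non-zero

# Border it with every extra row and extra column and compute the 3rd-order minors.
bordering = []
for i, j in product(range(A.shape[0]), range(A.shape[1])):
    if i in rows or j in cols:
        continue
    bordering.append(np.linalg.det(A[np.ix_(rows + (i,), cols + (j,))]))

print(np.allclose(bordering, 0))      # True: the current minor is a basis minor, r(A) = 2
```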

    Comment. From the example considered, it is clear that the method is quite labor-intensive. Therefore, in practice, the method of elementary transformations is much more often used, which will be discussed below.

    4.9.9. Finding the rank of a matrix using the method of elementary transformations.

    Based on Theorem 4.9.5, it can be argued that the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore, the rank of the matrix is equal to the rank of the step matrix obtained from the original one by elementary transformations. The rank of a step matrix is obviously equal to the number of its non-zero rows.

    Let us determine the rank of the matrix by the method of elementary transformations.

    Let us bring the matrix to step form:

    The number of non-zero rows of the resulting echelon matrix is three; therefore, r(A) = 3.
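    A sketch of the same idea in code (assuming NumPy; the matrix below is made up, not the one from the lost example): bring the matrix to step form by elementary row operations and count the non-zero rows.

```python
import numpy as np

def rank_by_echelon(A, tol=1e-10):
    """Reduce A to step (echelon) form by elementary row operations
    and return the number of non-zero rows."""
    M = A.astype(float).copy()
    m, n = M.shape
    row = 0
    for col in range(n):
        # Find a pivot in the current column at or below `row`.
        pivot = next((r for r in range(row, m) if abs(M[r, col]) > tol), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]          # swap rows
        M[row + 1:] -= np.outer(M[row + 1:, col] / M[row, col], M[row])
        row += 1
    return row

A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],
              [1, 0, 1, 0],
              [0, 2, 2, 5]])
print(rank_by_echelon(A), np.linalg.matrix_rank(A))  # 3 3
```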

    4.9.10. Rank of a system of linear space vectors.

    Consider a system of vectors a1, a2, …, as of some linear space V. If the system is linearly dependent, then a linearly independent subsystem can be distinguished in it.

    4.9.10.1. Definition. The rank of a system of vectors a1, a2, …, as of a linear space is the maximum number of linearly independent vectors of this system. The rank of a system of vectors is denoted by r(a1, a2, …, as).

    Comment. If a system of vectors is linearly independent, then its rank is equal to the number of vectors in the system.

    Let us formulate a theorem showing the connection between the concepts of the rank of a system of vectors in a linear space and the rank of a matrix.

    4.9.10.2. Theorem. (On the rank of a system of vectors in linear space)

    The rank of a system of vectors in a linear space is equal to the rank of a matrix whose columns or rows are the coordinates of vectors in some basis of the linear space.

    No proof.

    Consequence.

    In order for a system of vectors in a linear space to be linearly independent, it is necessary and sufficient that the rank of the matrix, the columns or rows of which are the coordinates of vectors in a certain basis, is equal to the number of vectors in the system.

    The proof is obvious.
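    A sketch of this corollary in code (NumPy; three made-up vectors in R^3 written as the columns of a matrix): the system is linearly independent exactly when the rank of the coordinate matrix equals the number of vectors.

```python
import numpy as np

# Coordinates of three vectors in some basis, written as columns.
V = np.column_stack([[1, 0, 2],
                     [0, 1, 1],
                     [1, 1, 3]])   # third vector = first + second

n_vectors = V.shape[1]
print(np.linalg.matrix_rank(V) == n_vectors)  # False: the system is linearly dependent
```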

    4.9.10.3. Theorem (On the dimension of a linear shell).

    The dimension of the linear hull of the vectors a1, a2, …, as of a linear space is equal to the rank of this system of vectors: dim L(a1, a2, …, as) = r(a1, a2, …, as).

    No proof.

    Consider an arbitrary, not necessarily square, matrix A of size m × n.

    Matrix rank.

    The concept of matrix rank is associated with the concept of linear dependence (independence) of the rows (columns) of the matrix. Let us consider this concept for rows; for columns the treatment is similar.

    Let us denote the rows of the matrix A:

    e1 = (a11, a12, …, a1n); e2 = (a21, a22, …, a2n); …; em = (am1, am2, …, amn).

    Two rows are equal, ek = es, if akj = asj, j = 1, 2, …, n.

    Arithmetic operations on matrix rows (addition, multiplication by a number) are introduced as operations carried out element by element: λek = (λak1, λak2, …, λakn);

    ek + es = (ak1 + as1, ak2 + as2, …, akn + asn).

    A row e is called a linear combination of the rows e1, e2, …, ek if it is equal to the sum of the products of these rows by arbitrary real numbers:

    e = λ1 e1 + λ2 e2 + … + λk ek.

    The rows e1, e2, …, em are called linearly dependent if there exist real numbers λ1, λ2, …, λm, not all equal to zero, such that the linear combination of these rows is equal to the zero row: λ1 e1 + λ2 e2 + … + λm em = 0, where 0 = (0, 0, …, 0). (1)

    If a linear combination is equal to the zero row if and only if all the coefficients λi are equal to zero (λ1 = λ2 = … = λm = 0), then the rows e1, e2, …, em are called linearly independent.

    Theorem 1. For the rows e1, e2, …, em to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the remaining rows.

    Proof. Necessity. Let the rows e1, e2, …, em be linearly dependent. For definiteness, let λm ≠ 0 in (1); then

    em = (−λ1/λm)e1 + (−λ2/λm)e2 + … + (−λm−1/λm)em−1,

    i.e. the row em is a linear combination of the remaining rows, which is what was required to prove.

    Sufficiency. Let one of the rows, for example em, be a linear combination of the remaining rows; then there exist numbers λ1, λ2, …, λm−1 such that em = λ1 e1 + λ2 e2 + … + λm−1 em−1, which can be rewritten as

    λ1 e1 + λ2 e2 + … + λm−1 em−1 + (−1)em = 0,

    where at least one of the coefficients, (−1), is non-zero. Hence the rows are linearly dependent, which is what was required to prove.

    Definition. A minor of order k of a matrix A of size m × n is a determinant of order k with elements lying at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

    Example. For a 3rd-order matrix, the 1st-order minors are its individual elements, the 2nd-order minors are the determinants formed by any two of its rows and any two of its columns, and the only 3rd-order minor is the determinant of the matrix itself.

    A 3rd-order matrix has 9 minors of the 1st order, 9 minors of the 2nd order and 1 minor of the 3rd order (the determinant of this matrix).

    Definition. The rank of a matrix A is the highest order of the non-zero minors of this matrix. Notation: rg A or r(A).

    Matrix rank properties.

    1) The rank of a matrix A of size m × n does not exceed the smaller of its dimensions, i.e.

    r(A) ≤ min(m, n).

    2) r(A) = 0 if and only if all elements of the matrix are equal to 0, i.e. A = 0.

    3) For a square matrix A of order n, r(A) = n if and only if A is non-singular.



    (The rank of a diagonal matrix is equal to the number of its non-zero diagonal elements.)

    4) If the rank of a matrix is equal to r, then the matrix has at least one minor of order r that is not equal to zero, while all its minors of higher order are equal to zero.

    The following relations hold for the ranks of matrices:

    1) r(A + B) ≤ r(A) + r(B);

    2) r(AB) ≤ min(r(A), r(B));

    3) r(A + B) ≥ |r(A) − r(B)|;

    4) r(AᵀA) = r(A);

    5) r(AB) = r(A), if B is a square non-singular matrix;

    6) r(AB) ≥ r(A) + r(B) − n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
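    These relations are easy to spot-check numerically; a small sketch with NumPy and two made-up 3 × 3 matrices (both of rank 2):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 1.0, 1.0]])
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
print(np.linalg.matrix_rank(A + B) <= rA + rB)               # True
print(np.linalg.matrix_rank(A @ B) <= min(rA, rB))           # True
print(np.linalg.matrix_rank(A.T @ A) == rA)                  # True: r(AᵀA) = r(A)
print(np.linalg.matrix_rank(A @ B) >= rA + rB - A.shape[1])  # True (Sylvester's inequality)
```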

    Definition. A non-zero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersection a basis minor stands are called, respectively, basis rows and basis columns.

    Theorem 2 (on the basis minor). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

    Proof. (For rows.) If the basis rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basis rows; then, without changing the value of the basis minor, one could subtract this linear combination from that row and obtain a zero row, which contradicts the fact that the basis minor is different from zero. Thus the basis rows are linearly independent.

    Let us prove that any row of the matrix A is a linear combination of the basis rows. Since under permutations of rows (columns) a determinant keeps the property of being equal or not equal to zero, we can assume without loss of generality that the basis minor is in the upper left corner of the matrix A, i.e. it is located on the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Let us show that the determinant of order (r + 1), obtained by bordering the basis minor with the i-th row and the j-th column, is equal to zero.

    If j ≤ r or i ≤ r, then this determinant is equal to zero, because it has two identical columns or two identical rows.

    If j > r and i > r, then this determinant is a minor of the (r + 1)-th order of the matrix A. Since the rank of the matrix is r, any minor of higher order is equal to 0.

    Expanding it according to the elements of the last (added) column, we get

    a1j A1j + a2j A2j + … + arj Arj + aij Aij = 0,

    where the last algebraic complement Aij coincides with the basis minor M r and therefore Aij = M r ≠ 0.

    Dividing the last equality by Aij, we can express the element aij as a linear combination:

    aij = λ1 a1j + λ2 a2j + … + λr arj, where λk = −Akj / Aij.

    Let us fix the value of i (i > r). Since the algebraic complements do not depend on the column number j, for any j (j = 1, 2, …, n) the elements of the i-th row ei are linearly expressed through the elements of the rows e1, e2, …, er, i.e. the i-th row is a linear combination of the basis rows: ei = λ1 e1 + λ2 e2 + … + λr er, which is what was required to prove.

    Theorem 3. (necessary and sufficient condition for the determinant to be equal to zero). In order for the nth order determinant D to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

    Proof (p. 40). Necessity. If the determinant D of order n is equal to zero, then the basis minor of its matrix is of order r < n. Hence, by the basis minor theorem (Theorem 2), at least one row of the matrix is a linear combination of its basis rows.

    Thus, one row is a linear combination of the others. Then, by Theorem 1, the rows of the determinant are linearly dependent.

    Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 one row Ai is a linear combination of the remaining rows. Subtracting this linear combination from the row Ai, which does not change the value of D, we obtain a zero row. Therefore, by the properties of determinants, D = 0, which is what was required to prove.

    Theorem 4. During elementary transformations, the rank of the matrix does not change.

    Proof. As was shown when considering the properties of determinants, under elementary transformations of square matrices their determinants either do not change, or are multiplied by a non-zero number, or change sign. Consequently, the highest order of the non-zero minors of the original matrix is preserved, i.e. the rank of the matrix does not change, which is what was required to prove.

    If r(A)=r(B), then A and B are equivalent: A~B.

    Theorem 5. Using elementary transformations, any matrix can be reduced to step form. A matrix is called a step matrix if it has the form

    A = (aij), where aii ≠ 0, i = 1, 2, …, r; r ≤ k, and the elements below the steps are zero.

    The condition r ≤ k can always be achieved by transposition.

    Theorem 6. The rank of a step (echelon) matrix is equal to the number of its non-zero rows.

    That is, the rank of the step matrix is equal to r, because it has a non-zero minor of order r: the minor formed by its first r rows and the columns containing the leading elements a11, a22, …, arr.