• Equality of matrices, equivalent matrices, equivalent matrix transformations

    Equivalent matrices

    As mentioned above, a minor of order s of a matrix is the determinant of a matrix formed from the elements of the original matrix located at the intersection of any selected s rows and s columns.

    Definition. In a matrix of size m×n, a minor of order r is called basic (a basis minor) if it is not equal to zero, while all minors of order r+1 and higher are equal to zero or do not exist at all; the latter happens when r equals the smaller of m and n.

    The columns and rows of the matrix in which the basis minor lies are also called basis columns and basis rows.

    A matrix can have several different basis minors that have the same order.

    Definition. The order of the basis minor of a matrix is called the rank of the matrix and is denoted by Rg A.
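    For instance (an illustrative matrix, not one from the original notes): in the matrix

    $$A=\left(\begin{array}{ccc} 1 & 2 & 3 \\ 2 & 4 & 6 \end{array}\right)$$

    every minor of order 2 is zero (the second row is twice the first), while the order-1 minor formed by the element $a_{11} = 1$ is non-zero; hence it can serve as a basis minor, and Rg A = 1.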

    A very important property of elementary matrix transformations is that they do not change the rank of the matrix.

    Definition. Matrices obtained from one another as a result of elementary transformations are called equivalent.

    It should be noted that equal matrices and equivalent matrices are completely different concepts.

    Theorem. The largest number of linearly independent columns in a matrix is equal to the largest number of linearly independent rows.

    Since elementary transformations do not change the rank of a matrix, the process of finding the rank of a matrix can be significantly simplified.

    Example 1. Determine the rank of the matrix.

    Example 2. Determine the rank of the matrix.

    If, using elementary transformations, it is not possible to find a matrix equivalent to the original one but of a smaller size, then finding the rank of the matrix should begin with calculating the minors of the highest possible order. In the examples above, these are minors of order 3. If at least one of them is not equal to zero, then the rank of the matrix is equal to the order of this minor.
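    Since the matrices of the examples above are not reproduced here, an illustrative computation: for

    $$A=\left(\begin{array}{ccc} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{array}\right)$$

    the only minor of order 3 is det A = 0 (the second row is twice the first), but the order-2 minor $\left|\begin{array}{cc} 1 & 2 \\ 1 & 0 \end{array}\right| = -2 \ne 0$, so Rg A = 2.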

    The theorem on the basis minor.

    Theorem. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.

    Thus, the rank of an arbitrary matrix A is equal to the maximum number of linearly independent rows (columns) in the matrix.

    If A is a square matrix and det A = 0, then at least one of its columns is a linear combination of the remaining columns. The same is true for the rows. This statement follows from the property of linear dependence when the determinant is equal to zero.
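    For example (an illustrative matrix): for $A=\left(\begin{array}{cc} 1 & 2 \\ 2 & 4 \end{array}\right)$ we have $\det A = 1\cdot 4 - 2\cdot 2 = 0$, and indeed the second column is twice the first.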

    Solving arbitrary systems of linear equations

    As mentioned above, the matrix method and Cramer's method are applicable only to those systems of linear equations in which the number of unknowns is equal to the number of equations. Next, we consider arbitrary systems of linear equations.

    Definition. A system of m equations with n unknowns in general form is written as follows:

    $$\left\{\begin{array}{l} a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n = b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n = b_2 \\ \dots \\ a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n = b_m \end{array}\right. \qquad (1)$$

    where $a_{ij}$ are coefficients and $b_i$ are constants. A solution of the system is a set of n numbers which, when substituted into the system, turn each of its equations into an identity.

    Definition. If a system has at least one solution, then it is called consistent. If a system does not have a single solution, then it is called inconsistent.

    Definition. A system is called definite if it has exactly one solution and indefinite if it has more than one.

    Definition. For a system of linear equations the matrix

    $$A=\left(\begin{array}{ccc} a_{11} & \dots & a_{1n} \\ \dots & \dots & \dots \\ a_{m1} & \dots & a_{mn} \end{array}\right)$$

    is called the matrix of the system, and the matrix

    $$A^{*}=\left(\begin{array}{cccc} a_{11} & \dots & a_{1n} & b_{1} \\ \dots & \dots & \dots & \dots \\ a_{m1} & \dots & a_{mn} & b_{m} \end{array}\right)$$

    is called the extended matrix of the system.

    Definition. If $b_1 = b_2 = \dots = b_m = 0$, then the system is called homogeneous. A homogeneous system is always consistent, because it always has the zero solution $x_1 = x_2 = \dots = x_n = 0$.
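    A small illustration (an assumed example, not from the original notes): the homogeneous system $x_1 + x_2 = 0$, $2x_1 - x_2 = 0$ has only the zero solution and is therefore definite, while the system $x_1 + x_2 = 0$, $2x_1 + 2x_2 = 0$ has infinitely many solutions of the form $(t, -t)$ and is therefore indefinite.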

    Elementary system transformations

    Elementary transformations include:

    1) Adding to both sides of one equation the corresponding sides of another equation, multiplied by the same non-zero number.

    2) Rearranging the equations.

    3) Removing from the system equations that are identities for all x.
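    For instance (an illustrative system, not from the original notes): in the system

    $$\left\{\begin{array}{l} x + y = 3 \\ 2x - y = 0 \end{array}\right.$$

    adding the first equation, multiplied by -2, to the second gives the equivalent system $x + y = 3$, $-3y = -6$, whence $y = 2$, $x = 1$.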

    Kronecker-Capelli theorem (consistency condition for a system).

    (Leopold Kronecker (1823-1891) German mathematician)

    Theorem: A system is consistent (has at least one solution) if and only if the rank of the system matrix is equal to the rank of the extended matrix: Rg A = Rg A*.

    Obviously, system (1) can be written in the form $A\cdot X = B$, where X is the column of unknowns and B is the column of constants.
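    A quick illustration of the theorem (an assumed example): for the system $x_1 + x_2 = 1$, $2x_1 + 2x_2 = 3$ we have Rg A = 1, but Rg A* = 2, since the minor $\left|\begin{array}{cc} 1 & 1 \\ 2 & 3 \end{array}\right| = 1 \ne 0$; therefore the system is inconsistent.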

    Transition to a new basis.

    Let $e_1, e_2, \dots, e_m$ (1) and $e'_1, e'_2, \dots, e'_m$ (2) be two bases of the same m-dimensional linear space X.

    Since (1) is a basis, the vectors of the second basis can be expanded with respect to it:

    $$e'_k = \sum_{i=1}^{m} p_{ik} e_i, \qquad k = 1, \dots, m. \qquad (3)$$

    From the coefficients $p_{ik}$ we create the matrix

    $$P = \left(\begin{array}{ccc} p_{11} & \dots & p_{1m} \\ \dots & \dots & \dots \\ p_{m1} & \dots & p_{mm} \end{array}\right), \qquad (4)$$

    the coordinate transformation matrix for the transition from basis (1) to basis (2).

    Let x be a vector; then $x = \sum_{i=1}^{m} x_i e_i$ (5) and $x = \sum_{k=1}^{m} x'_k e'_k$ (6).

    Substituting (3) into (6) and comparing the coefficients with (5), we obtain $x_i = \sum_{k=1}^{m} p_{ik} x'_k$ (7). Relation (7) means that $X = P X'$, (8) where X and X' are the coordinate columns of the vector x in bases (1) and (2) respectively.

    The matrix P is non-singular, since otherwise there would be a linear dependence between its columns, and then between the vectors $e'_1, \dots, e'_m$.

    The converse is also true: any non-singular matrix is a coordinate transformation matrix, defined by formula (8). Since P is a non-singular matrix, its inverse $P^{-1}$ exists. Multiplying both sides of (8) by $P^{-1}$ on the left, we get $X' = P^{-1} X$. (9)
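    A small illustration (an assumed example): in a two-dimensional space with basis $e_1, e_2$, take the new basis $e'_1 = e_1 + e_2$, $e'_2 = e_1 - e_2$. Then

    $$P = \left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right), \qquad P^{-1} = \frac{1}{2}\left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right),$$

    and a vector with coordinate column $X = (3, 1)^T$ in the old basis has, by (9), the coordinate column $X' = P^{-1}X = (2, 1)^T$ in the new one; indeed, $2e'_1 + e'_2 = 2(e_1 + e_2) + (e_1 - e_2) = 3e_1 + e_2$.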

    Let three bases be chosen in the linear space X: $e_1, \dots, e_m$ (10), $e'_1, \dots, e'_m$ (11), $e''_1, \dots, e''_m$ (12). Let P be the transition matrix from (10) to (11) and Q the transition matrix from (11) to (12). Then $X = P X'$ and $X' = Q X''$, whence $X = P Q X''$, i.e. the transition matrix from (10) to (12) is $PQ$. (13)

    Thus, under sequential transformation of coordinates, the matrix of the resulting transformation is equal to the product of the matrices of the component transformations.

    Let A: X → Y be a linear operator, and let a pair of bases be chosen in X: (I) and (II), and in Y: (III) and (IV).

    To the operator A in the pair of bases I – III corresponds the equality $Y = A_1 X$, (14) where X and Y are the coordinate columns of a vector and of its image in bases I and III. To the same operator in the pair of bases II – IV corresponds the equality $Y' = A_2 X'$. (15) Thus, for the given operator A we have two matrices, $A_1$ and $A_2$. We want to establish the relationship between them.

    Let P be the coordinate transformation matrix for the transition from basis I to basis II (both in X).

    Let Q be the coordinate transformation matrix for the transition from basis III to basis IV (both in Y).

    Then $X = P X'$ (16) and $Y = Q Y'$ (17). Substituting the expressions for X and Y from (16) and (17) into (14), we obtain $Q Y' = A_1 P X'$, whence $Y' = Q^{-1} A_1 P X'$. (18)

    Comparing this equality with (15), we obtain $A_2 = Q^{-1} A_1 P$. (19)

    Relation (19) connects the matrices of the same operator in different bases. In the case when the spaces X and Y coincide, the role of basis III is played by basis I, and the role of basis IV by basis II; then Q = P and relation (19) takes the form $A_2 = P^{-1} A_1 P$.
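    An illustrative check of (19) in the case X = Y (an assumed example): let $A_1 = \left(\begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right)$ and $P = \left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right)$, so that $P^{-1} = \left(\begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array}\right)$. Then

    $$A_1 P = \left(\begin{array}{cc} 2 & 2 \\ 0 & 3 \end{array}\right), \qquad A_2 = P^{-1} A_1 P = \left(\begin{array}{cc} 2 & -1 \\ 0 & 3 \end{array}\right),$$

    the matrix of the same operator in the new basis.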


    Lecture No. 16 (II semester)

    Subject: Necessary and sufficient condition for matrix equivalence.

    Two matrices A and B of the same size are called equivalent if there exist two non-singular matrices R and S such that $B = RAS$. (1)

    Example: Two matrices corresponding to the same operator for different choices of bases in linear spaces X and Y are equivalent.

    It is clear that the relation defined on the set of all matrices of the same size by the above definition is an equivalence relation.
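    Indeed, a short check against definition (1): the relation is reflexive, since $A = EAE$ with identity matrices E; it is symmetric, since $B = RAS$ implies $A = R^{-1}BS^{-1}$, where $R^{-1}$ and $S^{-1}$ are non-singular; and it is transitive, since $B = RAS$ and $C = R'BS'$ give $C = (R'R)A(SS')$, where the products $R'R$ and $SS'$ are again non-singular.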



    Theorem 8: In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that they be of the same rank.

    Proof:

    1. Let A and B be two matrices for which the product C = AB makes sense. The rank of the product (the matrix C) is not higher than the rank of each of the factors.

    Indeed, let A be of size $m\times n$ and B of size $n\times s$; then the elements of C are given by $c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$. (2) Fixing the index k, we obtain for the columns $C_k = \sum_{j=1}^{n} b_{jk} A_j$, (3) i.e. the k-th column of matrix C is a linear combination of the column vectors of matrix A, and this holds for all columns of matrix C, i.e. for all $k = 1, \dots, s$. Thus $L(C_1, \dots, C_s) \subseteq L(A_1, \dots, A_n)$, i.e. the linear span of the columns of C is a subspace of the linear span of the columns of A.

    Since $L(C_1, \dots, C_s) \subseteq L(A_1, \dots, A_n)$, and since the dimension of a subspace is less than or equal to the dimension of the space, the rank of matrix C is less than or equal to the rank of matrix A.

    In equalities (2), we fix the index i and let k take all possible values from 1 to s. Then we obtain a system of equalities similar to system (3): $C^{(i)} = \sum_{j=1}^{n} a_{ij} B^{(j)}$, (4) where $C^{(i)}$ and $B^{(j)}$ denote the i-th row of C and the j-th row of B.

    From equalities (4) it is clear that the i-th row of matrix C is a linear combination of the rows of matrix B, for every i. Then the linear span of the rows of C is contained in the linear span of the rows of B, so its dimension is less than or equal to the dimension of the linear span of the row vectors of B, which means that the rank of matrix C is less than or equal to the rank of matrix B.

    2. The rank of the product of a matrix A, on the left or on the right, by a non-singular square matrix Q is equal to the rank of matrix A: Rg(QA) = Rg(AQ) = Rg A. That is, the rank of the matrix C = QA is equal to the rank of matrix A.

    Proof: According to what was proven in case (1), Rg(QA) ≤ Rg A. Since the matrix Q is non-singular, there exists $Q^{-1}$, and $A = Q^{-1}(QA)$; in accordance with what was proven in the previous statement, Rg A ≤ Rg(QA). Hence Rg(QA) = Rg A, and the case of multiplication on the right is treated similarly.

    3. Let us prove that if matrices are equivalent, then they have the same ranks. By definition, A and B are equivalent if there exist non-singular R and S such that $B = RAS$. Since multiplying A on the left by R and on the right by S produces matrices of the same rank, as proven in point (2), the rank of A is equal to the rank of B.

    4. Let matrices A and B be of the same size $m\times n$ and of the same rank r. Let us prove that they are equivalent. Let us consider the $m\times n$ matrix $J = \left(\begin{array}{cc} E_r & 0 \\ 0 & 0 \end{array}\right)$, which has the identity matrix $E_r$ of order r in the upper left corner and zeros everywhere else.

    Let X and Y be two linear spaces of dimensions n and m respectively, in which bases $e_1, \dots, e_n$ (a basis of X) and $f_1, \dots, f_m$ (a basis of Y) are chosen. As is known, any matrix of size $m\times n$ defines a certain linear operator acting from X to Y.

    Since r is the rank of matrix A, among the vectors $Ae_1, \dots, Ae_n$ exactly r are linearly independent. Without loss of generality, we can assume that the first r vectors $Ae_1, \dots, Ae_r$ are linearly independent. Then all the others can be expressed linearly through them, and we can write: $Ae_k = \sum_{j=1}^{r} c_{kj} Ae_j$, $k = r+1, \dots, n$.

    Let us define a new basis in the space X as follows: $e'_j = e_j$ for $j = 1, \dots, r$, and $e'_k = e_k - \sum_{j=1}^{r} c_{kj} e_j$ for $k = r+1, \dots, n$. (7) These vectors again form a basis, and by the expansions above $Ae'_k = 0$ for $k > r$.

    The new basis in the space Y is constructed as follows.

    The vectors $Ae_1, \dots, Ae_r$ are linearly independent by assumption. Let us supplement them with some vectors $f_{r+1}, \dots, f_m$ to a basis of Y: $Ae_1, \dots, Ae_r, f_{r+1}, \dots, f_m$. (8) So (7) and (8) are two new bases of X and Y. Let us find the matrix of the operator A in these bases: for $j \le r$ the vector $Ae'_j = Ae_j$ is the j-th basis vector of Y, and for $k > r$ we have $Ae'_k = 0$; hence the matrix consists of the identity block $E_r$ in the upper left corner and zeros elsewhere, i.e. it is exactly the matrix J.

    So, in the new pair of bases, the matrix of the operator A is the matrix J. Matrix A was initially an arbitrary rectangular matrix of size $m\times n$ and rank r. Since the matrices of the same operator in different bases are equivalent, this shows that any rectangular $m\times n$ matrix of rank r is equivalent to J. Since we are dealing with an equivalence relation, any two $m\times n$ matrices A and B of rank r, being equivalent to the matrix J, are equivalent to each other.
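    As an illustration (an assumed example, not from the original notes): every $2\times 3$ matrix of rank 1, for instance $\left(\begin{array}{ccc} 1 & 2 & 3 \\ 2 & 4 & 6 \end{array}\right)$, is equivalent to $J = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right)$, and hence any two such matrices are equivalent to each other.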


    Lecture No. 17 (II semester)

    Subject: Eigenvalues and eigenvectors. Eigensubspaces. Examples.

    The concepts of equality and equivalence of matrices are often encountered.

    Definition 1

    The matrix $A=\left(a_{ij}\right)_{m\times n}$ is said to be equal to the matrix $B=\left(b_{ij}\right)_{k\times l}$ if their dimensions coincide $(m=k, n=l)$ and the corresponding elements of the compared matrices are equal to each other.

    For 2nd order matrices written in general form, the equality of matrices can be written as follows:

    $$\left(\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right)=\left(\begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array}\right) \Leftrightarrow a_{11}=b_{11},\ a_{12}=b_{12},\ a_{21}=b_{21},\ a_{22}=b_{22}.$$

    Example 1

    Given matrices:

    1) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right)$;

    2) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{c} -3 \\ 2 \end{array}\right)$;

    3) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{cc} 2 & 4 \\ 1 & 3 \end{array}\right)$.

    Determine whether the matrices are equal.

    1) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right)$

    Matrices A and B have the same order $2\times 2$. The corresponding elements of the matrices being compared are equal; therefore, the matrices are equal.

    2) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{c} -3 \\ 2 \end{array}\right)$

    Matrices A and B have different orders, $2\times 2$ and $2\times 1$ respectively; therefore, the matrices are not equal.

    3) $A=\left(\begin{array}{cc} 2 & 0 \\ -1 & 3 \end{array}\right),\ B=\left(\begin{array}{cc} 2 & 4 \\ 1 & 3 \end{array}\right)$

    Matrices A and B have the same order $2\times 2$. However, not all corresponding elements of the matrices being compared are equal; therefore, the matrices are not equal.

    Definition 2

    An elementary matrix transformation is a transformation that preserves the equivalence of the matrices. In other words, an elementary transformation does not change the set of solutions of the system of linear algebraic equations (SLAE) that this matrix represents.

    Elementary transformations of matrix rows include:

    • multiplying a row of a matrix by a non-zero number $k$ (the determinant of a square matrix is then multiplied by $k$);
    • swapping any two rows of a matrix;
    • adding to the elements of one row of a matrix the corresponding elements of another row.

    The same operations applied to the columns of a matrix are called elementary column transformations.

    Definition 3

    If matrix B is obtained from matrix A by an elementary transformation, then the original and resulting matrices are called equivalent. To denote the equivalence of matrices, the sign “$\sim$” is used, for example, $A\sim B$.

    Example 2

    Given the matrix: $A=\left(\begin{array}{ccc} -2 & 1 & 4 \\ 1 & 0 & 3 \\ 1 & 2 & 3 \end{array}\right)$.

    Perform elementary transformations of matrix rows one by one.

    Let's swap the first row and the second row of matrix A:

    $$B=\left(\begin{array}{ccc} 1 & 0 & 3 \\ -2 & 1 & 4 \\ 1 & 2 & 3 \end{array}\right)$$

    Multiply the first row of matrix B by the number 2:

    $$C=\left(\begin{array}{ccc} 2 & 0 & 6 \\ -2 & 1 & 4 \\ 1 & 2 & 3 \end{array}\right)$$

    Let's add the first row to the second row of the matrix:

    $$D=\left(\begin{array}{ccc} 2 & 0 & 6 \\ 0 & 1 & 10 \\ 1 & 2 & 3 \end{array}\right)$$

    Definition 4

    An echelon (step) matrix is a matrix that satisfies the following conditions:

    • if the matrix contains a zero row, all rows below it are also zero;
    • the first non-zero element of each non-zero row is located strictly to the right of the leading element of the row above it.

    Example 3

    Matrices $A=\left(\begin{array}{ccc} 1 & 2 & 3 \\ 0 & 2 & 7 \\ 0 & 0 & 3 \end{array}\right)$ and $B=\left(\begin{array}{ccc} 1 & 2 & 3 \\ 0 & 2 & 7 \\ 0 & 0 & 0 \end{array}\right)$ are echelon matrices.

    Comment

    You can reduce a matrix to echelon form using equivalent transformations.

    Example 4

    Given the matrix: $A=\left(\begin{array}{ccc} -2 & 1 & 4 \\ 1 & 0 & 3 \\ 1 & 2 & 3 \end{array}\right)$. Reduce the matrix to echelon form.

    Let's swap the first and second rows of matrix A:

    $$B=\left(\begin{array}{ccc} 1 & 0 & 3 \\ -2 & 1 & 4 \\ 1 & 2 & 3 \end{array}\right)$$

    Let's multiply the first row of matrix B by the number 2 and add it to the second row:

    $$C=\left(\begin{array}{ccc} 1 & 0 & 3 \\ 0 & 1 & 10 \\ 1 & 2 & 3 \end{array}\right)$$

    Let's multiply the first row of matrix C by the number -1 and add it to the third row:

    $$D=\left(\begin{array}{ccc} 1 & 0 & 3 \\ 0 & 1 & 10 \\ 0 & 2 & 0 \end{array}\right)$$

    Let's multiply the second row of matrix D by the number -2 and add it to the third row:

    $K=\left(\begin{array}{ccc} 1 & 0 & 3 \\ 0 & 1 & 10 \\ 0 & 0 & -20 \end{array}\right)$ is an echelon matrix.
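    Note, in passing, that since the echelon matrix K has three non-zero rows and elementary transformations do not change the rank, this computation also shows that Rg A = Rg K = 3.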

    Our immediate goal is to prove that any matrix can be reduced to some standard forms using elementary transformations. The language of equivalent matrices is useful along this path.

    Let A and B be matrices of the same size. We will say that the matrix A is row-equivalent (column-equivalent, or equivalent) to the matrix B if the matrix B can be obtained from A using a finite number of row (column, or row and column, respectively) elementary transformations. It is clear that row-equivalent and column-equivalent matrices are, in particular, equivalent.

    First, we will show that any matrix can be reduced, by row transformations alone, to a special form called reduced.

    A non-zero row of a matrix is said to have reduced form if it contains an element equal to 1 such that all the other elements of its column are equal to zero. We will call this marked element the leading element of the row and enclose it in a circle. In other words, a row of a matrix has reduced form if the matrix contains a column all of whose elements are zero except for the single 1 standing in this row.

    For example, in the following matrix

    the row has reduced form. Let us note that in this example one more element also qualifies as the leading element of the row. In what follows, if a row of reduced form contains several elements with the leading property, we will select only one of them, in an arbitrary manner.

    A matrix is said to have reduced form if each of its non-zero rows has reduced form. For example, the matrix

    has reduced form.

    Proposition 1.3. For any matrix there exists a row-equivalent matrix of reduced form.

    Indeed, if the matrix has the form (1.1) and contains a non-zero element $a_{ij}$, then after carrying out elementary transformations in it

    we get the matrix

    in which the i-th row has reduced form.

    Secondly, if some row of the matrix already had reduced form, then after carrying out the elementary transformations (1.20) it remains reduced. Indeed, since the row is reduced, there is a column containing its leading element 1, with all other elements of this column equal to zero; but then this column does not change when transformations (1.20) are carried out. Therefore, the row keeps its reduced form.

    Now it is clear that, by transforming each non-zero row of the matrix in turn in the above manner, after a finite number of steps we will obtain a matrix of reduced form. Since only elementary row transformations were used to obtain it, the result is row-equivalent to the original matrix.

    Example 7. Construct a matrix of reduced form that is row-equivalent to the matrix