• Matrix multiplication rule. Multiplying a square matrix by a column matrix

    Matrix addition:

    Addition and subtraction of matrices reduce to the corresponding operations on their elements. Matrix addition is defined only for matrices of the same size, i.e. for matrices whose numbers of rows and columns are respectively equal. The sum of matrices A and B is the matrix C whose elements are the sums of the corresponding elements: C = A + B, where c_ij = a_ij + b_ij. The matrix difference is defined similarly.
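
    The element-wise rule can be sketched in a few lines of Python (an illustration added here, not from the original text; the function names are our own):

```python
# Sum of same-size matrices: c_ij = a_ij + b_ij
def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must be the same size"
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

# The difference is defined similarly, element by element
def mat_sub(A, B):
    return [[a - b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
print(mat_sub(A, B))  # [[-4, -4], [-4, -4]]
```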

    Multiplying a matrix by a number:

    The operation of multiplying (or dividing) a matrix of any size by an arbitrary number reduces to multiplying (or dividing) each element of the matrix by this number. The product of the matrix A and the number k is the matrix B such that

    b_ij = k × a_ij, i.e. B = k × A. The matrix -A = (-1) × A is called the opposite of the matrix A.
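
    In Python this rule is a one-liner (a sketch added for illustration; `mat_scale` is our own name):

```python
# Product of a matrix and a number k: b_ij = k * a_ij
def mat_scale(k, A):
    return [[k * a for a in row] for row in A]

A = [[1, -2], [3, 4]]
print(mat_scale(5, A))   # [[5, -10], [15, 20]]
print(mat_scale(-1, A))  # the opposite matrix -A: [[-1, 2], [-3, -4]]
```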

    Properties of adding matrices and multiplying a matrix by a number:

    The operations of matrix addition and multiplication of a matrix by a number have the following properties, where A, B and C are matrices, and α and β are numbers:

    1. A + B = B + A;
    2. A + (B + C) = (A + B) + C;
    3. A + 0 = A;
    4. A - A = 0;
    5. 1 × A = A;
    6. α × (A + B) = αA + αB;
    7. (α + β) × A = αA + βA;
    8. α × (βA) = (αβ) × A.

    Matrix multiplication (Matrix product):

    The operation of multiplying two matrices is defined only when the number of columns of the first matrix equals the number of rows of the second matrix. The product of an m×n matrix A and an n×p matrix B is the m×p matrix C such that c_ik = a_i1 × b_1k + a_i2 × b_2k + ... + a_in × b_nk, i.e. the sum of the products of the elements of the i-th row of matrix A and the corresponding elements of the k-th column of matrix B. If A and B are square matrices of the same size, then the products AB and BA always exist. It is easy to show that A × E = E × A = A, where A is a square matrix and E is the identity (unit) matrix of the same size.
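
    The same rule, sketched in Python (illustrative only; `mat_mul` is a name of our choosing):

```python
# Product of an m x n matrix A and an n x p matrix B:
# c_ik = a_i1*b_1k + a_i2*b_2k + ... + a_in*b_nk
def mat_mul(A, B):
    n, p = len(B), len(B[0])
    assert all(len(row) == n for row in A), "cols(A) must equal rows(B)"
    return [[sum(A[i][t] * B[t][k] for t in range(n)) for k in range(p)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
E = [[1, 0], [0, 1]]  # identity (unit) matrix
assert mat_mul(A, E) == mat_mul(E, A) == A  # A*E = E*A = A
```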

    Properties of matrix multiplication:

    Matrix multiplication is not commutative, i.e. in general AB ≠ BA even when both products are defined. However, if for particular matrices the relation AB = BA holds, then such matrices are called commuting. The most typical example is the identity matrix, which commutes with any other matrix of the same size. Only square matrices of the same order can commute: A × E = E × A = A.

    Matrix multiplication has the following properties:

    1. A × (B × C) = (A × B) × C;
    2. A × (B + C) = AB + AC;
    3. (A + B) × C = AC + BC;
    4. α × (AB) = (αA) × B;
    5. A × 0 = 0; 0 × A = 0;
    6. (AB)^T = B^T A^T;
    7. (ABC)^T = C^T B^T A^T;
    8. (A + B)^T = A^T + B^T.

    2. Determinants of the 2nd and 3rd orders. Properties of determinants.

    The determinant of a second-order matrix (a second-order determinant) is the number calculated by the formula: |A| = a_11 × a_22 - a_12 × a_21.

    The determinant of a third-order matrix (a third-order determinant) is the number calculated by the formula: |A| = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31 - a_11 a_23 a_32 - a_12 a_21 a_33.

    This number is an algebraic sum of six terms. Each term contains exactly one element from each row and each column of the matrix, and each term is a product of three factors.

    The signs of the terms in the formula for the third-order determinant can be determined using a scheme called the rule of triangles, or Sarrus' rule. The first three terms are taken with a plus sign and read off the left figure, and the next three terms are taken with a minus sign and read off the right figure.

    The number of terms in the algebraic sum defining the determinant can be found by computing the factorial: 2! = 1 × 2 = 2; 3! = 1 × 2 × 3 = 6.
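
    Both formulas can be sketched directly in Python (an illustration added here; `det2` and `det3` are our own names):

```python
# Second-order determinant: 2! = 2 terms
def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

# Third-order determinant by the triangle (Sarrus) rule: 3! = 6 terms,
# three taken with a plus sign and three with a minus sign
def det3(m):
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][2]*m[1][1]*m[2][0] - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

print(det2([[3, 1], [4, 2]]))                    # 3*2 - 1*4 = 2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```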

    Properties of matrix determinants

    Property #1:

    The determinant of a matrix does not change if its rows are replaced by its columns, i.e. each row by the column with the same number, and vice versa (transposition): |A| = |A^T|.

    Consequence:

    The rows and columns of a determinant enter on equal terms, so any property established for rows also holds for columns.

    Property #2:

    When two rows (or columns) are interchanged, the determinant of the matrix changes sign while keeping its absolute value, i.e.:

    Property #3:

    A determinant with two identical rows is equal to zero.

    Property #4:

    A common factor of the elements of any row (or column) of a determinant can be taken outside the determinant sign.

    Corollaries from properties No. 3 and No. 4:

    If all elements of some row (or column) are proportional to the corresponding elements of a parallel row (or column), then the determinant of the matrix is equal to zero.

    Property #5:

    If all elements of some row (or column) of a determinant are equal to zero, then the determinant of the matrix is equal to zero.

    Property #6:

    If all elements of a row (or column) of a determinant are represented as sums of two terms, then the determinant can be represented as the sum of two determinants by the formula:

    Property #7:

    If the corresponding elements of another row (or column), multiplied by the same number, are added to any row (or column) of a determinant, then the determinant of the matrix does not change its value.
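
    These properties are easy to spot-check numerically. A small Python sketch (added for illustration; the matrix and names are our own choices):

```python
# Third-order determinant (Sarrus' rule), used to spot-check the properties
def det3(m):
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][2]*m[1][1]*m[2][0] - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

A = [[1, 2, 3], [0, 4, 1], [2, 5, 7]]
At = [list(row) for row in zip(*A)]            # transpose
swapped = [A[1], A[0], A[2]]                   # rows 1 and 2 exchanged
shifted = [A[0], [x + 10*y for x, y in zip(A[1], A[0])], A[2]]  # row2 += 10*row1

assert det3(At) == det3(A)            # property 1: |A| = |A^T|
assert det3(swapped) == -det3(A)      # property 2: swapping rows flips the sign
assert det3([A[0], A[0], A[2]]) == 0  # property 3: two identical rows give zero
assert det3(shifted) == det3(A)       # property 7: the row operation leaves |A| unchanged
```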

    Example of using these properties to calculate the determinant of a matrix:

    First year, higher mathematics: studying matrices and the basic operations on them. Here we systematize the basic operations that can be performed with matrices. Where should an acquaintance with matrices begin? Of course, with the simplest things: definitions, basic concepts and simple operations. We assure you that matrices will be understood by everyone who devotes at least a little time to them!

    Matrix Definition

    A matrix is a rectangular table of elements, or, in simple language, a table of numbers.

    Matrices are usually denoted by capital Latin letters, for example matrix A, matrix B and so on. Matrices can be of different sizes: rectangular, square; there are also row matrices and column matrices, called vectors. The size of a matrix is determined by the number of rows and columns. For example, let us write a rectangular matrix of size m × n, where m is the number of rows and n is the number of columns.

    The elements for which i = j (a11, a22, ...) form the main diagonal of the matrix and are called diagonal elements.

    What can you do with matrices? Add and subtract them, multiply by a number, multiply them by each other, transpose. Now let us go through all these basic operations on matrices in order.

    Matrix addition and subtraction operations

    Let us warn you right away: you can only add matrices of the same size, and the result will be a matrix of the same size. Adding (or subtracting) matrices is simple: you just need to add their corresponding elements. Let us give an example and perform the addition of two matrices A and B of size two by two.

    Subtraction is performed by analogy, only with the opposite sign.

    Any matrix can be multiplied by an arbitrary number. To do this you need to multiply each of its elements by this number. For example, let's multiply matrix A from the first example by the number 5:

    Matrix multiplication operation

    Not all matrices can be multiplied by each other. For example, two matrices A and B can be multiplied only if the number of columns of matrix A is equal to the number of rows of matrix B. In this case, each element of the resulting matrix located in the i-th row and j-th column is equal to the sum of the products of the corresponding elements of the i-th row of the first factor and the j-th column of the second. To understand this algorithm, let us write down how two square matrices are multiplied:

    And an example with real numbers. Let's multiply the matrices:

    Matrix transpose operation

    Matrix transposition is an operation in which the rows and columns are swapped. For example, let us transpose matrix A from the first example:
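
    In Python, transposition is conveniently expressed with `zip` (an illustrative sketch, not from the original text):

```python
# Transposition: rows become columns and vice versa
def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1, 2], [3, 4]]
print(transpose(A))                  # [[1, 3], [2, 4]]
assert transpose(transpose(A)) == A  # transposing twice gives the matrix back
```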

    Matrix determinant

    The determinant is one of the basic concepts of linear algebra. Once upon a time people came up with linear equations, and after them they had to invent the determinant. In the end, it is up to you to deal with all of this, so: one last push!

    The determinant is a numerical characteristic of a square matrix, needed to solve many problems.
    To calculate the determinant of the simplest (two-by-two) square matrix, you need to compute the difference between the products of the elements of the main and secondary diagonals.

    The determinant of a first-order matrix, that is, a matrix consisting of a single element, is equal to that element.

    What if the matrix is three by three? This is more difficult, but you can cope.

    For such a matrix, the value of the determinant is equal to the sum of the product of the elements of the main diagonal and the products of the elements lying on the triangles whose bases are parallel to the main diagonal, minus the product of the elements of the secondary diagonal and the products of the elements lying on the triangles whose bases are parallel to the secondary diagonal.

    Fortunately, in practice it is rarely necessary to calculate determinants of matrices of large sizes.

    Here we looked at the basic operations on matrices. Of course, in real life you may never encounter even a hint of a matrix system of equations, or, on the contrary, you may encounter much more complex cases when you really have to rack your brains.

    So, in the previous lesson we looked at the rules for adding and subtracting matrices. These are such simple operations that most students understand them literally right off the bat.

    However, do not rejoice too early. The free ride is over; let us move on to multiplication. I will warn you right away: multiplying two matrices is not at all about multiplying the numbers in cells with the same coordinates, as you might think. Everything is much more fun here. And we will have to start with some preliminary definitions.

    Consistent matrices

    One of the most important characteristics of a matrix is its size. We have already talked about this a hundred times: the notation $A=\left[ m\times n \right]$ means that the matrix has exactly $m$ rows and $n$ columns. We have already discussed how not to confuse rows with columns. Something else is important now.

    Definition. Matrices of the form $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, in which the number of columns in the first matrix coincides with the number of rows in the second, are called consistent.

    Once again: the number of columns in the first matrix is equal to the number of rows in the second! From here we get two conclusions at once:

    1. The order of the matrices matters. For example, the matrices $A=\left[ 3\times 2 \right]$ and $B=\left[ 2\times 5 \right]$ are consistent (2 columns in the first matrix and 2 rows in the second), but in reverse order the matrices $B=\left[ 2\times 5 \right]$ and $A=\left[ 3\times 2 \right]$ are no longer consistent (5 columns in the first matrix is not 3 rows in the second).
    2. Consistency is easy to check by writing down all the dimensions one after another. Using the example from the previous paragraph: "3 2 2 5", the numbers in the middle coincide, so the matrices are consistent. But "2 5 3 2" are not consistent, since the numbers in the middle differ.

    In addition, Captain Obvious seems to hint that square matrices of the same size $\left[ n\times n \right]$ are always consistent.

    In mathematics, when the order of listing objects is important (for example, in the definition discussed above, the order of matrices is important), we often talk about ordered pairs. We met them back in school: I think it’s a no brainer that the coordinates $\left(1;0 \right)$ and $\left(0;1 \right)$ define different points on a plane.

    So: coordinates are also ordered pairs that are made up of numbers. But nothing prevents you from making such a pair from matrices. Then we can say: “An ordered pair of matrices $\left(A;B \right)$ is consistent if the number of columns in the first matrix is ​​the same as the number of rows in the second.”

    So what?

    Definition of multiplication

    Consider two consistent matrices: $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$. And we define the multiplication operation for them.

    Definition. The product of two consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$, whose elements are calculated by the formula:

    \[\begin{aligned} c_{ij} &= a_{i1}\cdot b_{1j}+a_{i2}\cdot b_{2j}+\ldots +a_{in}\cdot b_{nj}= \\ &= \sum\limits_{t=1}^{n} a_{it}\cdot b_{tj} \end{aligned}\]

    Such a product is denoted in the standard way: $C=A\cdot B$.

    Those who see this definition for the first time immediately have two questions:

    1. What kind of wild nonsense is this?
    2. Why is it so difficult?

    Well, first things first. Let's start with the first question. What do all these indices mean? And how not to make mistakes when working with real matrices?

    First of all, note that the long formula for calculating $c_{ij}$ actually comes down to a simple rule:

    1. Take the $i$-th row of the first matrix;
    2. Take the $j$-th column of the second matrix;
    3. We get two sequences of numbers. Multiply the elements of these sequences that have the same indices, and then add up the resulting products.

    This process is easy to understand from the picture:


    Scheme for multiplying two matrices

    Once again: we fix row $i$ in the first matrix and column $j$ in the second matrix, multiply the elements with the same indices, and then add the resulting products to get $c_{ij}$. And so on for all $1\le i\le m$ and $1\le j\le k$, i.e. there will be $m\times k$ such computations in total.
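
    The three-step recipe above translates almost word for word into Python (a sketch added for illustration; the helper names are our own):

```python
# c_ij = (i-th row of A) dot (j-th column of B)
def dot(row, col):
    return sum(r * c for r, c in zip(row, col))

def mat_mul(A, B):
    cols_of_B = list(zip(*B))  # step 2: the columns of the second matrix
    return [[dot(row, col) for col in cols_of_B] for row in A]  # steps 1 and 3

print(mat_mul([[1, 2], [-3, 4]], [[-2, 4], [3, 1]]))  # [[4, 6], [18, -8]]
```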

    In fact, we have already encountered matrix multiplication in the school curriculum, only in a greatly reduced form. Let vectors be given:

    \[\begin{aligned} & \vec{a}=\left( x_{a};y_{a};z_{a} \right); \\ & \vec{b}=\left( x_{b};y_{b};z_{b} \right). \end{aligned}\]

    Then their scalar product will be exactly the sum of pairwise products:

    \[\vec{a}\cdot \vec{b}=x_{a}\cdot x_{b}+y_{a}\cdot y_{b}+z_{a}\cdot z_{b}\]

    Basically, back when the trees were greener and the skies were brighter, we simply multiplied the row vector $\vec{a}$ by the column vector $\vec{b}$.

    Nothing has changed today. It’s just that now there are more of these row and column vectors.

    But enough theory! Let's look at real examples. And let's start with the simplest case - square matrices.

    Square matrix multiplication

    Task 1. Do the multiplication:

    \[\left[ \begin{matrix} 1 & 2 \\ -3 & 4 \end{matrix} \right]\cdot \left[ \begin{matrix} -2 & 4 \\ 3 & 1 \end{matrix} \right]\]

    Solution. So, we have two matrices: $A=\left[ 2\times 2 \right]$ and $B=\left[ 2\times 2 \right]$. It is clear that they are consistent (square matrices of the same size are always consistent). Therefore, we perform the multiplication:

    \[\begin{aligned} \left[ \begin{matrix} 1 & 2 \\ -3 & 4 \end{matrix} \right]\cdot \left[ \begin{matrix} -2 & 4 \\ 3 & 1 \end{matrix} \right] &= \left[ \begin{matrix} 1\cdot \left( -2 \right)+2\cdot 3 & 1\cdot 4+2\cdot 1 \\ -3\cdot \left( -2 \right)+4\cdot 3 & -3\cdot 4+4\cdot 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 4 & 6 \\ 18 & -8 \end{matrix} \right]. \end{aligned}\]

    That's it!

    Answer: $\left[ \begin{matrix} 4 & 6 \\ 18 & -8 \end{matrix} \right]$.

    Task 2. Do the multiplication:

    \[\left[ \begin{matrix} 1 & 3 \\ 2 & 6 \end{matrix} \right]\cdot \left[ \begin{matrix} 9 & 6 \\ -3 & -2 \end{matrix} \right]\]

    Solution. Again the matrices are consistent, so we perform the following actions:

    \[\begin{aligned} \left[ \begin{matrix} 1 & 3 \\ 2 & 6 \end{matrix} \right]\cdot \left[ \begin{matrix} 9 & 6 \\ -3 & -2 \end{matrix} \right] &= \left[ \begin{matrix} 1\cdot 9+3\cdot \left( -3 \right) & 1\cdot 6+3\cdot \left( -2 \right) \\ 2\cdot 9+6\cdot \left( -3 \right) & 2\cdot 6+6\cdot \left( -2 \right) \end{matrix} \right]= \\ &= \left[ \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right]. \end{aligned}\]

    As you can see, the result is a matrix filled with zeros.

    Answer: $\left[ \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right]$.

    From the above examples it is obvious that matrix multiplication is not such a complicated operation. At least for 2 by 2 square matrices.

    In the process of calculations, we compiled an intermediate matrix, where we directly described which numbers are included in a particular cell. This is exactly what should be done when solving real problems.

    Basic properties of the matrix product

    In a nutshell. Matrix multiplication:

    1. Non-commutative: $A\cdot B\ne B\cdot A$ in the general case. There are, of course, special matrices for which $A\cdot B=B\cdot A$ (for example, when $B=E$ is the identity matrix), but in the vast majority of cases this does not work;
    2. Associative: $\left( A\cdot B \right)\cdot C=A\cdot \left( B\cdot C \right)$. There are no options here: adjacent matrices can be multiplied without worrying about what stands to the left and to the right of them;
    3. Distributive: $A\cdot \left( B+C \right)=A\cdot B+A\cdot C$ and $\left( A+B \right)\cdot C=A\cdot C+B\cdot C$ (due to the non-commutativity of the product, right and left distributivity must be specified separately).

    And now - everything is the same, but in more detail.

    Matrix multiplication is in many ways similar to classical multiplication of numbers. But there are differences, the most important of which is that matrix multiplication is, generally speaking, non-commutative.

    Let's look again at the matrices from Problem 1. We already know their direct product:

    \[\left[ \begin{matrix} 1 & 2 \\ -3 & 4 \end{matrix} \right]\cdot \left[ \begin{matrix} -2 & 4 \\ 3 & 1 \end{matrix} \right]=\left[ \begin{matrix} 4 & 6 \\ 18 & -8 \end{matrix} \right]\]

    But if we swap the matrices, we get a completely different result:

    \[\left[ \begin{matrix} -2 & 4 \\ 3 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 2 \\ -3 & 4 \end{matrix} \right]=\left[ \begin{matrix} -14 & 12 \\ 0 & 10 \end{matrix} \right]\]

    It turns out that $A\cdot B\ne B\cdot A$. In addition, the multiplication operation is only defined for consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, but no one guarantees that they will remain consistent if they are swapped. For example, the matrices $\left[ 2\times 3 \right]$ and $\left[ 3\times 5 \right]$ are quite consistent in the given order, but the same matrices $\left[ 3\times 5 \right]$ and $\left[ 2\times 3 \right]$ written in reverse order are no longer consistent. Sad. :(
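
    Non-commutativity is easy to see numerically. A small Python check using the matrices of Problem 1 (the code itself is an illustration added here):

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [-3, 4]]
B = [[-2, 4], [3, 1]]
print(mat_mul(A, B))  # [[4, 6], [18, -8]]
print(mat_mul(B, A))  # a different matrix: AB != BA
assert mat_mul(A, B) != mat_mul(B, A)
```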

    Among square matrices of a given size $n$ there will always be some that give the same result when multiplied in either order. How to describe all such matrices (and how many there are in general) is a topic for a separate lesson. We will not talk about that today. :)

    However, matrix multiplication is associative:

    \[\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)\]

    Therefore, when you need to multiply several matrices in a row, it is not at all necessary to do it head-on: it is quite possible that some adjacent matrices, when multiplied, give an interesting result. For example, the zero matrix, as in Problem 2 discussed above.

    In real problems, we most often have to multiply square matrices of size $\left[ n\times n \right]$. The set of all such matrices is denoted by $M^{n}$ (i.e. the notations $A=\left[ n\times n \right]$ and $A\in M^{n}$ mean the same thing), and it necessarily contains the matrix $E$, which is called the identity matrix.

    Definition. An identity matrix of size $n$ is a matrix $E$ such that for any square matrix $A=\left[ n\times n \right]$ the equality $A\cdot E=E\cdot A=A$ holds.

    Such a matrix always looks the same: there are ones on its main diagonal, and zeros in all other cells.

    Matrix multiplication is also distributive with respect to addition:

    \[\begin{aligned} & A\cdot \left( B+C \right)=A\cdot B+A\cdot C; \\ & \left( A+B \right)\cdot C=A\cdot C+B\cdot C. \end{aligned}\]

    In other words, if you need to multiply one matrix by the sum of two others, you can multiply it by each of these “other two” and then add the results. In practice, we usually have to perform the opposite operation: we notice the same matrix, take it out of brackets, perform addition and thereby simplify our life. :)

    Note: to describe distributivity, we had to write two formulas: one where the sum is the second factor and one where the sum is the first. This happens precisely because matrix multiplication is non-commutative (and in general, non-commutative algebra holds a lot of tricks that would never come to mind when working with ordinary numbers). And if, for example, you need to write down this property in an exam, be sure to write both formulas, otherwise the teacher may get a little angry.

    Okay, these were all fairy tales about square matrices. What about rectangular ones?

    The case of rectangular matrices

    Nothing special: everything works the same as with square ones.

    Task 3. Do the multiplication:

    \[\left[ \begin{matrix} 5 & 4 \\ 2 & 5 \\ 3 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} -2 & 5 \\ 3 & 4 \end{matrix} \right]\]

    Solution. We have two matrices: $A=\left[ 3\times 2 \right]$ and $B=\left[ 2\times 2 \right]$. Let's write down the numbers indicating the sizes in a row:

    As you can see, the central two numbers coincide. This means that the matrices are consistent and can be multiplied. Moreover, at the output we get the matrix $C=\left[ 3\times 2 \right]$:

    \[\begin{aligned} \left[ \begin{matrix} 5 & 4 \\ 2 & 5 \\ 3 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} -2 & 5 \\ 3 & 4 \end{matrix} \right] &= \left[ \begin{matrix} 5\cdot \left( -2 \right)+4\cdot 3 & 5\cdot 5+4\cdot 4 \\ 2\cdot \left( -2 \right)+5\cdot 3 & 2\cdot 5+5\cdot 4 \\ 3\cdot \left( -2 \right)+1\cdot 3 & 3\cdot 5+1\cdot 4 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 2 & 41 \\ 11 & 30 \\ -3 & 19 \end{matrix} \right]. \end{aligned}\]

    Everything is clear: the final matrix has 3 rows and 2 columns, i.e. exactly $\left[ 3\times 2 \right]$.

    Answer: $\left[ \begin{matrix} 2 & 41 \\ 11 & 30 \\ -3 & 19 \end{matrix} \right]$.

    Now let us look at one of the best training tasks for those who are just starting to work with matrices. In it you need not just multiply two matrices, but first determine whether such a multiplication is permissible at all.

    Problem 4. Find all possible pairwise products of matrices:

    $A=\left[ \begin{matrix} 1 & -1 & 2 & -2 \\ 1 & 1 & 2 & 2 \end{matrix} \right]$; $B=\left[ \begin{matrix} 0 & 1 \\ 2 & 0 \\ 0 & 3 \\ 4 & 0 \end{matrix} \right]$; $C=\left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right]$.

    Solution. First, let's write down the sizes of the matrices:

    \[A=\left[ 2\times 4 \right];\quad B=\left[ 4\times 2 \right];\quad C=\left[ 2\times 2 \right]\]

    We find that the matrix $A$ can be matched only with the matrix $B$, since the number of columns of $A$ is 4, and only $B$ has that number of rows. Therefore, we can find the product:

    \[A\cdot B=\left[ \begin{matrix} 1 & -1 & 2 & -2 \\ 1 & 1 & 2 & 2 \end{matrix} \right]\cdot \left[ \begin{matrix} 0 & 1 \\ 2 & 0 \\ 0 & 3 \\ 4 & 0 \end{matrix} \right]=\left[ \begin{matrix} -10 & 7 \\ 10 & 7 \end{matrix} \right]\]

    I suggest that the reader complete the intermediate steps independently. I will only note that it is better to determine the size of the resulting matrix in advance, even before any calculations:

    \[\left[ 2\times 4 \right]\cdot \left[ 4\times 2 \right]=\left[ 2\times 2 \right]\]

    In other words, we simply remove the “transit” coefficients that ensured the consistency of the matrices.

    What other options are possible? Of course, one can find $B\cdot A$, since $B=\left[ 4\times 2 \right]$ and $A=\left[ 2\times 4 \right]$, so the ordered pair $\left( B;A \right)$ is consistent, and the dimension of the product will be:

    \[\left[ 4\times 2 \right]\cdot \left[ 2\times 4 \right]=\left[ 4\times 4 \right]\]

    In short, the output will be a matrix $\left[ 4\times 4 \right]$, the coefficients of which can be easily calculated:

    \[B\cdot A=\left[ \begin{matrix} 0 & 1 \\ 2 & 0 \\ 0 & 3 \\ 4 & 0 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & -1 & 2 & -2 \\ 1 & 1 & 2 & 2 \end{matrix} \right]=\left[ \begin{matrix} 1 & 1 & 2 & 2 \\ 2 & -2 & 4 & -4 \\ 3 & 3 & 6 & 6 \\ 4 & -4 & 8 & -8 \end{matrix} \right]\]

    Obviously, the products $C\cdot A$ and $B\cdot C$ are also consistent, and that is all. Therefore, we simply write down the resulting products:

    It was easy. :)

    Answer: $A\cdot B=\left[ \begin{matrix} -10 & 7 \\ 10 & 7 \end{matrix} \right]$; $B\cdot A=\left[ \begin{matrix} 1 & 1 & 2 & 2 \\ 2 & -2 & 4 & -4 \\ 3 & 3 & 6 & 6 \\ 4 & -4 & 8 & -8 \end{matrix} \right]$; $C\cdot A=\left[ \begin{matrix} 1 & 1 & 2 & 2 \\ 1 & -1 & 2 & -2 \end{matrix} \right]$; $B\cdot C=\left[ \begin{matrix} 1 & 0 \\ 0 & 2 \\ 3 & 0 \\ 0 & 4 \end{matrix} \right]$.

    In general, I highly recommend doing this task yourself, along with one more similar task from the homework. These seemingly simple exercises will help you practice all the key stages of matrix multiplication.

    But the story doesn't end there. Let's move on to special cases of multiplication. :)

    Row vectors and column vectors

    One of the most common matrix operations is multiplication by a matrix that has one row or one column.

    Definition. A column vector is a matrix of size $\left[ m\times 1 \right]$, i.e. consisting of several rows and only one column.

    A row vector is a matrix of size $\left[ 1\times n \right]$, i.e. consisting of one row and several columns.

    In fact, we have already encountered these objects. For example, an ordinary three-dimensional vector from stereometry, $\vec{a}=\left( x;y;z \right)$, is nothing more than a row vector. From a theoretical point of view, there is almost no difference between rows and columns. You only need to be careful when matching them with the surrounding factor matrices.

    Task 5. Do the multiplication:

    \[\left[ \begin{matrix} 2 & -1 & 3 \\ 4 & 2 & 0 \\ -1 & 1 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 \\ 2 \\ -1 \end{matrix} \right]\]

    Solution. Here we have a product of consistent matrices: $\left[ 3\times 3 \right]\cdot \left[ 3\times 1 \right]=\left[ 3\times 1 \right]$. Let us find this product:

    \[\left[ \begin{matrix} 2 & -1 & 3 \\ 4 & 2 & 0 \\ -1 & 1 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 \\ 2 \\ -1 \end{matrix} \right]=\left[ \begin{matrix} 2\cdot 1+\left( -1 \right)\cdot 2+3\cdot \left( -1 \right) \\ 4\cdot 1+2\cdot 2+0\cdot \left( -1 \right) \\ -1\cdot 1+1\cdot 2+1\cdot \left( -1 \right) \end{matrix} \right]=\left[ \begin{matrix} -3 \\ 8 \\ 0 \end{matrix} \right]\]

    Answer: $\left[ \begin{matrix} -3 \\ 8 \\ 0 \end{matrix} \right]$.
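
    The matrix-times-column case is worth a sketch of its own in Python (an illustration added here, using the data of Task 5; `mat_vec` is our own name):

```python
# [3 x 3] times [3 x 1]: the result is again a column of height 3
def mat_vec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A = [[2, -1, 3], [4, 2, 0], [-1, 1, 1]]
x = [1, 2, -1]
print(mat_vec(A, x))  # [-3, 8, 0]
```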

    Task 6. Do the multiplication:

    \[\left[ \begin{matrix} 1 & 2 & -3 \end{matrix} \right]\cdot \left[ \begin{matrix} 3 & 1 & -1 \\ 4 & -1 & 3 \\ 2 & 6 & 0 \end{matrix} \right]\]

    Solution. Again everything is consistent: $\left[ 1\times 3 \right]\cdot \left[ 3\times 3 \right]=\left[ 1\times 3 \right]$. We compute the product:

    \[\left[ \begin{matrix} 1 & 2 & -3 \end{matrix} \right]\cdot \left[ \begin{matrix} 3 & 1 & -1 \\ 4 & -1 & 3 \\ 2 & 6 & 0 \end{matrix} \right]=\left[ \begin{matrix} 5 & -19 & 5 \end{matrix} \right]\]

    Answer: $\left[ \begin{matrix} 5 & -19 & 5 \end{matrix} \right]$.

    As you can see, when we multiply a row vector or a column vector by a square matrix, the output is always a row or column of the same size. This fact has many applications, from solving linear equations to all sorts of coordinate transformations (which ultimately also come down to systems of equations, but let us not talk about sad things).

    I think everything was obvious here. Let's move on to the final part of today's lesson.

    Matrix exponentiation

    Among all the multiplication operations, exponentiation deserves special attention - this is when we multiply the same object by itself several times. Matrices are no exception; they can also be raised to various powers.

    Such products are always consistent:

    \[\left[ n\times n \right]\cdot \left[ n\times n \right]=\left[ n\times n \right]\]

    And they are denoted in exactly the same way as ordinary powers:

    \[\begin{aligned} & A\cdot A=A^{2}; \\ & A\cdot A\cdot A=A^{3}; \\ & \underbrace{A\cdot A\cdot \ldots \cdot A}_{n}=A^{n}. \end{aligned}\]

    At first glance, everything is simple. Let's see what this looks like in practice:

    Task 7. Raise the matrix to the indicated power:

    ${\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{3}$

    Solution. Well, okay, let us raise it. First, let us square it:

    \[\begin{aligned} {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{2} &= \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 1\cdot 1+1\cdot 0 & 1\cdot 1+1\cdot 1 \\ 0\cdot 1+1\cdot 0 & 0\cdot 1+1\cdot 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 1 & 2 \\ 0 & 1 \end{matrix} \right] \end{aligned}\]

    \[\begin{aligned} {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{3} &= {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{2}\cdot \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 1 & 2 \\ 0 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 1 & 3 \\ 0 & 1 \end{matrix} \right] \end{aligned}\]

    That's all. :)

    Answer: $\left[ \begin{matrix} 1 & 3 \\ 0 & 1 \end{matrix} \right]$.

    Problem 8. Raise the matrix to the indicated power:

    \[{\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{10}\]

    Solution. Just do not start crying now about the power being too big, the world being unfair, and the teachers having gone completely overboard. It is actually easy:

    \[\begin{aligned} {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{10} &= {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{3}\cdot {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{3}\cdot {\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{3}\cdot \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]= \\ &= \left( \left[ \begin{matrix} 1 & 3 \\ 0 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 3 \\ 0 & 1 \end{matrix} \right] \right)\cdot \left( \left[ \begin{matrix} 1 & 3 \\ 0 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right] \right)= \\ &= \left[ \begin{matrix} 1 & 6 \\ 0 & 1 \end{matrix} \right]\cdot \left[ \begin{matrix} 1 & 4 \\ 0 & 1 \end{matrix} \right]= \\ &= \left[ \begin{matrix} 1 & 10 \\ 0 & 1 \end{matrix} \right] \end{aligned}\]

    Notice that in the second line we used the associativity of multiplication. In fact, we used it in the previous problem as well, but there it was implicit.

    Answer: $\left[ \begin{matrix} 1 & 10 \\ 0 & 1 \end{matrix} \right]$.
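    The grouping used above is exactly the idea behind exponentiation by squaring: associativity lets us regroup the factors however we like. Here is a minimal Python sketch of that trick (the function names are my own, not part of the lesson):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, p):
    """Raise A to a non-negative integer power by repeated squaring."""
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity E
    while p > 0:
        if p % 2 == 1:           # odd bit: fold the current power into the result
            result = mat_mul(result, A)
        A = mat_mul(A, A)        # square A at each step
        p //= 2
    return result

print(mat_pow([[1, 1], [0, 1]], 10))  # [[1, 10], [0, 1]]
```

    This needs only about $\log_2 p$ squarings instead of $p-1$ multiplications, which matters for large exponents.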

    As you can see, there is nothing complicated about raising a matrix to a power. The last example can be generalized:

    \[{\left[ \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right]}^{n}=\left[ \begin{matrix} 1 & n \\ 0 & 1 \end{matrix} \right]\]

    This fact is easy to prove by mathematical induction or by direct multiplication. However, it is not always possible to spot such patterns when raising to a power. Therefore, be careful: often multiplying several matrices “head-on” turns out to be easier and faster than hunting for some kind of pattern.
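    The closed form above can also be checked numerically for the first few exponents. A short sketch, assuming matrices stored as plain nested lists (the helper name is mine):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
P = A
for n in range(2, 11):
    P = mat_mul(P, A)               # P is now A^n
    assert P == [[1, n], [0, 1]]    # agrees with the closed form
print(P)  # [[1, 10], [0, 1]]
```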

    In general, do not look for higher meaning where there is none. Finally, let's raise a larger matrix to a power: a $3\times 3$ one.

    Problem 9. Raise the matrix to the indicated power:

    \[{\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]}^{3}\]

    Solution. Let's not look for patterns here; we just compute head-on:

    \[{\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]}^{3}={\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]}^{2}\cdot \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]\]

    First, let's square this matrix:

    \[\begin{aligned} {\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]}^{2} &= \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]\cdot \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right] \\ &= \left[ \begin{matrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{matrix} \right] \end{aligned}\]

    Now let's cube it:

    \[\begin{aligned} {\left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right]}^{3} &= \left[ \begin{matrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{matrix} \right]\cdot \left[ \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{matrix} \right] \\ &= \left[ \begin{matrix} 2 & 3 & 3 \\ 3 & 2 & 3 \\ 3 & 3 & 2 \end{matrix} \right] \end{aligned}\]

    That's it. The problem is solved.

    Answer: $\left[ \begin{matrix} 2 & 3 & 3 \\ 3 & 2 & 3 \\ 3 & 3 & 2 \end{matrix} \right]$.

    As you can see, the volume of calculations has become larger, but the meaning has not changed at all. :)
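    When the volume of hand arithmetic grows like this, it is worth double-checking the result. A quick sketch, assuming NumPy is available (the lesson itself works entirely by hand):

```python
import numpy as np

# The 3x3 matrix from Problem 9: zeros on the diagonal, ones elsewhere.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

A2 = A @ A    # square: 2 on the diagonal, 1 elsewhere
A3 = A2 @ A   # cube:   2 on the diagonal, 3 elsewhere
print(A3)
```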

    This concludes the lesson. Next time we will consider the inverse operation: given the product, we will look for the original factors.

    As you have probably guessed, we will talk about the inverse matrix and methods of finding it.

    We will sequentially “exclude” the unknowns. To do this, we will leave the first equation of the system unchanged, and transform the second and third:

    1) to the second equation we add the first one multiplied by $-2$, bringing it to the form $-3x_2 - 2x_3 = -2$;

    2) to the third equation we add the first one multiplied by $-4$, bringing it to the form $-3x_2 - 4x_3 = 2$.

    As a result, the unknown $x_1$ is eliminated from the second and third equations, and the system takes the form

    Multiplying the second and third equations of the system by $-1$, we get

    The coefficient 1 of the first unknown $x_1$ in the first equation is called the leading element of the first elimination step.

    In the second step, the first and second equations remain unchanged, and the same method is applied to eliminate the variable $x_2$ from the third equation. The leading element of the second step is the coefficient 3. Adding to the third equation the second one multiplied by $-1$, we transform the system to the form

    (1.2)

    The process of reducing system (1.1) to form (1.2) is called the forward pass of the Gauss method.

    The procedure for solving system (1.2) is called the backward pass (back substitution). From the last equation we get $x_3 = -2$. Substituting this value into the second equation, we get $x_2 = 2$. The first equation then gives $x_1 = 1$. Thus, $(x_1,\, x_2,\, x_3) = (1,\, 2,\, -2)$ is the solution of system (1.1).
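    The backward pass translates directly into a small routine for any upper-triangular system. The matrix below is illustrative: the original display of system (1.2) did not survive in this copy, and only its last two equations ($3x_2 + 2x_3 = 2$ and $2x_3 = -4$) follow from the text, so the first row here is a hypothetical one chosen to reproduce $x_1 = 1$:

```python
def back_substitution(U, b):
    """Solve U x = b for an upper-triangular U with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # terms already known
        x[i] = (b[i] - s) / U[i][i]
    return x

# Illustrative triangular system (first row hypothetical):
#   x1 + x2 + x3 = 1,   3*x2 + 2*x3 = 2,   2*x3 = -4
U = [[1, 1, 1],
     [0, 3, 2],
     [0, 0, 2]]
b = [1, 2, -4]
print(back_substitution(U, b))  # [1.0, 2.0, -2.0]
```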


    Matrix concept

    Let us consider the quantities involved in system (1.1). The set of nine numerical coefficients of the unknowns in the equations forms a table of numbers called a matrix:

    A= . (1.3)

    The numbers in the table are called the elements of the matrix. The elements form the rows and columns of the matrix. The number of rows and the number of columns define the dimension of the matrix. Matrix A has dimension 3×3 (“three by three”), where the first number indicates the number of rows and the second the number of columns. A matrix is often denoted together with its dimension: A(3×3). Since the matrix A has the same number of rows as columns, it is called square. The number of rows (and columns) of a square matrix is called its order, so A is a matrix of third order.



    The right-hand sides of the equations also form a table of numbers, i.e. a matrix:

    Each row of this matrix consists of a single element, so B(3×1) is called a column matrix; its dimension is 3×1. The set of unknowns can also be represented as a column matrix:

    Multiplying a square matrix by a column matrix

    Various operations can be performed on matrices; they will be discussed in detail later. Here we analyze only the rule for multiplying a square matrix by a column matrix. By definition, the result of multiplying the matrix A(3×3) by the column B(3×1) is the column D(3×1), whose elements are equal to the sums of the products of the elements of the rows of A and the elements of the column B:

    2) the second element of the column D is equal to the sum of the products of the elements of the second row of the matrix A and the elements of the column B:

    From these formulas it is clear that multiplying a matrix by a column B is possible only if the number of columns of the matrix A equals the number of elements in the column B.
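    The rule $d_i = a_{i1}b_1 + a_{i2}b_2 + a_{i3}b_3$ translates directly into code. The numbers below are my own illustration, not those of the lesson's Examples 1.1 and 1.2 (whose matrices were lost in this copy):

```python
def mat_vec(A, v):
    """Multiply a square matrix by a column: d_i = sum_j a_ij * v_j."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

# Illustrative 3x3 matrix and 3x1 column (not the lesson's examples):
A = [[1, 2, 0],
     [0, 1, 3],
     [2, 0, 1]]
v = [1, 2, 3]
print(mat_vec(A, v))  # [5, 11, 5]
```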

    Let's look at two more numerical examples of multiplying a (3×3) matrix by a (3×1) column:

    Example 1.1

    AB = .

    Example 1.2

    AB= .