Linear algebra. Matrices and determinants. Algorithm for constructing an inverse matrix


    To a square matrix A of order n one can associate a number det A (also written |A| or Δ), called its determinant, as follows:

    The rule for calculating the determinant of a matrix of order n is quite difficult to grasp and apply directly. However, methods are known that reduce the calculation of high-order determinants to determinants of lower orders. One of these methods is based on expanding the determinant along the elements of a certain row or column (Property 7). At the same time, note that it is worth being able to calculate determinants of low orders (1, 2, 3) directly from the definition.

    The calculation of the 2nd order determinant is illustrated by the diagram:


    Example 4.1. Find determinants of matrices

    When calculating a 3rd-order determinant, it is convenient to use the triangle rule (or Sarrus' rule), which can be written symbolically as follows:

    Example 4.2. Calculate the determinant of a matrix

    det A = 5·1·(−3) + (−2)·(−4)·6 + 3·0·1 − 6·1·1 − 3·(−2)·(−3) − 0·(−4)·5 = −15 + 48 − 6 − 18 = 48 − 39 = 9.
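The matrix of Example 4.2 is not reproduced in this text, but its entries can be recovered from the six products in the expansion above. A quick check of the Sarrus computation, assuming that reconstructed matrix:

```python
import numpy as np

# Matrix reconstructed from the products in Example 4.2 (the original
# figure is missing from this text, so this is an inferred matrix).
A = np.array([[5.0, -2.0,  1.0],
              [3.0,  1.0, -4.0],
              [6.0,  0.0, -3.0]])

def det3_sarrus(m):
    # Sarrus' rule: three "main diagonal" products with plus signs,
    # three "secondary diagonal" products with minus signs.
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][2]*m[1][1]*m[2][0] - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

print(det3_sarrus(A))            # 9.0
print(round(np.linalg.det(A)))   # 9
```

Both the hand rule and the library routine agree with the value 9 obtained above.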

    Let us formulate the basic properties shared by determinants of all orders. We will illustrate some of them using third-order determinants.

    Property 1 (“Equality of rows and columns”). The determinant does not change if its rows are replaced by the corresponding columns, and vice versa. In other words,

    In what follows, properties stated for rows hold equally for columns, and vice versa; we will therefore speak simply of the rows (columns) of a determinant.

    Property 2. When two rows (or two columns) are interchanged, the determinant changes sign.

    Property 3. A determinant with two identical rows (columns) is equal to zero.

    Property 4. A common factor of the elements of any row (column) of a determinant can be taken outside the determinant sign.

    From Properties 3 and 4 it follows that if all elements of some row (column) are proportional to the corresponding elements of another row (column), then the determinant is equal to zero.

    Indeed,

    Property 5. If each element of some row (column) of a determinant is a sum of two terms, then the determinant can be decomposed into the sum of two corresponding determinants.

    For example,

    Property 6 (“Elementary transformations of the determinant”). The determinant does not change if to the elements of one row (column) we add the corresponding elements of another row (column) multiplied by any number.

    Example 4.3. Prove that

    Solution: Indeed, using Properties 5, 4 and 3 we obtain

    Further properties of determinants are related to the concepts of minor and algebraic complement.

    The minor of an element a_ij of an n-th order determinant is the determinant of order n−1 obtained from the original by deleting the row and column at whose intersection the chosen element lies. It is denoted m_ij.

    The algebraic complement of an element a_ij of a determinant is its minor taken with a plus sign if the sum i+j is even, and with a minus sign if this sum is odd. It is denoted A_ij:

    Property 7 (“Expansion of a determinant along a row or column”). The determinant is equal to the sum of the products of the elements of any row (column) and their corresponding algebraic complements.
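Property 7 gives a simple recursive way to compute a determinant of any order. A minimal sketch in Python, expanding along the first row (fine for small matrices, though exponential-time in general):

```python
# Cofactor expansion along the first row:
# det A = sum over j of a_1j * (-1)^(1+j) * M_1j.
def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        # With 0-based j, the cofactor sign (-1)^(1+(j+1)) reduces to (-1)^j.
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[2, 5], [9, 4]]))                       # -37
print(det([[5, -2, 1], [3, 1, -4], [6, 0, -3]]))   # 9
```

The 2x2 value matches the worked example below, and the 3x3 value matches Example 4.2.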

    Here we will outline those properties that are usually used to calculate determinants in a standard course of higher mathematics. This is an auxiliary topic that we will refer to from other sections as necessary.

    So, suppose a square matrix $A_{n\times n}=\left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array} \right)$ is given. Every square matrix has a characteristic called its determinant. I will not go into the essence of this concept here. If it requires clarification, please write about it on the forum, and I will cover this question in more detail.

    The determinant of the matrix $A$ is denoted $\Delta A$, $|A|$, or $\det A$. The order of a determinant is equal to the number of rows (columns) in it.

    1. The value of the determinant will not change if its rows are replaced by the corresponding columns, i.e. $\Delta A=\Delta A^T$.

      Example of using this property:

      Consider the determinant $\left| \begin{array}{cc} 2 & 5 \\ 9 & 4 \end{array} \right|$ and replace its rows with columns according to the principle: the first row becomes the first column, the second row becomes the second column:

      Let's calculate the resulting determinant: $\left| \begin{array}{cc} 2 & 9 \\ 5 & 4 \end{array} \right|=2\cdot 4-9\cdot 5=-37$. As you can see, the value of the determinant has not changed under the replacement.

    2. If you swap two rows (columns) of the determinant, the sign of the determinant will change to the opposite.

      Example of using this property:

      Consider the determinant $\left| \begin{array}{cc} 2 & 5 \\ 9 & 4 \end{array} \right|$. Let's find its value using formula No. 1 from the topic on calculating determinants of the second and third orders:

      $$\left| \begin{array}{cc} 2 & 5 \\ 9 & 4 \end{array} \right|=2\cdot 4-5\cdot 9=-37.$$

      Now let's swap the first and second rows; we obtain the determinant $\left| \begin{array}{cc} 9 & 4 \\ 2 & 5 \end{array} \right|=9\cdot 5-4\cdot 2=37$. So, the value of the original determinant was $-37$, and the value of the determinant with the rows interchanged is $-(-37)=37$: the sign has changed to the opposite.
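Both properties can be checked numerically on the same 2x2 matrix, a quick sketch using NumPy:

```python
import numpy as np

# The 2x2 determinant from the example above.
A = np.array([[2.0, 5.0], [9.0, 4.0]])

# Property 1: transposing does not change the determinant.
assert round(np.linalg.det(A)) == round(np.linalg.det(A.T)) == -37

# Property 2: swapping the two rows flips the sign.
B = A[[1, 0], :]
assert round(np.linalg.det(B)) == 37
print("sign flipped as expected")
```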

    3. A determinant for which all elements of a row (column) are equal to zero is equal to zero.

      Example of using this property:

      Since in the determinant $\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 0\\ 2 & -3 & 0 \end{array} \right|$ all elements of the third column are zero, the determinant is zero, i.e. $\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 0\\ 2 & -3 & 0 \end{array} \right|=0$.

    4. The determinant in which all elements of a certain row (column) are equal to the corresponding elements of another row (column) is equal to zero.

      Example of using this property:

      Since in the determinant $\left| \begin{array}{ccc} -7 & 10 & 0\\ -7 & 10 & 0\\ 2 & -3 & 18 \end{array} \right|$ all elements of the first row are equal to the corresponding elements of the second row, the determinant is equal to zero, i.e. $\left| \begin{array}{ccc} -7 & 10 & 0\\ -7 & 10 & 0\\ 2 & -3 & 18 \end{array} \right|=0$.

    5. If in a determinant all elements of one row (column) are proportional to the corresponding elements of another row (column), then such a determinant is equal to zero.

      Example of using this property:

      Since in the determinant $\left| \begin{array}{ccc} -7 & 10 & 28\\ 5 & -3 & 0\\ -15 & 9 & 0 \end{array} \right|$ the second and third rows are proportional ($r_3=-3\cdot r_2$), the determinant is equal to zero: $\left| \begin{array}{ccc} -7 & 10 & 28\\ 5 & -3 & 0\\ -15 & 9 & 0 \end{array} \right|=0$.

    6. If all elements of a row (column) have a common factor, then this factor can be taken out of the determinant sign.

      Example of using this property:

      Consider the determinant $\left| \begin{array}{cc} -7 & 10 \\ -9 & 21 \end{array} \right|$. Notice that all elements of the second row are divisible by 3:

      $$\left| \begin{array}{cc} -7 & 10 \\ -9 & 21 \end{array} \right|=\left| \begin{array}{cc} -7 & 10 \\ 3\cdot(-3) & 3\cdot 7 \end{array} \right|$$

      The number 3 is the common factor of all elements of the second row. Let's take the three out of the determinant sign:

      $$\left| \begin{array}{cc} -7 & 10 \\ -9 & 21 \end{array} \right|=\left| \begin{array}{cc} -7 & 10 \\ 3\cdot(-3) & 3\cdot 7 \end{array} \right|= 3\cdot \left| \begin{array}{cc} -7 & 10 \\ -3 & 7 \end{array} \right| $$

    7. The determinant will not change if to all the elements of a certain row (column) we add the corresponding elements of another row (column), multiplied by an arbitrary number.

      Example of using this property:

      Consider the determinant $\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 4 \\ 2 & -3 & 1 \end{array} \right|$. Let's add to the elements of the second row the corresponding elements of the third row multiplied by 5. This operation is written as $r_2+5\cdot r_3$. The second row changes; the remaining rows stay the same.

      $$\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 4 \\ 2 & -3 & 1 \end{array} \right| \begin{array}{l} \phantom{0}\\ r_2+5\cdot r_3\\ \phantom{0} \end{array}= \left| \begin{array}{ccc} -7 & 10 & 0\\ -9+5\cdot 2 & 21+5\cdot (-3) & 4+5\cdot 1 \\ 2 & -3 & 1 \end{array} \right|= \left| \begin{array}{ccc} -7 & 10 & 0\\ 1 & 6 & 9 \\ 2 & -3 & 1 \end{array} \right|. $$
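A numerical check of this row operation, sketched with NumPy: the determinant is the same before and after $r_2+5\cdot r_3$.

```python
import numpy as np

# The 3x3 determinant from the example above.
A = np.array([[-7.0, 10.0, 0.0],
              [-9.0, 21.0, 4.0],
              [ 2.0, -3.0, 1.0]])

B = A.copy()
B[1] += 5 * B[2]   # r_2 + 5*r_3

# Property 7: the determinant is unchanged by this elementary operation.
assert abs(np.linalg.det(A) - np.linalg.det(B)) < 1e-9
print(round(np.linalg.det(A)), round(np.linalg.det(B)))   # -61 -61
```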

    8. If a certain row (column) in a determinant is a linear combination of other rows (columns), then the determinant is equal to zero.

      Example of using this property:

      Let me first explain what the phrase “linear combination” means. Suppose we have $s$ rows (or columns) $A_1$, $A_2$, ..., $A_s$. The expression

      $$ k_1\cdot A_1+k_2\cdot A_2+\ldots+k_s\cdot A_s, $$

      where $k_i\in \mathbb{R}$, is called a linear combination of the rows (columns) $A_1$, $A_2$, ..., $A_s$.

      For example, consider the following determinant:

      $$\left| \begin{array}{cccc} -1 & 2 & 3 & 0\\ -2 & -4 & -5 & 1\\ 5 & 0 & 7 & 10 \\ -13 & -8 & -16 & -7 \end{array} \right| $$

      In this determinant, the fourth row can be expressed as a linear combination of the first three rows:

      $$ r_4=2\cdot(r_1)+3\cdot(r_2)-r_3 $$

      Therefore, the determinant in question is equal to zero.
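This can be confirmed numerically, a short NumPy sketch:

```python
import numpy as np

# The 4x4 determinant from the example above.
A = np.array([[ -1.0,  2.0,   3.0,  0.0],
              [ -2.0, -4.0,  -5.0,  1.0],
              [  5.0,  0.0,   7.0, 10.0],
              [-13.0, -8.0, -16.0, -7.0]])

# r4 is the linear combination 2*r1 + 3*r2 - r3, so the determinant is zero.
assert np.allclose(A[3], 2 * A[0] + 3 * A[1] - A[2])
assert abs(np.linalg.det(A)) < 1e-9
print("det =", round(np.linalg.det(A)))   # det = 0
```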

    9. If each element of a certain k-th row (k-th column) of a determinant is the sum of two terms, then the determinant equals the sum of two determinants: in the first, the k-th row (column) contains the first terms, and in the second, the second terms. The remaining elements of the two determinants are the same as in the original.

      Example of using this property: show\hide

      Consider the determinant $\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 4 \\ 2 & -3 & 1 \end{array} \right|$. Let's write the elements of the second column as sums: $\left| \begin{array}{ccc} -7 & 3+7 & 0\\ -9 & 21+0 & 4 \\ 2 & 5+(-8) & 1 \end{array} \right|$. Then the determinant is equal to the sum of two determinants:

      $$\left| \begin{array}{ccc} -7 & 10 & 0\\ -9 & 21 & 4 \\ 2 & -3 & 1 \end{array} \right|= \left| \begin{array}{ccc} -7 & 3+7 & 0\\ -9 & 21+0 & 4 \\ 2 & 5+(-8) & 1 \end{array} \right|= \left| \begin{array}{ccc} -7 & 3 & 0\\ -9 & 21 & 4 \\ 2 & 5 & 1 \end{array} \right|+ \left| \begin{array}{ccc} -7 & 7 & 0\\ -9 & 0 & 4 \\ 2 & -8 & 1 \end{array} \right| $$

    10. The determinant of the product of two square matrices of the same order is equal to the product of the determinants of these matrices, i.e. $\det(A\cdot B)=\det A\cdot \det B$. From this rule we can obtain the following formula: $\det \left(A^n \right)=\left(\det A \right)^n$.
    11. If the matrix $A$ is non-singular (i.e. its determinant is not equal to zero), then $\det \left(A^{-1}\right)=\frac{1}{\det A}$.
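Properties 10 and 11 are easy to check numerically. A sketch with two arbitrary 3x3 integer matrices (my own examples, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0], [3.0, -1.0, 4.0], [0.0, 2.0, 5.0]])
B = np.array([[1.0, 0.0, 2.0], [-2.0, 3.0, 1.0], [4.0, 1.0, 0.0]])

# Property 10: det(A*B) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Property 11: A is non-singular here, so det(A^-1) = 1 / det(A)
dA = np.linalg.det(A)
assert not np.isclose(dA, 0)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / dA)
print("properties 10 and 11 verified")
```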

    Formulas for calculating determinants

    For determinants of the second and third orders, the following formulas are correct:

    \begin{equation} \Delta A=\left| \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right|=a_{11}\cdot a_{22}-a_{12}\cdot a_{21} \end{equation} \begin{equation} \begin{aligned} & \Delta A=\left| \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right|= a_{11}\cdot a_{22}\cdot a_{33}+a_{12}\cdot a_{23}\cdot a_{31}+a_{21}\cdot a_{32}\cdot a_{13}-\\ & -a_{13}\cdot a_{22}\cdot a_{31}-a_{12}\cdot a_{21}\cdot a_{33}-a_{23}\cdot a_{32}\cdot a_{11} \end{aligned} \end{equation}

    Examples of using formulas (1) and (2) are in the topic "Formulas for calculating determinants of the second and third orders. Examples of calculating determinants".

    The determinant of the matrix $A_(n\times n)$ can be expanded in the i-th row using the following formula:

    \begin{equation}\Delta A=\sum\limits_{j=1}^{n}a_{ij}A_{ij}=a_{i1}A_{i1}+a_{i2}A_{i2}+\ldots+a_{in}A_{in} \end{equation}

    An analogue of this formula also exists for columns. The formula for expanding the determinant in the jth column is as follows:

    \begin{equation}\Delta A=\sum\limits_{i=1}^{n}a_{ij}A_{ij}=a_{1j}A_{1j}+a_{2j}A_{2j}+\ldots+a_{nj}A_{nj} \end{equation}

    The rules expressed by formulas (3) and (4) are illustrated in detail with examples and explained in the topic Reducing the order of the determinant. Decomposition of the determinant in a row (column).

    Let us give one more formula, for calculating the determinants of upper triangular and lower triangular matrices (for an explanation of these terms, see the topic “Matrices. Types of matrices. Basic terms”). The determinant of such a matrix is equal to the product of the elements on the main diagonal. Examples:

    \begin{aligned} &\left| \begin{array}{cccc} 2 & -2 & 9 & 1 \\ 0 & 9 & 8 & 0 \\ 0 & 0 & 4 & -7 \\ 0 & 0 & 0 & -6 \end{array} \right|= 2\cdot 9\cdot 4\cdot (-6)=-432.\\ &\left| \begin{array}{cccc} -3 & 0 & 0 & 0 \\ -5 & 0 & 0 & 0 \\ 8 & 2 & 1 & 0 \\ 5 & 4 & 0 & 10 \end{array} \right|= -3\cdot 0\cdot 1 \cdot 10=0. \end{aligned}
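The triangular-matrix rule can be checked directly, a short NumPy sketch using the first matrix above:

```python
import numpy as np

# The upper triangular matrix from the first example above.
U = np.array([[2.0, -2.0, 9.0,  1.0],
              [0.0,  9.0, 8.0,  0.0],
              [0.0,  0.0, 4.0, -7.0],
              [0.0,  0.0, 0.0, -6.0]])

# For a triangular matrix, det = product of the main-diagonal entries.
assert round(np.linalg.det(U)) == round(np.prod(np.diag(U))) == -432
print("triangular determinant verified")
```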

    The main numerical characteristic of a square matrix is ​​its determinant. Consider a second order square matrix

    A second-order determinant is a number calculated according to the following rule:

    For example,

    Let us now consider a third-order square matrix


    A third-order determinant is a number calculated using the following rule

    To remember the combination of terms in the expression for the third-order determinant, one usually uses Sarrus' rule: the first of the three terms entering the right-hand side with a plus sign is the product of the elements on the main diagonal of the matrix, and each of the other two is the product of the elements lying on a line parallel to this diagonal and the element in the opposite corner of the matrix.

    The last three terms, included with a minus sign, are determined in a similar way, only with respect to the secondary diagonal.

    Example:

    Basic properties of matrix determinants

    1. The value of the determinant does not change when the matrix is ​​transposed.

    2. When two rows or two columns of the matrix are interchanged, the determinant only changes sign, keeping its absolute value.

    3. The determinant containing proportional rows or columns is equal to zero.

    4. The common factor of the elements of a certain row or column can be taken out of the determinant sign.

    5. If all elements of a certain row or column are equal to zero, then the determinant itself is equal to zero.

    6. If to the elements of a certain row or column of a determinant we add the elements of another row or column multiplied by an arbitrary factor, then the value of the determinant will not change.

    A minor of a matrix is a determinant obtained from a square matrix by deleting an equal number of rows and columns.

    If all minors of order higher than k that can be formed from a matrix are equal to zero, while among the minors of order k at least one is nonzero, then the number k is called the rank of this matrix.

    The algebraic complement of an element of an n-th order determinant is its minor of order n−1, obtained by crossing out the row and column at whose intersection the element lies, taken with a plus sign if the sum of the element's indices is even and with a minus sign otherwise.

    Thus,

    A_ij = (−1)^(i+j) · m_ij,

    where m_ij is the corresponding minor of order n−1.

    Calculating the determinant of a matrix by row or column expansion

    The determinant of a matrix is equal to the sum of the products of the elements of any row (or any column) of the matrix by the corresponding algebraic complements of the elements of that row (column). When calculating the determinant of a matrix this way, be guided by the following rule: select the row or column with the largest number of zero elements. This significantly reduces the amount of calculation.

    Example:

    When calculating this determinant, we used the technique of expanding it along the elements of the first column. As can be seen from the above formula, there is no need to calculate the last of the second-order determinants, because it is multiplied by zero.

    Calculating the inverse matrix

    When solving matrix equations, the inverse matrix is ​​widely used. To a certain extent, it replaces the division operation, which is not explicitly present in matrix algebra.

    Square matrices of the same order whose product is the identity matrix are called mutually inverse. The matrix inverse to A is denoted A^(-1), and the following holds for it: A·A^(-1) = A^(-1)·A = E, where E is the identity matrix.

    The inverse matrix can be calculated only for a matrix whose determinant is nonzero (det A ≠ 0).

    Classic algorithm for calculating the inverse matrix

    1. Write down the transpose of the matrix A.

    2. Replace each element of this matrix with the determinant obtained by crossing out the row and column at whose intersection that element lies.

    3. Give this determinant a plus sign if the sum of the indices of the element is even, and a minus sign otherwise.

    4. Divide the resulting matrix by the determinant of the original matrix A.
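A minimal Python sketch of these four steps for small matrices (the helper names `minor`, `det`, and `inverse` are mine, not from the text):

```python
# Minor: the submatrix with row i and column j deleted.
def minor(m, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

# Determinant via cofactor expansion along the first row.
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    d = det(m)
    if d == 0:
        raise ValueError("matrix is singular, no inverse exists")
    n = len(m)
    t = [[m[j][i] for j in range(n)] for i in range(n)]   # step 1: transpose
    cof = [[(-1) ** (i + j) * det(minor(t, i, j))         # steps 2-3: signed minors
            for j in range(n)] for i in range(n)]
    return [[cof[i][j] / d for j in range(n)] for i in range(n)]  # step 4: divide

A = [[2, 5], [1, 3]]          # det = 1
print(inverse(A))             # [[3.0, -5.0], [-1.0, 2.0]]
```

Multiplying the result by A gives the identity matrix, as required by the definition above.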


    I remember I didn’t like algebra until the 8th grade. I didn’t like it at all. It infuriated me, because I didn’t understand anything in it.

    And then everything changed because I discovered one trick:

    In mathematics in general (and algebra in particular) everything is built on a competent and consistent system of definitions. If you know the definitions, understand their essence, it won’t be difficult to figure out the rest.

    That's how it is with the topic of today's lesson. We will consider in detail several related issues and definitions, thanks to which you will once and for all understand matrices, determinants, and all their properties.

    Determinants are a central concept in matrix algebra. Like abbreviated multiplication formulas, they will haunt you throughout the course of higher mathematics. Therefore, we read, watch and understand thoroughly. :)

    And we will start with the most intimate thing - what is a matrix? And how to work with it correctly.

    Correct placement of indices in the matrix

    A matrix is ​​simply a table filled with numbers. Neo has nothing to do with it.

    One of the key characteristics of a matrix is ​​its dimension, i.e. the number of rows and columns it consists of. We usually say that a certain matrix $A$ has size $\left[ m\times n \right]$ if it has $m$ rows and $n$ columns. Write it like this:

    Or like this:

    There are other notations - it all depends on the preferences of the lecturer/seminar instructor/textbook author. But in any case, with all these $\left[ m\times n \right]$ and $a_{ij}$ the same problem arises:

    Which index is responsible for what? Does the row number come first, then the column number? Or vice versa?

    When reading lectures and textbooks, the answer will seem obvious. But when in an exam you only have a sheet of paper with a task in front of you, you can get overexcited and suddenly become confused.

    So let's settle this issue once and for all. To begin with, let’s remember the usual coordinate system from a school mathematics course:

    Introduction of a coordinate system on a plane

    Remember it? There is an origin (the point $O=\left(0;0 \right)$), the $x$ and $y$ axes, and each point on the plane is uniquely determined by its coordinates: $A=\left(1;2 \right)$, $B=\left(3;1 \right)$, etc.

    Now let's take this construction and place it next to the matrix so that the origin of coordinates is in the upper left corner. Why there? Yes, because when opening a book, we start reading precisely from the upper left corner of the page - remembering this is easy.

    But where should the axes be directed? We will direct them so that our entire virtual “page” is covered by these axes. True, for this we will have to rotate our coordinate system. The only possible option for this arrangement is:

    Overlaying a coordinate system on a matrix

    Now every cell of the matrix has unique coordinates $x$ and $y$. For example, writing $a_{24}$ means that we are accessing the element with coordinates $x=2$ and $y=4$. The dimensions of the matrix are also uniquely specified by a pair of numbers:

    Defining indices in a matrix

    Just look at this picture carefully. Play around with coordinates (especially when you work with real matrices and determinants) - and very soon you will understand that even in the most complex theorems and definitions you understand perfectly well what is being said.

    Got it? Well, let's move on to the first step of enlightenment - the geometric definition of the determinant. :)

    Geometric definition

    First of all, I would like to note that the determinant exists only for square matrices of the form $\left[ n\times n \right]$. A determinant is a number that is calculated according to certain rules and is one of the characteristics of this matrix (there are other characteristics: rank, eigenvectors, but more on that in other lessons).

    So what is this characteristic? What does it mean? It's simple:

    The determinant of a square matrix $A=\left[ n\times n \right]$ is the volume of an $n$-dimensional parallelepiped that is formed if we consider the rows of the matrix as vectors forming the edges of this parallelepiped.

    For example, the determinant of a 2x2 matrix is ​​simply the area of ​​a parallelogram, but for a 3x3 matrix it is already the volume of a 3-dimensional parallelepiped - the same one that infuriates all high school students in stereometry lessons.

    At first glance, this definition may seem completely inadequate. But let's not rush to conclusions - let's look at examples. In fact, everything is elementary, Watson:

    Task. Find the determinants of the matrices:

    \[\left| \begin{matrix} 1 & 0 \\ 0 & 3 \\ \end{matrix} \right|\quad \left| \begin{matrix} 1 & -1 \\ 2 & 2 \\ \end{matrix} \right|\quad \left| \begin{matrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 1 & 1 & 4 \\ \end{matrix} \right|\]

    Solution. The first two determinants have size 2x2. So these are simply the areas of parallelograms. Let's draw them and calculate the area.

    The first parallelogram is built on the vectors $v_1=\left(1;0 \right)$ and $v_2=\left(0;3 \right)$:

    The determinant of 2x2 is the area of ​​a parallelogram

    Obviously, this is not just any parallelogram but a rectangle. Its area is $S=1\cdot 3=3$.

    The second parallelogram is built on the vectors $v_1=\left(1;-1 \right)$ and $v_2=\left(2;2 \right)$. So what? This is also a rectangle:

    Another 2x2 determinant

    The sides of this rectangle (essentially the lengths of the vectors) are easily calculated using the Pythagorean theorem:

    \[\begin{align} & \left| v_1 \right|=\sqrt{1^{2}+\left(-1 \right)^{2}}=\sqrt{2}; \\ & \left| v_2 \right|=\sqrt{2^{2}+2^{2}}=\sqrt{8}=2\sqrt{2}; \\ & S=\left| v_1 \right|\cdot \left| v_2 \right|=\sqrt{2}\cdot 2\sqrt{2}=4. \\ \end{align}\]

    It remains to deal with the last determinant - it already contains a 3x3 matrix. You'll have to remember stereometry:


    The determinant of 3x3 is the volume of a parallelepiped

    It looks mind-blowing, but in fact it’s enough to remember the formula for the volume of a parallelepiped, $V=S\cdot h$,

    where $S$ is the area of the base (in our case, the area of the parallelogram in the $OXY$ plane) and $h$ is the height drawn to this base (in fact, the $z$-coordinate of the vector $v_3$).

    The area of ​​a parallelogram (we drew it separately) is also easy to calculate:

    \[\begin(align) & S=2\cdot 3=6; \\ & V=S\cdot h=6\cdot 4=24. \\\end(align)\]

    That's it! We write down the answers.

    Answer: 3; 4; 24.
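A numerical cross-check of the task, sketched with NumPy: the determinants of the three matrices agree with the areas and volume found geometrically (here all three happen to come out positive).

```python
import numpy as np

# The three matrices from the task above.
A1 = np.array([[1.0, 0.0], [0.0, 3.0]])
A2 = np.array([[1.0, -1.0], [2.0, 2.0]])
A3 = np.array([[2.0, 0.0, 0.0], [1.0, 3.0, 0.0], [1.0, 1.0, 4.0]])

print([round(np.linalg.det(M)) for M in (A1, A2, A3)])   # [3, 4, 24]
```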

    A small note about the notation system. Some people probably won’t like the fact that I ignore the “arrows” above the vectors. Supposedly, a vector can be confused with a point or something else.

    But let's be serious: we are already grown-up boys and girls, so from the context we understand perfectly well when we are talking about a vector and when we are talking about a point. The arrows only clog up the narrative, which is already stuffed to the brim with mathematical formulas.

    And one more thing. In principle, nothing prevents us from considering the determinant of a 1x1 matrix - such a matrix is ​​simply one cell, and the number written in this cell will be the determinant. But there is an important note here:

    Unlike the classical volume, the determinant gives us the so-called “oriented volume”, i.e. the volume taking into account the order in which the row vectors are considered.

    And if you want the volume in the classical sense of the word, you will have to take the absolute value of the determinant; but there is no need to worry about that now - in a few seconds we will learn how to calculate any determinant, with any signs, of any size, etc. :)

    Algebraic definition

    For all the beauty and clarity of the geometric approach, it has a serious drawback: it does not tell us anything about how to calculate this very determinant.

    Therefore, now we will analyze an alternative definition - algebraic. To do this, we will need a brief theoretical preparation, but at the end we will get a tool that allows us to calculate whatever and however we want in matrices.

    True, a new problem will appear there... but first things first.

    Permutations and inversions

    Let's write down the numbers from 1 to $n$ on a line. You'll get something like this:

    Now (just for fun) let's swap a couple of numbers. You can change the neighboring ones:

    Or maybe - not particularly neighboring:

    And guess what? Nothing special happens! In algebra this thing is called a permutation. And it has a lot of properties.

    Definition. A permutation of length $n$ is a string of $n$ different numbers written in any order. Usually the first $n$ natural numbers are considered (i.e. the numbers 1, 2, ..., $n$), and then they are mixed to obtain the desired permutation.

    Permutations are denoted in the same way as vectors - simply by a letter and a sequential listing of their elements in brackets. For example: $p=\left(1;3;2 \right)$ or $p=\left(2;5;1;4;3 \right)$. The letter can be anything, but let it be $p$. :)

    Further, for simplicity of presentation, we will work with permutations of length 5 - they are already serious enough to observe any suspicious effects, but are not yet as severe for a fragile brain as permutations of length 6 or more. Here are examples of such permutations:

    \[\begin{align} & p_1=\left(1;2;3;4;5 \right) \\ & p_2=\left(1;3;2;5;4 \right) \\ & p_3=\left(5;4;3;2;1 \right) \\ \end{align}\]

    Naturally, a permutation of length $n$ can be considered as a function that is defined on the set $\left\{ 1;2;\ldots;n \right\}$ and bijectively maps this set onto itself. Returning to the permutations just written, $p_1$, $p_2$ and $p_3$, we can quite legitimately write:

    \[p_1\left(1 \right)=1;\quad p_2\left(3 \right)=2;\quad p_3\left(2 \right)=4.\]

    The number of different permutations of length $n$ is always finite and equal to $n!$ - an easily provable fact from combinatorics. For example, if we wanted to write down all permutations of length 5, we would be busy for quite a while, since there are $5!=120$ such permutations.

    One of the key characteristics of any permutation is the number of inversions in it.

    Definition. An inversion in a permutation $p=\left(a_1;a_2;\ldots;a_n \right)$ is any pair $\left(a_i;a_j \right)$ such that $i \lt j$ but $a_i \gt a_j$. Simply put, an inversion is when a larger number stands to the left of a smaller one (not necessarily its immediate neighbor).

    We will denote by $N\left(p \right)$ the number of inversions in the permutation $p$, but be prepared to encounter other notations in different textbooks and different authors - there are no uniform standards here. The topic of inversions is very extensive, and a separate lesson will be devoted to it. Now our task is simply to learn how to count them in real problems.

    For example, let's count the number of inversions in the permutation $p=\left(1;4;5;3;2 \right)$:

    \[\left(4;3 \right);\left(4;2 \right);\left(5;3 \right);\left(5;2 \right);\left(3;2 \right).\]

    Thus, $N\left(p \right)=5$. As you can see, there is nothing difficult about this. I’ll say right away: from now on we will be interested not so much in the number $N\left(p \right)$ itself as in its parity (even or odd). And here we smoothly move on to the key term of today’s lesson.
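The count above can be checked with a short brute-force function (a sketch; the quadratic scan is perfectly fine for the small permutations used here):

```python
# Count inversions: pairs (i, j) with i < j but p[i] > p[j].
def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

print(inversions((1, 4, 5, 3, 2)))   # 5 -- the five pairs listed above
print(inversions((1, 2, 3, 4, 5)))   # 0 -- the identity has no inversions
```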

    What is a determinant

    Let a square matrix $A=\left[ n\times n \right]$ be given. Then:

    Definition. The determinant of the matrix $A=\left[ n\times n \right]$ is the algebraic sum of $n!$ terms composed as follows. Each term is the product of $n$ matrix elements, taken one from each row and each column, multiplied by (−1) to the power of the number of inversions:

    \[\left| A \right|=\sum\limits_{p}{\left(-1 \right)^{N\left(p \right)}\cdot a_{1;p\left(1 \right)}\cdot a_{2;p\left(2 \right)}\cdot \ldots\cdot a_{n;p\left(n \right)}}\]

    where the sum is taken over all $n!$ permutations $p$ of length $n$.

    The fundamental point when choosing factors for each term in the determinant is the fact that no two factors appear in the same row or in the same column.

    Thanks to this, we can assume without loss of generality that the first indices $i$ of the factors $a_{i;j}$ run through the values $1,\ldots,n$ in order, while the second indices $j$ form some permutation of them:

    And when there is a permutation $p$, we can easily calculate the inversions $N\left(p \right)$ - and the next term of the determinant is ready.

    Naturally, no one forbids swapping factors in any term (or in all of them at once - why bother?), and then the first indices will also represent some kind of rearrangement. But in the end, nothing will change: the total number of inversions in the indices $i$ and $j$ retains parity under such distortions, which is quite consistent with the good old rule:

    Rearranging the factors does not change the product of numbers.

    Just don’t attach this rule to matrix multiplication - unlike number multiplication, it is not commutative. But I digress. :)
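The definition above can be turned directly into code: a brute-force sketch that sums over all $n!$ permutations (hopeless for large $n$, but a faithful transcription of the formula):

```python
from itertools import permutations

def inversions(p):
    # Number of pairs (i, j) with i < j but p[i] > p[j].
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def det_leibniz(a):
    # Sum over all n! permutations p: (-1)^N(p) * a[0][p(0)] * ... * a[n-1][p(n-1)],
    # taking exactly one factor from each row and each column.
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = (-1) ** inversions(p)
        for i, j in enumerate(p):
            term *= a[i][j]
        total += term
    return total

print(det_leibniz([[1, -1], [2, 2]]))                    # 4
print(det_leibniz([[2, 0, 0], [1, 3, 0], [1, 1, 4]]))    # 24
```

The two printed values match the determinants found geometrically earlier in the lesson.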

    Matrix 2x2

    In fact, you can also consider a 1x1 matrix - this will be one cell, and its determinant, as you might guess, is equal to the number written in this cell. Nothing interesting.

    So let's consider a 2x2 square matrix:

    \[\left[ \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{matrix} \right]\]

    Since the number of rows in it is $n=2$, the determinant will contain $n!=2!=1\cdot 2=2$ terms. Let's write them down:

    \[\begin{align} & \left(-1 \right)^{N\left(1;2 \right)}\cdot a_{11}\cdot a_{22}=\left(-1 \right)^{0}\cdot a_{11}\cdot a_{22}=a_{11}a_{22}; \\ & \left(-1 \right)^{N\left(2;1 \right)}\cdot a_{12}\cdot a_{21}=\left(-1 \right)^{1}\cdot a_{12}\cdot a_{21}=-a_{12}a_{21}. \\ \end{align}\]

    Obviously, in the permutation $\left(1;2 \right)$, consisting of two elements, there are no inversions, so $N\left(1;2 \right)=0$. But in the permutation $\left(2;1 \right)$ there is one inversion (in fact, 2< 1), поэтому $N\left(2;1 \right)=1.$

    In total, the universal formula for calculating the determinant for a 2x2 matrix looks like this:

    \[\left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\\end{matrix} \right|=a_{11}a_{22}-a_{12}a_{21}\]

    Graphically, this can be represented as the product of the elements on the main diagonal, minus the product of the elements on the side diagonal:

    Determinant of a 2x2 matrix

    Let's look at a couple of examples:

    \[\left| \begin{matrix} 5 & 6 \\ 8 & 9 \\\end{matrix} \right|;\quad \left| \begin{matrix} 7 & 12 \\ 14 & 1 \\\end{matrix} \right|.\]

    Solution. Everything is counted in one line. First matrix:

    \[\left| \begin{matrix} 5 & 6 \\ 8 & 9 \\\end{matrix} \right|=5\cdot 9-6\cdot 8=45-48=-3\]

    And the second:

    \[\left| \begin{matrix} 7 & 12 \\ 14 & 1 \\\end{matrix} \right|=7\cdot 1-12\cdot 14=7-168=-161\]

    Answer: −3; −161.

    However, it was too simple. Let's look at 3x3 matrices - it's already interesting.

    Matrix 3x3

    Now consider a 3x3 square matrix:

    \[\left[ \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\\end{matrix} \right]\]

    When calculating its determinant, we get $3!=1\cdot 2\cdot 3=6$ terms - not too many to panic, but enough to start looking for some patterns. First, let’s write out all the permutations of three elements and count the inversions in each of them:

    \[\begin{align} & p_{1}=\left( 1;2;3 \right)\Rightarrow N\left( p_{1} \right)=N\left( 1;2;3 \right)=0; \\ & p_{2}=\left( 1;3;2 \right)\Rightarrow N\left( p_{2} \right)=N\left( 1;3;2 \right)=1; \\ & p_{3}=\left( 2;1;3 \right)\Rightarrow N\left( p_{3} \right)=N\left( 2;1;3 \right)=1; \\ & p_{4}=\left( 2;3;1 \right)\Rightarrow N\left( p_{4} \right)=N\left( 2;3;1 \right)=2; \\ & p_{5}=\left( 3;1;2 \right)\Rightarrow N\left( p_{5} \right)=N\left( 3;1;2 \right)=2; \\ & p_{6}=\left( 3;2;1 \right)\Rightarrow N\left( p_{6} \right)=N\left( 3;2;1 \right)=3. \\\end{align}\]
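
    The inversion counts in this list can be checked mechanically. A small Python sketch (my own check, not from the original text):

```python
from itertools import permutations

def inversions(p):
    # pairs of elements standing in the "wrong" order
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

for p in permutations((1, 2, 3)):
    print(p, inversions(p))  # prints the six permutations with counts 0,1,1,2,2,3
```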

    As expected, all 6 permutations $p_{1},\ldots ,p_{6}$ have been written out (naturally, they could be listed in a different order, and nothing would change), and the number of inversions in them ranges from 0 to 3.

    Thus there will be three terms with a plus sign (where $N\left( p \right)$ is even) and three with a minus. Altogether, the determinant is calculated by the formula:

    \[\left| \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\\end{matrix} \right|=\begin{matrix} a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}- \\ -a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32} \\\end{matrix}\]

    Just don’t sit down and furiously cram all these indices now! Instead of incomprehensible numbers, it is better to remember the following mnemonic rule:

    Triangle rule. To find the determinant of a 3x3 matrix, you need to add three products of elements located on the main diagonal and at the vertices of isosceles triangles with a side parallel to this diagonal, and then subtract the same three products, but on the secondary diagonal. Schematically it looks like this:


    Determinant of a 3x3 matrix: triangle rule

    It is these triangles (or pentagrams, whichever you prefer) that people like to draw in all sorts of algebra textbooks and manuals. However, let's not talk about sad things. Let's better calculate one such determinant - to warm up before the real tough stuff. :)

    Task. Calculate the determinant:

    \[\left| \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 1 \\\end{matrix} \right|\]

    Solution. We work according to the rule of triangles. First, let's count three terms made up of elements on the main diagonal and parallel to it:

    \[\begin{align} & 1\cdot 5\cdot 1+2\cdot 6\cdot 7+3\cdot 4\cdot 8= \\ & =5+84+96=185 \\\end{align}\]

    Now let's look at the side diagonal:

    \[\begin{align} & 3\cdot 5\cdot 7+2\cdot 4\cdot 1+1\cdot 6\cdot 8= \\ & =105+8+48=161 \\\end{align}\]

    All that remains is to subtract the second number from the first, and we get the answer:

    \[185-161=24\]

    That's it! Answer: 24.
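
    The triangle rule itself is one line of code. A minimal Python sketch (illustration only, not from the original text):

```python
def det3(m):
    """3x3 determinant by the triangle (Sarrus) rule."""
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
          - m[0][2]*m[1][1]*m[2][0] - m[0][1]*m[1][0]*m[2][2] - m[0][0]*m[1][2]*m[2][1])

print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 1]]))  # 185 - 161 = 24
```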

    However, determinants of 3x3 matrices are not yet the pinnacle of skill. The most interesting things await us further. :)

    General scheme for calculating determinants

    As we know, as the matrix dimension $n$ grows, the number of terms in the determinant, $n!$, grows rapidly; the factorial is, after all, a very fast-growing function.

    Already for 4x4 matrices, computing the determinant head-on (i.e. through permutations) becomes unpleasant, to say nothing of 5x5 and beyond. Therefore some properties of the determinant come into play, but understanding them requires a little theoretical preparation.

    Are you ready? Let's go!

    What is a matrix minor?

    Let an arbitrary matrix $A=\left[ m\times n \right]$ be given. Note: not necessarily square. Unlike determinants, minors are such cute things that exist not only in harsh square matrices. Let us select several (for example, $k$) rows and columns in this matrix, with $1\le k\le m$ and $1\le k\le n$. Then:

    Definition. A minor of order $k$ is the determinant of a square matrix arising at the intersection of selected $k$ columns and rows. We will also call this new matrix itself minor.

    Such a minor is denoted by $M_{k}$. Naturally, one matrix can have a whole bunch of minors of order $k$. Here is an example of a minor of order 2 for the matrix $\left[ 5\times 6 \right]$:

    Selecting $k = 2$ columns and rows to form a minor

    It is not at all necessary that the selected rows and columns be side by side, as in the example discussed. The main thing is that the number of selected rows and columns is the same (this is the number $k$).

    There is another definition. Perhaps someone will like it more:

    Definition. Let a rectangular matrix $A=\left[ m\times n \right]$ be given. If, after deleting one or more columns and one or more rows, a square matrix of size $\left[ k\times k \right]$ is formed, then its determinant is the minor $M_{k}$. We will also sometimes call the matrix itself a minor; this will be clear from the context.

    As my cat said, sometimes it’s better to come back from the 11th floor to eat food than to meow while sitting on the balcony.

    Example. Let the matrix be given

    By choosing row 1 and column 2, we get a first-order minor:

    \[M_{1}=\left| 7 \right|=7\]

    By choosing rows 2, 3 and columns 3, 4, we obtain a second-order minor:

    \[M_{2}=\left| \begin{matrix} 5 & 3 \\ 6 & 1 \\\end{matrix} \right|=5-18=-13\]

    And if you select all three rows, as well as columns 1, 2, 4, there will be a third-order minor:

    \[M_{3}=\left| \begin{matrix} 1 & 7 & 0 \\ 2 & 4 & 3 \\ 3 & 0 & 1 \\\end{matrix} \right|\]

    It will not be difficult for the reader to find other minors of orders 1, 2 or 3. Therefore, we move on.
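
    Extracting a minor is just picking rows and columns. A short Python sketch (the matrix `A` here is hypothetical, chosen only for illustration; it is not the article's example matrix):

```python
def submatrix(a, rows, cols):
    """The matrix standing at the intersection of the chosen rows and columns (0-based)."""
    return [[a[i][j] for j in cols] for i in rows]

# hypothetical 3x4 matrix for illustration
A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]]

# rows 2 and 3, columns 3 and 4 (1-based) -> an order-2 minor matrix
print(submatrix(A, [1, 2], [2, 3]))  # [[7, 8], [11, 12]]
```

    The minor $M_{2}$ itself is then the determinant of this 2x2 piece.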

    Algebraic complements

    “Well ok, what do these minor minions give us?” - you probably ask. By themselves - nothing. But in square matrices, each minor has a “companion” - an additional minor, as well as an algebraic complement. And together these two tricks will allow us to crack the determinants like nuts.

    Definition. Let a square matrix $A=\left[ n\times n \right]$ be given, in which a minor $M_{k}$ is chosen. Then the additional minor for the minor $M_{k}$ is the piece of the original matrix $A$ that remains after deleting all rows and columns involved in forming $M_{k}$:

    Additional minor to minor $M_{2}$

    Let us clarify one point: an additional minor is not just a “piece of the matrix”, but a determinant of this piece.

    Additional minors are indicated by an asterisk, $M_{k}^{*}$:

    \[M_{k}^{*}=\left| A\nabla M_{k} \right|\]

    where the operation $A\nabla M_{k}$ literally means “delete from $A$ the rows and columns included in $M_{k}$”. This operation is not generally accepted in mathematics - I just invented it myself for the beauty of the story. :)

    Additional minors are rarely used by themselves. They are part of a more complex construction - algebraic complement.

    Definition. The algebraic complement of a minor $M_{k}$ is the additional minor $M_{k}^{*}$ multiplied by the value $\left( -1 \right)^{S}$, where $S$ is the sum of the numbers of all rows and columns involved in the original minor $M_{k}$.

    As a rule, the algebraic complement of a minor $M_{k}$ is denoted by $A_{k}$. Therefore:

    \[A_{k}=\left( -1 \right)^{S}\cdot M_{k}^{*}\]

    Difficult? At first glance, yes. But this is not certain. Because in reality everything is easy. Let's look at an example:

    Example. Given a 4x4 matrix:

    \[A=\left[ \begin{matrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \\\end{matrix} \right]\]

    Let's choose a second order minor

    \[M_{2}=\left| \begin{matrix} 3 & 4 \\ 15 & 16 \\\end{matrix} \right|\]

    Captain Obvious seems to hint that this minor involves rows 1 and 4, as well as columns 3 and 4. Cross them out and we get the additional minor:

    \[M_{2}^{*}=\left| \begin{matrix} 5 & 6 \\ 9 & 10 \\\end{matrix} \right|=5\cdot 10-6\cdot 9=-4\]

    It remains to find the number $S$ and obtain the algebraic complement. Since we know the numbers of the rows (1 and 4) and columns (3 and 4) involved, everything is simple:

    \[\begin{align} & S=1+4+3+4=12; \\ & A_{2}=\left( -1 \right)^{S}\cdot M_{2}^{*}=\left( -1 \right)^{12}\cdot \left( -4 \right)=-4 \\\end{align}\]

    Answer: $A_{2}=-4$.

    That's it! In fact, the whole difference between an additional minor and an algebraic complement is only in the minus at the front, and even then not always.
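
    The whole computation above fits in a few lines of Python. A sketch under one assumption: the example's 4x4 matrix is the sequential “1..16” layout, which is consistent with the quoted minor entries 3, 4, 15, 16 and with the additional minor value −4:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# assumed layout of the example matrix (entries 1..16 row by row)
A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]

rows, cols = [0, 3], [2, 3]                      # rows 1, 4 and columns 3, 4 (1-based)
rest_r = [i for i in range(4) if i not in rows]  # remaining rows: 2, 3
rest_c = [j for j in range(4) if j not in cols]  # remaining columns: 1, 2
M_star = det2([[A[i][j] for j in rest_c] for i in rest_r])  # additional minor
S = sum(r + 1 for r in rows) + sum(c + 1 for c in cols)     # 1 + 4 + 3 + 4 = 12
A2 = (-1) ** S * M_star
print(S, M_star, A2)  # 12 -4 -4
```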

    Laplace's theorem

    And so we have come to the point of why, in fact, all these minors and algebraic complements were needed.

    Laplace's theorem on the expansion of the determinant. Let $k$ rows (columns) be selected in a matrix of size $\left[ n\times n \right]$, with $1\le k\le n-1$. Then the determinant of this matrix is equal to the sum of all products of the minors of order $k$ contained in the selected rows (columns) and their algebraic complements:

    \[\left| A \right|=\sum{M_{k}\cdot A_{k}}\]

    Moreover, there will be exactly $C_{n}^{k}$ such terms.

    Okay, okay: about $C_{n}^{k}$ I'm already showing off - there was nothing like that in Laplace's original theorem. But no one has canceled combinatorics, and a quick glance at the statement will let you see for yourself that there will be exactly that many terms. :)

    We will not prove it, although it does not present any particular difficulty - all calculations come down to the good old permutations and even/odd inversions. However, the proof will be presented in a separate paragraph, and today we have a purely practical lesson.
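
    Laplace's theorem is also easy to verify numerically. A Python sketch (my own, with a naive cofactor-expansion determinant as the reference) that sums over all $C_{n}^{k}$ minors sitting in two chosen rows:

```python
from itertools import combinations

def det(a):
    """Reference determinant: naive expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

def det_laplace(a, rows):
    """Expand det(a) over the order-k minors contained in the chosen rows (0-based)."""
    n, k = len(a), len(rows)
    rest_r = [i for i in range(n) if i not in rows]
    total = 0
    for cols in combinations(range(n), k):        # C(n, k) column choices
        rest_c = [j for j in range(n) if j not in cols]
        M = det([[a[i][j] for j in cols] for i in rows])           # the minor
        M_star = det([[a[i][j] for j in rest_c] for i in rest_r])  # additional minor
        S = sum(i + 1 for i in rows) + sum(j + 1 for j in cols)    # 1-based numbers
        total += M * (-1) ** S * M_star
    return total

A = [[1, 2, 3, 4], [4, 1, 2, 3], [3, 4, 1, 2], [2, 3, 4, 1]]
print(det_laplace(A, [0, 1]), det(A))  # both give the same value
```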

    Therefore, we move on to a special case of this theorem, when the minors are individual cells of the matrix.

    Expansion of the determinant along a row and column

    What we are going to talk about now is precisely the main tool for working with determinants, for the sake of which all this fuss with permutations, minors and algebraic complements was started.

    Read and enjoy:

    Corollary of Laplace's theorem (expansion of the determinant along a row/column). Let one row be selected in a matrix of size $\left[ n\times n \right]$. The minors of order 1 in this row are the $n$ individual cells:

    \[M_{1}=a_{ij},\quad j=1,\ldots ,n\]

    Additional minors are also easy to calculate: just take the original matrix and cross out the row and column containing $a_{ij}$. Let us call such minors $M_{ij}^{*}$.

    For the algebraic complement we still need the number $S$, but in the case of a minor of order 1 it is simply the sum of the “coordinates” of the cell $a_{ij}$:

    \[S=i+j\]

    And then the original determinant can be written in terms of $a_{ij}$ and $M_{ij}^{*}$ according to Laplace's theorem:

    \[\left| A \right|=\sum\limits_{j=1}^{n}{a_{ij}\cdot \left( -1 \right)^{i+j}\cdot M_{ij}^{*}}\]

    This is the formula for expanding the determinant along a row. The same works for columns.

    Several conclusions can be immediately drawn from this consequence:

    1. The scheme works equally well for rows and for columns; in practice you expand along whichever of them is more convenient.
    2. The number of terms in the expansion is always exactly $n$. This is significantly less than $C_{n}^{k}$, let alone $n!$.
    3. Instead of one determinant of size $\left[ n\times n \right]$, you have to consider several determinants of size one less: $\left[ \left( n-1 \right)\times \left( n-1 \right) \right]$.

    The last fact is especially important. For example, instead of the brutal 4x4 determinant, now it will be enough to count several 3x3 determinants - we will somehow cope with them. :)
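
    The corollary translates into a short recursive function. A Python sketch (my own illustration; zero entries are skipped, which is exactly why zeros are so welcome):

```python
def det(a):
    """Recursive cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        if a[0][j] == 0:
            continue  # a zero entry kills the whole term
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # cross out row 1, column j+1
        total += (-1) ** j * a[0][j] * det(minor)  # (-1)**j matches (-1)^(1+(j+1))
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```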

    Task. Find the determinant:

    \[\left| \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\\end{matrix} \right|\]

    Solution. Let's expand this determinant along the first row:

    \[\left| A \right|=1\cdot \left( -1 \right)^{1+1}\cdot \left| \begin{matrix} 5 & 6 \\ 8 & 9 \\\end{matrix} \right|+2\cdot \left( -1 \right)^{1+2}\cdot \left| \begin{matrix} 4 & 6 \\ 7 & 9 \\\end{matrix} \right|+3\cdot \left( -1 \right)^{1+3}\cdot \left| \begin{matrix} 4 & 5 \\ 7 & 8 \\\end{matrix} \right|=\]

    \[\begin{align} & =1\cdot \left( 45-48 \right)-2\cdot \left( 36-42 \right)+3\cdot \left( 32-35 \right)= \\ & =1\cdot \left( -3 \right)-2\cdot \left( -6 \right)+3\cdot \left( -3 \right)=0. \\\end{align}\]

    Task. Find the determinant:

    \[\left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\\end{matrix} \right|\]

    Solution. For a change, let's work with columns this time. For example, the last column contains two zeros at once - obviously, this will significantly reduce the calculations. Now you'll see why.

    So, we expand the determinant along the fourth column:

    \[\begin{align} \left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\\end{matrix} \right|=& \ 0\cdot \left( -1 \right)^{1+4}\cdot \left| \begin{matrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\\end{matrix} \right|+1\cdot \left( -1 \right)^{2+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\\end{matrix} \right|+ \\ & +1\cdot \left( -1 \right)^{3+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \\\end{matrix} \right|+0\cdot \left( -1 \right)^{4+4}\cdot \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\\end{matrix} \right| \\\end{align}\]

    And then - oh, miracle! - two terms immediately go down the drain, since they contain a factor of “0”. There are still two 3x3 determinants left, which we can easily deal with:

    \[\begin{align} & \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\\end{matrix} \right|=0+0+1-1-1-0=-1; \\ & \left| \begin{matrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \\\end{matrix} \right|=0+1+1-0-0-1=1. \\\end{align}\]

    Let's go back to the source and find the answer:

    \[\left| \begin{matrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\\end{matrix} \right|=1\cdot \left( -1 \right)+\left( -1 \right)\cdot 1=-2\]

    Well, that's all. And we didn't even have to count all $4!=24$ terms. :)

    Answer: −2

    Basic properties of the determinant

    In the last problem, we saw how the presence of zeros in the rows (columns) of the matrix dramatically simplifies the decomposition of the determinant and, in general, all calculations. A natural question arises: is it possible to make these zeros appear even in the matrix where they were not originally there?

    The answer is clear: yes, you can. And here the properties of the determinant come to our aid:

    1. If two rows (columns) are swapped, the determinant changes sign;
    2. If one row (column) is multiplied by a number $k$, the entire determinant is also multiplied by $k$;
    3. If a multiple of one row is added to (or subtracted from) another row, the determinant does not change;
    4. If two rows of the determinant are identical or proportional, or one of the rows is filled with zeros, the determinant is equal to zero;
    5. All of the above properties are true for columns as well;
    6. Transposing a matrix does not change its determinant;
    7. The determinant of a product of matrices is equal to the product of the determinants.

    The third property is of particular value: we can subtract from one row (column) another until zeros appear in the right places.

    Most often, calculations come down to “zeroing out” an entire column except for one element, and then expanding the determinant along this column, obtaining a matrix of size one smaller.
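
    This “zeroing” strategy, pushed to its limit, is ordinary Gaussian elimination. A Python sketch with exact fractions (my own illustration): adding a multiple of one row to another leaves the determinant unchanged, a row swap flips its sign, and for a triangular matrix the determinant is the product of the diagonal entries:

```python
from fractions import Fraction

def det_by_elimination(a):
    """Reduce to upper triangular form, then multiply the diagonal entries."""
    m = [[Fraction(x) for x in row] for row in a]
    n, sign = len(m), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if m[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)            # a whole zero column: determinant is 0
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            sign = -sign                  # swapping rows changes the sign
        for r in range(c + 1, n):
            factor = m[r][c] / m[c][c]
            m[r] = [x - factor * y for x, y in zip(m[r], m[c])]  # det unchanged
    result = Fraction(sign)
    for i in range(n):
        result *= m[i][i]
    return result

print(det_by_elimination([[1, 2, 3, 4], [4, 1, 2, 3], [3, 4, 1, 2], [2, 3, 4, 1]]))
```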

    Let's see how this works in practice:

    Task. Find the determinant:

    \[\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \\ 3 & 4 & 1 & 2 \\ 2 & 3 & 4 & 1 \\\end{matrix} \right|\]

    Solution. There seem to be no zeros here at all, so you can “drill into” any row or column - the amount of calculation will be about the same. Let's not waste time on trifles and “zero out” the first column: it already has a cell with a one, so just take the first row and subtract it 4 times from the second, 3 times from the third and 2 times from the last.

    As a result, we will get a new matrix, but its determinant will be the same:

    \[\begin{matrix} \left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \\ 3 & 4 & 1 & 2 \\ 2 & 3 & 4 & 1 \\\end{matrix} \right|\begin{matrix} \downarrow \\ -4 \\ -3 \\ -2 \\\end{matrix}= \\ =\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 4-4\cdot 1 & 1-4\cdot 2 & 2-4\cdot 3 & 3-4\cdot 4 \\ 3-3\cdot 1 & 4-3\cdot 2 & 1-3\cdot 3 & 2-3\cdot 4 \\ 2-2\cdot 1 & 3-2\cdot 2 & 4-2\cdot 3 & 1-2\cdot 4 \\\end{matrix} \right|= \\ =\left| \begin{matrix} 1 & 2 & 3 & 4 \\ 0 & -7 & -10 & -13 \\ 0 & -2 & -8 & -10 \\ 0 & -1 & -2 & -7 \\\end{matrix} \right| \\\end{matrix}\]

    Now, with the equanimity of Piglet, we lay out this determinant along the first column:

    \[\begin{matrix} 1\cdot \left( -1 \right)^{1+1}\cdot \left| \begin{matrix} -7 & -10 & -13 \\ -2 & -8 & -10 \\ -1 & -2 & -7 \\\end{matrix} \right|+0\cdot \left( -1 \right)^{2+1}\cdot \left| \ldots \right|+ \\ +0\cdot \left( -1 \right)^{3+1}\cdot \left| \ldots \right|+0\cdot \left( -1 \right)^{4+1}\cdot \left| \ldots \right| \\\end{matrix}\]

    It is clear that only the first term will “survive” - I didn’t even write down the determinants for the rest, since they are still multiplied by zero. The coefficient in front of the determinant is equal to one, i.e. you don't have to write it down.

    But we can take the minus signs out of all three rows of the determinant. Essentially, we take out the factor (−1) three times:

    \[\left| \begin{matrix} -7 & -10 & -13 \\ -2 & -8 & -10 \\ -1 & -2 & -7 \\\end{matrix} \right|=\left( -1 \right)^{3}\cdot \left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \\\end{matrix} \right|=-\left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \\\end{matrix} \right|\]

    We have obtained a small 3x3 determinant, which could already be calculated by the triangle rule. But we'll try to expand it along the first column - fortunately, the last row proudly contains a one:

    \[\begin{align} & \left( -1 \right)\cdot \left| \begin{matrix} 7 & 10 & 13 \\ 2 & 8 & 10 \\ 1 & 2 & 7 \\\end{matrix} \right|\begin{matrix} -7 \\ -2 \\ \uparrow \\\end{matrix}=\left( -1 \right)\cdot \left| \begin{matrix} 0 & -4 & -36 \\ 0 & 4 & -4 \\ 1 & 2 & 7 \\\end{matrix} \right|= \\ & =\left( -1 \right)\cdot 1\cdot \left( -1 \right)^{3+1}\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \\\end{matrix} \right|=\left( -1 \right)\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \\\end{matrix} \right| \\\end{align}\]

    You can, of course, still have fun and expand the 2x2 matrix along a row (column), but you and I are adequate, so we’ll just calculate the answer:

    \[\left( -1 \right)\cdot \left| \begin{matrix} -4 & -36 \\ 4 & -4 \\\end{matrix} \right|=\left( -1 \right)\cdot \left( 16+144 \right)=-160\]

    This is how dreams are broken. Only −160 in the answer. :)

    Answer: −160.

    A couple of notes before we move on to the last task:

    1. The original matrix was symmetric with respect to the secondary diagonal. All minors in the expansion are also symmetric with respect to the same secondary diagonal.
    2. Strictly speaking, we did not have to expand anything at all: we could simply reduce the matrix to upper triangular form, with solid zeros under the main diagonal. Then (in strict accordance with the geometric interpretation, by the way) the determinant is equal to the product of the numbers $a_{ii}$ on the main diagonal.

    Task. Find the determinant:

    \[\left| \begin{matrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 8 & 16 \\ 3 & 9 & 27 & 81 \\ 5 & 25 & 125 & 625 \\\end{matrix} \right|\]

    Solution. Well, here the first line just begs to be “zeroed”. Take the first column and subtract exactly once from all the others:

    \[\begin{align} & \left| \begin{matrix} 1 & 1 & 1 & 1 \\ 2 & 4 & 8 & 16 \\ 3 & 9 & 27 & 81 \\ 5 & 25 & 125 & 625 \\\end{matrix} \right|= \\ & =\left| \begin{matrix} 1 & 1-1 & 1-1 & 1-1 \\ 2 & 4-2 & 8-2 & 16-2 \\ 3 & 9-3 & 27-3 & 81-3 \\ 5 & 25-5 & 125-5 & 625-5 \\\end{matrix} \right|= \\ & =\left| \begin{matrix} 1 & 0 & 0 & 0 \\ 2 & 2 & 6 & 14 \\ 3 & 6 & 24 & 78 \\ 5 & 20 & 120 & 620 \\\end{matrix} \right| \\\end{align}\]

    We expand along the first row, and then take out the common factors from the remaining rows:

    \[1\cdot \left( -1 \right)^{1+1}\cdot \left| \begin{matrix} 2 & 6 & 14 \\ 6 & 24 & 78 \\ 20 & 120 & 620 \\\end{matrix} \right|=2\cdot 6\cdot 20\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \\\end{matrix} \right|=240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \\\end{matrix} \right|\]

    Again we see “beautiful” numbers in the first column, so first we zero it out below the top entry and then expand the determinant along it:

    \[\begin{align} & 240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 1 & 4 & 13 \\ 1 & 6 & 31 \\\end{matrix} \right|\begin{matrix} \downarrow \\ -1 \\ -1 \\\end{matrix}=240\cdot \left| \begin{matrix} 1 & 3 & 7 \\ 0 & 1 & 6 \\ 0 & 3 & 24 \\\end{matrix} \right|= \\ & =240\cdot \left( -1 \right)^{1+1}\cdot \left| \begin{matrix} 1 & 6 \\ 3 & 24 \\\end{matrix} \right|=240\cdot 1\cdot \left( 24-18 \right)=1440 \\\end{align}\]

    Done. The problem is solved.

    Answer: 1440