Regression in Excel: equation, examples. Linear regression

    Regression Analysis in Microsoft Excel is the most comprehensive guide to using MS Excel to solve regression analysis problems in business analytics. Konrad Carlberg clearly explains the theoretical issues whose understanding will help you avoid many mistakes, both when conducting regression analysis yourself and when evaluating analyses performed by other people. All material, from simple correlations and t-tests to multiple analysis of covariance, is based on real-world examples and is accompanied by detailed step-by-step procedures.

    The book discusses the quirks and controversies of Excel's regression functions, examines the implications of each option and argument, and explains how to reliably apply regression methods in areas ranging from medical research to financial analysis.

    Konrad Carlberg. Regression analysis in Microsoft Excel. – M.: Dialectics, 2017. – 400 p.


    Chapter 1: Assessing Data Variability

    Statisticians have many measures of variation at their disposal. One of them is the sum of squared deviations of individual values from the mean; in Excel it is returned by the DEVSQ() function. But variance is used more often. Variance is the average of the squared deviations. Unlike the sum of squared deviations, which grows as measurements are added, the variance is insensitive to the number of values in the data set under study.

    Excel offers two functions that return variance: VAR.P() and VAR.S():

    • Use the VAR.P() function if the values to be processed form a population, that is, if the values contained in the range are the only values you are interested in.
    • Use the VAR.S() function if the values to be processed form a sample from a larger population, the assumption being that there are additional values whose variance you can also estimate (see the sketch below).
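    For example (a worksheet sketch; the range A2:A101 is illustrative):

    =VAR.P(A2:A101)
    =VAR.S(A2:A101)

    The first formula divides the sum of squared deviations by the number of values n, treating the range as an entire population; the second divides by n − 1, treating the range as a sample.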

    If a quantity such as a mean or a correlation coefficient is calculated from a population, it is called a parameter. A similar quantity calculated from a sample is called a statistic. If you measure deviations from the mean of a given data set, the resulting sum of squared deviations is smaller than it would be if measured from any other value. A similar statement is true for the variance.

    The larger the sample size, the more accurate the calculated statistic value. But there is no sample size smaller than the population size for which you can be confident that the statistic value matches the parameter value.

    Let's say you have a set of 100 heights whose mean differs from the population mean, however small the difference. Calculating the variance of the sample, you will get some value, say 4. This value is smaller than any value you could obtain by measuring the deviation of each of the 100 heights from any number other than the sample mean, including the true population mean. Therefore the calculated variance will differ from, and be smaller than, the variance you would get if you somehow knew and used the population parameter rather than the sample mean.

    The mean sum of squares determined from a sample thus underestimates the population variance. A variance calculated this way is called a biased estimate. It turns out that to eliminate the bias and obtain an unbiased estimate, it is enough to divide the sum of squared deviations not by n, the sample size, but by n − 1.

    The quantity n − 1 is called the number of degrees of freedom. There are different ways to calculate it, although they all involve either subtracting some number from the sample size or counting the number of categories into which the observations fall.

    The essence of the difference between the VAR.P() and VAR.S() functions is as follows:

    • In VAR.P(), the sum of squares is divided by the number of observations; applied to a sample, it therefore yields a biased estimate of the population variance.
    • In VAR.S(), the sum of squares is divided by the number of observations minus 1, i.e. by the number of degrees of freedom, which gives an unbiased estimate of the variance of the population from which the sample was drawn.

    The standard deviation (SD) is the square root of the variance.

    Squaring the deviations transforms the measurement scale into another metric, the square of the original one: meters into square meters, dollars into square dollars, and so on. The standard deviation, being the square root of the variance, takes us back to the original units of measurement, which is far more convenient.

    It is often necessary to calculate the standard deviation after the data has been subjected to some manipulation. And although in these cases the results are undoubtedly standard deviations, they are usually called standard errors. There are several types of standard errors, including standard error of measurement, standard error of proportion, and standard error of the mean.

    Let's say you collected height data from 25 randomly selected adult men in each of the 50 states and then calculated the average height of adult males in each state. The resulting 50 averages can, in turn, be treated as observations, and you could calculate their standard deviation, which is the standard error of the mean. Fig. 1 compares the distribution of the 1,250 raw individual values (the heights of 25 men in each of the 50 states) with the distribution of the 50 state averages. The standard error of the mean (that is, the standard deviation of means rather than of individual observations) is estimated as:

    SE(X̅) = s / √n

    where SE(X̅) is the standard error of the mean, s is the standard deviation of the original observations, and n is the number of observations in the sample.
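    In worksheet terms, the estimate is computed directly (a sketch; the range A2:A26 holding the original observations is illustrative):

    =STDEV.S(A2:A26)/SQRT(COUNT(A2:A26))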

    Fig. 1. Variation in averages from state to state is significantly less than variation in individual observations

    In statistics there is a convention regarding the use of Greek and Latin letters for statistical quantities: parameters of the population are denoted by Greek letters, and sample statistics by Latin letters. Hence the population standard deviation is written σ, and the sample standard deviation s. The symbols for means agree less well with this convention: the population mean is denoted by the Greek letter μ, but the sample mean is traditionally written X̅.

    A z-score expresses the position of an observation within the distribution in standard deviation units. For example, z = 1.5 means that the observation is 1.5 standard deviations away from the mean. The term z-score is used for individual values, i.e. for measurements on individual sample elements, whereas for a statistic such as a state average the corresponding quantity is the z-value:

    z = (X̅ − μ) / σX̅

    where X̅ is the sample mean, μ is the population mean, and σX̅ is the standard error of the means of a set of samples:

    σX̅ = σ / √n

    where σ is the standard deviation of the population (of the individual measurements) and n is the sample size.

    Let's say you work as an instructor at a golf club. You have been able to measure the distance of your shots over a long period and know that the average is 205 yards with a standard deviation of 36 yards. You are offered a new club with the claim that it will add 10 yards to your hitting distance. You ask each of the next 81 club patrons to take a test shot with the new club and record the distance. The average distance with the new club turned out to be 215 yards. What is the probability that the 10-yard difference (215 − 205) is due solely to sampling error? Or, put another way: what is the likelihood that, in more extensive testing, the new club would show no increase over the existing long-term average of 205 yards?

    We can check this by computing a z-value. The standard error of the mean is:

    σX̅ = 36 / √81 = 4

    Then the z-value is:

    z = (215 − 205) / 4 = 2.5

    We need to find the probability that a sample mean lies 2.5σ away from the population mean. If that probability is small, the difference is due not to chance but to the quality of the new club. Excel has no ready-made function for the probability of a z-value, but you can use the formula =1-NORM.S.DIST(z,TRUE), where NORM.S.DIST() returns the area under the normal curve to the left of the z-value (Fig. 2).
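    Putting the pieces together on a worksheet (a sketch; the numbers come from the example above):

    =36/SQRT(81)                returns 4, the standard error of the mean
    =(215-205)/4                returns 2.5, the z-value
    =1-NORM.S.DIST(2.5,TRUE)    returns ≈0.0062, the probability that the difference is due to sampling error alone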

    Fig. 2. The NORM.S.DIST() function returns the area under the curve to the left of the z-value

    The second argument of the NORM.S.DIST() function can take two values: TRUE, in which case the function returns the area under the curve to the left of the point specified by the first argument; or FALSE, in which case it returns the height of the curve at that point.

    If the population mean (μ) and standard deviation (σ) are not known, the t-value is used instead. The z-value and t-value constructions differ in that the t-value uses the standard deviation s obtained from the sample rather than the known population parameter σ. The normal curve has a single shape, whereas the shape of the t-distribution varies with the number of degrees of freedom (df) of the sample it represents. The number of degrees of freedom of the sample equals n − 1, where n is the sample size (Fig. 3).

    Fig. 3. The shape of the t-distributions that arise when the parameter σ is unknown differs from the shape of the normal distribution

    Excel has two functions for the t-distribution, also called the Student distribution: T.DIST() returns the area under the curve to the left of a given t-value, and T.DIST.RT() returns the area to the right.

    Chapter 2. Correlation

    Correlation is a measure of the dependence between the elements of a set of ordered pairs. It is characterized by the Pearson correlation coefficient, r, which can take values in the range from −1.0 to +1.0:

    r = Sxy / (Sx · Sy)

    where Sx and Sy are the standard deviations of the variables X and Y, and Sxy is their covariance:

    Sxy = Σ(Xi − X̅)(Yi − Y̅) / n

    In this formula the covariance is divided by the standard deviations of the variables X and Y, thereby removing unit-related scaling effects from the covariance. Excel uses the CORREL() function for this. Its name carries no .P or .S qualifier of the kind used in function names such as STDEV.P()/STDEV.S(), VAR.P()/VAR.S() or COVARIANCE.P()/COVARIANCE.S(): there are no separate population and sample versions. Although the sample correlation coefficient is a biased estimate, the reason for the bias differs from that for the variance or standard deviation.
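    The identity is easy to verify on a worksheet (a sketch; the ranges A2:A21 for X and B2:B21 for Y are illustrative):

    =CORREL(A2:A21,B2:B21)
    =COVARIANCE.P(A2:A21,B2:B21)/(STDEV.P(A2:A21)*STDEV.P(B2:B21))

    Both formulas return the same value.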

    Depending on the magnitude of the population correlation coefficient (often denoted by the Greek letter ρ), the correlation coefficient r yields a biased estimate, and the effect of the bias grows as sample sizes decrease. However, we do not try to correct this bias the way we did when calculating the standard deviation, when we substituted the number of degrees of freedom for the number of observations in the formula. In fact, the number of observations used to calculate the covariance has no effect on the magnitude of the bias.

    The standard correlation coefficient is intended for variables related by a linear relationship. Nonlinearity and/or errors in the data (outliers) lead to an incorrectly calculated correlation coefficient. To diagnose data problems, it is recommended to create scatter plots. This is the only chart type in Excel that treats both the horizontal and the vertical axis as value axes. A line chart treats one of the columns as the category axis, which distorts the picture of the data (Fig. 4).

    Fig. 4. The regression lines seem the same, but compare their equations with each other

    The observations used to construct the line chart are arranged equidistant along the horizontal axis. The division labels along this axis are just labels, not numeric values.

    Although correlation often suggests a cause-and-effect relationship, it cannot be used to prove one. Statistics are not used to demonstrate whether a theory is true or false. To exclude competing explanations of observed results, you conduct planned experiments. Statistics are used to summarize the information collected in such experiments and to quantify the likelihood that a decision is incorrect given the available evidence.

    Chapter 3: Simple Regression

    If two variables are related, so that the correlation coefficient exceeds, say, 0.5, then it is possible to predict (with some accuracy) the unknown value of one variable from the known value of the other. To obtain forecast price values from the data shown in Fig. 5, you could use any of several possible methods, but you would almost certainly not use the one shown in Fig. 5. Still, it is worth studying, because no other method demonstrates the connection between correlation and prediction as clearly. In Fig. 5, the range B2:C12 shows a random sample of ten houses, with the area of each house (in square feet) and its selling price.

    Fig. 5. Forecast sales price values form a straight line

    Find the means, standard deviations, and correlation coefficient (range A14:C18). Calculate the area z-scores (E2:E12); for example, cell E3 contains the formula =(B3-$B$14)/$B$15. Compute the z-scores of the forecast price (F2:F12); for example, cell F3 contains the formula =E3*$B$18. Convert the z-scores to dollar prices (H2:H12); in cell H3 the formula is =F3*$C$15+$C$14.

    Note that a predicted z-score always shifts toward the mean of 0. The closer the correlation coefficient is to zero, the closer to zero the predicted z-score. In our example the correlation between area and selling price is 0.67, so for a house whose area lies one standard deviation above the mean, the predicted price z-score is 1.0 × 0.67 = 0.67, i.e. two-thirds of a standard deviation above the mean. If the correlation coefficient were 0.5, the predicted z-score would be 1.0 × 0.5 = 0.5, only half a standard deviation above the mean. Whenever the correlation coefficient differs from its ideal values, i.e. is greater than −1.0 and less than 1.0, the score of the predicted variable lies closer to its mean than the score of the predictor (independent) variable lies to its own. This phenomenon is called regression to the mean, or simply regression.

    Excel has several functions for determining the coefficients of the regression line equation y = kx + b (Excel calls the fitted line a trend line). The slope k is returned by the function

    =SLOPE(known_y_values, known_x_values)

    Here y is the predicted variable and x is the independent variable; you must strictly follow this order of arguments. The slope of the regression line, the correlation coefficient, the standard deviations of the variables, and the covariance are closely related (Fig. 6). The INTERCEPT() function returns the value at which the regression line crosses the vertical axis:

    =INTERCEPT(known_y_values, known_x_values)

    Fig. 6. The relationship among the standard deviations converts the covariance into the correlation coefficient and the slope of the regression line

    Note that the number of x and y values ​​provided as arguments to the SLOPE() and INTERCEPT() functions must be the same.
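    As a sketch using the data layout of Fig. 5 (area in B2:B12, selling price in C2:C12, and a new area value in cell B15, which is illustrative), the two functions combine into a prediction:

    =SLOPE(C2:C12,B2:B12)*B15+INTERCEPT(C2:C12,B2:B12)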

    Another important indicator used in regression analysis is R² (R-squared), the coefficient of determination. It measures the contribution the relationship between x and y makes to the overall variability of the data. Excel provides the RSQ() function for it, which takes exactly the same arguments as the CORREL() function.

    Two variables with a non-zero correlation coefficient between them are said to explain variance or have variance explained. Typically explained variance is expressed as a percentage. So R 2 = 0.81 means that 81% of the variance (scatter) of two variables is explained. The remaining 19% is due to random fluctuations.
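    On a worksheet the two quantities are directly related (illustrative ranges again):

    =RSQ(C2:C12,B2:B12)
    =CORREL(C2:C12,B2:B12)^2

    Both return the same value; with r = 0.67, as in the example above, R² ≈ 0.45.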

    Excel's TREND() function makes the calculations easier. The TREND() function:

    • accepts the known x-values and known y-values you provide;
    • calculates the slope and the constant (intercept) of the regression line;
    • returns the predicted y-values obtained by applying the regression equation to the known x-values (Fig. 7).

    The TREND() function is an array function (if you have not encountered such functions before, it is worth reading up on how array formulas work).

    Fig. 7. The TREND() function speeds up and simplifies the calculations compared with using the SLOPE() and INTERCEPT() pair

    To enter the TREND() function as an array formula in cells G3:G12, select the range G3:G12, type the formula =TREND(C3:C12,B3:B12), press and hold Ctrl+Shift, and only then press Enter. Note that the formula becomes enclosed in curly braces: { and }. This is how Excel signals that the formula is interpreted as an array formula. Don't type the braces yourself: if you enter them as part of the formula, Excel treats your input as an ordinary text string.

    The TREND() function has two more arguments: new_x's and const. The first lets you make a forecast for new values, and the second can force the regression line through the origin (TRUE tells Excel to use the calculated constant, FALSE tells it to use a constant of 0); a sketch follows below. Excel also lets you draw a regression line on a chart so that it passes through the origin. Start by drawing a scatter plot, then right-click one of the data series markers. In the context menu that opens, select Add Trendline; choose the Linear option; if necessary, scroll down the panel and check the Set Intercept box; make sure its associated text box is set to 0.0.
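    A sketch of the extra arguments (cell B15 holding a new area value is illustrative):

    =TREND(C3:C12,B3:B12,B15)           the predicted price for the new area in B15
    =TREND(C3:C12,B3:B12,B15,FALSE)     the same forecast with the constant forced to 0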

    If you have three variables and want to determine the correlation between two of them while eliminating the influence of the third, you can use partial correlation. Suppose you are interested in the relationship between the percentage of a city's residents who have completed college and the number of books in the city's libraries. You collected data for 50 cities, but there is a problem: both of these parameters may depend on the wealth of the residents of a given city, and it is of course very difficult to find another 50 cities characterized by exactly the same level of wealth.

    By using statistical methods to control for the influence of wealth on both library financial support and college affordability, you could get a more precise quantification of the strength of the relationship between the variables you are interested in, namely the number of books and the number of graduates. Such a conditional correlation between two variables, when the values ​​of other variables are fixed, is called partial correlation. One way to calculate it is to use the equation:

    rCB.W = (rCB − rCW · rBW) / √((1 − rCW²)(1 − rBW²))

    where rCB.W is the correlation coefficient between the College and Books variables with the influence of the Wealth variable excluded (held fixed); rCB is the correlation coefficient between the College and Books variables; rCW between College and Wealth; and rBW between Books and Wealth.
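    With the correlation matrix of Fig. 8 at hand, the equation can be entered directly as a worksheet formula. A sketch, assuming rCB, rCW and rBW sit in cells C17, D17 and D18 (the exact addresses depend on the layout of the matrix):

    =(C17-D17*D18)/SQRT((1-D17^2)*(1-D18^2))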

    Alternatively, partial correlation can be calculated from the analysis of residuals, i.e. the differences between the predicted values and the corresponding actual observations (both methods are shown in Fig. 8).

    Fig. 8. Partial correlation as correlation of residuals

    To simplify the calculation of the matrix of correlation coefficients (B16:E19), use the Excel Analysis ToolPak (menu Data –> Analysis –> Data Analysis). By default this package is not active in Excel. To install it, go to File –> Options –> Add-Ins. At the bottom of the Excel Options window, find the Manage field, select Excel Add-ins, and click Go. Check the box next to the Analysis ToolPak add-in. Then click Data Analysis, select the Correlation option, specify $B$2:$D$13 as the input range, check the Labels in first row box, and specify $B$16:$E$19 as the output range.

    Another possibility is to determine the semipartial correlation. For example, suppose you are investigating the effects of height and age on weight. You thus have two predictor variables, height and age, and one predicted variable, weight. You want to exclude the influence of one predictor variable on the other, but not on the predicted variable:

    rW(H.A) = (rWH − rWA · rHA) / √(1 − rHA²)

    where H is Height, W is Weight, and A is Age. The index of the semipartial correlation coefficient uses parentheses to indicate which variable is removed and from which variable. Here the notation W(H.A) indicates that the effect of the Age variable is removed from the Height variable but not from the Weight variable.

    It may seem that the issue being discussed is not of significant importance. After all, what matters most is how accurately the overall regression equation works, while the problem of the relative contributions of individual variables to the total explained variance seems to be of secondary importance. However, this is far from the case. Once you start wondering whether a variable is worth using in a multiple regression equation at all, the issue becomes important. It can influence the assessment of the correctness of the choice of model for analysis.

    Chapter 4. LINEST() Function

    The LINEST() function returns 10 regression statistics. The LINEST() function is an array function: to enter it, select a range of five rows and two columns, type the formula, and press Ctrl+Shift+Enter (Fig. 9):

    =LINEST(B2:B21,A2:A21,TRUE,TRUE)

    Fig. 9. The LINEST() function: a) select the range D2:E6, b) enter the formula as shown in the formula bar, c) press Ctrl+Shift+Enter

    The LINEST() function returns:

    • regression coefficient (or slope, cell D2);
    • intercept (or constant, cell E2);
    • standard errors of regression coefficient and constant (range D3:E3);
    • coefficient of determination R 2 for regression (cell D4);
    • standard error of estimate (cell E4);
    • F-test for full regression (cell D5);
    • number of degrees of freedom for the residual sum of squares (cell E5);
    • regression sum of squares (cell D6);
    • residual sum of squares (cell E6).
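    If you need a single statistic rather than the whole block, you can wrap LINEST() in INDEX(), which picks one element of the returned array by row and column; entered this way, the formula does not even need to be array-entered. A sketch:

    =INDEX(LINEST(B2:B21,A2:A21,TRUE,TRUE),1,1)    the regression coefficient (slope)
    =INDEX(LINEST(B2:B21,A2:A21,TRUE,TRUE),3,1)    R², the coefficient of determination
    =INDEX(LINEST(B2:B21,A2:A21,TRUE,TRUE),4,1)    the F-test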

    Let's look at each of these statistics and how they interact.

    The standard error here is a standard deviation calculated for sampling error: the situation in which the population has one value of a statistic and a sample has another. Dividing the regression coefficient by its standard error gives 2.092/0.818 = 2.559. In other words, the regression coefficient of 2.092 lies about two and a half standard errors from zero.

    If the regression coefficient is zero, then the best estimate of the predicted variable is its mean. Two and a half standard errors is quite large, and you can safely assume that the regression coefficient for the population is nonzero.

    You can determine the probability of obtaining a sample regression coefficient of 2.092, when its actual value in the population is 0.0, using the function

    =T.DIST.RT(2.559, 18)

    where 2.559 is the t-value and 18 is the number of degrees of freedom.

    In general, the number of degrees of freedom = n – k – 1, where n is the number of observations and k is the number of predictor variables.

    This formula returns 0.00987, or roughly 1%. It tells us that if the population regression coefficient is 0.0, the probability of drawing a sample of 20 people whose estimated regression coefficient is 2.092 is a modest 1%.

    The F-test (cell D5 in Fig. 9) performs the same role for the full regression as the t-test does for the coefficient of a simple pairwise regression. The F-test checks whether the coefficient of determination R² of the regression is large enough to reject the hypothesis that it is 0.0 in the population, which would mean there is no variance shared by the predictor and the predicted variable. When there is only one predictor variable, the F-test is exactly the square of the t-test.

    So far we have looked at interval variables. If you have variables whose values are simply names, for example Man and Woman, or Reptile, Amphibian and Fish, represent them with a numeric code. Such variables are called nominal.

    The R² statistic quantifies the proportion of variance explained.

    Standard error of estimate. Fig. 10 presents the predicted values of the Weight variable obtained from its relationship with the Height variable. The range E2:E21 contains the residuals for the Weight variable. More precisely, these residuals are called errors, hence the term standard error of estimate.

    Fig. 10. Both R² and the standard error of estimate express the accuracy of the forecasts obtained with the regression

    The smaller the standard error of the estimate, the more accurate the regression equation and the closer you expect any prediction produced by the equation to match the actual observation. The standard error of estimation provides a way to quantify these expectations. The weight of 95% of people with a certain height will be in the range:

    (height * 2.092 – 3.591) ± 2.092 * 21.118
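    As a worksheet sketch (assuming a height value in cell A2; 2.092, −3.591 and 21.118 are the slope, intercept and standard error of estimate from the example):

    =(2.092*A2-3.591)-2.092*21.118    lower bound of the 95% range
    =(2.092*A2-3.591)+2.092*21.118    upper bound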

    The F-statistic is the ratio of between-group variance to within-group variance. The name was introduced by the statistician George Snedecor in honor of Sir Ronald Fisher, who developed analysis of variance (ANOVA, Analysis of Variance) at the beginning of the 20th century.

    The coefficient of determination R 2 expresses the proportion of the total sum of squares associated with the regression. The value (1 – R 2) expresses the proportion of the total sum of squares associated with residuals - forecasting errors. The F-test can be obtained using the LINEST function (cell F5 in Fig. 11), using sums of squares (range G10:J11), using proportions of variance (range G14:J15). The formulas can be studied in the attached Excel file.

    Fig. 11. Calculation of the F-test

    When nominal variables are used, they are given dummy coding (Fig. 12). It is convenient to use the values 0 and 1 as codes. The probability of F is calculated with the function:

    =F.DIST.RT(K2,I2,I3)

    Here F.DIST.RT() returns the probability, under the central F-distribution (Fig. 13) with the numbers of degrees of freedom given in cells I2 and I3, of obtaining an F-value as large as the one in cell K2.

    Fig. 12. Regression analysis using dummy variables

    Fig. 13. Central F-distribution at λ = 0

    Chapter 5. Multiple Regression

    When moving from simple pairwise regression with one predictor variable to multiple regression, you add one or more predictor variables. Store the values ​​of the predictor variables in adjacent columns, such as columns A and B in the case of two predictors, or A, B, and C in the case of three predictors. Before entering a formula that includes the LINEST() function, select five rows and as many columns as there are predictor variables, plus one more for the constant. In the case of regression with two predictor variables, the following structure can be used:

    =LINEST(A2:A41,B2:C41,,TRUE)

    Similarly in the case of three variables:

    =LINEST(A2:A61,B2:D61,,TRUE)

    Let's say you want to study the possible effects of age and diet on LDL levels - low-density lipoproteins, which are believed to be responsible for the formation of atherosclerotic plaques, which cause atherothrombosis (Fig. 14).

    Fig. 14. Multiple regression

    The R 2 of multiple regression (reflected in cell F13) is greater than the R 2 of any simple regression (E4, H4). Multiple regression uses multiple predictor variables simultaneously. In this case, R2 almost always increases.

    For any simple linear regression equation with one predictor variable, there will always be a perfect correlation between the predicted values ​​and the values ​​of the predictor variable because the equation multiplies the predictor values ​​by one constant and adds another constant to each product. This effect does not persist in multiple regression.

    Fig. 15 displays the results returned by the LINEST() function for a multiple regression. The regression coefficients appear in reverse order of the variables (columns G–H–I correspond to predictor columns C–B–A).

    Fig. 15. The coefficients and their standard errors are displayed on the worksheet in reverse order
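    INDEX() makes the reversal explicit. A sketch for the three-predictor layout above (y in column A, predictors in columns B:D):

    =INDEX(LINEST(A2:A61,B2:D61,,TRUE),1,1)    coefficient for column D, the last predictor
    =INDEX(LINEST(A2:A61,B2:D61,,TRUE),1,3)    coefficient for column B, the first predictor
    =INDEX(LINEST(A2:A61,B2:D61,,TRUE),1,4)    the constant (intercept)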

    The principles and procedures used in single predictor variable regression analysis are easily adapted to account for multiple predictor variables. It turns out that much of this adaptation depends on eliminating the influence of the predictor variables on each other. The latter is associated with partial and semi-partial correlations (Fig. 16).

    Fig. 16. Multiple regression can be expressed through pairwise regression of residuals (see the accompanying Excel file for the formulas)

    Excel has functions that provide information about the t- and F-distributions. Functions whose names include DIST, such as T.DIST() and F.DIST(), take a t-value or F-value as an argument and return the probability of observing the specified value. Functions whose names include INV, such as T.INV() and F.INV(), take a probability as an argument and return the criterion value corresponding to that probability.

    Since we are looking for the critical values of the t-distribution that cut off its tail regions, we pass 5% as an argument to one of the T.INV() functions, which returns the value corresponding to this probability (Fig. 17, 18).
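    For example, with 34 degrees of freedom (the df used in a later example; the T.INV() family appeared in Excel 2010):

    =T.INV(0.05,34)       returns ≈ -1.69, cutting off 5% in the left tail
    =T.INV(0.95,34)       returns ≈ +1.69, cutting off 5% in the right tail
    =T.INV.2T(0.05,34)    returns ≈ 2.03, the critical value for a two-tailed 5% test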

    Fig. 17. Two-tailed t-test

    Fig. 18. One-tailed t-test

    By establishing a decision rule for the single-tailed alpha region, you increase the statistical power of the test. If you go into an experiment and are confident that you have every reason to expect a positive (or negative) regression coefficient, then you should perform a single-tail test. In this case, the likelihood that you make the right decision in rejecting the hypothesis of a zero regression coefficient in the population will be higher.

    Statisticians prefer the term directed test to one-tailed test, and the term undirected test to two-tailed test. These terms are preferred because they emphasize the type of hypothesis rather than the nature of the tails of the distribution.

    An approach to assessing the impact of predictors is based on model comparison. Fig. 19 presents the results of a regression analysis that tests the contribution of the Diet variable to the regression equation.

    Fig. 19. Comparing two models by testing the differences in their results

    The results of the LINEST() function (range H2:K6) relate to what I call the full model, which regresses the LDL variable on the Diet, Age, and HDL variables. The range H9:J13 presents the calculations without the Diet predictor variable; I call this the restricted model. In the full model, 49.2% of the variance of the dependent variable LDL is explained by the predictor variables. In the restricted model, only 30.8% of LDL is explained by the Age and HDL variables. The loss in R² from excluding the Diet variable is 0.183. The calculations in the range G15:L17 show that there is only a 0.0288 probability that the effect of the Diet variable is random; in other words, we can be about 97.1% confident that Diet has an effect on LDL.
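    The comparison in G15:L17 follows the standard model-comparison F-test. As a sketch (n is the sample size, which is not reproduced here; one predictor is dropped, and the full model has three predictors, so the residual degrees of freedom are n − 4):

    F = ((R²full − R²restricted) / 1) / ((1 − R²full) / (n − 4))
      = (0.492 − 0.308) / ((1 − 0.492) / (n − 4))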

    Chapter 6: Assumptions and Cautions for Regression Analysis

    The term "assumption" is not defined strictly enough, and the way it is used suggests that if the assumption is not met, then the results of the entire analysis are at the very least questionable or perhaps invalid. This is not actually the case, although there are certainly cases where violating an assumption fundamentally changes the picture. Basic assumptions: a) the residuals of the Y variable are normally distributed at any point X along the regression line; b) Y values ​​are linearly dependent on X values; c) the dispersion of the residuals is approximately the same at each point X; d) there is no dependence between the residues.

    If assumptions do not play a significant role, statisticians say that the analysis is robust to violation of the assumption. In particular, when you use regression to test differences between group means, the assumption that the Y values (and hence the residuals) are normally distributed does not play a significant role: the tests are robust to violations of normality. It is important to examine the data with charts, for example those produced by the Regression tool included in the Data Analysis add-in.

    If the data does not meet the assumptions of linear regression, there are approaches other than linear regression at your disposal. One of them is logistic regression (Fig. 20). Near the upper and lower limits of the predictor variable, linear regression produces unrealistic predictions.

    Fig. 20. Logistic regression

    Fig. 20 displays the results of two data analysis methods aimed at examining the relationship between annual income and the likelihood of buying a home. Obviously, the likelihood of a purchase rises with income. The charts make it easy to spot the differences between the purchase probabilities that linear regression predicts and the results you might get with a different approach.

    In statistician's parlance, rejecting the null hypothesis when in fact it is true is called a Type I error.

    The Data Analysis add-in offers a convenient tool for generating random numbers, allowing you to specify the desired shape of the distribution (for example, Normal, Binomial, or Poisson), as well as the mean and standard deviation.

    Differences among the functions of the T.DIST() family. Beginning with Excel 2010, three forms of the function are available that return the proportion of the distribution to the left and/or right of a given t-value. T.DIST() returns the share of the area under the distribution curve to the left of the t-value you specify. Let's say you have 36 observations, so the number of degrees of freedom for the analysis is 34, and the t-value is 1.69. In this case the formula

    =T.DIST(-1.69,34,TRUE)

    returns the value 0.05, or 5% (Fig. 21). The third argument of the T.DIST() function can be TRUE or FALSE. If it is TRUE, the function returns the cumulative area under the curve to the left of the specified t-value, expressed as a proportion. If it is FALSE, the function returns the relative height of the curve at the point corresponding to the t-value. The other versions, T.DIST.RT() and T.DIST.2T(), take only the t-value and the number of degrees of freedom as arguments and do not require a third argument.

    Fig. 21. The darker shaded area in the left tail of the distribution corresponds to the proportion of the area under the curve to the left of a large negative t-value

    To determine the area to the right of a t-value, use one of the formulas:

    =1-T.DIST(1.69,34,TRUE)

    =T.DIST.RT(1.69,34)

    The entire area under the curve must total 100%, so subtracting from 1 the share of area to the left of the t-value gives the share of area to its right. You may prefer to obtain the area of interest directly with T.DIST.RT(), where RT stands for the right tail of the distribution (Fig. 22).

    Fig. 22. The 5% alpha region for a directed test

    Using the T.DIST() or T.DIST.RT() functions implies that you have chosen a directed working hypothesis. A directed working hypothesis combined with an alpha of 5% means that you place all 5% in the right tail of the distribution. You reject the null hypothesis only if the probability of the t-value you obtain is 5% or less. Directed hypotheses generally yield more sensitive statistical tests (this greater sensitivity is also called greater statistical power).

    In an undirected test, the alpha value remains at the same 5% level, but the distribution will be different. Because you must allow for two outcomes, the probability of a false positive must be distributed between the two tails of the distribution. It is generally accepted to distribute this probability equally (Fig. 23).

    Using the same t-value and the same number of degrees of freedom as in the previous example, use the formula

    =T.DIST.2T(1.69,34)

    For no particular reason, the T.DIST.2T() function returns the #NUM! error if it is given a negative t-value as its first argument.

    If the samples contain different amounts of data, use the two-sample t-test assuming unequal variances, included in the Data Analysis package.

    Chapter 7: Using Regression to Test Differences Between Group Means

    Variables that previously appeared under the name predicted variables will be called outcome variables in this chapter, and the term factor will be used instead of the term predictor variable.

    The simplest approach to coding a nominal variable is dummy coding (Fig. 24).

    Fig. 24. Regression analysis based on dummy coding

    When using dummy coding of any kind, the following rules should be followed:

    • The number of columns reserved for the new data must equal the number of factor levels minus 1.
    • Each vector represents one factor level.
    • Subjects in one of the levels, which is often the control group, are coded 0 in all vectors.

    The formula in cells F2:H6, =LINEST(A2:A22,C2:D22,,TRUE), returns the regression statistics. For comparison, Fig. 24 also shows the results of a traditional ANOVA returned by the One-Way ANOVA tool of the Data Analysis add-in.

    Effects coding. In another type of coding, called effects coding, the mean of each group is compared with the mean of the group means. This aspect of effects coding results from using −1 instead of 0 as the code for the group that receives the same code in all code vectors (Fig. 25).

    Fig. 25. Effects coding

    When dummy coding is used, the constant value returned by LINEST() is the mean of the group that is assigned zero codes in all vectors (usually the control group). In the case of effects coding, the constant is equal to the overall mean (cell J2).

    The general linear model is a useful way to conceptualize the components of the value of an outcome variable:

    Yij = μ + αj + εij

    The use of Greek letters in this formula instead of Latin letters emphasizes the fact that it refers to the population from which samples are drawn, but it can be rewritten to indicate that it refers to samples drawn from a given population:

    Yij = Y̅ + aj + eij

    The idea is that each observation Yij can be viewed as the sum of three components: the grand mean, μ; the effect of treatment j, αj; and the value εij, which represents the deviation of the individual score Yij from the combined value of the grand mean and the effect of the j-th treatment (Fig. 26). The goal of the regression equation is to minimize the sum of squared residuals.

    Fig. 26. Observations decomposed into the components of the general linear model

    Factorial analysis. If the relationship between the outcome variable and two or more factors is studied simultaneously, we speak of factorial analysis of variance. Adding one or more factors to a one-way ANOVA can increase statistical power. In a one-way analysis of variance, variance in the outcome variable that cannot be attributed to the factor is included in the residual mean square. But it may well be that this variation is related to another factor. Such variation can then be removed from the mean square error; its decrease raises the F-test values and therefore increases the statistical power of the test. The Data Analysis add-in includes a tool that processes two factors simultaneously (Fig. 27).

    Fig. 27. The Anova: Two-Factor With Replication tool of the Analysis ToolPak

    The ANOVA tool used in this figure is useful because it returns the mean and variance of the outcome variable, as well as the count, for each group in the design. The ANOVA table displays two sources of variation not present in the output of the single-factor version of the tool: note the Sample and Columns sources of variation in rows 27 and 28. The Columns source of variation refers to gender; the Sample source refers to any variable whose values occupy different rows. In Fig. 27, the values for the KursLech1 group are in rows 2-6, the KursLech2 group in rows 7-11, and the KursLech3 group in rows 12-16.

    The main point is that both factors, Gender (label Columns in cell E28) and Treatment (label Sample in cell E27), are included in the ANOVA table as sources of variation. The means for men are different from the means for women, and this creates a source of variation. The means for the three treatments also differ, providing another source of variation. There is also a third source, Interaction, which refers to the combined effect of the variables Gender and Treatment.

    Chapter 8. Analysis of Covariance

    Analysis of covariance, or ANCOVA, reduces bias and increases statistical power. Recall that one of the ways to assess the reliability of a regression equation is the F-test:

    F = MS Regression/MS Residual

    where MS (Mean Square) is the mean square, and the Regression and Residual indices indicate the regression and residual components, respectively. MS Residual is calculated using the formula:

    MS Residual = SS Residual / df Residual

    where SS (Sum of Squares) is the sum of squares and df is the number of degrees of freedom. When you add a covariate to a regression equation, some portion of the total sum of squares moves out of SS Residual and into SS Regression. This decreases SS Residual, and hence MS Residual. The smaller the MS Residual, the larger the F-test and the more likely you are to reject the null hypothesis of no difference between the means. In effect, you redistribute the variability of the outcome variable. In ANOVA, where the covariate is not taken into account, that variability becomes error. In ANCOVA, part of the variability previously attributed to the error term is assigned to the covariate and becomes part of SS Regression.

    Consider an example in which the same data set is analyzed first with ANOVA and then with ANCOVA (Figure 28).

    Fig. 28. The ANOVA analysis indicates that the results obtained from the regression equation are unreliable

    The study compares the relative effects of physical exercise, which develops muscle strength, and cognitive exercise (doing crossword puzzles), which stimulates brain activity. Subjects were randomly assigned to two groups so that both groups were exposed to the same conditions at the beginning of the experiment. After three months, subjects' cognitive performance was measured. The results of these measurements are shown in column B.

    The range A2:C21 contains the source data passed to the LINEST() function for an analysis using effects coding. The results of the LINEST() function are given in the range E2:F6, where cell E2 displays the regression coefficient associated with the treatment vector. Cell E8 contains the t-value, 0.93, and cell E9 tests the reliability of this t-value. The value in cell E9 indicates that, if the group means were equal in the population, the probability of encountering the difference between group means observed in this experiment would be 36%. Few would consider such a result statistically significant.

    Fig. 29 shows what happens when a covariate is added to the analysis. Here I added the age of each subject to the dataset. The coefficient of determination R² for the regression equation that uses the covariate is 0.80 (cell F4). The R² value in the range F15:G19, where I replicated the ANOVA results obtained without the covariate, is only 0.05 (cell F17). Therefore the regression equation that includes the covariate predicts the values of the Cognitive Score variable far more accurately than the one using the treatment vector alone. For ANCOVA, the probability of obtaining the F-value shown in cell F5 by chance is less than 0.01%.

    Fig. 29. ANCOVA reveals a completely different picture

    Regression analysis shows the influence of some values (the independent ones) on a dependent variable. For example: how does the number of economically active people depend on the number of enterprises, on wages, and on other parameters? Or: how do foreign investment, energy prices, and so on affect the level of GDP?

    The result of the analysis lets you set priorities and, based on the main factors, forecast, plan the development of priority areas, and make management decisions.

    Regression can be:

    · linear (y = a + bx);

    · parabolic (y = a + bx + cx²);

    · exponential (y = a * exp(bx));

    · power (y = a * x^b);

    · hyperbolic (y = b/x + a);

    · logarithmic (y = b * ln(x) + a);

    · exponential with base b (y = a * b^x).

    Let's look at an example of building a regression model in Excel and interpreting the results. Let's take the linear type of regression.

    Task. At 6 enterprises, the average monthly salary and the number of quitting employees were analyzed. It is necessary to determine the dependence of the number of quitting employees on the average salary.

    The linear regression model has the following form:

    Y = a₀ + a₁x₁ + … + aₖxₖ

    where the a's are the regression coefficients, the x's are the influencing variables, and k is the number of factors.

    In our example, Y is the indicator of quitting employees. The influencing factor is wages (x).

    Excel has built-in functions that can help you calculate the parameters of a linear regression model. But the “Analysis ToolPak” add-in will do this faster.

    We activate a powerful analytical tool:

    1. Click the “Office” button and go to “Excel Options”, then to the “Add-Ins” section.

    2. At the bottom, in the “Manage” field under the drop-down list, make sure “Excel Add-ins” is selected (if it is not, choose it from the list). Then click the “Go” button.

    3. A list of available add-ins opens. Select “Analysis ToolPak” and click OK.

    Once activated, the add-on will be available in the Data tab.

    Now let's do the regression analysis itself.

    1. Open the menu of the “Data Analysis” tool. Select "Regression".



    2. A menu opens for selecting the input values and output options (where to display the result). In the input-data fields, we indicate the range of the described parameter (Y) and the factor influencing it (X). The remaining fields are optional.

    3. After clicking OK, the program will display the calculations on a new sheet (you can select an interval to display on the current sheet or assign output to a new workbook).

    First of all, we pay attention to R-squared and coefficients.

    R-squared is the coefficient of determination. In our example it is 0.755, or 75.5%. This means that the calculated parameters of the model explain 75.5% of the variability in the studied parameter. The higher the coefficient of determination, the better the model: above 0.8 is good, below 0.5 is bad (such an analysis can hardly be considered reasonable). In our example it is “not bad”.

    The coefficient 64.1428 shows what Y will be if all variables in the model under consideration are equal to 0. That is, the value of the analyzed parameter is also influenced by other factors not described in the model.

    The coefficient -0.16285 shows the weight of variable X on Y. That is, the average monthly salary within this model affects the number of quitters with a weight of -0.16285 (this is a small degree of influence). The “-” sign indicates a negative impact: the higher the salary, the fewer people quit. Which is fair.
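    You can cross-check the add-in's coefficients with the worksheet functions. A sketch, assuming the salaries (X) occupy A2:A7 and the numbers of quitting employees (Y) occupy B2:B7 for the six enterprises:

    =SLOPE(B2:B7,A2:A7)         should return ≈ -0.16285
    =INTERCEPT(B2:B7,A2:A7)     should return ≈ 64.1428
    =RSQ(B2:B7,A2:A7)           should return ≈ 0.755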

    One of the indicators that describes the quality of the constructed model in statistics is the coefficient of determination (R^2), which is also called the value of approximation reliability. It can be used to determine the level of forecast accuracy. Let's find out how to calculate this indicator using various Excel tools.

    Depending on the level of the coefficient of determination, it is customary to divide models into three groups:

    • 0.8 – 1 – good quality model;
    • 0.5 – 0.8 – model of acceptable quality;
    • 0 – 0.5 – poor quality model.

    In the latter case, the quality of the model indicates the impossibility of using it for forecasting.

    The choice of how to calculate this value in Excel depends on whether the regression is linear or not. In the first case you can use the RSQ() function; in the second you will have to use a special tool from the analysis package.

    Method 1: calculating the coefficient of determination for a linear function

    First of all, let's find out how to find the coefficient of determination for a linear function. In this case the indicator equals the square of the correlation coefficient. Let's calculate it with the built-in RSQ() function, using a specific table of data as an example.
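    A minimal sketch (the ranges are illustrative, since the table itself is not reproduced here; assume the x-values in A2:A9 and the y-values in B2:B9):

    =RSQ(B2:B9,A2:A9)

    This returns the same value as =CORREL(B2:B9,A2:A9)^2.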


    Method 2: calculating the coefficient of determination in nonlinear functions

    But the above option for calculating the desired value can only be applied to linear functions. What should you do to calculate it in a nonlinear function? Excel also has this option. This can be done using the tool "Regression", which is part of the package "Data Analysis".

    1. Before you can use this tool, you must activate the "Analysis ToolPak", which is disabled by default in Excel. Go to the "File" tab, then to the "Options" item.
    2. In the window that opens, move to the "Add-Ins" section via the left vertical menu. At the bottom of the right area of the window is the "Manage" field. From the list available there, select "Excel Add-ins", then click the "Go..." button located to the right of the field.
    3. The add-ins window opens. Its central part lists the available add-ins. Check the box next to "Analysis ToolPak", then click the "OK" button on the right side of the window.
    4. Tool package "Data Analysis" in the current instance of Excel will be activated. Access to it is located on the ribbon in the tab "Data". Move to the specified tab and click on the button "Data Analysis" in the settings group "Analysis".
    5. The "Data Analysis" window opens with a list of specialized information processing tools. Select the "Regression" item from this list and click "OK".
    6. Then the "Regression" tool window opens. The first block of settings is "Input data". In its two fields you indicate the addresses of the ranges where the values of the argument and of the function are located. Place the cursor in the "Input interval Y" field and select the contents of the "Y" column on the sheet. After the array address is displayed in the "Regression" window, place the cursor in the "Input interval X" field and select the cells of the "X" column in exactly the same way.

      About parameters "Mark" And "Constant-zero" We do not check the boxes. The checkbox can be set next to the parameter "Reliability level" and in the field opposite indicate the desired value of the corresponding indicator (95% by default).

      In a group "Output Options" you need to specify in which area the calculation result will be displayed. There are three options:

      • Area on the current sheet;
      • Another sheet;
      • Another book (new file).

      Let's choose the first option, so that the source data and the result sit on one worksheet. Set the switch next to the "Output Interval" parameter and place the cursor in the field opposite it. Left-click an empty cell on the sheet that is to become the upper left cell of the results table; its address should appear in the field of the "Regression" window.

      Parameter groups "Remains" And "Normal Probability" we ignore them, since they are not important for solving the task at hand. After that, click on the button "OK", which is located in the upper right corner of the window "Regression".

    7. The program performs calculations based on previously entered data and displays the result in the specified range. As you can see, this tool displays a fairly large number of results for various parameters on the sheet. But in the context of the current lesson, we are interested in the indicator "R-squared". In this case, it is equal to 0.947664, which characterizes the selected model as a good quality model.

    Method 3: coefficient of determination for the trend line

    In addition to the above options, the coefficient of determination can be displayed directly for the trend line in a graph built on an Excel sheet. Let's find out how this can be done using a specific example.

    1. We have a graph built from the table of arguments and function values used in the previous example. Let's construct a trend line for it. Left-click anywhere in the plot area of the chart. An additional set of tabs, "Chart Tools", appears on the ribbon. Go to the "Layout" tab and click the "Trendline" button in the "Analysis" tool block. A menu appears with a choice of trend line types. We choose the type that suits the specific task; for our example, let's choose "Exponential".
    2. Excel draws a trend line in the form of an additional black curve directly on the plotting plane.
    3. Now our task is to display the coefficient of determination itself. Right-click on the trend line. In the context menu that appears, select "Format Trendline...".

      There is an alternative way to reach the Format Trendline window. Select the trend line by left-clicking it, go to the "Layout" tab, and click the "Trendline" button in the "Analysis" block. In the list that opens, click the very last item, "More Trendline Options...".

    4. After either of the two actions above, a format window opens in which additional settings can be made. In particular, to complete our task, check the box next to "Display R-squared value on chart" at the very bottom of the window. This enables the display of the coefficient of determination in the plot area. Then don't forget to click the "Close" button at the bottom of the current window.
    5. The approximation reliability value, that is, the coefficient of determination, is displayed in the plot area. In this case it equals 0.9242, which characterizes the approximation as a good quality model.
    6. In exactly the same way you can display the coefficient of determination for any other type of trend line. You can change the trend line type via the ribbon button or the context menu, reaching its parameters window as shown above, and then switching to another type in the "Trendline Options" group. Just remember to check that the "Display R-squared value on chart" box remains ticked. Having done so, click the "Close" button in the lower right corner of the window.
    7. With a linear type, the trend line already has an approximation reliability value of 0.9477, which characterizes this model as even more reliable than the exponential type trend line we considered earlier.
    8. Thus, by switching between different types of trend lines and comparing their approximation reliability values ​​(coefficient of determination), you can find the option whose model most accurately describes the presented graph. The option with the highest coefficient of determination will be the most reliable. Based on it, you can build the most accurate forecast.

      For example, in our case it turned out experimentally that a second-degree polynomial trend line has the highest reliability: its coefficient of determination equals 1. Keep in mind, however, that an R^2 of 1 on a small set of points usually means the curve passes exactly through every observation, which can be a sign of overfitting rather than a guarantee that errors are completely eliminated.

      At the same time, this does not mean that this type of trend line will also be the most reliable for another chart. The optimal choice of trend line type depends on the kind of function from which the graph was built. If the user does not have enough knowledge to estimate the best option "by eye", then the only way to determine the best forecast is to compare the coefficients of determination, as shown in the example above.

    Microsoft Excel is known for being useful in many fields of activity, including econometrics, where it is a standard working tool. Most practical and laboratory exercises in that discipline are performed in Excel, which greatly simplifies the work by providing detailed feedback on each step. One of its analysis tools, "Regression", fits a model to a set of observations using the method of least squares. Let's look at what this tool is and how it benefits users. Brief but clear instructions for building a regression model are given below.

    Main tasks and types of regression

    Regression describes the relationship between variables and makes it possible to forecast their future behavior. The variables can be any periodically measured phenomena, including human behavior. This type of analysis in Excel is used to assess the impact of one or several independent variables on a particular dependent variable. For example, sales in a store are influenced by several factors, including the assortment, the prices, and the location of the store. Using regression in Excel, you can determine the degree of influence of each of these factors from existing sales results, and then apply the resulting model to forecast sales for another month or for another store located nearby.

    Typically, regression is presented as a simple equation that describes the relationship, and its strength, between two groups of variables, where one group is dependent (endogenous) and the other independent (exogenous). Given a group of interrelated indicators, the dependent variable Y is chosen based on the logic of the problem, and the rest act as independent variables X.

    The main tasks of building a regression model are as follows:

    1. Selection of significant independent variables (X1, X2, ..., Xk).
    2. Selection of the form of the regression function.
    3. Construction of estimates for the coefficients.
    4. Construction of confidence intervals for the regression function.
    5. Checking the significance of the calculated estimates and of the constructed regression equation.

    There are several types of regression analysis:

    • paired (one dependent and one independent variable);
    • multiple (several independent variables).

    There are two types of regression equations:

    1. Linear, describing a linear relationship between the variables.
    2. Nonlinear: equations that can include powers, fractions, and trigonometric functions.

    Instructions for building a model

    To build a regression model in Excel, follow the steps below:


    For the subsequent calculation, use the LINEST() function, specifying the Y values, the X values, Const, and Statistics. After that, determine the set of points on the regression line using the TREND() function, whose arguments are the Y values, the X values, the New values, and Const. With these parameters, calculate the unknown coefficient values based on the given conditions of the problem.
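
    As a sketch, assuming the Y values are in B2:B14 and the X values in A2:A14 (illustrative ranges, not taken from the text above), the two functions could be entered like this:

    =LINEST(B2:B14, A2:A14, TRUE, TRUE)
    =TREND(B2:B14, A2:A14, A15:A20)

    The first formula returns the slope and intercept together with regression statistics (in older Excel versions it must be entered as an array formula with Ctrl+Shift+Enter); the second predicts Y for the new X values assumed to be in A15:A20.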

    Regression and correlation analysis are statistical research methods. These are the most common ways to show the dependence of a parameter on one or more independent variables.

    Below, using specific practical examples, we will consider these two analyses, which are very popular among economists. We will also give an example of obtaining results by combining them.

    Regression Analysis in Excel

    Regression analysis shows the influence of some values (independent variables) on a dependent variable. For example, how the number of economically active people depends on the number of enterprises, wages, and other parameters. Or: how foreign investment, energy prices, and so on affect the level of GDP.

    The result of the analysis allows you to identify priorities and, based on the main factors, to predict and plan the development of priority areas and make management decisions.

    Regression can be:

    • linear (y = a + bx);
    • parabolic (y = a + bx + cx²);
    • exponential (y = a * exp(bx));
    • power (y = a * x^b);
    • hyperbolic (y = b/x + a);
    • logarithmic (y = b * ln(x) + a);
    • exponential with an arbitrary base (y = a * b^x).
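
    Most of the nonlinear forms can be fitted in Excel by linearizing them first. As a sketch, assuming the X values are in A2:A14 and the Y values in B2:B14 (illustrative ranges), the logarithmic model y = b * ln(x) + a could be estimated like this:

    =LINEST(B2:B14, LN(A2:A14))

    This works because the model is linear in ln(x); the formula returns the estimates of b and a (in older Excel versions it must be entered as an array formula with Ctrl+Shift+Enter).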

    Let's look at an example of building a regression model in Excel and interpreting the results, taking the linear type of regression.

    Task. At six enterprises, the average monthly salary and the number of employees who quit were analyzed. It is necessary to determine the dependence of the number of employees who quit on the average salary.

    The linear regression model has the following form:

    Y = a₀ + a₁x₁ + … + aₖxₖ,

    where the a's are regression coefficients, the x's are the influencing variables, and k is the number of factors.

    In our example, Y is the number of employees who quit. The influencing factor is the salary (x).

    Excel has built-in functions that can be used to calculate the parameters of a linear regression model, but the "Analysis ToolPak" add-in will do this faster.
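
    For a single factor, the coefficients can also be sketched without the add-in. Assuming the salaries are in A2:A7 and the numbers of employees who quit are in B2:B7 (six enterprises; the exact ranges are an assumption, since the source table is not reproduced here):

    =SLOPE(B2:B7, A2:A7)
    =INTERCEPT(B2:B7, A2:A7)

    The first formula estimates the coefficient a₁ for salary, the second the constant term a₀.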

    We activate this powerful analytical tool. Open the "File" menu, choose "Options", then "Add-ins"; in the "Manage" box select "Excel Add-ins" and click "Go"; check "Analysis ToolPak" and click "OK".

    Once activated, the add-in is available on the "Data" tab as the "Data Analysis" button.

    Now let's do the regression analysis itself.



    First of all, we pay attention to the R-squared value and the coefficients.

    R-squared is the coefficient of determination. In our example it is 0.755, or 75.5%. This means that the calculated parameters of the model explain 75.5% of the variation in the dependent variable. The higher the coefficient of determination, the better the model: above 0.8 is good; below 0.5 is bad (such an analysis can hardly be considered reasonable). In our example it is "not bad".

    The coefficient 64.1428 shows what Y will be if all variables in the model under consideration are equal to 0. This means that the value of the analyzed parameter is also influenced by other factors not described in the model.

    The coefficient -0.16285 shows the weight of variable X on Y. That is, within this model, the average monthly salary affects the number of employees who quit with a weight of -0.16285 (a small degree of influence). The "-" sign indicates a negative impact: the higher the salary, the fewer people quit. Which is fair.
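
    Putting the two coefficients together, the fitted equation is Y = 64.1428 - 0.16285·x. As an illustrative check (the salary units are those of the source table), a salary of x = 200 would predict roughly 64.1428 - 0.16285 · 200 ≈ 31.6 employees quitting.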

    

    Correlation Analysis in Excel

    Correlation analysis helps determine whether there is a relationship between indicators in one or two samples. For example, between the operating time of a machine and the cost of repairs, the price of equipment and the duration of operation, the height and weight of children, etc.

    If there is a relationship, does an increase in one parameter lead to an increase (positive correlation) or to a decrease (negative correlation) in the other? Correlation analysis helps the analyst determine whether the value of one indicator can be used to predict the possible value of another.

    The correlation coefficient is denoted by r and varies from -1 to +1. The classification of correlation strength differs from one field to another. When the coefficient is 0, there is no linear relationship between the samples.

    Let's look at how to find the correlation coefficient using Excel.

    To find paired coefficients, the CORREL function is used.

    Objective: Determine whether there is a relationship between the operating time of a lathe and the cost of its maintenance.

    Place the cursor in any cell and press the fx button.

    1. In the “Statistical” category, select the CORREL function.
    2. Argument “Array 1” is the first range of values, the machine operating time: A2:A14.
    3. Argument “Array 2” is the second range of values, the repair cost: B2:B14. Click OK.
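
    The same result can be obtained by typing the formula directly into a cell, using the ranges named in the steps above:

    =CORREL(A2:A14, B2:B14)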

    To determine the strength of the relationship, you need to look at the absolute value of the coefficient (each field of activity has its own scale).

    For correlation analysis of several parameters (more than 2), it is more convenient to use "Data Analysis" (the "Analysis ToolPak" add-in). Select "Correlation" from the list and specify the input array. That's all.

    The resulting coefficients are displayed in a correlation matrix, like this:

    Correlation and regression analysis

    In practice, these two techniques are often used together.

    Example:


    Now the regression analysis data is visible.