Linear Regression

by David Spade, PhD

    Transcript

    00:01 Welcome to Lecture 7, where we're going to discuss Linear Regression.

    00:05 All right, so when we talk about the linear model,

    00:07 what we mean is we wanna describe relationships between quantitative variables with a line.

    00:12 How do we do it and what does it mean? Well, sometimes our goal is to model the relationship between two quantitative variables with a line.

    00:19 While it's unlikely that any straight line will pass through all of the data points, under the right conditions, a line that goes through our data can give a close approximation to this relationship.

    00:30 The linear model is the equation of a straight line that goes through our data.

    00:36 Our data are imperfect and our line doesn't go through all the data points, so there's going to be some error, which we call residuals.

    00:43 When we have deviations from the data to the line, that's what we mean by a residual.

    00:50 We call our observed response values y and our predicted values, based on the line, y hat.

    00:56 We'll talk more later about how we get these predicted values.

    01:00 The distance between y and y hat is known as the residual for that observation.

    01:05 It tells us how far off the model's prediction is for that point.

    01:10 For example, if we have y = 22, if that's what we observe, and our model predicts y = 26.5, then the residual for that observation is 22 minus 26.5, or -4.5.

    01:25 Residuals can be viewed as the vertical distance from the observed value to the line.
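
    As a minimal sketch of this arithmetic in Python (using the values from the example above):

    ```python
    # A residual is the observed value minus the predicted value:
    # the vertical distance from the data point to the line.
    y_observed = 22.0   # what we actually observed
    y_hat = 26.5        # what the linear model predicts for that observation
    residual = y_observed - y_hat
    print(residual)     # -4.5: the model over-predicts this point by 4.5 units
    ```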

    01:31 Here's a picture of what it looks like. We have in red, the observed values.

    01:37 We have in green, the predicted values based on the line.

    01:42 The lengths of the lines that you see connecting these points represent what we call the residuals.

    01:48 Now, we wanna talk about the regression line.

    01:52 We wanna know about the line that best fits our data.

    01:56 So what is the regression line? Well, first of all, let's think about a line that's a poor fit to the data; it's gonna show large residuals.

    02:04 A line that's a better fit to the data will have smaller residuals.

    02:07 We could try to find the line that minimizes the sum of all the residuals, but the negative residuals cancel out the positive ones, so the sum of the residuals is always 0, so we can't do that. What do we do? Instead, we find the line that minimizes the sum of the squares of all the residuals.

    02:26 This result is known as the line of best fit or the least-squares line.
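
    As an illustration of that idea, here is a short Python sketch (the x and y arrays are made-up example data, not from the lecture) that uses NumPy to find the least-squares line and checks the two facts just mentioned:

    ```python
    import numpy as np

    # Hypothetical example data.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # np.polyfit with degree 1 returns the slope and intercept of the line
    # that minimizes the sum of the squared residuals (the least-squares line).
    b1, b0 = np.polyfit(x, y, 1)

    residuals = y - (b0 + b1 * x)
    print(residuals.sum())         # essentially 0: positive and negative residuals cancel
    print((residuals ** 2).sum())  # the quantity that least squares actually minimizes
    ```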

    02:30 We've talked about using the correlation to measure the strength of linear relationships, and now we're talking about modelling our data using a line.

    02:40 It seems like those two might be related, but how? What does the correlation tell us about the regression line? Well, it tells us a lot.

    02:49 For instance, it can tell us whether the line has a positive or negative slope.

    02:53 If the correlation is negative, then the line has a negative slope.

    02:58 If the correlation is positive, then the regression line has a positive slope.

    03:01 In many cases the correlation can give some insight into how well our linear model fits our data, but we need to be careful in doing this, and we'll talk more about that a little bit later.

    03:13 How do we find it? Well, here's an equation of the regression line.

    03:17 It takes the form: y hat, which is our predicted value, equals b0 plus b1 times the value of our explanatory variable.

    03:26 So b0 is the intercept of the line, and b1 is the slope.

    03:30 Okay, so to find those b0 and b1 values, let's start with the slope, which is b1.

    03:37 If r is our correlation, and sx and sy are the standard deviations of the x's and y's, respectively,

    03:45 then we can find the slope by just taking the correlation times the standard deviation of the y's over the standard deviation of the x's.

    03:52 The slope tells us the number of units by which we expect the value of our response to increase with a one-unit increase in x.

    04:01 Now, we have to calculate the y-intercept, and we use the slope in this calculation.

    04:07 If x bar is the mean of all the x values, and y bar is the mean of all the y values, then what we have is b0 equals the mean of the y's minus the slope times the mean of the x's.
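
    Written as a small Python helper (a sketch; the function name is made up, and the summary statistics would come from your own data):

    ```python
    def regression_coefficients(r, s_x, s_y, x_bar, y_bar):
        """Slope and intercept of the least-squares line from summary statistics.

        b1 = r * s_y / s_x       (slope)
        b0 = y_bar - b1 * x_bar  (y-intercept)
        """
        b1 = r * s_y / s_x
        b0 = y_bar - b1 * x_bar
        return b0, b1
    ```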

    04:20 The y-intercept, in some cases, tells us what value of y we should expect when x equals 0.

    04:27 But again, we have to be careful here. Why? Because it doesn't necessarily make sense for x to be 0.

    04:33 And if it doesn't make sense for x to be 0, then the intercept has no meaningful interpretation.

    04:39 It's just where the regression line crosses the y-axis.

    04:43 Let's look at an example. Suppose we don't have a scale available to weigh a sugar maple leaf, but we have a ruler available to measure its width.

    04:52 We aim to estimate leaf mass in grams by using the width in centimeters as a predictor.

    04:58 We have the following data available from past repetitions of this experiment.

    05:03 Here are the widths of the leaves. Here are the masses of the leaves.

    05:08 Before we go directly for a linear regression, we need to see if it's an appropriate thing to do.

    05:16 Let's look at the scatterplot of the width versus the leaf mass.

    05:22 What do we see? Well, we see a clear linear pattern in a positive direction, and it seems that a line would fit pretty well.

    05:32 Let's obtain the equation of the regression line.

    05:37 We have the correlation at 0.9113, the standard deviation of the widths at 1.3771, and the standard deviation of mass at 0.1493.

    05:50 The mean width is 9.295, and the mean mass is 0.4968.

    05:56 We can use all these and find b0 and b1 and put together our regression line.

    06:00 So let's do that.

    06:02 The slope of the regression line is the correlation times the standard deviation of the mass, over the standard deviation of the width, and that gives us 0.0988.

    06:14 So what does that mean? That means that for a 1 cm increase in the width of the leaf, we expect an increase of 0.0988 grams in the mass.

    06:27 The y-intercept is given by the mean mass minus the slope times the mean width, and that gives us -0.4215.

    06:36 This is the equation of our regression line. Predicted mass is -0.4215 plus 0.0988 times the width.
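
    Plugging the summary statistics from the slides into those same formulas reproduces the coefficients (a quick check in plain Python):

    ```python
    r     = 0.9113   # correlation between width and mass
    s_x   = 1.3771   # standard deviation of the widths (cm)
    s_y   = 0.1493   # standard deviation of the masses (g)
    x_bar = 9.295    # mean width (cm)
    y_bar = 0.4968   # mean mass (g)

    b1 = r * s_y / s_x       # slope: about 0.0988 g per cm
    b0 = y_bar - b1 * x_bar  # intercept: about -0.4215 g
    print(round(b1, 4), round(b0, 4))
    ```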

    06:47 Let's give context to what we just did.

    06:52 Here is an example where the intercept term, b0, doesn't have a meaningful interpretation because, for one, a leaf can't have a width of 0.

    07:03 And even if they could, it doesn't make sense for a leaf to have a negative mass.

    07:08 The slope tells us that for a 1 cm increase in the width of the leaf, we expect a 0.0988 gram increase in the mass.

    07:17 The linear model enables us to make predictions about the mass of the leaf, given the value of the width, as long as the width is between 6.9 cm and 12.1 cm.

    07:29 Let's be very careful here.

    07:32 We cannot use the line to make predictions for leaves with widths outside this range.

    07:37 And we're gonna talk about why. Making predictions outside of the range of the explanatory variable values that we observed is what we call extrapolation.

    07:47 It's the use of the regression equation to make predictions about the value of the response variable corresponding to a value of the explanatory variable that's outside the range of what we observed.

    07:57 And let's think about why this is bad. There are a bunch of reasons it's not a good idea.

    08:03 First of all, we only observe a linear pattern between the explanatory variable and the response in the range of the values of the explanatory variable that we observed.

    08:15 We have no idea what happens.

    08:18 Once we get outside of that range, we might have little squigglies on each side of the line; we have no idea.

    08:23 We might have a curved pattern or it might start decreasing on one side and increasing on the other.

    08:29 We have no idea what happens.

    08:32 We can't be sure that our predictions are useful if we haven't observed data in those regions.

    08:38 All right, so for those reasons, it's best not to use the regression equation to make predictions outside of the range of the data that you observed.

    08:45 For example, let's use the regression equation that we found to make a prediction about the mass of a leaf that's 1 cm wide.

    08:54 If we do that, we get -0.4215 plus 0.0988 times 1, which gives us -0.3227, a negative mass for a leaf that has some positive width. That makes no sense here.

    09:10 Our regression line gives us a nonsense value.

    09:15 This is an example of why you shouldn't use the regression line to make predictions outside of the range of the data that you observed.

    09:21 Now, let's look at making predictions using the regression line.

    09:24 What we just did was make a prediction of sorts for a value outside the range of our data.

    09:31 But what about for values inside the range of our data? We learned that making predictions for values outside of the range of our data is not a good thing to do, but it's perfectly fine for values inside that range.

    09:42 For example, let's predict the mass of a leaf that has a width of 8.3 cm.

    09:47 Well, the estimated mass is -0.4215 plus 0.0988 times 8, which gives us an estimated mass of 0.3689 grams.

    09:58 This seems like a sensible value.

    10:00 Given the pattern, 0.3689 grams seems to be pretty reasonable for a leaf that has a width of 8.3 cm.

    10:10 Now, let's look at the scatterplot with the regression line overlaid.

    10:14 We have all our data points, and then we have the line that goes through it.

    10:18 It appears that the line fits the data fairly well. There's some variation around the line, which is to be expected, but it's not a bad fit to the data that we observed.

    10:28 But how can we assess how well the model performs? Well, the best way to do it is to look at how much variation there is around the line.

    10:36 And to quantify that, we look at the residuals.

    10:39 Given the distance of most of the points from the line, and what we observed in the scatterplot, our model looks to be doing pretty well.

    10:47 But when we examine the residuals, that's where we get a formal measure of how well our model is actually performing.

    10:53 What we're gonna do now is just an example.

    10:56 We're gonna find the residual for a leaf with width 9.6 cm.

    11:01 We observed that the mass was 0.587, so this is one of the leaves in the original data set that we had.

    11:08 The predicted mass is -0.4215 plus 0.0988 times 9.6, which gives us an estimated mass of 0.52698 grams.

    11:22 Our residual, then, is the observed mass minus the estimated mass, which is 0.0602.
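
    The same calculation in Python (a small sketch using the rounded coefficients from the lecture):

    ```python
    b0, b1 = -0.4215, 0.0988   # intercept and slope of the fitted line

    width_cm      = 9.6
    observed_mass = 0.587

    predicted_mass = b0 + b1 * width_cm         # about 0.52698 g
    residual = observed_mass - predicted_mass   # about 0.06 g
    print(predicted_mass, residual)
    ```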

    11:30 What we wanna do is look at the distribution of the residuals.

    11:34 How are they distributed? In order to see how well our model performs,

    11:39 it's best to revisit the residuals and look at their distribution.

    11:42 In order to do this, we're gonna look at a histogram.

    11:45 What we hope to see is something that looks pretty close to a normal distribution.

    11:49 In other words, unimodal and symmetric.

    11:51 We can also look at the amount by which the residuals vary by examining their standard deviation.

    11:57 We calculate the standard deviation of the residuals by summing up the squared differences between the observed values and the predicted values, dividing by n minus 2, where n is the number of observations, and taking the square root of that. That gives us the standard deviation of the residuals.
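
    Written out as a small helper (a sketch, assuming observed and predicted are sequences of the same length):

    ```python
    import math

    def residual_sd(observed, predicted):
        """Standard deviation of the residuals: sqrt( sum((y - y_hat)^2) / (n - 2) )."""
        n = len(observed)
        ss_resid = sum((y - y_hat) ** 2 for y, y_hat in zip(observed, predicted))
        return math.sqrt(ss_resid / (n - 2))
    ```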

    12:16 If we were to do that calculation for the data that we observed, and the predictions that we made, what we would find is that the standard deviation of the residuals is 0.0631.

    12:26 The histogram that's gonna pop up here to the right does seem to deviate from the normal distribution.

    12:32 As we can see, it looks to be skewed to the left a little bit, with an outlier down there on the left-hand side.

    12:41 That's an indication that our residuals may not be normal.

    12:45 Let's look at a normal probability plot.

    12:49 Well, we see that we have some pretty clear deviations from the linear pattern that we want in the normal probability plot.

    12:56 Given what we've seen in the histogram, and the normal probability plot, it doesn't look promising that our residuals would be normally distributed.

    13:04 Let's look at the residuals just a little bit more closely.

    13:09 Again, neither the histogram nor the normal probability plot gives any confidence that our residuals are normal, which is just what we want them to be.

    13:17 Despite this, the model does seem to perform pretty well in predicting mass based on width.

    13:23 The deviation from the normal distribution might be a result of the fact that we only have 20 observations.

    13:29 Perhaps if we had more, our data would look more normal.

    13:33 In terms of assessing how our model performs, it's important to know how much of the variation in our observations the model actually accounts for.

    13:41 We have this R-squared quantity.

    13:44 We use this because it's important to know what percentage of the variation is accounted for.

    13:51 This R-squared quantity gives us exactly this measurement.

    13:54 R-squared, formally, is the percentage of the variation in the response variable that is explained by the linear regression on the explanatory variable.

    14:02 It's easily calculated by squaring the correlation.

    14:06 Okay, so for example, in our problem, R-squared is equal to the correlation squared, which is 0.9113 squared, or 0.8305.

    14:17 What this means is that 83.05% of the variation in leaf mass is explained by the linear relationship with width.
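
    In Python this is a one-liner, and with the leaf correlation it reproduces the value above:

    ```python
    r = 0.9113
    r_squared = r ** 2
    print(round(r_squared, 4))   # 0.8305, i.e. about 83.05% of the variation in mass is explained
    ```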

    14:25 Now, the question is, what kind of R-squared values indicate a good fit? There's no good rule of thumb for that; it kinda depends on the data you have and what field you're working in.

    14:37 We know that R-squared is always between 0% and 100%, and 100% implies a perfect fit of the line to the data.

    14:44 100% is too good to be true for real data.

    14:47 But typically, in scientific experiments, they aim for an R-squared between 80% and 90%.

    14:54 Observational studies usually have lower R-squared values.

    14:58 Typically, an R-squared between 30% and 50% is often used as evidence of a useful linear regression in observational studies.

    15:08 All right, so when is linear regression appropriate? Well, there are four conditions that need to be satisfied.

    15:14 The first is the quantitative variable condition, which should look familiar from when we talked about correlation.

    15:19 All the quantitative variable condition says is that both the explanatory and the response variable must be quantitative.

    15:27 If one of them is categorical, then stop. Don't use regression.

    15:31 We need to satisfy the straight enough condition, which means that we need to use the scatterplot to see if the relationship between the explanatory variable and the response variable is roughly linear, or straight enough to model it with a line.

    15:45 We need to check the outlier condition, which simply means look at the scatterplot and see if there are any apparent outliers.

    15:52 We need to know if the plot thickens.

    15:55 In other words, is there a lot more spread in some parts of the scatterplot than in others? Is there a lot more spread around the line in some places? One example might be that, going up the regression line, we have a little bit of spread here, then a whole bunch in the middle, and then it thins out again. That's an example of the plot thickening somewhere, and this is bad news for us.

    16:18 If this happens, you don't wanna use linear regression.

    16:20 We need to check all these conditions using the scatterplot before we carry out linear regression.

    16:26 Here's the example from before.

    16:30 From the scatterplot to the right, we're gonna assess these four conditions.

    16:33 Using the leaf data, we know that both of those variables, width and mass, are quantitative, so that condition is okay.

    16:41 The scatterplot appears to be roughly linear, so it seems straight enough to model with a line.

    16:47 There did not appear to be any clear outliers in our scatterplot.

    16:52 There's no real dramatic thickening of the plot at any point.

    16:57 In this case, regression seems to be appropriate.

    17:02 What do we do after regression? Well, that's where we examine the residuals to make sure the assumptions of the regression model are satisfied.

    17:09 We wanna make a plot of residuals against the fitted values.

    17:12 We're gonna check to see if there are any of the following: bends in the plot that might indicate a violation of the straight enough condition, outliers that were not apparent before, or changes in the vertical spread from one part of the plot to another.

    17:28 A good-looking residuals versus fitted values plot would just show random scatter about 0.
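
    A minimal sketch of such a plot with matplotlib (assuming fitted values and residuals have already been computed from a regression, for example fitted = b0 + b1 * x and residuals = y - fitted):

    ```python
    import matplotlib.pyplot as plt

    def residual_plot(fitted, residuals):
        # Random scatter about the zero line is what we hope to see.
        plt.scatter(fitted, residuals)
        plt.axhline(0, linestyle="--")
        plt.xlabel("Fitted values")
        plt.ylabel("Residuals")
        plt.title("Residuals vs. fitted values")
        plt.show()
    ```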

    17:36 Here's the residuals versus fitted values plot from our leaf mass data.

    17:40 Now, we see some things that we didn't quite pick up in the original scatterplot of the data.

    17:46 We see a pretty big bend there in the middle, and that might be an indication that the straight enough condition isn't satisfied like we thought it was from looking at the scatterplot itself.

    17:57 We see the large bend. There does also appear to be a thickening of the plot around where the fitted values are between 0.5 and 0.6.

    18:04 There may be some violations of the straight enough condition and the "does the plot thicken" condition that we need to worry about.

    18:12 In regression, just like with any other statistical practice, there are common mistakes that happen.

    18:18 Let's look at things that we need to be cautious of, and things that can go wrong.

    18:21 First of all, do not use linear regression to model a non-linear relationship.

    18:27 We wanna be aware of outliers, because remember, the slope of the regression line depends on the correlation, and correlation is highly influenced by outliers.

    18:36 These can cause a lot of problems and give us bad predictions.

    18:40 Do not say, just like with correlation, that changes in x cause changes in y just because there's a strong linear relationship between them.

    18:49 Don't choose a model based on R-squared alone.

    18:54 Remember that in our leaf mass data, R-squared was fairly high.

    18:58 But the residuals versus fitted values plot seemed to indicate that there are problems with the regression assumptions.

    19:04 We don't wanna choose a model based on R-squared alone.

    19:06 The reason is that sometimes we have a weak linear relationship, but one outlier makes the correlation appear pretty high, and that makes R-squared high.

    19:16 In other words, a high R-squared value might indicate a good fit even when our data aren't linear, with one outlier out there kinda pulling the strings.

    19:29 Make sure you know which variable is your x and which is your y, that is, which is your explanatory variable and which is your response.

    19:37 Don't try to use your regression line to make predictions about x based on y.

    19:42 In other words, don't try to predict your explanatory variable based on your response.

    19:47 These are the common issues that we run into with regression.

    19:51 You wanna avoid these at all times, and if you can do that, you'll probably be able to find a pretty good fit to your data.

    19:59 That's the end of Lecture 7 on Linear Regression, and we'll see you next time for Lecture 8.


    About the Lecture

    The lecture Linear Regression by David Spade, PhD is from the course Statistics Part 1. It contains the following chapters:

    • Linear Regression
    • The Y-Intercept
    • Making Predictions
    • The R-squared Quantity
    • After Regression


    Author of lecture Linear Regression

    David Spade, PhD

