
Addressing Issues with Regression Assumptions

by David Spade, PhD

    Transcript

    00:01 Welcome to Lecture 8, in which we'll address issues with regression assumptions.

    00:06 Once we do a linear regression, we like to look at the residual plots.

    00:11 And when we look at the residual plots, what we wanna see, if our regression is done well, is random scatter about zero. If there's random scatter about zero in the residuals versus fitted values plot, this is an indication that the regression assumptions are reasonable.

    00:28 Things in the residual plots that we might see that indicate violations of regression assumptions include: a curved pattern in the residual versus fitted values plot, outliers, and parts of the plot where the spread is bigger than it is in other places.
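    As a rough illustration of this check, here is a minimal sketch, not from the lecture, of producing a residuals versus fitted values plot for a simple linear regression; the data values and the random seed are hypothetical.

```python
# Minimal sketch: fit a simple linear regression on hypothetical data and
# plot residuals against fitted values to check for random scatter about zero.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)                    # hypothetical explanatory variable
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)      # hypothetical response

slope, intercept = np.polyfit(x, y, 1)        # least-squares line
fitted = intercept + slope * x
residuals = y - fitted

plt.scatter(fitted, residuals)
plt.axhline(0, linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.title("Look for random scatter about zero")
plt.show()
```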

    00:43 For example, let's look at the residuals versus fitted values plot for the Leaf Mass Data from the regression lecture.

    00:50 We see a possible outlier, we see a curved pattern, and we see places of varying spread.

    00:58 So the residual plot shows some problems, but our original scatterplot seemed to show that the relationship between the width and the leaf mass was pretty strongly linear.

    01:09 So what happened? Often we can't see these kinds of problems in the original scatterplot.

    01:14 It takes examination of the residual plots to see the possible violations of the underlying assumptions.

    01:20 So now the question becomes, "How do we fix these problems?" One problem that we might run into is groups or subsets.

    01:27 Sometimes we have small clusters of residuals in different regions of the residuals versus fitted values plot.

    01:34 This indicates the presence of more than one group in our data set.

    01:37 How can we fix that? Well, the easiest way is to do a separate regression for the data from each group.

    01:44 For example, suppose we are looking at the relationship between sugar content of cereal and calories in the cereal.

    01:51 Since kids' cereals in the supermarket are likely to be placed on the lower shelves, at the eye level of children, we'll probably have several groups.

    02:00 These groups will be according to the shelf the cereal is on.

    02:03 So we handle these things separately.

    02:07 We would wanna do a separate regression of calories against sugar for each of the shelves.

    02:12 And we'll likely get several, very different models, one for each group or shelf.

    02:16 Each of these models would look very different from the one that comes from analyzing all the data together.

    02:22 So one of the important rules of regression is that all of the data must come from the same population.

    02:28 In the cereal example they don't, because the different shelves are geared toward different demographics.

    02:35 If we have separate groups, like we do in the cereal example, we wanna analyze each group separately.
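    Here is a minimal sketch of that idea: one regression of calories on sugar per shelf. The cereal values and column names are made up purely for illustration.

```python
# Minimal sketch: fit a separate regression of calories on sugar for each shelf
# group, rather than pooling all the cereals together.
import numpy as np
import pandas as pd

cereal = pd.DataFrame({
    "sugar":    [3, 5, 4, 6, 2, 12, 14, 11, 13, 10],               # grams of sugar (hypothetical)
    "calories": [90, 100, 95, 105, 85, 140, 150, 135, 145, 130],   # hypothetical calories
    "shelf":    [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
})

for shelf, grp in cereal.groupby("shelf"):
    slope, intercept = np.polyfit(grp["sugar"], grp["calories"], 1)
    print(f"Shelf {shelf}: calories = {intercept:.1f} + {slope:.2f} * sugar")
```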

    02:41 We also have outliers, high leverage points, and influential points, all of which can have pronounced effects on the regression model.

    02:48 Outliers, again, are observations that are far away from the rest of the data.

    02:53 Outliers are going to have very large residual values. We have several types of outliers.

    02:59 One of them is a high leverage point, and these have x-values that are far away from the average x-values.

    03:05 So you might have one observation with an x-value that's way out to the right, or way out to the left compared to the rest of the data.

    03:11 And this can make a linear relationship appear much stronger than what it actually is.

    03:15 So it's best to fit the model with all the points first, and then try it again without the high leverage point to see how the model changes.

    03:22 We might have what we call an Influential Point.

    03:26 And this is a point whose removal from the data set greatly changes the regression equation.

    03:31 So we need to handle this in the same way that we handle the high leverage points.

    03:35 We need to conduct the analysis with all the points in the data set, and then do it again without the possible influential points to see how the model changes.
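    A minimal sketch of this "fit it twice" check, using made-up numbers with one point far out to the right, might look like the following.

```python
# Minimal sketch: fit the regression with all points, then refit without the
# suspected high-leverage/influential point and compare slope, intercept, and r.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 30.0])               # last x-value is far to the right
y = np.array([2.1, 2.9, 4.2, 5.1, 5.8, 7.2, 7.9, 31.0])

def summarize(xv, yv):
    slope, intercept = np.polyfit(xv, yv, 1)
    r = np.corrcoef(xv, yv)[0, 1]
    return slope, intercept, r

print("with all points:      slope=%.3f intercept=%.3f r=%.3f" % summarize(x, y))
print("without the last one: slope=%.3f intercept=%.3f r=%.3f" % summarize(x[:-1], y[:-1]))
```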

    03:44 If the model changes a lot, then the point is an influential point.

    03:48 For example, let's look at this scatterplot for exit polling, where we're looking at each county in a particular state.

    03:56 Notice in the scatter plot that we have one high leverage point way out to the right, and one possible influential point with a y-value that's really high.

    04:06 We have two candidates, and we're doing a linear regression of one candidate's vote count versus the other's.

    04:13 So we're gonna do two different regressions. Let's include all the points first.

    04:19 The regression equation we get out is the estimated candidate 2 vote count is 414.2601 plus 0.99474 times the vote count for candidate 1.

    04:30 The correlation between the vote count for candidate 2 and the vote count for candidate 1 is 0.6392.

    04:38 Now, let's take out the high leverage point at 21,000 for candidate 1.

    04:43 The new regression equation is: the estimated candidate 2 vote count is 224.7384 plus 0.14772 times the vote count for candidate 1. The correlation is 0.5652.

    04:58 Now, let's try it one more time where we remove the influential point where candidate 1 has a vote count of 6,800 votes.

    05:06 The new regression equation without that influential point is the estimated vote count for candidate 2 is 388.6 plus 0.0079 times the vote count for candidate 1.

    05:17 The new correlation is 0.9226. So what happened? The high leverage point increased the correlation, and the influential point greatly decreased the correlation.

    05:30 Both situations have a significant impact on the slope of the regression line, as well as the intercept.

    05:36 Another problem we might run into is Lurking Variables.

    05:40 We need to be careful of those, because no matter how high the correlation is, we cannot infer cause from observational data.

    05:47 If we have a strong linear relationship between two variables, that does not imply that changes in one cause changes in the other, because we can't be sure that a variable isn't hanging out in the background that's actually the cause of the association.

    06:01 So we have these problems in our data and with our regression assumptions, and now we need to know how to fix them.

    06:08 A common way to fix these problems with the regression assumptions, is to transform one or both variables.

    06:14 And this will help fix problems in the scatter plot or in the residuals versus fitted values plot.

    06:20 So what are the goals of transformations? Well, one goal is to make the distribution of a variable more symmetric.

    06:27 And we can assess whether or not this has worked by using a histogram.

    06:30 We might wanna make the spread of multiple groups more alike, even if their centers differ.

    06:36 And we can assess whether or not this has worked by looking at side-by-side boxplots.

    06:40 We may, also, wanna make the form of a scatterplot more nearly linear.

    06:45 And we might wanna make the spread in the scatterplot more even throughout the plot, instead of having thicker parts of the plot and thinner parts of the plot.

    06:55 So we need to know what transformations are appropriate to fix which problems.

    07:00 So we have this concept called the Ladder of Powers.

    07:03 And we have 2, 1, 1/2, 0, and -1.

    07:07 Two, is where we square the response variable values.

    07:12 And where this is useful, is if we have unimodal distributions that have a skew to the left.

    07:17 One, means no change.

    07:20 This is the raw response, we're not doing anything to the response variable, this is our "home base".

    07:25 This is our original data.

    07:26 One-half, represents the square root transformations, so we're taking the square root of all the response values.

    07:33 And this is really good for count data.

    07:35 Zero is the log transformation, so we're taking the natural logarithm of our response values.

    07:42 Where this is useful, is in measurements that cannot be negative, and for values that grow by percentage increases.

    07:49 For instance, salaries, and populations.

    07:51 Negative one, that's the negative reciprocal.

    07:56 This is -1 over the response, and this is good for changing the direction of a relationship or reversing the original ratio of how the response values are measured.
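    The ladder of powers is easy to apply directly; here is a minimal sketch with a hypothetical positive response y.

```python
# Minimal sketch: the ladder-of-powers transformations applied to a positive response.
import numpy as np

y = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # hypothetical positive response values

power_2    = y ** 2       # "2":   square; can help unimodal, left-skewed distributions
power_1    = y            # "1":   no change (the raw data, our home base)
power_half = np.sqrt(y)   # "1/2": square root; often useful for counts
power_0    = np.log(y)    # "0":   natural log; positive values that grow by percentages
power_neg1 = -1.0 / y     # "-1":  negative reciprocal; reverses the direction of a relationship
```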

    08:05 So for example, let's suppose we have the following data set with 15 observations.

    08:10 So there's all the x-y pairs.

    08:13 And now we look at the scatterplot, and what do we see? We see a clear curved pattern in there, and it looks like y might be related to the explanatory variable through the relationship y = x squared.

    08:28 So if we wanna use linear regression, how might we fix this problem? Well, the natural idea would be to take the square root of the responses, and then plot those against the x-variables.

    08:39 So here's the scatterplot once we've made that transformation.

    08:43 And now we can see that it looks much more linear than it did before.

    08:46 So we might wanna try a regression of the square root of y against x.

    08:52 So this transformation appears to have fixed the relationship between x and y so that it's more nearly linear.

    09:01 So this gives us an indication that we might be able to model the square root of y using x as an explanatory variable with linear regression.
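    A small sketch of this example, with simulated data in which y is roughly x squared (the exact values are hypothetical), shows the idea:

```python
# Minimal sketch: when y is roughly x squared, regressing sqrt(y) on x is nearly linear.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(1.0, 16.0)                      # 15 hypothetical observations
y = (x + rng.normal(0, 0.3, 15)) ** 2         # curved relationship, y roughly x^2

slope, intercept = np.polyfit(x, np.sqrt(y), 1)
r = np.corrcoef(x, np.sqrt(y))[0, 1]
print(f"sqrt(y) = {intercept:.2f} + {slope:.2f} * x,  correlation = {r:.3f}")
```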

    09:08 We still have to examine the residual plots after our analysis is complete in order to see if any violations of the regression assumptions are apparent.

    09:17 Sometimes, the ladder of powers transformations don't work well, so we focus on the logarithmic transformations, and there are three types of them.

    09:26 One of them is the one that's mentioned in the ladder of powers, the zero transformation, taking the natural log of the response.

    09:34 We call this the exponential transformation.

    09:37 And this is useful for values that grow by percentage increases.

    09:41 We might also transform the explanatory variable by taking the natural log of the x-values.

    09:47 This is known as the logarithmic model, and this is useful when the scatterplot descends rapidly at the left, but then levels off at the right.

    09:55 And finally, there's the in between transformation, known as the power transformation.

    10:01 And for this transformation, what we do is we take the natural log of both the explanatory variable, and the response variable.

    10:08 And this is good when neither of the transformations of the response, nor the explanatory variable work well by themselves, but we need something in between the two.
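    As a sketch of these three options, with hypothetical positive x and y values, the fits differ only in which variable gets logged:

```python
# Minimal sketch: the three log-based models.
#   exponential model: regress log(y) on x
#   logarithmic model: regress y on log(x)
#   power model:       regress log(y) on log(x)
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 20, 40)                              # hypothetical positive x-values
y = 3.0 * x ** 1.5 * np.exp(rng.normal(0, 0.05, 40))    # hypothetical power-law response

exponential_fit = np.polyfit(x, np.log(y), 1)
logarithmic_fit = np.polyfit(np.log(x), y, 1)
power_fit       = np.polyfit(np.log(x), np.log(y), 1)

print("power model slope (should be near 1.5):", round(power_fit[0], 2))
```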

    10:18 So, to recap the common issues with the regression assumptions that we might run into and need to be aware of: first of all, we need to make sure that the relationship between our response variable, or transformed response variable, and our explanatory variable is straight, that it's linear.

    10:36 We need to look out for different groups in the regression analysis.

    10:40 We do not want to extrapolate.

    10:42 Again, that means don't try to make predictions for values of the response that correspond to values of the explanatory variable that are outside the range of what you observed.

    10:51 We need to be careful to look for unusual points, high leverage points, and influential points.

    10:57 We need to consider two regression models, one with and one without the unusual points, to examine whether their impact on the model is strong.

    11:07 We need to be careful if our data have multiple modes, because this can indicate groups.

    11:12 We need to be aware of lurking variables, because even though the relationship between two variables might be strongly linear, there might be something hanging out in the background that might be causing that association, and it doesn't necessarily mean that changing the value of the explanatory variable is causing the change in the response.

    11:31 We need to be careful not to use regression to imply cause.

    11:35 And don't ever expect your model to be perfect, because none of them are.

    11:39 We don't wanna stray too far away from the ladder of powers when we're trying to transform data.

    11:44 Again, we don't wanna choose a model based only on R-squared, because the correlation, and R-squared in turn, may be affected by outliers or influential points, which can increase the value of R-squared dramatically.
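    To see how dramatic this effect can be, here is a sketch with made-up numbers: thirty essentially unrelated points plus one extreme point.

```python
# Minimal sketch: one extreme point added to unrelated data inflates R-squared.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 30)
y = rng.normal(0, 1, 30)                     # essentially unrelated to x

r_without = np.corrcoef(x, y)[0, 1]
x_with = np.append(x, 20.0)                  # one far-out point
y_with = np.append(y, 20.0)
r_with = np.corrcoef(x_with, y_with)[0, 1]

print("R-squared without the extreme point:", round(r_without ** 2, 3))
print("R-squared with the extreme point:   ", round(r_with ** 2, 3))
```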

    11:59 And finally, one more time, be careful if your data have multiple modes, because this might indicate groups, and so you might wanna do a separate regression analysis for each of these groups.

    12:09 All right, these are the common issues that come up in checking the regression assumptions.

    12:14 We've learned how to deal with transformations, and how to try to make our data more linear, and how to fix problems with the regression assumptions.

    12:21 This is the end of Lecture 8, and we'll see you back here for Lecture 9.


    About the Lecture

    The lecture Addressing Issues with Regression Assumptions by David Spade, PhD is from the course Statistics Part 1. It contains the following chapters:

    • Addressing Issues with Regression Assumptions
    • Outliers, Leverage and Influential Points
    • Lurking Variables and Causation
    • The Logarithmic Transformation

    Included Quiz Questions

    1. We expect to see random scatter about 0.
    2. We expect to see a curved pattern in the plot.
    3. We expect to see outliers in the plot.
    4. We expect to see parts of the plot where the spread is larger in some parts than it is in others.
    5. We expect to see small clusters of residuals.
    1. Performing one linear regression for each subgroup is the best way to handle the presence of multiple groups in our data.
    2. Performing one linear regression with all the data points is the best way to handle the presence of multiple groups in our data.
    3. Linear regression cannot be used to deal with this type of data.
    4. If there are multiple groups, there is no way to analyze the data set.
    5. Perform a logarithmic regression for the larger subgroup and a linear regression for the smaller subgroup.
    1. This point is said to have high leverage.
    2. This point is said to have high value.
    3. This point is said to have low leverage.
    4. This point is said to have low value.
    5. The point is said to have low volatility.
    1. The best way to handle influential points is to perform two separate regressions including and excluding the influential point.
    2. The best way to handle influential points is to perform a single linear regression, but mention that there is an influential point present.
    3. The best way to handle influential points is to find another method besides linear regression to analyze the data.
    4. The best way to handle influential points is to discard them and perform the regression as though they never existed.
    5. The best way to handle influential points is to place extra emphasis on their value.
    1. Squaring the response values is used to make unimodal, left-skewed distributions more symmetric.
    2. The log transformation is used to make unimodal, left-skewed distributions more symmetric.
    3. The negative reciprocal of the response values are used to make unimodal, left-skewed distributions more symmetric.
    4. The square root transformation is used to make unimodal, left-skewed distributions more symmetric.
    5. Adding a constant value to the skewed data points is the best way to make a skewed distribution more symmetric.
    1. Make the distribution of a variable more asymmetric
    2. Make the distribution of a variable more symmetric
    3. Make the spread of several groups more alike
    4. Make the form of a scatter plot more nearly linear
    5. Make the scatter plot spread out evenly rather than thickening at one end or the other
    1. Log transformation
    2. Square the response value
    3. Square root of response value
    4. Negative reciprocal
    5. Exponential transformation
    1. Negative reciprocal
    2. Square the response value
    3. Square root of response value
    4. Log transformation
    5. Exponential transformation
    1. Exponential transformation
    2. Square the response value
    3. Square root of response value
    4. Log transformation
    5. Negative reciprocal
    1. Make sure the relationship is quadratic
    2. Make sure the relationship is straight
    3. Do not extrapolate
    4. Look for unusual points, high leverage points and influential points
    5. Be careful if your data has multiple modes

    Author of lecture Addressing Issues with Regression Assumptions

     David Spade, PhD


