Goals

We will briefly review linear modeling, focusing on building and assessing linear models in R. We have four main goals in this lab:

Data

Execute the following code chunk to load the heart disease dataset we worked with before:

library(tidyverse)
heart_disease <- read_csv("http://www.stat.cmu.edu/cmsac/sure/2022/materials/data/health/intro_r/heart_disease.csv")

This dataset consists of 788 heart disease patients (608 women, 180 men). Your goal is to predict the Cost column, which corresponds to the patient’s total cost of claims by subscriber (i.e., Cost is the response variable). You have access to several explanatory variables, including the continuous variables Interventions, ERVist, Comorbidities, and Duration used in the exercises below.
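
Since you will be choosing among these columns throughout the lab, it can help to list them directly from the loaded data. For example (using only the already-loaded tidyverse):

# Print the column names available in the dataset
names(heart_disease)
# Or view the column types along with a preview of the values
glimpse(heart_disease)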

Exercises

1. EDA

Spend time exploring the dataset to visually assess which of the explanatory variables listed above is most associated with our response, Cost. Create scatterplots between the response and each continuous explanatory variable (Interventions, ERVist, Comorbidities, and Duration). Do any of the relationships appear to be linear? Describe the direction and strength of the association between each explanatory variable and the response.
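
As a starting point, here is a minimal sketch of one such scatterplot, using Duration as the example explanatory variable (repeat with the other continuous variables):

# Scatterplot of the response against one continuous explanatory variable
heart_disease %>%
  ggplot(aes(x = Duration, y = Cost)) +
  geom_point(alpha = 0.5) +
  theme_bw()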

In your opinion, which of the possible continuous explanatory variables displays the strongest relationship with cost?

2. Fit a simple linear model

Now that you’ve performed some EDA, it’s time to actually fit some linear models to the data. Start with the variable you think displays the strongest relationship with the response variable. Update the following code by replacing INSERT_VARIABLE with your selected variable, then run it to fit the model:

init_cost_lm <- lm(Cost ~ INSERT_VARIABLE, data = heart_disease)

Before checking out the summary() of this model, you need to check the diagnostics to see if it meets the necessary assumptions. To do this, try running plot(init_cost_lm) in the console (what happens?). Another way to make the same plots, but with ggplot2 perks, is with the ggfortify package, by running the following code:

# First install the package by running the following line (uncomment it!) in the console
# install.packages("ggfortify")
library(ggfortify)
autoplot(init_cost_lm) +
  theme_bw()

The first plot is residuals vs. fitted: this plot should NOT display any clear patterns or obvious outliers, and the residuals should be roughly symmetric around the horizontal line at zero. The smooth line is provided just for reference, to show how the average residual changes across the fitted values. Do you see any obvious patterns in your plot for this model?

The second plot is a Q-Q plot (p. 93). Without getting too much into the math behind them, the closer the observations are to the dashed reference line, the better your model fit is. It is bad for the observations to diverge from the dashed line in a systematic way - that means we are violating the assumption of normality discussed in lecture. How do your points look relative to the dashed reference line?

The third plot looks at the square root of the absolute value of the standardized residuals. We want to check for homoscedasticity of the errors (equal, constant variance). If we did have constant variance, what would we expect to see? What does your plot look like?

The fourth plot is residuals vs. leverage, which helps us identify influential points. Leverage quantifies the influence the observed response for a particular observation has on its predicted value: if the leverage is small, then the observed response plays a small role in the value of its predicted response, while a large leverage indicates that the observed response plays a large role in its predicted response. It’s a value between 0 and 1, and the sum of all leverage values equals the number of coefficients (including the intercept). Specifically, the leverage for observation \(i\) is computed as:

\[h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}\] where \(\bar{x}\) is the average value of the variable \(x\) across all observations. See page 191 for more details on leverage and the regression hat matrix. We’re looking for points in the upper right or lower right corners, where dashed lines for Cook’s distance values would indicate potential outlier points that are exerting too much influence on the model results. Do you observe any such influential points in the upper or lower right corners?
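
If you want to inspect the leverage values directly rather than read them off the plot, they can be extracted from the fitted model with base R’s hatvalues(); a quick sketch, assuming the init_cost_lm model fit above:

# Leverage (hat) values for each observation
leverages <- hatvalues(init_cost_lm)
summary(leverages)
# The leverages sum to the number of coefficients (intercept + one slope = 2)
sum(leverages)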

What is your final assessment of the diagnostics? Do you believe all assumptions are met? Are there any potential outlier observations to remove?

3. Transform the Cost variable

An obvious result from looking at the residual diagnostics above is that we are clearly violating the assumption of Normality. Why do you think we’re violating this assumption? (HINT: Display a histogram of the Cost variable.)
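
Following the hint, one quick way to look at the distribution of the response:

# Histogram of the raw Cost values
heart_disease %>%
  ggplot(aes(x = Cost)) +
  geom_histogram() +
  theme_bw()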

One way of addressing this concern is to apply a transformation to the response variable, in this case Cost. A common transformation for any type of dollar amount is to use the log() transformation. Run the following code chunk to create a new log_cost variable that we will use for the remainder of the lab.

heart_disease <- heart_disease %>%
  mutate(log_cost = log(Cost + 1))

Why did we need to add 1 before taking the log()? (HINT: Look at the minimum of Cost.) Now make another histogram, this time for the new log_cost variable - what happened to the distribution?
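
One possible way to compare the two distributions side by side is to pivot the data to long format and facet, using only tidyverse functions that are already loaded:

# Compare the raw and log-transformed response distributions
heart_disease %>%
  pivot_longer(c(Cost, log_cost), names_to = "scale", values_to = "value") %>%
  ggplot(aes(x = value)) +
  geom_histogram() +
  facet_wrap(~ scale, scales = "free") +
  theme_bw()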

4. Assess the model summary

Now fit the same model as before, but with log_cost as the response, using the following code chunk. Update the code by replacing INSERT_VARIABLE with your selected variable, then run it to fit the model:

log_cost_lm <- lm(log_cost ~ INSERT_VARIABLE, data = heart_disease)

Following the example in lecture, interpret the results from the summary() function on your initial model. Do you think there is sufficient evidence to reject the null hypothesis that the coefficient is 0? What is the interpretation of the \(R^2\) value? Compare the square root of the raw (unadjusted) \(R^2\) of your linear model to the correlation between that explanatory variable and the response using the cor() function (e.g., cor(heart_disease$INSERT_VARIABLE, heart_disease$log_cost) - but replace INSERT_VARIABLE with your variable). What do you notice?
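
For example, if Duration were your chosen variable (swap in your own), the comparison described above would look like this sketch:

# Example fit with Duration as the explanatory variable
example_lm <- lm(log_cost ~ Duration, data = heart_disease)
summary(example_lm)
# Square root of the unadjusted R^2 ...
sqrt(summary(example_lm)$r.squared)
# ... matches the absolute value of the correlation for a simple linear model
cor(heart_disease$Duration, heart_disease$log_cost)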

To assess the fit of a linear model, we can also plot the predicted values vs the actual values, to see how closely our predictions align with reality, and to decide whether our model is making any systematic errors. Execute the following code chunk to show the observed log_cost values against our model’s predictions:

heart_disease %>%
  mutate(model_preds = predict(log_cost_lm)) %>%
  ggplot(aes(x = model_preds, y = log_cost)) +
  geom_point(alpha = 0.75) +
  geom_abline(slope = 1, intercept = 0,
              linetype = "dashed", color = "red") +
  theme_bw() +
  labs(x = "Predictions", y = "Observed log(Cost + 1)")

5. Repeat steps 2 and 4 above for each of the different continuous variables

Which variable do you think is the most appropriate for modeling the cost?

6. Include multiple covariates in your regression

Repeat steps 2 and 4 above, but include more than one variable in your model. You can easily do this in the lm() function by adding another variable to the formula with the + operator, like so (just replace the INSERT_VARIABLE_X parts):

multi_cost_lm <- lm(log_cost ~ INSERT_VARIABLE_1 + INSERT_VARIABLE_2, 
                   data = heart_disease)

Experiment with different sets of the continuous variables. Which set of continuous variables do you think models log(Cost) best? (Remember to use the Adjusted \(R^2\) when comparing models that have different numbers of variables.)
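
As a sketch of how such a comparison might look (using the continuous variables named earlier purely as an illustration):

# Two candidate models with different numbers of variables
candidate_1 <- lm(log_cost ~ Interventions + Duration, data = heart_disease)
candidate_2 <- lm(log_cost ~ Interventions + Duration + Comorbidities,
                  data = heart_disease)
# Compare on Adjusted R^2 rather than the raw R^2
summary(candidate_1)$adj.r.squared
summary(candidate_2)$adj.r.squared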

Beware collinearity! Load the car library (install it if necessary!) and use the vif() function to check for possible (multi)collinearity. The vif() function computes the variance inflation factor (VIF), which for predictor \(x_j\), \(j \in 1, \dots, p\), is:

\[ VIF_j = \frac{1}{1 - R^2_j} \]

where \(R^2_j\) is the \(R^2\) from a regression with variable \(x_j\) as the response and the other \(p-1\) predictors as the explanatory variables. VIF values close to 1 indicate the variable is not correlated with the other predictors, while VIF values over 5 indicate a strong presence of collinearity. If present, remove a variable with VIF over 5 and redo the fit. Rinse, lather, and repeat until the vif() output is all less than 5. The following code chunk displays an example of using this function:

# First install the package by running the following line (uncomment it!) in the console
# install.packages("car")
library(car)
vif(multi_cost_lm)
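
To see where these numbers come from, you can reproduce a single VIF by hand from the formula above; a sketch, using Interventions and Duration purely as an example pair of predictors:

# Example model with two of the continuous predictors
two_var_lm <- lm(log_cost ~ Interventions + Duration, data = heart_disease)
vif(two_var_lm)
# R^2 from regressing Interventions on the remaining predictor(s)
r2_interventions <- summary(lm(Interventions ~ Duration, data = heart_disease))$r.squared
# Should match the VIF reported for Interventions above
1 / (1 - r2_interventions)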

Tomorrow

Tomorrow’s lab will focus on categorical variables, interactions, and holdout data predictions.