Introduction

What is big data? According to Wikipedia, big data is a term for data sets that are too large or complex for traditional data-processing application software to adequately deal with. According to Dr. Mario, big data is a nightmare. For this class, big data means that the sample size (\(n\)) is extremely large, the number of explanatory variables (\(p\)) is extremely large, or both are extremely large. For Google, big data means all the data in the world. Big data is a sexy phrase that you should drop on the reg, not because you know what it means, but because it makes you sound smarter. Notice how many times I have used big data in this paragraph.

Suppose we have a response variable \(Y\) and \(p=1000\) predictor variables. It is highly unlikely that all the predictor variables are relevant for explaining the variation in \(Y\). Furthermore, it is highly unlikely that all the predictor variables are useful for making future predictions \(\hat{Y}\). Today, we take a look at helpful methods for simultaneous parameter estimation and variable selection in the classic linear model.
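In symbols, the setting is the classic linear model with a sparse coefficient vector; this display just restates the setup above:

\[
Y = X\beta + \epsilon, \qquad \epsilon \sim N(0,\sigma^2 I), \qquad \beta_j = 0 \ \text{for most } j \in \{1,\dots,p\}.
\]

The goal is to estimate the nonzero \(\beta_j\) well and, at the same time, figure out which \(\beta_j\) are zero.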

Part 1: Simulate and Meditate

Chunk 1: Simulate Data

set.seed(216)
X=matrix(rnorm(100000),500,200)        #n=500 observations on p=200 predictors
beta=c(rep(5,5),rep(-2,5),rep(0,190))  #Only the first 10 coefficients are nonzero
set.seed(480)
epsilon=rnorm(500,0,10)                #Noise with standard deviation 10
y=X%*%beta+epsilon                     #True model: y = X beta + epsilon

SIM.DATA=data.frame(y=y,X=X)           #Columns: y, X.1, ..., X.200

Chunk 2: Fit Linear Model

library(tidyverse)  #For ggplot2 and the pipe
library(broom)      #For glance()

lm.model=lm(y~.,data=SIM.DATA)  #Full model using all 200 predictors
glance(lm.model)
param.est=lm.model$coefficients
param.conf=confint(lm.model)    #95% confidence intervals
param.lm=data.frame(cbind(param.est,param.conf))[-1,] #Remove Intercept
names(param.lm)=c("Estimate","Lower","Upper")
param.lm = param.lm %>%
              mutate(Significant=factor(ifelse(0>Lower & 0<Upper,"No","Yes"),
                                        levels=c("No","Yes"))) #Declare both levels so colors stay consistent

ggplot(param.lm[1:5,]) +
  geom_pointrange(aes(x=1:5,y=Estimate,ymin=Lower,ymax=Upper,color=Significant),size=2)+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("X1:X5")
ggplot(param.lm[6:10,]) +
  geom_pointrange(aes(x=6:10,y=Estimate,ymin=Lower,ymax=Upper,color=Significant),size=2)+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("X6:X10")
ggplot(param.lm[11:200,]) +
  geom_pointrange(aes(x=11:200,y=Estimate,ymin=Lower,ymax=Upper,color=Significant))+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("X11:X200")

Chunk 3: Linear Model for Each Potential Predictor

COEF=rep(NA,200)   #Slope estimate from each one-predictor model
P.VAL=rep(NA,200)  #P-value for that slope
for(j in 1:200){
  individual.mod=lm(y~.,data=SIM.DATA[,c(1,j+1)])  #Regress y on predictor j alone
  COEF[j]=coef(individual.mod)[2]
  P.VAL[j]=summary(individual.mod)$coefficients[2,4]
}

KEEP=P.VAL<0.01                              #Keep predictors whose p-value beats 0.01
ACTUAL=c(rep("NonZero",10),rep("Zero",190))  #The truth from the simulation

tibble(COEF,P.VAL,KEEP) %>% 
  ggplot() +
  geom_point(aes(x=COEF,y=P.VAL,color=KEEP),size=2) +
  geom_hline(yintercept=0.01,linetype="dashed")+
  scale_color_manual(values=c("lightskyblue2","gray"))+
  theme_minimal()

tibble(ACTUAL=ACTUAL,KEEP=KEEP) %>% 
  table() %>% prop.table()
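
The proportion table above can be boiled down to two error rates. Here is a quick sketch; FALSE.POS and FALSE.NEG are names I made up, and the code reuses KEEP and ACTUAL from above:

FALSE.POS=mean(KEEP[ACTUAL=="Zero"])      #Fraction of truly zero coefficients we kept
FALSE.NEG=mean(!KEEP[ACTUAL=="NonZero"])  #Fraction of truly nonzero coefficients we dropped
c(FalsePositiveRate=FALSE.POS,FalseNegativeRate=FALSE.NEG)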

Chunk 4: Modifying the Cutoff For P-Values

Cutoff = 0.01  #Placeholder: set your own p-value cutoff here (e.g., a Bonferroni-style 0.05/200)

#Same screening loop as Chunk 3, repeated so this chunk is self-contained
COEF=rep(NA,200)
P.VAL=rep(NA,200)
for(j in 1:200){
  individual.mod=lm(y~.,data=SIM.DATA[,c(1,j+1)])
  COEF[j]=coef(individual.mod)[2]
  P.VAL[j]=summary(individual.mod)$coefficients[2,4]
}

KEEP=P.VAL<Cutoff
ACTUAL=c(rep("NonZero",10),rep("Zero",190))

tibble(ACTUAL=ACTUAL,KEEP=KEEP) %>% 
  table() %>% prop.table()

lm.model=lm(y~.,data=SIM.DATA[,c(1,which(KEEP)+1)])  #Refit using only the kept predictors
param.est=lm.model$coefficients
param.conf=confint(lm.model)
param.lm=data.frame(cbind(param.est,param.conf))[-1,] #Remove Intercept
names(param.lm)=c("Estimate","Lower","Upper")
param.lm = param.lm %>%
  mutate(Significant=factor(ifelse(0>Lower & 0<Upper,"No","Yes"),
                            levels=c("No","Yes")))

#These panels assume the screening kept all of X1:X10, which it should given the strong signals
ggplot(param.lm[1:5,]) +
  geom_pointrange(aes(x=1:5,y=Estimate,ymin=Lower,ymax=Upper,color=Significant),size=2)+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("X1:X5")
ggplot(param.lm[6:10,]) +
  geom_pointrange(aes(x=6:10,y=Estimate,ymin=Lower,ymax=Upper,color=Significant),size=2)+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("X6:X10")
ggplot(param.lm[11:(dim(param.lm)[1]),]) +
  geom_pointrange(aes(x=11:(dim(param.lm)[1]),y=Estimate,ymin=Lower,ymax=Upper,color=Significant))+
  theme_minimal()+
  scale_color_manual(drop=F,values=c("lightskyblue2","gray"))+
  xlab("Remaining Kept Variables")

Part 2: Shrinkage Estimation and More Meditation

Chunk 1: Penalized Estimation Path for Ridge

library(glmnet)
ridge.mod=glmnet(x=as.matrix(SIM.DATA[,-1]),
                 y=as.vector(SIM.DATA[,1]),
                 alpha=0)       #alpha=0 selects the ridge penalty
plot(ridge.mod,xvar="lambda")   #Coefficient paths as log(lambda) grows
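
For reference, the ridge penalty shrinks coefficients by charging for their squared size. Up to glmnet's internal scaling of the least squares term, alpha=0 solves

\[
\hat{\beta}^{\text{ridge}} = \arg\min_{\beta_0,\beta}\ \sum_{i=1}^{n}\left(y_i-\beta_0-x_i^\top\beta\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2 .
\]

Ridge shrinks every coefficient toward zero as \(\lambda\) grows but never sets any of them exactly to zero, so ridge alone does no variable selection.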

Chunk 2: Penalized Estimation Path for Lasso

lasso.mod=glmnet(x=as.matrix(SIM.DATA[,-1]),
                 y=as.vector(SIM.DATA[,1]),
                 alpha=1)       #alpha=1 selects the lasso penalty
plot(lasso.mod,xvar="lambda")
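
The lasso swaps the squared penalty for an absolute-value penalty. Again up to glmnet's internal scaling, alpha=1 solves

\[
\hat{\beta}^{\text{lasso}} = \arg\min_{\beta_0,\beta}\ \sum_{i=1}^{n}\left(y_i-\beta_0-x_i^\top\beta\right)^2 + \lambda\sum_{j=1}^{p}\left|\beta_j\right| .
\]

Unlike ridge, the absolute-value penalty can set coefficients exactly to zero, so the lasso does parameter estimation and variable selection simultaneously, which is exactly the goal laid out in the introduction. In the plot, the paths hit zero one by one as \(\lambda\) grows.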

Chunk 3: Penalized Estimation Path for Elastic Net

enet.mod=glmnet(x=as.matrix(SIM.DATA[,-1]),
                y=as.vector(SIM.DATA[,1]),
                alpha=1/2)      #alpha=1/2 mixes the ridge and lasso penalties equally
plot(enet.mod,xvar="lambda")
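
The elastic net mixes the two penalties. In glmnet's parameterization, the penalty term is

\[
\lambda\left[\frac{1-\alpha}{2}\sum_{j=1}^{p}\beta_j^2 + \alpha\sum_{j=1}^{p}\left|\beta_j\right|\right],
\]

so alpha=0 recovers ridge, alpha=1 recovers the lasso, and alpha=1/2 weights the two equally. To see which variables survive at a particular penalty, you can pull coefficients out of the fitted path; s=1 below is an arbitrary illustration, not a tuned choice:

enet.coef=coef(enet.mod,s=1)  #Coefficients at lambda=1
sum(enet.coef[-1,]!=0)        #Number of nonzero slopes (drop the intercept row)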