4.2 A practical example using simulated data

A practical example will make the use of this test clear. Let's simulate data from a linear model:

## simulate data: intercept 10, slope 20, noise sd 10:
x <- 1:10
y <- 10 + 20 * x + rnorm(10, sd = 10)
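As a quick visual check, we can plot the simulated data together with the true regression line:

## visualize the simulated data; the dashed line is the
## true relationship y = 10 + 20x:
plot(x, y)
abline(a = 10, b = 20, lty = 2)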

Here, the null hypothesis that the slope is 0 is false (its true value is 20). Now, we fit a null hypothesis model, without a slope:

## null hypothesis model:
m0 <- lm(y ~ 1)

We will compare this model’s log likelihood with that of the alternative model, which includes an estimate of the slope:

## alternative hypothesis model:
m1 <- lm(y ~ x)
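Since we know the true slope is 20, we can verify that the alternative model recovers something close to it (the exact estimate depends on the random draw, so no output is shown here):

## estimated intercept and slope; the slope should be near 20:
coef(m1)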

The difference in log likelihoods, multiplied by -2, is:

LogLRT <- -2 * (logLik(m0) - logLik(m1))
## observed value:
LogLRT[1]
## [1] 34.49

The difference in the number of parameters between the two models is one, so under the null hypothesis the statistic LogLRT asymptotically follows the \(\chi_1^2\) distribution. Is the observed value \(34.49\) unexpected under this distribution? We can calculate the probability of obtaining the likelihood ratio statistic observed above, or a value more extreme, given the \(\chi_1^2\) distribution.
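A quick simulation sketch (our own illustration, not part of the analysis above) can make this distributional claim concrete: if we repeatedly generate data for which the null hypothesis is true, i.e., with a true slope of 0, the resulting statistics should approximately follow the \(\chi_1^2\) distribution. This reuses the predictor x defined earlier:

## simulate the statistic's distribution under the null hypothesis:
nsim <- 1000
stat <- replicate(nsim, {
  y0 <- 10 + rnorm(10, sd = 10)  ## true slope is 0 here
  as.numeric(-2 * (logLik(lm(y0 ~ 1)) - logLik(lm(y0 ~ x))))
})
## the empirical 95th percentile should be close to qchisq(0.95, df = 1):
quantile(stat, probs = 0.95)

With only ten data points the \(\chi_1^2\) approximation is not exact, but the empirical quantile should land reasonably close to 3.841.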

pchisq(LogLRT[1], df = 1, lower.tail = FALSE)
## [1] 4.286e-09
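As a cross-check, the same comparison can be carried out in one step with the lrtest function from the lmtest package (assuming that package is installed; this is an aside, not required for the calculation above):

## install.packages("lmtest")  ## if not yet installed
library(lmtest)
lrtest(m0, m1)  ## reports the same chi-squared statistic and p-value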

Analogous to the critical t-value in the t-test, the critical chi-squared value here is:

## critical value:
qchisq(0.95, df = 1)
## [1] 3.841

If minus two times the observed difference in log likelihoods exceeds this critical value, we reject the null hypothesis; that is clearly the case here (\(34.49 > 3.841\)).
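This decision rule can be expressed directly in code:

## reject the null hypothesis if the statistic exceeds the critical value:
LogLRT[1] > qchisq(0.95, df = 1)
## [1] TRUE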

Note that in the likelihood ratio test above, we are comparing two nested models: the null hypothesis model is nested inside the alternative hypothesis model. This means that the alternative hypothesis model contains all the parameters of the null hypothesis model (i.e., the intercept) plus one additional parameter (the slope).
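One way to see the nesting concretely (our own check, not from the analysis above) is to compare the two models' design matrices: the null model's single intercept column is a subset of the alternative model's columns.

## the null model's design matrix has only the intercept column;
## the alternative model adds a column for x:
head(model.matrix(m0))
head(model.matrix(m1))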