6.2 The hypothesis matrix illustrated with a three-level factor
Consider an example with the three word classes nouns, verbs, and adjectives. We load simulated data from a lexical decision task with response times as the dependent variable. The research question is: do response times differ as a function of the between-subject factor word class, with its three levels nouns, verbs, and adjectives? We make the ad hoc assumption that nouns may have longer response times and that adjectives may have shorter response times. Here, we specify word class as a between-subject factor. In cognitive science experiments, word class will usually vary within subjects and between items. However, the within- or between-subjects status of an effect is independent of its contrast coding; we assume the manipulation to be between subjects for ease of exposition. The concepts presented here extend to repeated measures designs that are often analyzed using hierarchical Bayesian (linear mixed) models.
# source("functions/mixedDesign.v0.6.3.R")
# M <- matrix(c(500, 450, 400),
# nrow=3, ncol=1, byrow=FALSE)
# set.seed(1)
# simdat2 <- mixedDesign(B=3, W=NULL,
# n=4, M=M, SD=20, long = TRUE)
data("df_contrasts2")
head(df_contrasts2)
## # A tibble: 6 × 3
## F DV id
## <fct> <int> <int>
## 1 nouns 476 1
## 2 nouns 517 2
## 3 nouns 491 3
## 4 nouns 516 4
## 5 verbs 464 5
## 6 verbs 439 6
names(df_contrasts2)[1]
## [1] "F"
Table 6.2: Summary statistics per condition for the simulated lexical decision data (response times in ms).

| Factor     | N data | Est. means | Std. dev. | Std. errors |
|------------|--------|------------|-----------|-------------|
| adjectives | 4      | 400.2      | 19.9      | 9.9         |
| nouns      | 4      | 500.0      | 20.0      | 10.0        |
| verbs      | 4      | 450.2      | 20.0      | 10.0        |
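Such a summary can be reproduced from the loaded data; the following is a minimal sketch using base R (the table above may have been generated by other means):

# number of observations, mean, SD, and SE of the response times per word class
aggregate(DV ~ F, data = df_contrasts2,
          FUN = function(x) c(N = length(x), M = mean(x),
                              SD = sd(x), SE = sd(x) / sqrt(length(x))))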
As shown in Table 6.2, the estimated means reflect our assumptions about the true means in the data simulation: response times are longest for nouns and shortest for adjectives. In the following sections, we use this data set to illustrate sum contrasts. Furthermore, we will use an additional data set to illustrate repeated, polynomial, and custom contrasts. In practice, usually only one set of contrasts is selected when the expected pattern of means is formulated during the design of the experiment.
6.2.1 Sum contrasts
For didactic purposes, the next sections describe sum contrasts. Suppose that the expectation is that nouns are responded to more slowly and adjectives are responded to more quickly than the GM response time. Then, the research question could be: Do nouns differ from the GM and do adjectives differ from the GM? And if so, are they above or below the GM? We want to estimate the following two quantities (where \(\mu_1\), \(\mu_2\), and \(\mu_3\) denote the condition means for adjectives, nouns, and verbs, respectively, following the alphabetical ordering of the factor levels):
\[\begin{equation} \beta_1 = \mu_1 - \frac{\mu_1+\mu_2+\mu_3}{3} = \mu_1 - GM \end{equation}\]
and
\[\begin{equation} \beta_2 = \mu_2 - \frac{\mu_1+\mu_2+\mu_3}{3} = \mu_2 - GM \end{equation}\]
This translates into the following two hypotheses:
\[\begin{equation} H_{0_1}: \mu_1 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation}\]
and
\[\begin{equation} H_{0_2}: \mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation}\]
\(H_{0_1}\) can also be written as:
\[\begin{align} \label{h01} & \mu_1 =\frac{\mu_1+\mu_2+\mu_3}{3}\\ \Leftrightarrow & \mu_1 - \frac{\mu_1+\mu_2+\mu_3}{3} = 0\\ \Leftrightarrow & \frac{2}{3} \mu_1 - \frac{1}{3}\mu_2 - \frac{1}{3}\mu_3 = 0 \end{align}\]
This corresponds to estimating the quantity \(\beta_1 = \frac{2}{3} \mu_1 - \frac{1}{3}\mu_2 - \frac{1}{3}\mu_3\). Here, the weights \(2/3, -1/3, -1/3\) are informative about how to combine the condition means to estimate the linear model coefficient and to define the null hypothesis.
\(H_{0_2}\) is also rewritten as:
\[\begin{align}\label{h02} & \mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3}\\ \Leftrightarrow & \mu_2 - \frac{\mu_1+\mu_2+\mu_3}{3} = 0 \\ \Leftrightarrow & -\frac{1}{3}\mu_1 + \frac{2}{3} \mu_2 - \frac{1}{3} \mu_3 = 0 \end{align}\]
This corresponds to estimating the quantity \(\beta_2 = -\frac{1}{3}\mu_1 + \frac{2}{3} \mu_2 - \frac{1}{3} \mu_3\). Here, the weights are \(-1/3, 2/3, -1/3\), and they again indicate how to combine the condition means for estimating the regression coefficient and for defining the null hypothesis.
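To make these weights concrete, here is a small illustrative sketch that plugs the estimated condition means from Table 6.2 into them. The means for adjectives, nouns, and verbs play the roles of \(\mu_1\), \(\mu_2\), and \(\mu_3\) (matching the hypothesis matrix constructed in the next section); the variable names are ours:

# estimated condition means (ms) from Table 6.2; these play the roles of
# mu1 (adjectives), mu2 (nouns), and mu3 (verbs)
means_tab <- c(adjectives = 400.2, nouns = 500.0, verbs = 450.2)
w1 <- c(2/3, -1/3, -1/3)   # weights defining beta_1
w2 <- c(-1/3, 2/3, -1/3)   # weights defining beta_2
c(beta1 = sum(w1 * means_tab), beta2 = sum(w2 * means_tab))  # about -50 and +50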
6.2.2 The hypothesis matrix
The weights of the condition means are not only useful to define parameter estimates and hypotheses. They also provide the starting step in a very powerful method which allows the researcher to generate the contrasts that are needed to test these hypotheses in a linear model. That is, what we did so far is to explain some kinds of different contrast codings that exist and what the hypotheses are that they test. That is, if a certain data-set is given and the goal is to estimate certain comparisons (and test certain hypotheses), then the procedure would be to check whether any of the contrasts that we encountered above happen to estimate these comparisons and test exactly the hypotheses of interest. Sometimes it suffices to use one of these existing contrasts. However, at other times, our research questions may not correspond exactly to any of the contrasts in the default set of standard contrasts provided in R. For these cases, or simply for more complex designs, it is very useful to know how contrast matrices are created. Indeed, a relatively simple procedure exists in which we write our comparisons or hypotheses formally, extract the weights of the condition means from the comparisons/hypotheses, and then automatically generate the correct contrast matrix that we need in order to estimate these comparisons or test these hypotheses in a linear model. Using this powerful method, it is not necessary to find a match to a contrast matrix provided by the family of functions in R starting with the prefix contr
. Instead, it is possible to simply define the comparisons that one wants to estimate or the hypotheses that one wants to test, and to obtain the correct contrast matrix for these in an automatic procedure. Here, for pedagogical reasons, we show some examples of how to apply this procedure in cases where the comparisons/hypotheses do correspond to some of the existing contrasts.
Defining a custom contrast matrix involves four steps:
- Write down the estimated comparisons or hypotheses
- Extract the weights and write them into what we will call a hypothesis matrix
- Apply the generalized matrix inverse to the hypothesis matrix to create the contrast matrix
- Assign the contrast matrix to the factor and run the linear (mixed) model
Let us apply this four-step procedure to our example of the sum contrast. The first step, writing down the estimated parameters and hypotheses, is shown above. The second step involves writing down the weights that each comparison / hypothesis gives to the condition means. The weights for the first comparison or null hypothesis are wH01 = c(+2/3, -1/3, -1/3), and the weights for the second comparison or null hypothesis are wH02 = c(-1/3, +2/3, -1/3).
Before writing these into a hypothesis matrix, we also define the estimated quantity and the null hypothesis for the intercept term. The intercept parameter estimates the mean across all conditions:
\[\begin{align} \alpha = \frac{\mu_1 + \mu_2 + \mu_3}{3} \\ \alpha = \frac{1}{3} \mu_1 + \frac{1}{3}\mu_2 + \frac{1}{3}\mu_3 \end{align}\]
This corresponds to the null hypothesis that the mean across all conditions is zero:
\[\begin{align} H_{0_0}: &\frac{\mu_1 + \mu_2 + \mu_3}{3} = 0 \\ H_{0_0}: &\frac{1}{3} \mu_1 + \frac{1}{3}\mu_2 + \frac{1}{3}\mu_3 = 0 \end{align}\]
This estimate and null hypothesis have weights of \(1/3\) for all condition means.
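As a small check, the intercept weights of \(1/3\) applied to the estimated means from Table 6.2 indeed return the GM:

# equal weights of 1/3 on each condition mean give the grand mean (about 450 ms)
sum(c(1/3, 1/3, 1/3) * c(adjectives = 400.2, nouns = 500.0, verbs = 450.2))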
The weights from all three model parameters / hypotheses that were defined are now combined and written into a matrix that we refer to as the hypothesis matrix (Hc):
library(MASS)  # provides fractions() and ginv()

HcSum <- rbind(
  cH00 = c(adjectives = 1 / 3, nouns = 1 / 3, verbs = 1 / 3),
  cH01 = c(adjectives = +2 / 3, nouns = -1 / 3, verbs = -1 / 3),
  cH02 = c(adjectives = -1 / 3, nouns = +2 / 3, verbs = -1 / 3)
)
fractions(t(HcSum))
## cH00 cH01 cH02
## adjectives 1/3 2/3 -1/3
## nouns 1/3 -1/3 2/3
## verbs 1/3 -1/3 -1/3
Each set of weights is first entered as a row into the matrix (using the command rbind()); there are mathematical reasons for this (see Schad et al. 2020b). We then switch the rows and columns of the matrix for easier readability using the command t(), which transposes the matrix. The command fractions() turns the decimals into fractions, again to improve readability.
Now that the condition weights have been written into the hypothesis matrix, the third step of the procedure is implemented: a matrix operation called the ‘generalized matrix inverse’2 is used to obtain the contrast matrix that is needed to test these hypotheses in a linear model.
In R this next step is done using the function ginv()
from the MASS
package. Here, we define a function ginv2()
for nicer formatting of the output.3
# define a function to make the output nicer
ginv2 <- function(x) {
  fractions(provideDimnames(ginv(x),
    base = dimnames(x)[2:1]
  ))
}
Applying the generalized inverse to the hypothesis matrix results in the new matrix XcSum
. This is the contrast matrix \(X_c\) that estimates exactly those comparisons and tests exactly those hypotheses that were specified earlier:
(XcSum <- ginv2(HcSum))
## cH00 cH01 cH02
## adjectives 1 1 0
## nouns 1 0 1
## verbs 1 -1 -1
This contrast matrix corresponds exactly to the sum contrasts described above. In the case of the sum contrast, the contrast matrix looks very different from the hypothesis matrix. The contrast matrix in sum contrasts codes with \(+1\) the condition that is to be compared to the GM. The condition that is never compared to the GM is coded as \(-1\). This is a very different interpretation than the two-condition sum contrast we saw in earlier chapters!
The important point here is that unless we know the relationship between the hypothesis matrix and the contrast matrix, the meaning of the coefficients is completely opaque.
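One way to make this relationship visible (a sketch): the hypothesis matrix and the contrast matrix are each other's generalized inverses, so applying ginv2() to the contrast matrix takes us back to the hypothesis matrix. Here, unclass() merely strips the fractions formatting before the matrix is passed on to ginv():

# going back: the generalized inverse of the contrast matrix recovers the
# hypothesis matrix (unclass() strips the fractions display attributes)
ginv2(unclass(XcSum))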
To verify this custom-made contrast matrix, we compare it to the sum contrast matrix generated by the R function contr.sum() from the stats package. The two matrices are identical once the intercept term, a column of ones, is added to the matrix produced by contr.sum():
fractions(cbind(1, contr.sum(3)))
## [,1] [,2] [,3]
## 1 1 1 0
## 2 1 0 1
## 3 1 -1 -1
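The same logic can be run in reverse for R's built-in contrasts: applying the generalized inverse to the contr.sum() contrast matrix (with the intercept column added) shows which comparisons it estimates. This is a quick sketch; the resulting matrix carries no row or column labels, but its rows contain the familiar weights:

# rows: 1/3 weights for the intercept, then the weights comparing the first
# two factor levels to the GM
fractions(ginv(cbind(1, contr.sum(3))))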
In order to estimate model parameters, step four in our procedure involves assigning sum contrasts to the factor F in our example data, and running a linear (mixed) model. This allows us to estimate the regression coefficients associated with each contrast. We compare these to the data shown above (Table 6.2) to test whether the regression coefficients actually correspond to the differences of condition means, as intended. To define the contrast, it is necessary to remove the intercept term (the first column), as this is automatically added by the linear model function lm().
contrasts(df_contrasts2$F) <- XcSum[, 2:3]
fit_Sum <- lm(DV ~ 1 + F, data = df_contrasts2)
round(summary(fit_Sum)$coefficients, 1)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 450.2 5.8 78.1 0
## FcH01 -49.9 8.2 -6.1 0
## FcH02 49.8 8.2 6.1 0
The linear model regression coefficients show the GM response time of \(450\) ms in the intercept. Remember that the first regression coefficient FcH01 was designed to estimate the extent to which adjectives differ from the GM. Its estimate of roughly \(-50\) indicates that response times for adjectives (\(400\) ms) are \(50\) ms faster than the GM of \(450\) ms. The second regression coefficient FcH02 estimates the difference between nouns and the GM; its estimate of roughly \(+50\) indicates that nouns (\(500\) ms) are responded to \(50\) ms more slowly than the GM. Verbs, the condition coded \(-1\) in both contrasts, are never directly compared to the GM.
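As a sanity check (a minimal sketch using base R), we can apply the hypothesis matrix directly to the observed condition means; this reproduces the intercept (the GM) and the two slope estimates reported by the model:

# the weights in the hypothesis matrix, applied to the observed condition
# means, reproduce the intercept (GM) and the coefficients cH01 and cH02
cond_means <- tapply(df_contrasts2$DV, df_contrasts2$F, mean)
round(HcSum %*% cond_means[colnames(HcSum)], 1)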
We have now not only derived contrasts, parameter estimates, and hypotheses for the sum contrast; we have also introduced a powerful and highly general procedure for generating contrasts for many different kinds of hypotheses and experimental designs.
6.2.3 Generating contrasts: The hypr package
To work with the 4-step procedure, i.e., to flexibly design contrasts to estimate specific comparisons, we have developed the R package hypr
(Rabe et al. 2020). This package allows the user to specify comparisons as null hypotheses, and based on these null hypotheses, it automatically generates contrast matrices that allow the user to estimate these comparisons and test these hypotheses in linear models. It thus considerably simplifies the implementation of the 4-step procedure outlined above.
To illustrate the functionality of the hypr
package, we will use the two comparisons and associated null hypotheses that we had defined and analyzed in the previous section:
\[\begin{equation} \beta_1 = \mu_1 - \frac{\mu_1+\mu_2+\mu_3}{3} = \mu_1 - GM \end{equation}\]
\[\begin{equation} H_{0_1}: \mu_1 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation}\]
and
\[\begin{equation} \beta_2 = \mu_2 - \frac{\mu_1+\mu_2+\mu_3}{3} = \mu_2 - GM \end{equation}\]
\[\begin{equation} H_{0_2}: \mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation}\]
These null hypotheses are effectively comparisons between condition means or between bundles of condition means. That is, \(\mu_1\) is compared to the GM and \(\mu_2\) is compared to the grand mean. These two comparisons/hypotheses can be directly entered into R using the hypr()
function from the hypr
package.
To do so, we use some labels to indicate factor levels. E.g., nouns, verbs, and adjectives can represent the factor levels \(\mu_1\), \(\mu_2\), and \(\mu_3\). The first comparison/hypothesis specifies that \(\mu_1 = \frac{\mu_1+\mu_2+\mu_3}{3}\). This can be written as a formula in R: nouns ~ (nouns + verbs + adjectives)/3. The second comparison/hypothesis is that \(\mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3}\), which can be written in R as verbs ~ (nouns + verbs + adjectives)/3.
library(hypr)

HcSum <- hypr(
  b1 = nouns ~ (nouns + verbs + adjectives) / 3,
  b2 = verbs ~ (nouns + verbs + adjectives) / 3,
  levels = c("nouns", "verbs", "adjectives")
)
HcSum
## hypr object containing 2 null hypotheses:
## H0.b1: 0 = (2*nouns - verbs - adjectives)/3
## H0.b2: 0 = (2*verbs - nouns - adjectives)/3
##
## Call:
## hypr(b1 = ~2/3 * nouns - 1/3 * verbs - 1/3 * adjectives, b2 = ~2/3 *
## verbs - 1/3 * nouns - 1/3 * adjectives, levels = c("nouns",
## "verbs", "adjectives"))
##
## Hypothesis matrix (transposed):
## b1 b2
## nouns 2/3 -1/3
## verbs -1/3 2/3
## adjectives -1/3 -1/3
##
## Contrast matrix:
## b1 b2
## nouns 1 0
## verbs 0 1
## adjectives -1 -1
The results show that the null hypotheses or comparisons between condition means have been rewritten into a form where \(0\) appears on the left-hand side of the equation and the condition means, together with their associated weights, appear on the right-hand side. This presentation makes it easy to see which weight each condition mean receives in a given null hypothesis or comparison. The next part of the results shows the hypothesis matrix, which contains exactly these weights. Thus, hypr takes as input comparisons between condition means (which also define null hypotheses), automatically extracts the corresponding weights, and encodes them into the hypothesis matrix. hypr then applies the generalized matrix inverse to obtain the contrast matrix from the hypothesis matrix. These steps correspond exactly to the steps we carried out manually in the preceding section; hypr simply performs them for us automatically. We can now extract the contrast matrix with a simple function call:
contr.hypothesis(HcSum)
## b1 b2
## nouns 1 0
## verbs 0 1
## adjectives -1 -1
## attr(,"class")
## [1] "hypr_cmat" "matrix" "array"
We can assign this contrast to our factor as we did before, and again fit a linear model:
df_contrasts2$F <- factor(df_contrasts2$F,
  levels = c("nouns", "verbs", "adjectives")
)
contrasts(df_contrasts2$F) <- contr.hypothesis(HcSum)
fit_Sum2 <- lm(DV ~ 1 + F, data = df_contrasts2)
round(summary(fit_Sum2)$coefficients, 1)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 450.2 5.8 78.1 0
## Fb1 49.8 8.2 6.1 0
## Fb2 0.1 8.2 0.0 1
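The estimates show that nouns are about \(50\) ms slower than the GM (Fb1 = 49.8) and that verbs lie essentially at the GM (Fb2 = 0.1). A quick check against the observed condition means (again just a sketch) confirms this:

# b1 should equal nouns minus GM (about +50 ms), b2 verbs minus GM (about 0 ms)
cond_means <- tapply(df_contrasts2$DV, df_contrasts2$F, mean)
GM <- mean(cond_means)
round(c(b1 = cond_means[["nouns"]] - GM, b2 = cond_means[["verbs"]] - GM), 1)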
In the hypr package, the focus lies on the estimation of contrasts that code comparisons between condition means or bundles of condition means. Thus, the null hypotheses that one specifies implicitly define the estimation of a difference between condition means or bundles of condition means. The output of the hypr() function (see the first part of the results above) makes this clear: it formulates each null hypothesis in a way that also shows which quantity the corresponding model parameter estimates. For example, the null hypothesis H0.b1: 0 = (2*nouns - verbs - adjectives)/3 corresponds to the parameter estimate b1 = (2*nouns - verbs - adjectives)/3, which is the difference between nouns and the GM. The resulting contrasts thus allow us to estimate the specified differences between condition means or bundles of condition means.
2. At this point, there is no need to understand in detail what this means. We refer the interested reader to Schad et al. (2020b). For a quick overview, we recommend a vignette explaining the generalized inverse in the matlib package (Friendly, Fox, and Chalmers 2020).
3. The function fractions() from the MASS package is used to make the output more easily readable, and the function provideDimnames() is used to keep row and column names.