Welcome to the Sixth Summer School on Statistical Methods for Linguistics and Psychology, 12-16 September 2022






Application, dates, location

  • Dates: 12-16 September 2022.
  • Times: 9AM-5PM daily.
  • Location: The summer school will be held at the Griebnitzsee campus in Potsdam, at Haus 6. For train connections, consult bvg.de; the train station near the campus is called Griebnitzsee Bhf.
  • Application period: 17 Sept 2021 to 1 April 2022. Applications are closed. Decisions will be announced around 15 April 2022.

Free online course: Introduction to Bayesian Data Analysis

You can do this four-week online course for free (starts Jan 25, 2022). Details here.

Brief history of the summer school, and motivation

The summer school was started by Shravan Vasishth in 2017, as part of a methods project funded within the SFB 1287. The summer school aims to fill a gap in statistics education, specifically within the fields of linguistics and psychology. One goal of the summer school is to provide comprehensive training in the theory and application of statistics, with a special focus on the linear mixed model. Another major goal is to make Bayesian data analysis a standard part of the toolkit for the linguistics and psychology researcher. Over time, the summer school has evolved to have at least four parallel streams: beginning and advanced courses in frequentist and Bayesian statistics. These may be expanded to more parallel sessions in future editions. We typically admit a total of 120 participants (in 2019, we had some 450 applications). In addition to the all-day courses, we regularly invite speakers to give lectures on important current issues relating to statistics. Previous editions of the summer school: 2021, 2020, 2019, 2018, 2017.

Code of conduct

All participants will be expected to follow the code of conduct (taken from StanCon 2018). In case of any concerns, please contact any of the following instructors: Audrey Bürki, Anna Laurinavichyute, Shravan Vasishth, Bruno Nicenboim, or Reinhold Kliegl.

Invited lecturers

Ralf Engbert; Phillip Alday; Douglas Bates (Prof. Bates will attend over Zoom).

Invited keynote speakers

  1. Prof. Dr. Lena Jäger, Zürich, Switzerland. (Tuesday, 13 Sept 2022, 5-6PM).
    Title: Bayesian Estimation of Measurement Reliability of Individual Differences in Sentence Processing
    Abstract:
    Theories of human sentence processing generally assume that the cognitive mechanisms involved in language processing are qualitatively identical across speakers. However, over the past decade, evidence has accumulated indicating that individual differences in a comprehender’s cognitive capacities play an important role in sentence processing (e.g., Vuong and Martin, 2014; Nicenboim et al., 2015; Farmer et al., 2017). From a methodological point of view, the first step for a principled investigation of individual differences in sentence processing is to establish their test-retest measurement reliability, that is, the correlation of subject-level effects across multiple experimental sessions (Parsons et al., 2019; Cunnings and Fujita, 2020). We cannot take this measurement reliability as a given because of the so-called reliability paradox, which states that test-retest measurement reliability at the individual level is necessarily lower for manipulations with high between-subjects reliability, that is, replicability at the group level (Hedge, 2017). However, it is likely that precisely those effects with high replicability at the group level constitute the set of well-established psycholinguistic phenomena that form the foundation of sentence processing theories. In this talk, I will present ongoing work in which we assess the measurement reliability of individual differences in a range of theoretically relevant phenomena within and across methods. We collected the first cross-methodological reading corpus with multiple experimental sessions from each participant. Fifty native speakers of German each participated in four experimental sessions (two eye-tracking and two self-paced reading sessions, 200 sessions in total). Participants read 80 pages of natural text (20 pages per session) and completed a comprehensive psychometric assessment measuring verbal and non-verbal working memory capacity, cognitive control and IQ, as well as lexical and non-lexical reading fluency. We estimate within- and cross-methodological test-retest measurement reliability of individual differences in well-established psycholinguistic effects by computing Bayesian correlation estimates (Matzke et al., 2017) of participant-specific random slopes between sessions from the same method and sessions from different methods, respectively. We further explore whether cognitive capacities affect test-retest measurement reliability of individual-level effects. We find that lexical-level effects are very stable within individuals across sessions and methods (e.g., participants exhibiting a particularly strong word length effect do so across sessions and methods). By contrast, higher-level cognitive effects that involve syntactic processing (e.g., surprisal or dependency locality) are less stable within individuals. In a nutshell, we find that for higher-level effects, test-retest measurement reliability of individual-level effects is generally higher in self-paced reading than in eye-tracking. Cross-methodological measurement reliability of individual-level effects is generally low for all eye-tracking measures.
    Future work will need to address the question of whether the observed low test-retest measurement reliability of higher-level cognitive effects can be explained by the stimulus materials (naturalistic texts), which, in contrast to the minimal pairs used in planned experiments, do not push the comprehender’s sentence processor to its limits, and might therefore be less well suited to assessing individual differences in sentence processing.
  2. Prof. Dr. Riccardo Fusaroli. (Thursday, 15 Sept 2022, 5-6PM).
    Title: Standing on the shoulders of normal-sized people. Promise and challenges of cumulative statistical approaches
    Abstract:
    We often hear that Newton stood on the shoulders of giants and that science is a cumulative enterprise where new research builds on previous results. This conception of science relates to a commonly cited benefit of Bayesian approaches: their ability to integrate diverse sources of information, e.g., results of previous studies as informed priors. However, this practice is rarely seen in the literature. One possible explanation could be that we remain skeptical of scientific findings in our own field; that is, we know that we stand on the shoulders of normal-sized, fallible people (just as we ourselves are), rather than on the shoulders of giants. This raises the question of how we best integrate fallible findings from previous analyses into our studies. In this talk I will tackle this issue using a combination of simplified simulations and concerns that have arisen in concrete studies using informed priors. First I will cover simulation-based studies of posterior passing: what happens when we use previous posterior estimates as priors, in a sterilized in silico environment? I will then let the complexity of real research slowly creep in: from linear chains of one study following the other, to interrupted chains due to publication bias, to meandering forking paths where studies know and include only some of the literature. These simulations show that posterior passing is slowed down by complexity, but still provides the best solution for this cumulative enterprise. With the simulations at hand, I will turn to real application scenarios, where previous literature and expert opinions are used to build informed priors. Novel concerns arise: hierarchical structures of expectations, heterogeneity of studies, undue levels of confidence, etc. Based on these results, I will advocate for a critical use of informed priors, involving comparisons between informed priors and alternative (e.g., skeptical) priors, and explicit testing of inferential robustness.
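    To make the idea of posterior passing concrete, here is a minimal R sketch (not taken from the talk; the true effect size, sample size per study, and the assumption of a known residual standard deviation are all invented for illustration). It simulates a chain of studies of the same effect, where each study's posterior, obtained from a conjugate normal-normal update, is passed on as the prior of the next study.

      set.seed(1)
      true_effect <- 0.3   # hypothetical true effect
      sigma <- 1           # residual SD, assumed known for simplicity
      n <- 50              # observations per study
      n_studies <- 10

      # start from a vague Normal(0, 1) prior on the effect
      prior_mean <- 0
      prior_sd <- 1

      for (s in 1:n_studies) {
        y <- rnorm(n, mean = true_effect, sd = sigma)
        # conjugate normal-normal update for the mean
        post_var <- 1 / (1 / prior_sd^2 + n / sigma^2)
        post_mean <- post_var * (prior_mean / prior_sd^2 + sum(y) / sigma^2)
        cat(sprintf("study %2d: posterior mean %.3f, posterior sd %.3f\n",
                    s, post_mean, sqrt(post_var)))
        # posterior passing: this posterior becomes the next study's prior
        prior_mean <- post_mean
        prior_sd <- sqrt(post_var)
      }

    In this idealized chain the posterior concentrates around the true effect; the talk examines what happens when such chains are interrupted or distorted by publication bias and selective citation.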

Curriculum and schedule

Schedule: Here is the schedule as a pdf.

Social hours: All participants are invited to an evening of snacks and general hanging out together on Monday and Wednesday from 5PM onwards (CEST).
We offer foundational/introductory and advanced courses in Bayesian and frequentist statistics. When applying, participants are expected to choose only one stream. This year, there will be a special series of lectures by Ralf Engbert that everyone is welcome to attend.

  • Special short course: Introduction to Dynamical Models in Cognitive Science (all participants are welcome, no need to register). Taught by Ralf Engbert, assisted by Lisa Schwetlick and Maximilian Rabe. Tuesday and Thursday, 3:00-4:30PM. Location: Hoersaal 03.

    This course is an introduction to dynamical modeling of eye movements during reading. Lecture I (Tuesday) starts with an introduction to the basic concepts of mathematical modeling using ordinary differential equations. We develop a simplified version of the SWIFT model for eye guidance in reading (Engbert et al., 2005; Rabe et al., 2021), including its computer implementation in R (a toy example of solving a simple differential equation in R is sketched after this course list). Lecture II (Thursday) introduces sequential likelihood methods for dynamical processes. We show that the likelihood function can be decomposed into temporal and spatial components. For the simplified SWIFT model, we carry out numerical computations of the likelihood function.

    Course material: All slides and computer code will be made available via OSF at https://osf.io/8wrf6/

    Timing: Tuesday and Thursday: 3:00-4:30PM.
  • Introduction to Bayesian data analysis (maximum 30 participants). Taught by Shravan Vasishth, assisted by Anna Laurinavichyute.
    This course is an introduction to Bayesian modeling, oriented towards linguists and psychologists. Topics to be covered: introduction to Bayesian data analysis, linear modeling, and hierarchical models. We will cover these topics within the context of an applied Bayesian workflow that includes exploratory data analysis, model fitting, and model checking using simulation. Participants are expected to be familiar with R and must have some experience in data analysis, particularly with the R library lme4.
    Course materials: all materials (videos, etc.) from the previous year's course web page are available here.
    Textbook: here. We will work through the first six chapters.


  • Advanced Bayesian data analysis (maximum 30 participants). Taught by Bruno Nicenboim, assisted by Himanshu Yadav.
    This course assumes that participants already have some experience in Bayesian modeling using brms and want to transition to Stan in order to learn more advanced methods and start building simple computational cognitive models. Participants should have worked through, or be familiar with, the material in the first five chapters of our book draft, Introduction to Bayesian Data Analysis for Cognitive Science. In this course, we will cover Parts III to V of the book draft: model comparison using Bayes factors and k-fold cross-validation, introductory and relatively advanced models in Stan, and simple computational cognitive models.
    Course materials: textbook here. We will start from Part III of the book (Advanced models with Stan). Participants are expected to be familiar with the first five chapters.

  • Foundational methods in frequentist statistics (maximum 30 participants). Taught by Audrey Bürki, Daniel Schad, and João Veríssimo.
    Participants will be expected to have used linear mixed models before, to the level of the textbook by Winter (2019, Statistics for Linguists), and to want to acquire a deeper understanding of frequentist foundations and of the linear mixed modeling framework. Participants are also expected to have fit multiple regressions. We will cover model selection and contrast coding, with a heavy emphasis on simulations, both to compute power and to understand what a model implies (a minimal power-simulation sketch appears after this course list). We will work on (at least some of) the participants' own datasets. This course is not appropriate for researchers new to R or to frequentist statistics.
    Course materials: textbook draft here.

  • Advanced methods in frequentist statistics with Julia (maximum 30 participants). Taught by Reinhold Kliegl, Phillip Alday, and (over Zoom) Doug Bates.
    Applicants must have experience with linear mixed models and be interested in learning how to carry out such analyses with the Julia-based MixedModels.jl package (i.e., the analogue of the R-based lme4 package). MixedModels.jl has some significant advantages, among them: (a) a new and more efficient computational implementation, (b) speed, needed for, e.g., complex designs and power simulations, (c) more flexibility in the selection of parsimonious mixed models, and (d) more flexibility in taking into account autocorrelations or other dependencies, typical of EEG- and fMRI-based time series (under development). We do not expect profound knowledge of Julia from participants; the necessary subset of knowledge will be taught on the first day of the course. We do expect a readiness to install Julia and the confidence that, with some basic instruction, participants will be able to adapt prepared Julia scripts to their own data or to translate some of their own lme4 commands into the equivalent MixedModels.jl commands. The course will be taught in a hybrid IDE. There is already the option to execute R chunks from within Julia, meaning that one needs Julia primarily for executing MixedModels.jl commands as a replacement for lme4. There is also an option to call MixedModels.jl from within R and process the resulting object like an lme4 object (an illustrative sketch appears after this course list). Thus, much of the pre- and postprocessing (e.g., data simulation for complex experimental designs; visualization of partial-effect interactions or shrinkage effects) can be carried out in R.
    Course materials: GitHub repo here.
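For the dynamical models short course: as a purely illustrative warm-up (this is not the SWIFT model, and the variable names and parameter values are invented), the following R sketch uses the deSolve package to numerically solve a single ordinary differential equation in which an activation variable decays towards a target value.

    library(deSolve)  # ODE solver; assumed to be installed

    # toy ODE: activation a(t) approaches a target value at rate r
    toy_model <- function(t, state, parms) {
      with(as.list(c(state, parms)), {
        da <- r * (target - a)
        list(c(da))
      })
    }

    state <- c(a = 0)                 # initial activation
    parms <- c(r = 0.5, target = 1)   # made-up parameters
    times <- seq(0, 10, by = 0.1)

    out <- ode(y = state, times = times, func = toy_model, parms = parms)
    plot(out)                         # activation rises towards the target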
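For the foundational frequentist course: a minimal sketch of a simulation-based power analysis for a single within-subject, within-item effect in a linear mixed model fitted with lme4. The effect size, variance components, numbers of subjects and items, and the crude |t| > 2 decision criterion are illustrative assumptions, not values endorsed by the course.

    library(lme4)

    simulate_once <- function(n_subj = 30, n_items = 16,
                              beta = 25,       # hypothetical effect in ms
                              sd_subj = 50, sd_item = 30, sd_resid = 100) {
      # every subject sees every item in both conditions (a simplification)
      dat <- expand.grid(subj = factor(1:n_subj),
                         item = factor(1:n_items),
                         cond = c(-0.5, 0.5))  # sum-coded condition
      dat$rt <- 400 + beta * dat$cond +
        rnorm(n_subj, 0, sd_subj)[as.integer(dat$subj)] +
        rnorm(n_items, 0, sd_item)[as.integer(dat$item)] +
        rnorm(nrow(dat), 0, sd_resid)
      m <- lmer(rt ~ cond + (1 | subj) + (1 | item), data = dat)
      abs(coef(summary(m))["cond", "t value"]) > 2   # crude decision criterion
    }

    # estimated power = proportion of simulated data sets that meet the criterion
    mean(replicate(200, simulate_once()))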
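For the Julia course: the following R sketch shows one possible way to fit a MixedModels.jl model from within an R session, assuming that Julia, the Julia packages MixedModels and DataFrames, and the R package JuliaCall are installed; the lme4 sleepstudy data set is used only as a familiar example. The course's own GitHub materials describe the intended workflow; this is merely an illustration of the general idea.

    library(JuliaCall)
    library(lme4)            # only for the sleepstudy example data

    julia_setup()            # start a Julia session from within R
    julia_library("MixedModels")

    # copy the R data frame into the Julia session
    julia_assign("sleepstudy", sleepstudy)

    # fit the model in Julia; the formula syntax closely mirrors lme4
    julia_command(
      "m = fit(MixedModel, @formula(Reaction ~ 1 + Days + (1 + Days | Subject)), sleepstudy);")
    julia_eval("string(m)")  # bring a printable summary back into R

    # the corresponding lme4 call would be:
    # lmer(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy)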

Fees and accommodation

If the summer school is held in person (as is the plan), there will be a 40 Euro fee; this covers the costs of coffee and snacks. Participants who are accepted are expected to arrange their own accommodation. We strongly advise participants to find a place to stay near the Griebnitzsee campus rather than in Berlin: German train personnel tend to go on strike every year around the time of the summer school, so you will be better off if you can easily reach the campus.

Contact details

For any questions regarding this summer school that have not been addressed on this home page already, please contact Shravan Vasishth.

Funding

This summer school is funded by the DFG and is part of the SFB 1287, “Variability in Language and Its Limits”.