Welcome to the Seventh Summer School on Statistical Methods for Linguistics and Psychology, 11-15 September 2023
Application, dates, location
- Dates: 11-15 September 2023.
- Times: 9AM-5PM daily.
- Location: The summer school will be held at the Griebnitzsee campus in Potsdam, at Haus 6. For train connections, consult bvg.de; the train station near the campus is called Griebnitzsee Bhf.
- Application period: 30 Sept 2022 to 1 April 2023.
Click here to apply. Decisions will be announced around 15 April 2023.
Brief history of the summer school, and motivation
The summer school was started by Shravan Vasishth in 2017, as part of a
methods project funded within the
SFB 1287. The summer school aims to fill a gap in statistics education, specifically within the fields of linguistics and psychology. One goal of the summer school is to provide comprehensive training in the theory and application of statistics, with a special focus on the linear mixed model. Another major goal is to make Bayesian data analysis a standard part of the toolkit for the linguistics and psychology researcher. Over time, the summer school has evolved to have at least four parallel streams: beginning and advanced courses in frequentist and Bayesian statistics. These may be expanded to more parallel sessions in future editions. We typically admit a total of 120 participants (in 2019, we had some 450 applications). In addition to the all-day courses, we regularly invite speakers to give lectures on important current issues relating to statistics. Previous editions of the summer school:
2022,
2021,
2020,
2019,
2018,
2017.
Code of conduct
All participants will be expected to follow the
code of conduct, taken from
StanCon 2018. If a participant has any concerns, please contact any of the following instructors: Audrey Bürki, Anna Laurinavichyute, Shravan Vasishth, Bruno Nicenboim, or Reinhold Kliegl.
Courses
- Special short course: Introduction to Bayesian meta-analysis. Taught by Gian Luca Di Tanna.
Timing: Tuesday and Thursday, 3:00-4:30PM. Anyone can attend this short course.
- Introduction to Bayesian data analysis (maximum 30 participants). Taught by Himanshu Yadav, assisted by Anna Laurinavichyute.
You can decide whether this course is appropriate for you by looking at the online version
of the course (videos are available): see here.
This course is an introduction to Bayesian modeling, oriented towards linguists and
psychologists. Topics to be covered: introduction to Bayesian data analysis,
linear modeling, and hierarchical models. We will cover these topics within
the context of an applied Bayesian workflow that includes exploratory data
analysis, model fitting, and model checking using simulation.
Prerequisites: Participants
are expected to be familiar with frequentist methodology (the material taught in
the introductory frequentist course below, and in this online textbook:
here), to be relatively fluent in R, and to
have some experience in data analysis, particularly with the R library lme4.
Basic high school (pre-calculus) arithmetic and mathematical fluency are
assumed; for example, you should know what a log is, what an exponent is,
and be able to solve for y in x=log(y/(1-y)) (a worked solution appears after the
course materials below). If you are unfamiliar with frequentist
methods, we suggest taking the introductory frequentist course listed below.
Course Materials
Textbook: here. We will work through the first six chapters, plus the chapter
on Bayes factors.
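For self-assessment, here is one possible worked solution to the algebra example mentioned in the prerequisites above:

$$
x = \log\!\left(\frac{y}{1-y}\right)
\;\Longrightarrow\; e^{x} = \frac{y}{1-y}
\;\Longrightarrow\; y\,(1 + e^{x}) = e^{x}
\;\Longrightarrow\; y = \frac{e^{x}}{1+e^{x}}.
$$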
- Advanced Bayesian data analysis (maximum 30 participants). Taught by Bruno Nicenboim.
This course assumes that participants already have some experience in Bayesian modeling using brms and want to transition to Stan in order to learn more advanced methods and start building simple computational cognitive models. Participants should have worked through, or be familiar with, the material in the first five chapters of our book draft, Introduction to Bayesian Data Analysis for Cognitive Science. In this course, we will cover Parts III to V of the book draft: model comparison using Bayes factors and k-fold cross-validation, introductory and relatively advanced models in Stan, and simple computational cognitive models.
Course Materials
Textbook: here. We will start from Part III of the book (Advanced models with Stan). Participants are expected to be familiar with the first five chapters.
- Foundational methods in frequentist statistics (maximum 30 participants). Taught by Audrey Bürki, Daniel Schad, and João Veríssimo.
Participants are expected to have used linear mixed models before, to the level of the textbook by Winter (2019, Statistics for Linguists), and to want a deeper understanding of frequentist foundations and of the linear mixed modeling framework. Participants are also expected to have fit multiple regressions. We will cover model selection and contrast coding, with a heavy emphasis on simulations to compute power and to understand what the model implies. We will work on (at least some of) the participants' own datasets. This course is not appropriate for researchers new to R or to frequentist statistics.
Course Materials
Textbook draft here.
- Advanced methods in frequentist statistics with Julia (maximum 30 participants). Taught by Reinhold Kliegl, Phillip Alday, and (via Zoom) Doug Bates.
Applicants must have experience with linear mixed models and be interested in learning how to carry out such analyses with the
Julia-based MixedModels.jl package (i.e., the analogue of the R-based lme4 package). MixedModels.jl has some significant advantages, among them: (a) a new and more efficient computational implementation; (b) speed, which is needed, e.g., for complex designs and power simulations;
(c) more flexibility in the selection of parsimonious mixed models; and
(d) more flexibility in taking into account autocorrelations or other dependencies typical of EEG- and fMRI-based time series (under development).
We
do not expect profound knowledge of Julia from participants; the necessary subset will be taught on the first day of the course. We do expect a readiness to
install Julia, and the confidence that, with some basic instruction, participants will be able to adapt prepared Julia scripts to their own data, or to translate some of their own lme4 commands into the equivalent MixedModels.jl commands (a small sketch of this mapping appears after the course materials below). The course will be taught in a hybrid IDE. There is already the option to execute R chunks from within Julia, so Julia is needed primarily for running the MixedModels.jl commands that replace lme4. There is also an option to call MixedModels.jl from within R and to process the resulting object like an lme4 object. Thus, much of the pre- and postprocessing (e.g., data simulation for complex experimental designs; visualization of partial-effect interactions or shrinkage effects) can be carried out in R.
Course Materials
GitHub repository from 2022:
here.
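To give a concrete idea of the lme4-to-MixedModels.jl mapping mentioned above, here is a minimal sketch using the sleepstudy dataset bundled with MixedModels.jl. It only illustrates the shared formula syntax and is not part of the course materials.

```julia
# Minimal sketch: an lme4 model and its MixedModels.jl equivalent
# (illustration only, using the sleepstudy data shipped with MixedModels.jl).
#
# In R/lme4 the corresponding model would be:
#   lmer(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy)

using MixedModels

# sleepstudy data bundled with MixedModels.jl (columns :reaction, :days, :subj)
dat = MixedModels.dataset(:sleepstudy)

# same formula syntax as lme4, wrapped in the @formula macro
m = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)), dat)

println(m)  # prints fixed effects, variance components, and the fit criterion
```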