- Dates: 9-13 September 2024.
- Times: 9AM-5PM daily.
- Location: The summer school will be held at the Griebnitzsee campus in Potsdam, at Haus 6. For train connections, consult bvg.de; the train station near the campus is called Griebnitzsee Bhf.
**Application period: Oct 1, 2023 to April 1, 2024**

**Schedule: coming soon**

- Stephan Lewandowsky
- Julia Haaf
- Henrik Singmann
**Introduction to Bayesian data analysis** (maximum 30 participants). Taught by Shravan Vasishth and Anna Laurinavichyute.

**Advanced Bayesian data analysis** (maximum 30 participants). Taught by Bruno Nicenboim and Himanshu Yadav.

**Foundational methods in frequentist statistics** (maximum 30 participants). Taught by Audrey Buerki, Daniel Schad, and João Veríssimo.

**Advanced methods in frequentist statistics with Julia** (maximum 30 participants). Taught by Reinhold Kliegl, Phillip Alday, and Doug Bates.

- Embrace uncertainty
- Github repo from 2023
- Github repo from 2022
- Github repo from 2021
- Github repo from 2020
- Publications using MixedModels.jl

This course is an introduction to Bayesian modeling, oriented towards linguists and psychologists. Topics to be covered: introduction to Bayesian data analysis, linear modeling, and hierarchical models. We will cover these topics within the context of an applied Bayesian workflow that includes exploratory data analysis, model fitting, and model checking using simulation.
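The course materials use R with brms/Stan; as a language-agnostic illustration of the simulate-fit-check workflow described above, here is a minimal Python sketch (all data and parameter values are hypothetical) that simulates data with a known effect and recovers the posterior of the mean by grid approximation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 100 simulated reading-time differences (ms),
# true effect 20 ms, residual sd 50 ms
y = rng.normal(20, 50, size=100)

# Grid approximation of the posterior for the mean effect,
# assuming a Normal(0, 100) prior and a known residual sd of 50 ms
grid = np.linspace(-100, 100, 2001)
log_prior = -0.5 * (grid / 100) ** 2
log_lik = np.array([np.sum(-0.5 * ((y - mu) / 50) ** 2) for mu in grid])
log_post = log_prior + log_lik

# Normalize on the log scale for numerical stability
post = np.exp(log_post - log_post.max())
post /= post.sum()

post_mean = np.sum(grid * post)
print(round(post_mean, 1))
```

A model check in the same spirit would then simulate new data from the fitted model and compare it to the observed data; brms automates this with posterior predictive checks.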

This course assumes that participants already have some experience with Bayesian modeling using brms and want to transition to Stan in order to learn more advanced methods and start building simple computational cognitive models. Participants should have worked through, or be familiar with, the material in the first five chapters of our book draft: Introduction to Bayesian Data Analysis for Cognitive Science. In this course, we will cover Parts III to V of our book draft: model comparison using Bayes factors and k-fold cross-validation, introductory and relatively advanced models in Stan, and simple computational cognitive models.
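To give a flavor of model comparison via k-fold cross-validation: in the Bayesian setting one compares held-out predictive density (elpd), but the logic can be sketched in a few lines of Python with held-out squared error instead (a simplifying assumption; the data and models below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a quadratic trend plus noise
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0, 0.5, x.size)

def kfold_mse(degree, k=5):
    """Mean held-out squared error of a polynomial fit under k-fold CV:
    fit on k-1 folds, predict the held-out fold, average the errors."""
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coefs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coefs, x[fold])
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

# Too simple a model underfits; too complex a model overfits
for degree in (1, 2, 6):
    print(degree, round(kfold_mse(degree), 3))
```

The same comparison with held-out log-likelihood rather than squared error is what the course develops for Stan models.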

Participants will be expected to have used linear mixed models before, to the level of the textbook by Winter (2019, Statistics for Linguists), and to want to acquire a deeper knowledge of frequentist foundations and a deeper understanding of the linear mixed modeling framework. Participants are also expected to have fit multiple regressions. We will cover model selection and contrast coding, with a heavy emphasis on simulations to compute power and to understand what the model implies. We will work on (at least some of) the participants' own datasets.
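The simulation-based approach to power mentioned above is language-agnostic; the course works in R, but a minimal Python sketch (with hypothetical effect size, sample size, and noise values) conveys the idea: generate many datasets under an assumed effect and count how often the test rejects.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(effect, n, sd, nsim=2000, crit=1.96):
    """Estimate power of a two-sample comparison by simulation:
    repeatedly generate data under the assumed effect and count how
    often |t| exceeds the critical value (1.96 is a large-sample
    approximation to the t critical value)."""
    hits = 0
    for _ in range(nsim):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(effect, sd, n)
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        t = (b.mean() - a.mean()) / se
        if abs(t) > crit:
            hits += 1
    return hits / nsim

# Power for a medium effect (d = 0.5) with 50 participants per group
print(simulate_power(effect=0.5, n=50, sd=1.0))
```

For linear mixed models the same recipe applies, except that each simulated dataset includes by-participant and by-item random effects and is analyzed with the full mixed model.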

Applicants must have experience with linear mixed models and be interested in learning how to carry out such analyses with the Julia-based MixedModels.jl package (i.e., the analogue of the R-based lme4 package). MixedModels.jl has some significant advantages, among them: (a) a new and more efficient computational implementation, (b) speed, needed for, e.g., complex designs and power simulations, (c) more flexibility in selecting parsimonious mixed models, and (d) more flexibility in taking into account autocorrelations and other dependencies typical of EEG- and fMRI-based time series (under development). We