Welcome to the Third Summer School on Statistical Methods for Linguistics and Psychology, 9-13 September 2019
Registration
On Monday, 9th September 2019, 8:30AM-9:00AM, in Room S26, Haus 6, Griebnitzsee campus.
The registration fee of 30 Euros also covers coffee and snacks.
Schedule
You can download the floor plan of the Griebnitzsee campus building (Haus 6) and the schedule for the summer school from here.
The general schedule is here.
The detailed schedule by stream is here.
Summer School Location
Griebnitzsee Campus, University of Potsdam, Germany
The summer school will be held at the Griebnitzsee campus of the University of Potsdam; this is about 15-20 minutes away from Berlin Zoo station by train. Lectures will be held in Haus 6. Please use bvg.de for planning your travel (by train or bus).
Please pay attention to this information sheet about the altered train schedules.
Code of conduct
All participants are expected to follow the code of conduct, taken from StanCon 2018. If you have any concerns, please contact any of the following people: Audrey Bürki, Shravan Vasishth, Bruno Nicenboim, Daniel Schad, or Reinhold Kliegl (see instructors listed below).
Keynote lectures
Bayesian group:
- Julia Haaf (Hoersaal 05)
Title: Does Everyone? Modeling Individual Differences in Cognitive Tasks
Abstract: In psychological science, the main target of interest is usually an empirical effect. For example, we may be interested in human perception and ask participants to react to light spots flashing up on a screen as fast as they can. Psychologists typically ask whether, on average, participants respond faster to bright lights than to dim ones. We may extend this question to the individual participant level: Does everyone react to bright lights faster than to dim ones? In the case of perception, this seems reasonable: After accounting for sample noise, we would probably expect that indeed everyone is better at perceiving higher-signal visual stimuli. Yet, we may not expect that everyone throws a ball further with their right hand than their left hand. Clearly, left-handed people may not. I propose a modeling approach to the “Does Everyone” question using Bayes factor model comparison with hierarchical models. This modeling approach allows researchers to distinguish between quantitative and qualitative individual differences, and the lack thereof. I apply the approach to data from the Stroop interference task, and discuss the implications of “Everyone Does” to large-scale correlational studies of individual differences.
- Paul Buerkner (Hoersaal 02)
Title: Bayesflow: Tools for data analysis following a principled Bayesian workflow
Abstract: Probabilistic programming languages such as Stan, which can be used to specify and fit Bayesian models, have revolutionized the practical application of Bayesian statistics. They are an integral part of Bayesian data analysis and as such, a necessity to obtain reliable and valid inference. However, they are not sufficient by themselves. Instead, they have to be combined with substantive statistical and subject matter knowledge, expertise in programming and data analysis, as well as critical thinking about the decisions made in the process. A principled Bayesian workflow consists of several steps from the design of the study, gathering of the data, model building, estimation, and validation, to the final conclusions about the effects under study. I want to present a concept for a software package that assists users in following a principled Bayesian workflow for their data analysis by diagnosing problems and giving recommendations for sensible next steps. This concept gives rise to a lot of interesting research questions we want to investigate in the upcoming years.
- Ralf Engbert, dynamical modeling (two lectures, Hoersaal 02)
Title: An introduction to stochastic-dynamical models and to deterministic dynamical models
Abstract: In the first lecture, the drift-diffusion model is presented and its link to Bayesian modeling is made clear. In the second lecture, continuous-time discrete-state Markov processes will be covered, and an application area will be presented: eye-movement control in reading, focusing on two prominent models, E-Z Reader and SWIFT.
Frequentist group:
- Douglas Bates, Assessing variability of parameter estimates (Hoersaal 05)
- Reinhold Kliegl, Parsimoniously fitting fitness (Hoersaal 05)
- Titus von der Malsburg (Hoersaal 02)
Title: The great parade of statistical errors in NHST – A large-scale simulation study
Abstract: It is well known that conducting multiple tests of the same hypothesis inflates false positive effects (Type I errors). We also know that low statistical power inflates false negative effects (Type II errors) and increases the size of Type M errors and the rate of Type S errors. But how bad are these problems really? And how can they be addressed? I will present a large-scale simulation investigating these questions in great detail, using eye-tracking research of reading behavior as an example. (No prior knowledge of eye-tracking is needed to follow the talk.) The results suggest that these problems are quite real, and they show that the interpretation of results from NHST procedures is considerably more difficult than is commonly appreciated. Regarding remedies, we will look at the Bonferroni correction, ways to increase statistical power, the role of computational models in formulating precise testable hypotheses, and pre-registration.
Curriculum
For previous iterations of this summer school, see the website for SMLP 2017, and SMLP 2018.
Introductory frequentist statistics (maximum 30 participants)
Instructors: Daniel Schad and Audrey Buerki
Topics to be covered:
- Very basic R usage, basic probability theory, random variables (RVs), including jointly distributed RVs, probability distributions, including bivariate distributions
- Maximum likelihood estimation
- The sampling distribution of the mean
- Null hypothesis significance testing, t-tests, confidence intervals
- Type I and Type II errors, power, Type M and Type S errors
- An introduction to (generalized) linear models
- An introduction to linear mixed models
Introductory Bayesian statistics (maximum 30 participants)
Instructor: Shravan Vasishth
Teaching assistant: Anna Laurinavichyute
Topics to be covered: course materials can be viewed here
- Basic probability theory, random variable (RV) theory, including jointly distributed RVs
- Probability distributions, including bivariate distributions
- Using Bayes' rule for statistical inference
- Introduction to Markov Chain Monte Carlo
- Introduction to (generalized) linear models
- Introduction to hierarchical linear models
- Bayesian workflow
- Model comparison and selection
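To give a flavour of the "Using Bayes' rule" topic above, here is a minimal worked example with purely illustrative numbers (hypothetical, not taken from the course materials):

```latex
% Bayes' rule: posterior = likelihood x prior / marginal likelihood.
% Illustrative (made-up) numbers: P(H) = 0.5, P(D|H) = 0.8, P(D|~H) = 0.4.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D \mid H)\, P(H) + P(D \mid \neg H)\, P(\neg H)}
            = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.4 \times 0.5}
            = \frac{0.4}{0.6} \approx 0.67
```

Observing data D that is twice as likely under H as under its complement raises the probability of H from 0.5 to about 0.67; the course builds from this kind of calculation up to full hierarchical models.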
Advanced frequentist methods (maximum 30 participants)
Instructors: Reinhold Kliegl, and Douglas Bates
Topics to be covered:
- Review of linear modeling theory
- Introduction to linear mixed models
- Model selection
- Contrast coding and visualizing partial fixed effects
- Shrinkage and partial pooling
- Visualization
- Calling Julia from R for very fast mixed modeling
Advanced Bayesian methods (maximum 30 participants)
Instructor: Bruno Nicenboim
Topics will be a selection of the following:
- Review of basic theory
- Introduction to hierarchical modeling
- Multinomial processing trees
- Measurement error models
- Modeling censored data
- Meta-analysis
- Finite mixture models
- Model selection and hypothesis testing (Bayes factor and k-fold cross-validation)
An additional short course on Friday (two 90-minute lectures) by Ralf Engbert
Accommodation
Special conference rates:
Novum Hotel Aldea, Berlin: Sept. 8-13; single room: 109 Euros/night; double room: 124 Euros/night; booking/cancellation possible only until July 14. See the form sent via email. Expired
Kongresshotel Potsdam: Sept. 8-13; single room: 109 Euros/night; booking/cancellation possible only until June 28. See the keyword sent via email. Expired
Some alternative possibilities:
Jugendherberge Potsdam – Haus der Jugend
Jugendherberge Berlin – Am Wannsee
Funding
This summer school is funded by the DFG and is part of the SFB “Limits of Variability in Language”.