Look at what I found at #thriftbooks...
Probably from a library: an original 1971 edition (published two years before I was born!) and one of the very good (best?) and clearest books written on the topic of #linearmodels
@stephensenn.bsky.social@bsky.brid.gy @andrewpgrieve.bsky.social@bsky.brid.gy @cubiclogic.bsky.social@bsky.brid.gy
‼ Announcement: Online Unfold.jl workshop ‼
📅 09.05.2025
💶 Free!
👉🏼 https://github.com/s-ccs/workshop_unfold_2025
❓ rERPs, mass univariate models & deconvolution!
If you are interested in combined #EEG / #EyeTracking, in naturalistic experiments, in sequential sampling models + EEG (e.g. drift diffusion), or in #VR + EEG, this could be a useful workshop for you!
#EEG #linearmodels #statistics
#julialang
Organized with Romy Frömer (CHBH)
and the S-CCS lab (@uni_stuttgart)
📈 Models simplify complex observations by filtering out details that might not generalize to new instances, but… simplification requires assumptions.
Take #LinearModels: they assume the data are fundamentally linear, dismissing deviations as mere noise.
The art lies in knowing what to keep and what to discard.
"The Robust Beauty of Improper Linear Models in Decision Making" lives rent free in my mind. I think about this paper from 1979 ALL. THE. TIME!
TL;DR: experts can make robust linear models by just picking a few salient features from their experience. See https://www.cmu.edu/dietrich/sds/docs/dawes/the-robust-beauty-of-improper-linear-models-in-decision-making.pdf
In today's parlance the TL;DR would read "feature selection is really important."
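A minimal sketch of Dawes' point, on hypothetical synthetic data: standardize a few salient features, sum them with equal unit weights (the "improper" model), and compare against ordinary least-squares weights. The data and coefficients here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Three "expert-chosen" features (hypothetical data)
X = rng.normal(size=(n, 3))
# Outcome depends on the features with unequal true weights, plus noise
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(size=n)

# "Proper" model: weights fitted by ordinary least squares
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
proper = X @ beta

# "Improper" model: standardize each feature, then sum with unit weights
Z = (X - X.mean(axis=0)) / X.std(axis=0)
improper = Z.sum(axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"proper   r = {corr(proper, y):.2f}")
print(f"improper r = {corr(improper, y):.2f}")
```

The unit-weight model typically lands surprisingly close to the fitted one, which is Dawes' "robust beauty": most of the work is done by choosing (and signing) the right features, not by estimating their weights.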
In today's lecture on #StatisticalModeling, I explained how to define meaningful non-orthogonal hypotheses/contrasts in (generalized) #LinearModels.
I only learned about the difference between specifying a contrast matrix vs. a hypothesis matrix in this paper:
How to capitalize on a priori contrasts in linear (mixed) models
(by Daniel Schad et al., 2020)
https://doi.org/10.1016/j.jml.2019.104038
Preprint: https://arxiv.org/abs/1807.10451
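A small sketch of the recipe from that paper: write each null hypothesis as a row of a hypothesis matrix, then take its generalized inverse to obtain the contrast matrix you pass to the model. The three-level factor and the sliding-difference hypotheses below are just an illustrative example, not taken from the lecture.

```python
import numpy as np

# Hypothesis matrix for a factor with three conditions a, b, c.
# Row 1: intercept estimates the grand mean of all conditions.
# Row 2: first hypothesis, b vs. a.
# Row 3: second hypothesis, c vs. b.
Hc = np.array([
    [1/3, 1/3, 1/3],
    [-1,   1,   0 ],
    [ 0,  -1,   1 ],
])

# Contrast matrix = generalized inverse of the hypothesis matrix
Xc = np.linalg.pinv(Hc)
print(np.round(Xc, 3))
```

The columns of `Xc` (after the intercept column of ones) are exactly the sliding-difference contrast codes, even though the hypothesis rows themselves are non-orthogonal; that is why specifying hypotheses first and inverting is safer than guessing contrast codes directly.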