
#statstab #394 Difference-in-Differences Estimation
Thoughts: A bit of love for the python coders. DiD with lots of examples and estimators.
It’s great to see causal inference methods being used for this determination. Are there better algorithms (than the near-far matching used here), perhaps causal digital twins in a judicial process, that might ameliorate these and other injustices in the future? Of course, getting rid of the bail system would make it moot.
#causalInference #causation #legal #justice
From: @hrdag
https://mastodon.social/@hrdag/114902611019230490
#statstab #392 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements (forum thread)
Thoughts: Forums can be great for asking the author for exact answers to complex questions
#modelselection #causalinference #prediction #bias #information
@PCI_Archaeology Very cool! If I'm not mistaken, this is the first published application of #CausalInference / causal DAGs to archaeology?
My Road to Bayesian Stats
By 2015, I had heard of Bayesian stats but didn’t bother to go deeper into it. After all, significance stars and p-values worked fine. I started to explore Bayesian statistics when considering small sample sizes in biological experiments. How much can you say when you are comparing means of 6 or even 60 observations? This is the nature of work at the edge of knowledge. Not knowing what to expect is normal. Having multiple possible routes to an observed result is normal. Not knowing how to pick among those routes is also normal. Yet our statistics fail to capture this reality and the associated uncertainties. There must be a way, I thought.
Free Curve to the Point: Accompanying Sound of Geometric Curves (1925), print in high resolution by Wassily Kandinsky. Original from The MET Museum, digitally enhanced by rawpixel.

I started by searching for ways to overcome small sample sizes. There are minimum sample sizes recommended for t-tests; thirty is an often-quoted number, with qualifiers. Bayesian stats has no minimum sample size. This had me intrigued. Surely this can’t be a thing. But it is. Bayesian stats builds a mathematical model from your observations and then samples from that model to make comparisons. If you have any exposure to AI, you can think of this a bit like training an AI model. Of course, the more data you have, the better the model can be. But even with a little data we can make progress.
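To make "sample from the model to make comparisons" concrete, here is a minimal sketch, not from the post: two hypothetical small samples (n = 6 each, the numbers are made up), a flat-prior normal model under which the posterior of each mean is a scaled t distribution, and a Monte Carlo comparison of the two posteriors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical small biological samples (n = 6 each); values are illustrative.
control = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3])
treated = np.array([4.6, 4.9, 4.4, 5.1, 4.7, 4.8])

def posterior_mean_draws(x, n_draws=100_000):
    """Draw from the posterior of the mean under a normal likelihood with a
    flat prior: mu | data ~ xbar + (s / sqrt(n)) * t_{n-1}."""
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    return xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1, size=n_draws)

# Compare the two posteriors draw by draw.
diff = posterior_mean_draws(treated) - posterior_mean_draws(control)

# Instead of a binary verdict, report direct probabilities and intervals.
print(f"P(treated mean > control mean) = {(diff > 0).mean():.3f}")
print(f"95% credible interval for the difference: "
      f"[{np.quantile(diff, 0.025):.2f}, {np.quantile(diff, 0.975):.2f}]")
```

Even with six observations per group this yields a full posterior for the difference, so you can state "we are x% sure" directly rather than thresholding a p-value.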
How do you say: there is something happening and it’s interesting, but we are only x% sure? Frequentist stats offer no way through. All I knew was to apply the t-test, and if there were “***” in the plot, I was golden. That isn’t accurate though. A low p-value indicates the strength of evidence against the null hypothesis. Let’s take a minute to unpack that. The null hypothesis is that nothing is happening. If you have a control set and apply a treatment to the other set, the null hypothesis says that there is no difference between them. So a low p-value says it is unlikely that the null hypothesis is true. But that does not imply that the alternative hypothesis is true. What’s worse, there is no way for us to say that the control and treatment have no difference: we can’t accept the null hypothesis using p-values either.
Guess what? Bayesian stats can do all of those things. It can measure differences, accept and reject both null and alternative hypotheses, and even communicate how uncertain we are (more on this later). All without forcing rigid distributional assumptions onto our data.
It’s often overlooked, but frequentist analysis also requires the data to have certain properties, like normality and equal variance. Biological processes have complex behavior and, unless observed, assuming normality and equal variance is perilous. The danger only grows with small sample sizes. Again, Bayes does not require these distributional assumptions to hold. Whatever shape the distribution is, so-called outliers and all, it all goes into the model. Small sample sets do produce weaker fits, but this is kept transparent.
Transparency is one of the key strengths of Bayesian stats. It requires you to work a little harder on two fronts, though. First, you have to think about your data generating process (DGP), that is, how the data points you observe came to be. As we said, the process is often unknown; we have at best some guesses about how it could happen. Thankfully, we have a nice way to represent those guesses. DAGs, directed acyclic graphs, are a fancy name for a simple diagram showing what affects what. Most of the time we are trying to discover the DAG, i.e. the pathway to a biological outcome. Even if you don’t do Bayesian stats, using DAGs to lay out your thoughts is a great exercise. In Bayesian stats, DAGs can be used to test whether your model fits the data you observe: if the DAG captures the data generating process, the fit is good, and not if it doesn’t.
The other hard bit is doing the analysis and communicating the results. Bayesian stats forces you to be verbose about the assumptions in your model. This part is almost magicked away in t-tests. Frequentist stats also assumes a model that your data are supposed to follow, but it all happens so quickly that there isn’t even a second to think about it. You put in your data, click t-test, and woosh! You see stars. In Bayesian stats, stating the assumptions in your model (using DAGs and hypotheses about DGPs) communicates to the world what you think this phenomenon is and why you think it occurs.
Discovering causality is the whole reason for doing science. Knowing the causality allows us to intervene, in the form of treatments and drugs. But if my tools don’t allow me to be transparent, and worse, if they block people from correcting me, why bother?
Richard McElreath says it best:
There is no method for making causal models other than science. There is no method to science other than honest anarchy.
#statstab #391 {sensemakr} Sensitivity Analysis Tools for OLS
Thoughts: No unobserved variables is an untestable assumption, but you can quantify the robustness of your ATE.
#R #causalinference #observational #inference #confounding #bias #sensitivity
#statstab #387 Give Your Hypotheses Space!
Thoughts: "It’s tempting to throw a bunch of variables...into a model
...but proceed at your own caution!"
#Mbias #causalinference #collider #moderator #confounder #regression #r #DAG
https://brian-lookabaugh.github.io/website-brianlookabaugh/blog/2025/mutual-adjustment/
#statstab #383 Berkson's paradox
Thoughts: aka Berkson's bias, collider bias, or Berkson's fallacy. Important for interpreting conditional probabilities. Can produce counterintuitive patterns.
So far at this conference I have seen reports of true experiments, natural experiments, difference in difference analysis and regression discontinuity design - but no instrumental variable analysis
I wonder why?
I was hoping for the full set of causal inference methods
#statstab #367 Matching in R: Propensity Scores, Weighting (IPTW) and the Double Robust Estimator
Thoughts: A guide on common adjustments for observational studies.
#r #observational #iptw #matching #weights #doublerobust #guide #causalinference
https://www.franciscoyira.com/post/matching-in-r-3-propensity-score-iptw/
What are people’s fave methods for this situation:
At t0, all units are untreated.
As time goes on, individual units are one by one selected for treatment, on an expert’s assessment of their potential improvement under treatment.
How to measure the treatment effect, either over all units or ideally the treatment effect on each unit?
Oh, for extra fun, they’re probably not independent
Registration is open for the GMDS ACADEMY 2025 (Hannover, October 20-23).
There will be three parallel workshops on meta analysis, causal inference and time-to-event analysis involving Wolfgang Viechtbauer (@wviechtb), Christian Röver, Sebastian Weber, Vanessa Didelez, Arthur Allignol, Oliver Kuß, Alexandra Strobel, Hannes Buchner, Xiaofei Liu and Ann-Kathrin Ozga.
See here for more details: https://www.gmds.de/fileadmin/user_upload/GMDS-Academy-2025.pdf
#statstab #348 The Effect {book} - Causal Diagrams
Thoughts: At some point you'll need to learn about DAGs. Maybe this is the chapter you need.
Postdoc in Single-Cell Multi-Omic Gene Regulatory Networks
University of Massachusetts Chan Medical School
Join us to decode #GeneRegulatoryNetwork from #SingleCell multiomics with #CausalInference as a #postdoc! Quantitative bg needed.
See the full job description on jobRxiv: https://jobrxiv.org/job/university-of-massachusetts-chan-medical-school-27778-postdoc-in-single-cell-multi-omic-gene-regulatory-networks/
Hello SFBA! I’ve been wistfully thinking of switching over here for a while and recent fosstodon choices gave me the push I needed. So #introduction time!
I’m from #SanFrancisco and moved back here after some wandering. Raising two kids and a dog. Working in tech (sigh) but on #sustainability at least.
Interested in and post about #CausalInference, #Statistics, #Politics, #Policy, #Climate, #Energy, #Dogs, #Crafting and #Parenting
This looks great: Andrew Gelman (@statmodeling_bot) will be joining Nancy Cartwright and Berna Devezer. Short idea talks, lots of panel discussion and Q&A.
Join us on April 25th to discuss RCTs, replications, and scientific inference.
https://sites.google.com/view/cepbi/talks-gatherings?authuser=0
The case for multiple UESDs and an application to migrant deaths in the Mediterranean Sea https://doi.org/10.1017/psrm.2025.17 #CausalInference Analyzing multiple, comparable unexpected events happening during survey data collection makes a lot of sense to assess patterns. In doing so, one has to follow 1/