Aneesh Sathe<p><strong>My Road to Bayesian Stats</strong></p><p class="">By 2015, I had heard of Bayesian stats but didn’t bother to go deeper into it. After all, significance stars and p-values worked fine. I started to explore Bayesian statistics when considering small sample sizes in biological experiments. How much can you say when you are comparing means of 6 or even 60 observations? This is the nature of work at the edge of knowledge. Not knowing what to expect is normal. Multiple possible routes to an observed result are normal. Not knowing which of those routes actually produced the result is also normal. Yet our statistics fails to capture this reality and the associated uncertainties. There must be a way, I thought. </p><p><a href="https://aneeshsathe.com/wp-content/uploads/2025/07/image-from-rawpixel-id-2968487-jpeg.jpg" rel="nofollow noopener" target="_blank">Free Curve to the Point: Accompanying Sound of Geometric Curves (1925) print by Wassily Kandinsky. Original from The MET Museum. Digitally enhanced by rawpixel.</a></p><p>I started by searching for ways to overcome small sample sizes. There are minimum sample sizes recommended for t-tests; thirty is an often-quoted number, with qualifiers. Bayesian stats has no minimum sample size. This had me intrigued. Surely this can’t be a thing. But it is. Bayesian stats builds a mathematical model from your observations and then samples from that model to make comparisons. If you have any exposure to AI, you can think of this <em>a bit</em> like training an AI model. Of course, the more data you have, the better the model can be. But even with a little data we can make progress. </p><p>How do you say: there is something happening and it’s interesting, but we are only x% sure? Frequentist stats offers no way through. All I knew was to apply the t-test, and if there were “***” in the plot, I was golden. That isn’t accurate though. A low p-value indicates the strength of evidence against the null hypothesis. Let’s take a minute to unpack that. The null hypothesis is that nothing is happening: if you have a control set and apply a treatment to the other set, the null hypothesis says there is no difference between them. A low p-value says that data as extreme as yours would be unlikely <em>if</em> the null hypothesis were true. That is not the same as the null hypothesis being unlikely, and it certainly does not imply that the alternative hypothesis <em>is</em> true. What’s worse, there is no way for us to say that the control and the experiment have no difference. We can’t accept the null hypothesis using p-values either. </p><p>Guess what? Bayesian stats can do all those things. It can measure differences, accept and reject both null and alternative hypotheses, even communicate how uncertain we are (more on this later). All without forcing rigid assumptions onto our data.</p><p>It’s often overlooked, but common frequentist tests also require the data to have certain properties, like normality and equal variance. Biological processes have complex behavior and, unless you have checked, assuming normality and equal variance is perilous. The danger only goes up with small sample sizes. Bayes does not impose such assumptions on your data. Whatever shape the distribution is, so-called outliers and all, it all goes into the model. Small sample sets do produce weaker fits, but that weakness stays visible in the results. </p>
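<p>To make that concrete, here is a minimal sketch of the idea. It is not from the post above: it assumes the PyMC and ArviZ libraries as one convenient toolchain, and the two groups of six observations are invented numbers. The model uses a Student-t likelihood so that so-called outliers stay in the data, and the posterior samples give both an estimated difference and an explicit statement of how uncertain we are.</p>
<pre><code># Sketch only: Bayesian estimation for two small groups (6 observations each).
# Assumes PyMC + ArviZ are installed; the data are made-up numbers.
import numpy as np
import pymc as pm
import arviz as az

control = np.array([4.1, 5.0, 4.6, 5.2, 4.8, 4.4])
treated = np.array([5.3, 6.1, 4.9, 5.8, 6.4, 5.6])

with pm.Model():
    # Weakly informative priors; the Student-t likelihood tolerates outliers.
    mu_c = pm.Normal("mu_control", mu=5.0, sigma=5.0)
    mu_t = pm.Normal("mu_treated", mu=5.0, sigma=5.0)
    sigma = pm.HalfNormal("sigma", sigma=2.0)
    nu = pm.Exponential("nu", 1.0 / 30.0)

    pm.StudentT("obs_control", nu=nu, mu=mu_c, sigma=sigma, observed=control)
    pm.StudentT("obs_treated", nu=nu, mu=mu_t, sigma=sigma, observed=treated)

    pm.Deterministic("difference", mu_t - mu_c)

    # "Sampling from the model": draw from the posterior with MCMC.
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=1)

# Posterior summary: mean difference plus a credible interval,
# i.e. an explicit statement of how uncertain we are.
print(az.summary(idata, var_names=["difference"], hdi_prob=0.95))

# Probability that the treated mean exceeds the control mean.
post = idata.posterior["difference"].values.ravel()
print("P(difference > 0) =", (post > 0).mean())
</code></pre>
<p>Reported this way, the result reads as “the difference is about so much, and we are x% sure it is positive” rather than a row of stars; with a region of practical equivalence around zero, the same posterior can also be used to accept “no meaningful difference”.</p>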
<p>Transparency is one of the key strengths of Bayesian stats. It requires you to work a little harder on two fronts, though. First, you have to think about your data generating process (DGP): how the data points you observe came to be. As we said, the process is often unknown. We have, at best, some guesses about how it could happen. Thankfully, there is a nice way to represent those guesses. DAGs, directed acyclic graphs, are a fancy name for a simple diagram showing what affects what. Most of the time we are trying to discover the DAG, i.e. the pathway behind a biological outcome. Even if you don’t do Bayesian stats, using DAGs to lay out your thoughts is a great habit. In Bayesian stats, the DAG can be turned into a model and tested against the data you observe: if the DAG captures the data generating process, the fit is good; if it doesn’t, the fit tells you so. </p>
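<p>As a small illustration (the variable names and effect sizes below are invented, not from any real experiment), each arrow in a guessed DAG becomes one line of simulation code, so the diagram and the data generating process are literally the same object.</p>
<pre><code># Sketch: a guessed DAG written down as a data generating process.
# Assumed arrows: cell_density -> treatment, cell_density -> outcome, treatment -> outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 60  # a modest number of simulated observations

cell_density = rng.normal(0.0, 1.0, size=n)        # shared cause (confounder)
p_treat = 1.0 / (1.0 + np.exp(-cell_density))       # cell_density -> treatment
treatment = rng.binomial(1, p_treat)                # who ends up treated
outcome = (0.5 * treatment + 0.7 * cell_density
           + rng.normal(0.0, 0.5, size=n))          # treatment, cell_density -> outcome

# The naive comparison of means mixes the treatment effect with the
# confounder, even though the effect written into the simulation is 0.5.
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print("naive difference in means:", round(naive, 2), "(simulated true effect: 0.5)")
</code></pre>
<p>Simulating from the guessed DAG and comparing against the data you actually observe is one way to run the fit test described above: if the simulated and observed data disagree, it is the diagram, not just the parameters, that needs revising.</p>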
<p>The other hard bit is doing the analysis and communicating the results. Bayesian stats forces you to be explicit about the assumptions in your model. This part is almost magicked away in t-tests. Frequentist tests also assume a model that your data is supposed to follow; it all happens so quickly that there isn’t even a second to think about it. You put in your data, click t-test, and whoosh! You see stars. In Bayesian stats, stating the assumptions in your model (using DAGs and hypotheses about the DGP) tells the world what you think is happening and why. </p><p>Discovering causality is the whole reason for doing science. Knowing the cause allows us to intervene in the form of treatments and drugs. But if my tools don’t allow me to be transparent, and worse, if they block people from correcting me, why bother?</p><p>Richard McElreath says it best:</p><blockquote><p>There is no method for making causal models other than science. There is no method to science other than honest anarchy.</p></blockquote><p><a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai/" target="_blank">#AI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/bayesian-statistics/" target="_blank">#BayesianStatistics</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/biological-data-analysis/" target="_blank">#BiologicalDataAnalysis</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/business/" target="_blank">#Business</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/causal-inference-2/" target="_blank">#CausalInference</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/dags/" target="_blank">#DAGs</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/data-generating-process/" target="_blank">#DataGeneratingProcess</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/data-science/" target="_blank">#dataScience</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/experimental-design/" target="_blank">#ExperimentalDesign</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/frequentist-vs-bayesian/" target="_blank">#FrequentistVsBayesian</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/leadership/" target="_blank">#Leadership</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/machine-learning/" target="_blank">#machineLearning</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/philosophy/" target="_blank">#philosophy</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/science/" target="_blank">#science</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/scientific-method/" target="_blank">#ScientificMethod</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/small-sample-size/" target="_blank">#SmallSampleSize</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/statistical-modeling/" target="_blank">#StatisticalModeling</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/statistical-philosophy/" target="_blank">#StatisticalPhilosophy</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/statistics/" target="_blank">#statistics</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/transparent-science/" target="_blank">#TransparentScience</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/uncertainty-quantification/" target="_blank">#UncertaintyQuantification</a></p>