Updating My Priors (and My Grinder): A Bayesian Fanatic's Journey from Crema to Cardiograms

What espresso taught me about intelligent sensing, continuous learning, and why being wrong faster might be the most valuable skill in cardiac AI.

[Cartoon: an Ashleigh Brilliant “Pot-Shots” panel reading, “Having heard all the arguments, I can now reach a decision in accordance with my original prejudices.”]

“Having heard all the arguments, I can now reach a decision in accordance with my original prejudices.” - Ashleigh Brilliant

The author of that gem, Ashleigh Brilliant, passed away recently, but his wit lives on. The quote feels like the perfect starting point for a post about being wrong, and about how, for a Bayesian fanatic, being wrong can be the beginning of wisdom.

My wife got me a Breville® Barista Touch for my birthday. The kind that looks like it belongs in a small Italian café, not a modest kitchen where the cat stalks countertops and toy dinosaurs wage eternal war near the sink. Stainless steel, dual boilers, digital PID temperature control. The kind that costs more than my first laptop. The kind you see on YouTube channels where bearded men in raw denim discuss “extraction yield” with the gravity usually reserved for climate science.

I smiled. I thanked her. I said all the right things. But my internal monologue was running a different script: Coffee is coffee.

That was my prior. Strong, confident, built on a decade of perfectly serviceable Mr. Coffee mornings. In my mental model, coffee was just a caffeine delivery system - functionally no different from a Red Bull, just more socially acceptable at 7 AM. Taste quality flatlined somewhere around $100. A machine 10x that price couldn’t possibly brew a meaningfully better cup.

Then I made my first latte. The crema had weight, a silky thickness I’d never encountered. The microfoam had texture, tiny bubbles that felt velvety. The aroma had layers that shifted as the cup cooled. And the machine had settings - different profiles for oat milk versus dairy, temperature controls, and steam pressure adjustments. It turns out “coffee with milk” and “properly pulled espresso with precisely textured microfoam” aren’t even the same category of experience.

It’s been less than two weeks, and my stock of coffee beans is depleting at an alarming rate. I haven’t explicitly told her I was wrong, but I know she knows. My priors are being updated in real time, one espresso at a time.

The Bayesian Awakening

Now that we have caffeine in our veins and a little clarity to go with it, let’s talk Bayesianism. The essence of Bayesian thinking is simple: start with a prior belief, gather evidence, and update that belief into a posterior - then use the posterior to inform future decisions. It’s a continuous loop of learning and adjustment.

Putting my coffee journey into Bayesian terms:

$$
P(\text{not worth it} \mid \text{amazing exists}) = \frac{P(\text{amazing exists} \mid \text{not worth it}) \cdot P(\text{not worth it})}{P(\text{amazing exists})}
$$

Where:

  • $P(\text{not worth it})$ = prior probability (85% confident it’s not worth it, before tasting)
  • $P(\text{amazing exists})$ = probability of the observed evidence (amazing coffee can be made with fancy machines)
  • $P(\text{amazing exists} \mid \text{not worth it})$ = likelihood (probability of observing amazing coffee given that fancy machines aren’t worth it)
  • $P(\text{not worth it} \mid \text{amazing exists})$ = posterior probability (updated belief after observing the evidence)

Each espresso was an experiment, and each sip updated the posterior. My confidence curve shifted from disbelief to near-ritual devotion.
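If you want to watch that loop run, here is a minimal Python sketch with made-up numbers: a prior of 85% that the machine is not worth it, and assumed likelihoods of pulling an amazing shot under each hypothesis. Nothing here is measured; it only shows how quickly repeated evidence erodes a confident prior.

```python
# Sequential Bayesian updating of the belief "a fancy machine is not worth it".
# All probabilities are illustrative assumptions, not measurements.

def update(prior_not_worth_it, p_amazing_if_not_worth_it, p_amazing_if_worth_it):
    """One Bayes update after tasting an amazing shot."""
    prior_worth_it = 1.0 - prior_not_worth_it
    # Evidence P(amazing) via the law of total probability.
    p_amazing = (p_amazing_if_not_worth_it * prior_not_worth_it
                 + p_amazing_if_worth_it * prior_worth_it)
    # Bayes' rule: posterior = likelihood * prior / evidence.
    return p_amazing_if_not_worth_it * prior_not_worth_it / p_amazing

belief = 0.85  # prior: 85% confident the machine is not worth it
for shot in range(1, 6):  # five consecutive amazing espressos
    belief = update(belief,
                    p_amazing_if_not_worth_it=0.10,  # amazing coffee is unlikely if the machine adds nothing
                    p_amazing_if_worth_it=0.80)      # amazing coffee is likely if the machine truly matters
    print(f"after shot {shot}: P(not worth it | evidence) = {belief:.3f}")
```

With these assumed likelihoods, five consistently great shots drag the posterior from 0.85 to well under 1% - which is roughly what two weeks of rapidly depleting coffee beans feel like.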

From Crema to Cardiograms

This journey from coffee snobbery to appreciation is more than a personal anecdote; it’s a perfect microcosm of the challenges we face in high-stakes scientific fields like cardiac diagnostics. The core problem is the same: we never directly measure what truly matters. We want to know the heart’s full 3D electrical and metabolic state—ion channels, depolarization wavefronts, perfusion. What we get are indirect, noisy projections of that reality: ECGs (volume-conducted potentials blurred by tissue), magnetocardiograms (attenuated magnetic fields), and photoplethysmograms (pressure surrogates). Each signal is a complex, noisy map from hidden physiology through layers of physics and instrumentation.

This inference problem involves multiple layers of assumptions. Maxwell’s equations describe electromagnetic field propagation perfectly, but they don’t account for fibrotic scar tissue disrupting conduction, autonomic tone modulating repolarization, or patient-specific thoracic geometry warping the volume conductor. Ion-channel models (Hodgkin-Huxley, Luo-Rudy) capture membrane dynamics but assume homogeneous populations and miss remodeling from disease, genetic variants affecting channel expression, and metabolic stress shifting operating points. The physics is load-bearing and non-negotiable; the biological instantiation is variable, adaptive, and patient-specific. Bridging what equations predict and what happens in living hearts requires structured priors and uncertainty quantification at every layer.

The forward problem - predicting signals from known cardiac states - is well-posed via Maxwell’s equations and ion-channel kinetics. The inverse problem - inferring cardiac state from sparse, noisy measurements - is ill-posed and prior-hungry. This is where Bayesian thinking becomes essential:

$$
P(\text{Cardiac State} \mid \text{Sensor Data}) = \frac{P(\text{Sensor Data} \mid \text{Cardiac State}, \theta) \cdot P(\text{Cardiac State} \mid \theta) \cdot P(\theta)}{P(\text{Sensor Data})}
$$

Where $\theta$ represents patient-specific biological parameters (tissue conductivities, ion-channel densities, geometry, scar burden), and each term encodes different aspects of our knowledge and uncertainty. This hierarchical structure makes every layer visible. We’re jointly inferring hidden physiology and the patient-specific parameters that mediate how physics maps to observation.
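To make that hierarchical structure concrete, here is a deliberately tiny sketch: two made-up values of $\theta$ (low vs. high scar burden), two cardiac states, and one binary observation, so the joint posterior can be computed by brute-force enumeration. Every number is an illustrative placeholder, not real electrophysiology; a production system would need particle filters or variational solvers rather than a four-cell grid.

```python
from itertools import product

# Toy hierarchical inference: every value is an illustrative stand-in,
# not a real electrophysiological model.

p_theta = {"low_scar": 0.7, "high_scar": 0.3}             # P(theta): prior over patient biology

p_state_given_theta = {                                    # P(CS | theta): plausible states per patient
    "low_scar":  {"normal": 0.95, "reentrant": 0.05},
    "high_scar": {"normal": 0.70, "reentrant": 0.30},
}

p_data_given_state_theta = {                               # P(SD | CS, theta): likelihood of the observed
    ("normal", "low_scar"):     0.05,                      # "abnormal ECG segment" under each combination
    ("normal", "high_scar"):    0.10,
    ("reentrant", "low_scar"):  0.80,
    ("reentrant", "high_scar"): 0.90,
}

# Joint posterior P(CS, theta | SD) by enumeration: likelihood * conditional prior * prior,
# normalized by the evidence P(SD).
unnormalized = {
    (state, theta): p_data_given_state_theta[(state, theta)]
                    * p_state_given_theta[theta][state]
                    * p_theta[theta]
    for state, theta in product(["normal", "reentrant"], ["low_scar", "high_scar"])
}
evidence = sum(unnormalized.values())
posterior = {k: v / evidence for k, v in unnormalized.items()}

for (state, theta), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({state}, {theta} | data) = {p:.3f}")

# Marginalizing over theta gives the cardiac-state posterior a clinician would see.
p_reentrant = sum(p for (state, _), p in posterior.items() if state == "reentrant")
print(f"P(reentrant | data) = {p_reentrant:.3f}")
```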

An ECG doesn’t “tell us” the arrhythmia - nor does a magnetocardiogram, a photoplethysmogram, or an echocardiogram. Each constrains the posterior probability of different cardiac scenarios from a different angle, with its own noise characteristics, distribution assumptions, and physical forward model. ECGs see electrical potentials blurred by tissue; magnetocardiograms detect magnetic fields corrupted by environmental interference; photoplethysmograms measure optical signals sensitive to motion; echocardiograms capture structural geometry but miss functional dynamics. Good cardiac AI continuously reweights hypotheses about hidden dynamics, guided by biophysical truths and multi-modal data - each modality bringing different strengths and different uncertainties to the inference. The table below breaks down what each equation term contains, where uncertainty comes from, and how it manifests in practice:

| Equation Term | What It Encodes | Sources of Uncertainty | What Goes Into It | Example |
| --- | --- | --- | --- | --- |
| $P(\theta)$ | Prior over patient-specific biology | Population variability, limited anatomical data, unmeasured covariates | Tissue conductivity distributions, ion-channel density ranges, geometric factor priors, scar burden statistics | Everyone’s heart is different - yours might be 30% larger than mine, with different electrical conductivity. We don’t know your exact fiber orientation or whether you have hidden scar tissue until we image. Like assuming all coffee beans are similar until you try Ethiopian vs. Colombian. |
| $P(\text{CS} \mid \theta)$ | Conditional prior: which states are plausible given this patient | Model assumptions about dynamics, approximations in physics | Ion-channel kinetics (Hodgkin-Huxley, Luo-Rudy), refractory periods, charge conservation, anatomical constraints (atria ≠ ventricles) | Given your specific heart anatomy, which arrhythmias are even possible? A particular scar pattern might enable dangerous re-entrant circuits - or might not. Physics says electrical signals can’t travel backward during refractory periods, constraining what rhythms are plausible. |
| $P(\text{SD} \mid \text{CS}, \theta)$ | Likelihood: how data arises from hidden states | Sensor noise, motion artifacts, environmental interference, forward model error | Maxwell’s equations, electromagnetic propagation, volume conductor models, noise spectral profiles | Your ECG shows a weird blip every 30 seconds. Is it your heart or the hospital elevator passing by? Your breathing moves the sensors. Your chest geometry affects how electrical signals reach the skin. We model all these noise sources to separate signal from artifact. |
| $P(\text{CS} \mid \text{SD})$ | Posterior: updated belief after seeing data | Propagated uncertainty from all above components | Inference algorithms (particle filters, variational solvers), multimodal fusion, credible intervals | After seeing your ECG and magnetocardiogram: “65% confidence it’s ischemia, 25% it’s motion artifact, 10% it’s a normal variant.” Not a binary yes/no, but honest quantified uncertainty a clinician can act on. Like tasting espresso and updating from “coffee is coffee” to “actually, extraction temperature matters.” |

CS: Cardiac State | SD: Sensor Data | θ: Patient-Specific Parameters | P: Probability
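If you assume the modalities are conditionally independent given the hypothesis (a simplification a real pipeline would have to justify), multi-modal fusion is just additional likelihood terms multiplied into the same posterior. The sketch below uses invented priors and likelihoods, chosen so the combined ECG + magnetocardiogram posterior lands near the “65% ischemia, 25% motion artifact, 10% normal variant” example from the table.

```python
# Multi-modal fusion over three competing hypotheses.
# Priors and likelihoods are invented for illustration only; a real system would
# derive them from forward models and characterized sensor noise.

prior = {"ischemia": 0.20, "motion_artifact": 0.30, "normal_variant": 0.50}

# P(ECG trace | hypothesis) and P(MCG trace | hypothesis), treated as conditionally
# independent given the hypothesis -- a simplifying assumption, not a given.
ecg_likelihood = {"ischemia": 0.60, "motion_artifact": 0.50, "normal_variant": 0.10}
mcg_likelihood = {"ischemia": 0.70, "motion_artifact": 0.20, "normal_variant": 0.30}

unnormalized = {h: prior[h] * ecg_likelihood[h] * mcg_likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

for hypothesis, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({hypothesis} | ECG, MCG) = {p:.2f}")
# -> roughly 0.65 ischemia, 0.23 motion artifact, 0.12 normal variant
```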

The Living Loop: Where Bayesian Brains Meet ML Brawn

In a Bayesian framework, having wrong priors isn’t just okay; it’s the entire point. The goal isn’t to be right from the start, but to become less wrong over time in a principled way. When a model’s prediction fails, we don’t just log an error; we update the model’s foundational assumptions.

This brings us to the role of machine learning. While the Bayesian framework provides the elegant logic for updating our beliefs, executing it with complex models and vast datasets is another story. This is where machine learning comes in, acting as the powerful engine that makes Bayesian inference tractable at scale.

Think of it this way:

  • Bayesian Framework (The “What”): It’s the blueprint for reasoning under uncertainty. It tells us how to update our beliefs when new evidence arrives.
  • Machine Learning (The “How”): It’s the high-performance compiler for that blueprint. It uses tools like neural networks to approximate solutions and learn patterns from millions of data points, turning theory into a practical reality.

In practice, this partnership operates across three nested timescales. The fastest loop handles patient-level inference in milliseconds, using ML models for rapid predictions. A medium loop, spanning days or weeks, retrains these models on accumulated population data to enhance their accuracy. The slowest loop, unfolding over months or years, revises the core scientific models themselves when systemic flaws are found, ensuring the entire system learns and adapts.
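Pictured as an architecture, those three timescales are nested loops with very different cadences. The skeleton below is purely schematic: the Model class, its methods, and the numbers are placeholders, not any real system’s API.

```python
# Schematic only: three nested learning loops at different cadences.
# The Model class and its methods are placeholders, not a real system's API.

class Model:
    """Stand-in for the full inference stack (physics priors + ML components)."""

    def infer(self, measurement):
        # Fast-loop work: return a posterior over cardiac states for one beat.
        return {"normal": 0.9, "abnormal": 0.1}  # placeholder numbers

    def retrain(self, population_data):
        # Medium-loop work: refit the ML components on accumulated data.
        print(f"retrained on {len(population_data)} records")

    def revise_assumptions(self):
        # Slow-loop work: change the structural / physical model itself.
        print("revising priors and forward model")


model = Model()

# Fast loop (milliseconds): per-beat inference for one patient.
for beat in ["beat_1", "beat_2", "beat_3"]:
    posterior = model.infer(beat)

# Medium loop (days to weeks): periodic retraining on accumulated population data.
model.retrain(population_data=["record_a", "record_b"])

# Slow loop (months to years): revise core assumptions when failures are systemic.
systematic_failures_found = True
if systematic_failures_found:
    model.revise_assumptions()
```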

This is the same discipline as perfecting espresso: a fast loop for each shot, a medium loop for calibrating the grinder, and a slow loop for rethinking your entire coffee philosophy. The hard part isn’t the math; it’s the discipline to question your assumptions and build systems that learn instead of break.

The Lesson

My wife was right. The machine wasn’t just a luxury; it was a teacher. A true Bayesian isn’t someone who’s always right, but someone who is willing to become less wrong, faster. It’s someone who pairs conviction with calibration, strong priors with stronger humility. That’s what principled updating looks like—in science, in sensing, in building medical AI, and in life. The process is the same whether it’s crema or cardiograms: start with your assumptions, let evidence reshape them, and when reality surprises you, update your model and keep learning. Because the most dangerous belief—in cardiac diagnostics or in coffee—is the one you’re too confident to question.

Have you checked your priors lately?


For more on how we manage uncertainty in medical software, see The Paradox of Perfect Control in Medical Software. If you’re interested in how continuous learning systems work, explore Life Lessons from Machine Learning.