My experience is that most folks around me (myself included) enjoy employing the power of causality to understand various phenomena. There’s something incredibly satisfying about establishing a sound causal chain. Once the last piece of the puzzle clicks into place, there’s nothing like it. Back when I still worked directly in code, some of my fondest memories came from tracking down the causes of bugs. I remember once, we shipped a version of Chrome and suddenly people started having the freakiest of crashes. Like, I spent a few days just staring at traces, trying to comprehend how that might even be possible. However, as more information (and frustrated users) piled up, the long causal chain slowly coalesced. This happens, then this, then that, and — bam! — you get a sad tab. I still remember the high of writing the patch that fixed the crash. The grittiest of bugs have the longest causal chains, which always made them so much fun to figure out.
At the same time, there are causal chains that we perceive as incredibly short. Reach for a cup – get a drink. Press a key to type a letter in a doc. They might not actually be short (by golly, I know enough about HTML Editing and how Google Docs work to know otherwise) — but to us, they are simple action-reaction chainlinks. We see them as atomic and compose the causal chains of our life stories out of them.
We engineers love collapsing long causal chains into these simple chainlinks: turning a daunting process into a single action. My parents reminded me recently of how much harder it used to be to send emails before the Internet. I had forgotten the hours I spent traversing FIDO maps, crafting the right UUCP addresses, and teaching my Mom how to communicate with her colleagues — in another city! Electronically! Nowadays, the Wizardry of Email-sending has faded into the background, replaced with agonizing over the right emoji or turn of phrase. And yes, adding (and encoding) emojis also used to be a whole thing. A poetic way to describe engineering might be as the craft of seeking out long causal chains, collapsing them into simple chainlinks, and crystallizing them into everyday products.
From what I understand, this is not too dissimilar from how the human brain works. I am not a neuroscientist myself. My guides here are books by Lisa Feldman Barrett and Jeff Hawkins, as well as Daniel Kahneman’s seminal “Thinking, Fast and Slow”. It does look like our brains are constantly engaged in these same two processes: discovering causal chains (Dr. Barrett calls this “novelty search”) and collapsing them into chainlinks (“categorization/compression” or “reference framing”). And once the chains are collapsed into simple chainlinks, our brains are incredibly efficient at reaching for them to — you guessed it — seek out novel causal chains, continuing the infinite recursion of making sense of the world.

This seems like a perfect system. Except for one tiny problem: our discovered causal chains often contain mistakes. An “if this, then that” might omit an important variable or two. Remember those freaky crashes? Those were manifestations of engineers’ mistakes in their process of collapsing the massive causal chains that make up a modern browser into the simple “go to URL.” In software, engineers spend a lot of time finding and fixing these bugs — and so do our brains. Still, both our software and our brains are teeming with chainlinks that hide mistakes (yes, I’ve said it — we’re full of bugs!). Worse yet, the recursive nature of our sense-making tends to amplify these mistakes, while still concealing their origin. While software just stops working, we tend to experience the amplified, distorted mistakes as suffering: anxiety, depression, burnout, anger, stress, etc. It takes intentional spelunking to discern these mistakes and not get further overwhelmed in the process. Like most astonishing things, our capacity for discovering and collapsing causal chains is both a gift and a curse. Or so the causal chain of this story says.