I’ve already shared this piece elsewhere, but might as well post it here. This story is almost like a piece of fantasy fiction that’s waiting to be written — and a metaphor to which I keep coming back to describe flux.
Imagine a warrior who sets out to slay the dragon. The warrior has the sharpest sword, the best armor, and is in top physical shape. The dragon is looming large, and the warrior bravely rushes forth. What the warrior doesn’t know is that this is a fractal dragon. You can’t defeat a fractal dragon with a sword. So our courageous, yet unwitting warrior lands mighty blows on the dragon, moving in perfect form, one with the sword. You might look at this feat of precision and go: wow, this warrior is so amazing at crushing the dragon into a million bits! Look at them go! Except… each bit is a tiny dragon-fractal. A few hundred more valiant slashes and the warrior will be facing a new kind of opponent: the hive of the dragon sand. The warrior’s blade will whoosh harmlessly through the sand and all we can hope is that the warrior has dragon-sandblast-proof defenses (hint: nope).
This weird effect of flux is something that we engineering souls need to be keenly aware of. When we find ourselves in that confident, exhilarating problem-solving mindset, it is on us to pause and reflect: are we perchance facing the fractal dragon? Will each “solution” create an army of different kinds of problems, each more and more immune to the tools we applied to the original problem? And if/when we recognize the fractal dragon, do we have access to tools other than the mighty sword we’re so fond of?
My experience is that most folks around me (myself included) enjoy employing the power of causality to understand various phenomena. There’s something incredibly satisfying about establishing a sound causal chain. Once the last piece of the puzzle clicks in place, there’s nothing like it. Back when I still worked directly in code, some of my fondest memories were tracking down the causes of bugs. I remember once, we shipped a version of Chrome and suddenly, people started having the freakiest of crashes. Like, I spent a few days just staring at traces trying to comprehend how that might even be possible. However, as more information (and frustrated users) piled up, the long causal chain slowly coalesced. This happens, then this, then that, and — bam! — you get a sad tab. I still remember the high of writing the patch that fixed the crash. The grittiest of bugs have the longest causal chains, which always made them so much fun to figure out.
At the same time, there are causal chains that we perceive as incredibly short. Reach for a cup – get a drink. Press a key to type a letter in a doc. They might not actually be short (by golly, I know enough about HTML Editing and how Google Docs work to know otherwise) — but to us, they are simple action-reaction chainlinks. We see them as atomic and compose the causal chains of our life stories out of them.
We engineers love collapsing long causal chains into these simple chainlinks: turning a daunting process into a single action. My parents reminded me recently of how much harder it used to be to send emails before the Internet. I had forgotten the hours I spent traversing FIDO maps, crafting the right UUCP addresses, and teaching my Mom how to communicate with her colleagues — in another city! Electronically! Nowadays, the Wizardry of Email-sending has faded away into the background, replaced with agonizing over the right emoji or turn of phrase. And yes, adding (and encoding) emojis also used to be a whole thing. A poetic way to describe engineering could be: the craft of seeking out long causal chains, collapsing them into simple chainlinks, and crystallizing them into everyday products.
Based on my understanding of the human brain, this is not too dissimilar from how it works. I am not a neuroscientist myself. My guides here are books by Lisa Feldman Barrett and Jeff Hawkins, as well as Daniel Kahneman’s seminal “Thinking, Fast and Slow”. It does look like our brains are constantly engaged in these two processes: discovering causal chains (Dr. Barrett calls this process “novelty search”) and collapsing them into chainlinks (“categorization/compression” or “reference framing”). And once collapsed and turned into simple chainlinks, our brains are incredibly efficient at reaching for them to — you guessed it — seek out novel causal chains, continuing the infinite recursion of making sense of the world.
This seems like a perfect system. Except for one tiny problem: our discovered causal chains often contain mistakes. An “if this, then that” might omit an important variable or two. Remember those freaky crashes? Those were manifestations of engineers’ mistakes in their process of collapsing the massive causal chains that comprise a modern browser into the simple “go to URL.” In software, engineers spend a lot of time finding and fixing these bugs — and so do our brains. Still, both our software and our brains are teeming with chainlinks that hide mistakes (yes, I’ve said it — we’re full of bugs!). Worse yet, the recursive nature of our sense-making tends to amplify these mistakes, while still concealing their origin. While software just stops working, we tend to experience the amplified, distorted mistakes as suffering: anxiety, depression, burnout, anger, stress, etc. It takes intentional spelunking to discern these mistakes and not get further overwhelmed in the process. Like most astonishing things, our capacity for discovering and collapsing causal chains is both a gift and a curse. Or so the causal chain of this story says.
In a couple of conversations this week, the word “ecosystem” came up, and I realized that there were two different ways in which we employed that word.
The first one I heard used “ecosystem” to describe a collection of products with which users come into contact. Let’s call it the product ecosystem perspective. This perspective puts software and/or hardware at the center of the ecosystem universe. Users enter and exit the ecosystem, and changing the ecosystem means making updates to products, discontinuing them, and shipping new products. It’s a fairly clean view of an ecosystem.
The other way I heard the word “ecosystem” being used was to describe the users that interact with the product, or the user ecosystem perspective. Here, the user is at the center of the ecosystem universe. It is the products that move. Users pick them up or drop them, according to interests, desires, comfort, or needs. Users are humans. They talk with each other, giving their own advice and following others’, giving rise to the waxing and waning of product popularity. This view of an ecosystem is messy, annoyingly unpredictable, and beautifully real.
It feels intuitive to me that both of these perspectives are worth keeping in mind. The empowering feel of the product ecosystem perspective is comforting for us technologically-inclined folk. It’s easy to measure and prioritize. Diving into the complexity of the user ecosystem perspective provides deeper insights into what’s really important.
I’ve been thinking about this idea of the flux budget as a measure of capacity to navigate the complexity of the environment. With a high flux budget, I can thrive in massively volatile, uncertain, complex, and ambiguous (yep, VUCA) spaces. With a low flux budget, the slightest hint of unpredictability triggers stress and suffering. If we imagine that the flux budget is indeed a thing, then we can look at organizations — and ourselves — and make guesses about how the respective flux budgets are managed.
Reflecting on my own habits, I am recognizing that managing my flux budget takes deliberate work. To peer into the abyss of the unpredictable, it appears that I need to be anchored to a sizable predictable environment. I ruthlessly routinize my day. From inbox zero to arranging shirts, to my exercise schedule, and even the allotment of guilty pleasures (like watching a TV show or the evening tea with cookies), it’s all pretty well-organized and neatly settled. Observing me in my natural routine habitat without context might conjure up the image of a thoughtless robot. Yet this is what allows me to have the presence to think deeply, to reflect, and to patiently examine ideas without becoming attached to them.
This reaching for the comfort of routine has obvious consequences. How many beautiful, turning-point moments have I missed while sticking to my routine? How many times has the routine itself led me away from insights that would have otherwise been on my path? Or, worse yet, imposed an unnecessary burden on others? Let’s call this phenomenon the predictability footprint: the sum of the consequences of creating a predictable environment to anchor to in the face of complexity.
I am pretty excited to be learning more about the relationship between flux budget and predictability footprint. The whole notion of the footprint (which I borrowed from carbon footprint) speaks to the second-order effects of us seeking solid ground in the flux of today’s world — and how that in turn might create more flux. A while back, I wrote about leading while sleepwalking, which seems like a decent example of a vicious cycle where a leader’s predictability footprint increases the overall state of flux, placing more demand on an organization’s flux budget.
These framings also help me ask new interesting questions. What is my predictability footprint? How might it affect my flux budget? What are the steps I can take to reduce my predictability footprint?
Hamilton Helmer pointed out this amazing connection between intention and shared mental model space that I hadn’t seen before. If we are looking to gain more coherence within an organization, simply expanding the shared mental model space does not seem sufficient. Yes, expanding this space creates more opportunities for coherence. But what role does the space play in realizing these opportunities?
A metaphor that helped me: imagine the shared mental model space as a landscape. There are tall mountains, and deep chasms, as well as areas that make for a nice, pleasant hike. Those who are walking within this landscape will naturally form paths through those friendly areas. When a shared mental model space is tiny, everyone is basically seeing a different landscape. Everyone is walking their own hiking trails, and none of them match. Superimposed into one picture, it looks like Brownian motion. When the shared mental model space is large, the landscape is roughly the same, and so is the trail, growing into a full-blown road that everyone travels.
On this road, where is everybody going? Where is the road leading them? Shared mental models aren’t just a way for us to communicate effectively. They also shape the outcomes of organizations. The slope of the road is slanted toward something. The common metaphors, terms, turns of phrase, causal chains, and shorthands — they are the forms that mold our organization’s norms and culture.
If my team’s shared mental model space is dominated by war metaphors and ironclad logic of ruthless expansion, the team will see every challenge — external or internal — as a cutthroat battle. If my organization’s key metaphors are built around evaluating the impact of individual contributions, we might have trouble cohering toward a common goal.
Put differently, every team and organization has an intention. This intention is encoded in its shared mental model space. The slant of that road gently but implacably pulls everyone toward similar conclusions and actions. This encoded intention may or may not be aligned with the intention of the organization’s leaders. When it is, everything feels right and breezy. Things just happen. When it is not, there is a constant headwind felt by everyone. Everything is slow and frustrating. Despite our temptation to persevere, I wonder if we would be better off becoming aware of our shared mental model space, discerning the intention encoded in it, and patiently gardening the space to slant toward the intention we have in mind.
Continuing my exploration of narratives that catalyze coherence, I would be remiss not to talk about the story of a threat.
The story of a threat is easily the most innately felt story. When compared to the story of an opportunity, it seems to be more visceral, primitive, and instinctive. It is also a prediction of compounding returns, but this time, the returns are negative. The story of a threat also conveys a vivid mental model of a compounding loop, but the gradient of the curve is pointing toward doom at an alarming rate. Living in 2021, I don’t need to go too far for an example here: the all-too-familiar waves of COVID-19 death rates are etched in our collective consciousness. Just like with the story of an opportunity, there’s something valuable that we have and the story predicts that we’re about to lose it all.
Structurally, the story of a threat usually begins with a depiction of the vital present (the glorious “now”), emphasizing the significance of how everything is just so right now. It then proceeds to point out a yet-hidden catastrophe that is about to befall us. The reveal of the catastrophe must be startling and deeply disconcerting: the story of a threat does not seem to work as effectively with “blah-tastrophes.” Being able to “scare the pants off” the listener is the aim of the story.
A curious property of the story of a threat is that it is a half-story. It only paints the picture of the terrible future in which we’ll definitely be engulfed. Compared to the story of an opportunity, there is less agency here. Something bad is happening to us, and we gotta jump or perish. In that sense, the story of a threat is reactive — contrasted with the proactive thrust of the story of an opportunity. Being reactive, it propels the listener toward some action, leaving out the specifics of that action.
This half-storiness is something that is frequently taken advantage of in politics. Once the listener is good and ready, sufficiently distraught by the prospect of the impending disaster, any crisp proposal for action would do. We must do something, right? Why not that?
The story of a threat is a brute force to be reckoned with, and is extremely challenging to contain. Such stories can briefly catalyze coherence. But unless quickly and deliberately converted to the story of an opportunity, they tend to backfire. Especially in organizations where employees can just leave for another team, the story of a threat is rarely a source of enduring coherence. More often than not, it’s something for organizational leaders to be wary of. If they themselves are subject to the story of a threat, chances are they are undermining the coherence of their organization.
I was looking for practices that help expand shared mental model space and thinking about prototyping. I’ve always been amazed by the bridging power of hacking together something that kind-of-sort-of works and can be played with by others. Crystallized imagination, even when it’s just glue and popsicle sticks, immediately advances the conversation.
However, we often accidentally limit this power by prototyping solutions to problems that we don’t fully understand. When trying to expand the shared mental model space, it is tempting to make our ideas as “real” as possible — and in the process, produce answers based on a snapshot of a state, not accounting for the movement of the parts. Given a drawing of a car next to a tree and asked to solve the “tree problem,” I might devise several ingenious solutions for protecting the paint of the car from tree sap. No amount of prototyping will help me recognize that the “tree problem” is actually about the car careening toward the tree.
My colleague Donald Martin has a resonant framing here: prototype the problem (see him talk about it at the PAIR Symposium). Prototyping the problem means popping the prototyping effort a level above the solution space, out to the problem space. The prototype of a problem will look like a model describing the forces that influence and comprise the phenomenon we recognize as the problem. In the car example above, the “tree problem” prototype might involve understanding the speed at which the car is moving, the strengths of the participating materials (tree, car, person, etc.), as well as the means to control the direction and speed of the car.
Where it gets tricky is making problem prototypes just as tangible as solution prototypes. There are many techniques available: from loosely contemplating a theory of change, to causal loop diagrams, to full-blown system dynamics. All have the same drawback: they aren’t as intuitive to grasp or play with as actually making a semi-working product mock-up. Every one of these requires us to first expand our shared mental model space to think in terms of prototyping problems. Recursion, don’t you love it.
Yet, turning our understanding of the problem into a playable prototype is a source of significant advantage. First, we can reason about the environment, expanding both our solution space and the problem space. For that “tree problem,” discovering the role of material strengths guides me toward inventing seatbelts and airbags, no longer confined to just yelling “veer left! brake harder!” But most importantly, it allows us to examine the problem space collectively, enriching it with bits that we would have never seen individually. My intuition is that an organization with a well-maintained problem prototype as part of its shared mental model space will not just be effective — it will also be a joy to work in.
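To make “prototyping the problem” a bit more concrete, here is a minimal sketch of what a playable prototype of that “tree problem” might look like. The numbers, function names, and physics shortcuts are all mine (none of this comes from Donald’s talk), but the idea is the same: model the forces at play (speed, reaction time, braking) instead of mocking up a fix, and let people poke at the parameters.

```python
# A toy "problem prototype" for the tree problem. Everything here is
# illustrative: simple kinematics, made-up defaults, hypothetical names.

def stopping_distance(speed_mps: float, reaction_time_s: float, braking_mps2: float) -> float:
    """Distance covered while the driver reacts, plus distance covered while braking."""
    reaction_distance = speed_mps * reaction_time_s
    braking_distance = speed_mps ** 2 / (2 * braking_mps2)
    return reaction_distance + braking_distance


def tree_problem(distance_to_tree_m: float, speed_mps: float,
                 reaction_time_s: float = 1.5, braking_mps2: float = 7.0) -> str:
    """Poke at the problem: does the car stop in time, and if not, by how much does it miss?"""
    needed = stopping_distance(speed_mps, reaction_time_s, braking_mps2)
    if needed <= distance_to_tree_m:
        return f"stops {distance_to_tree_m - needed:.1f} m short of the tree"
    # The shortfall hints at why material strength (crumple zones, seatbelts,
    # airbags) belongs in the problem space, not just steering and braking.
    return f"hits the tree; needed {needed - distance_to_tree_m:.1f} m more road"


if __name__ == "__main__":
    for speed_kmh in (30, 60, 90):
        verdict = tree_problem(distance_to_tree_m=40.0, speed_mps=speed_kmh / 3.6)
        print(f"{speed_kmh} km/h: {verdict}")
```

Playing with the inputs is the point: the moment someone asks “what if the road is wet?” or “what about the passengers on impact?”, the problem space has grown, which is exactly the conversation a solution mock-up would have skipped.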
Tucked away in a couple of paragraphs of the brilliant paper by Cynthia Kurtz and David Snowden, there’s a highly generative insight. The authors make a distinction between the kinds of connections within an organization and then correlate the strength of these connections to the Cynefin quadrants. I accidentally backed into these correlations myself once. What particularly interested me was the introduction of connections into the Cynefin thinking space, so I am going to riff on that.
First, I’ll deviate from the paper and introduce my own taxonomy (of course). Looking at how information travels across a system, let’s imagine two kinds: connections that relay organizing information and connections that relay sensing information. For example, a reporting chain is a graph (most often, a tree) of organizing connections: it is used to communicate priorities, set and adjust direction, etc. Muscles and bones in our bodies are also organizing connections: they hold us together, right? Organizing connections define the structure of the system. Nerve endings, whiskers, and watercooler chats are examples of sensing connections — they inform the system of the environment (which includes the system itself), and hopefully, of the changes in that environment.
With this taxonomy in hand, we can now play in the Cynefin spaces. It is pretty clear that the Unpredictable World (my apologies, I also use different names for Cynefin bits than the paper) favors weak organizing connections and the Predictable World favors the strong ones. Organization is what makes a world predictable. In the same vein, Chaotic and Obvious spaces favor weak sensing connections, contrary to the neighboring Complex and Complicated spaces with their fondness for strong sensing connections.
Seems fairly straightforward and useful, right? Depending on the nature of the challenge I am facing, aiming for the right mix of organizing and sensing connections in the organizational structure can help me be more effective. Stamping out billions of identical widgets? Go for strong organizing connections, and reduce the sensing network. Solving hard engineering problems? Make sure that both the organizing and sensing connection networks are robust: one to hold the intention, the other to keep analyzing the problem.
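The correlation can even be written down as a tiny lookup. Here is a sketch: the two-level “strong”/“weak” scale and the function are my own simplification of the reading above, not something from the Kurtz/Snowden paper.

```python
# A sketch of the correlation between connection strengths and Cynefin spaces,
# as described in the text. The two-level scale and the mapping table are my
# own simplification, not taken from the Kurtz/Snowden paper.

def favored_space(organizing: str, sensing: str) -> str:
    """organizing and sensing are each either 'strong' or 'weak'."""
    table = {
        ("strong", "weak"): "Obvious",        # Predictable World, little sensing
        ("strong", "strong"): "Complicated",  # Predictable World, lots of sensing
        ("weak", "strong"): "Complex",        # Unpredictable World, lots of sensing
        ("weak", "weak"): "Chaotic",          # Unpredictable World, little sensing
    }
    return table[(organizing, sensing)]


# The widget factory and the hard-engineering team from the paragraph above:
print(favored_space("strong", "weak"))    # -> Obvious
print(favored_space("strong", "strong"))  # -> Complicated
```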
Weirdly, the causality goes both ways. The connection mix doesn’t just make an organization more effective in different spaces. It also defines the kinds of problems that the organization can perceive.
A team with strong organizing connections and non-existent sensing connections will happily march down its predetermined path — every problem will look Obvious to it. Sure, the earth will burn around it and everything will go to hell in the end, but for 99.9% of the journey, its own experience will be blissfully righteous. The solution to war is obvious to a sword.
Similarly, if that engineering organization loses its steady leader, weakening the strength of its organizing connection network, every problem will suddenly start looking Complex. The magic of constructed reality is that it is what we perceive it to be.
This might be a useful marker to watch for. If you work in a team that merrily stamps widgets, and suddenly everything starts getting more Complicated, look for those tendrils of sensing connections sprouting. And if you’re working at the place where the thick fog of Complexity begins to billow, it might be the environment. But it also could be the loss of purpose that kept y’all together all this time.
Shared mental model spaces are challenging to grow and expand. Mental models, especially novel and interesting ones, are subtle and have to be examined patiently to become shareable. The process of sharing itself often causes the models to mutate, creating variants that take off on their own. It’s a bewilderingly complex process, evoking images of mercury drops and murmurations.
And yet, this is how we learn. This is the only process we humans have at our disposal for creating intersubjective reality. Every failed attempt at sharing, each blank stare and subtle — or not-so-subtle — mutation takes us a tiny step closer to expanding our shared mental model space and becoming more capable of communicating together.
I used to get frustrated and give up pretty easily when my ideas were left seemingly unheard. Flip that bozo bit — make life easier. “They don’t get me.” As I’d found, that was a recipe for a self-isolating vicious cycle: my head is full of insights, but nobody can understand what the hell I just said. Why say anything at all?
It took me some time to figure out that for novel ideas and mental models, the rewrite count is crazy-high. Our minds are these massive networks of mental models. To become shared between us, a mental model needs to overlap with enough existing mental models to bridge to the new ideas. So, if I reframe “they are not getting my idea” as “I haven’t yet built enough bridges to their existing models,” the path toward shared mental model space becomes more evident. To get to that resonant moment of understanding, I have to keep conveying and re-conveying the concept in many different ways, relying on different framings and metaphors, until the bridge suddenly appears and — click! — you and I share a model.
I also have to let my model mutate. Though it sounds similar, the process I am describing is very different from “convincing.” Achieving a shared mental model means accepting that what I bring with me is subject to change. The bridge works both ways. Your mental models enrich and influence mine. And each re-telling of a concept creates a new opportunity to bridge with someone else’s mind.
While geeking out on this idea of coherence and possible mechanisms that bring coherence, I ended up in a fun rabbit hole of narratives as catalysts for coherence. There seem to be certain kinds of stories that somehow end up bringing people together, organizing them and the outcomes of their efforts into a coherent whole. Looking back at my experiences, the one that stood out was the story of an opportunity.
Generally, the story of an opportunity is a prediction of compounding returns. Such a story conveys a mental model of a compounding loop, along with a recipe (sometimes just a sketch) for reaping benefits from it. I use “compounding returns” and “benefits” here very broadly. It can be straight-up money. It can be gathering enough impact to get a promotion. It can be acquired insights, carbon emission reduction, the attention of others, or practically any tangible or intangible thing we find valuable.
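Since the whole story hinges on the difference between linear gains and a compounding loop, here is a tiny sketch of that contrast, with purely illustrative numbers and names of my own choosing:

```python
# The contrast at the heart of the story of an opportunity: steady linear
# gains vs. a compounding loop. Numbers and names here are purely illustrative.

def linear(start: float, gain_per_period: float, periods: int) -> float:
    """The same absolute gain every period."""
    return start + gain_per_period * periods


def compounding(start: float, rate_per_period: float, periods: int) -> float:
    """Each period's gain feeds the next one."""
    return start * (1 + rate_per_period) ** periods


if __name__ == "__main__":
    for periods in (1, 5, 10, 20):
        print(f"after {periods:>2} periods: "
              f"linear {linear(100, 10, periods):7.1f}   "
              f"compounding {compounding(100, 0.10, periods):7.1f}")
```

For the first few periods the two curves look nearly identical; the story of an opportunity is essentially a claim that we are standing early on the second one.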
The story of an opportunity begins with describing the status quo in a way that’s resonant for the listeners. Then, it depicts the (boring/awful) future based on the status quo, setting up for the big reveal: the possibility of drastically different outcomes. This is the central moment of the story, the captivating twist in which the listeners acquire a mental model — how a change in their actions can lead to exponential returns.
At this stage of the story, the fork in the road is presented. Do the old thing and get old results, or do this other thing and get to ride the power of compounding returns. The story of an opportunity continues with plotting a path, helping the listener become convinced that taking the new path is plausible and perhaps even prudent. There’s usually a discussion of costs that might be high, but meager next to the predicted outcomes — and a conclusion that asks for commitment.
There’s something incredibly powerful about such stories. The glimpse of that mental model can be intoxicating and inspiring (and sometimes, ruinous). Growing up in the Soviet Union, I was prepared for linear outcomes: things will happen in this sequence, and then this will happen. It will all be roughly the same. Then, the iron curtain fell and the American Dream unceremoniously barged into my youthful mind. The movie that truly changed my life was The Secret of My Success, a bad movie that aged even more poorly. But back then, the cartoonish portrayal of riding a compounding loop of wit and circumstance was my fork in the road, followed by dramatic life-defining choices.
A story of an opportunity can act as a force of coherence in an organization. It can inspire people to come together and do amazing things, putting their hearts, sweat, and tears into the common goal. It is also just a story, and as such, can morph or be replaced by other stories, affecting coherence. The durability of a story’s power seems to reside in the accuracy of its prediction: how does what happens next reflect on our chances for riding the compounding loop? In an unpredictable environment, this accuracy diminishes quite a bit, making it more challenging to find a lasting story of an opportunity. Yet, it does seem like we collectively yearn for these stories, continue to look for them — and feel betrayed by them when the predictions don’t pan out.