Settled and “not settled yet” framings

I was working on decision-making frameworks this week and had a pretty insightful conversation with a colleague, using the Cynefin framework as context. Here’s a story that popped out of it.

Many (most?) situations that require challenging decisions seem to reside in this liminal space between the Complex and Complicated quadrants. We humans generally dislike the unpredictability of the complex space, so we invest a lot of time and energy into shifting the context: turning a complex situation into a complicated one. Especially in today’s business environments, we have a crapload of tools (processes, practices, metrics, etc.) for tackling complicated situations, and it just feels right to quickly get to the point where a problem becomes solvable. Solvability is acquired as a result of transitioning through the Complex-Complicated liminal space. I use the word “framing” to describe this transition: we take a phenomenon that looks fuzzy and weird, then we frame it in terms of some well-established metaphors. Once framed, the phenomenon snaps into shape: it becomes a problem. Once a problem exists, it can be solved using those nifty business tools.

This transformation is lossy. Some subtle parts of the phenomenon’s full complexity become invisible once it becomes “the problem.” If I am lucky with my framing, these subtle parts will remain irrelevant. In the less happy case, the subtle parts will continue influencing the visible parts — the ones we see as “the problem.” We usually call these side effects. With side effects, the problem will appear to be resisting our attempts to solve it. No matter how much we try, our solutions will create new problems, new side effects to worry about.

In this story, it’s pretty evident that effective framing is key to making a successful Complex-Complicated transition. Further, it’s also likely that framing is an iterative process: once we encounter side effects, we are better off recognizing that what we believe is “the problem” might be a result of ineffective framing — and shifting back to complex space to find a more effective one.

My colleague had this really neat idea that, given the multitudes of framings and problems we collectively track in our daily lives, it might be worth tagging the problems according to the experienced effectiveness of their framing. If a problem is teeming with side effects, the framing has “not settled yet” — it’s the best framing we’ve got, but approach it lightly, and do seek out novel ways to reframe the phenomenon. Decisions based on this framing are unlikely to stick or bear fruit. Conversely, settled framings are the ones we haven’t had to adjust in a while and that consistently allow us to produce side-effect-free results. Here, decisions can be proceduralized and turned into best practices.

Rigor and shared mental model space

Alex Komoroske has another great set of new cards on rigor, and here’s a little riff on them. The thing that caught my attention — and resonated with me — was the notion of faux rigor. Where does it come from?

Alex’s excellent framing of the iron triangle of argument trade-offs offers a hint. Time is the ever-present constraint, which means that the Short corner tends to prevail. Given a 1-pager and a 30-pager, the choice is a no-brainer for a busy leader. So the battle for rigor now hinges on the argument being self-evident. Here, I want to build the story around the concept of shared mental model space. A shared mental model space is the intersection of all mental models across all members of a team.

In a small, well-knit team that’s worked together for a long time, the shared mental model space is quite large. People speak in shorthand, and getting your teammate up to speed on a new idea is easy: they’ve already reached most of the stepping stones that got you there. In this environment, we can still find rigorous arguments, because the Self-evident corner is satisfied by the expansive shared mental model space.

As the team grows larger or churns in membership, the shared mental model space shrinks. Shorthands start needing expansion, semantics — definition, and arguments — longer and longer builds. With a smaller shared mental model space, the argument needs to be more self-contained. Eventually, rigor is sacrificed to the lowest common denominator of mental models. In a limited shared mental model space, only the most basic short and self-evident arguments can be made. Value good. Enemy bad.

This spells trouble for larger teams or teams with high turnover. Through this lens, it’s easy to see how they would struggle to maintain a high level of rigor in the arguments presented and evaluated within them. And as the level of rigor declines, so does the organization’s capacity to make sound strategic decisions. Once faux rigor becomes the norm, the norm itself becomes a barrier that traps the organization in existential strategic myopia.

Especially when a small organization begins to grow, it might be a good investment to consider how it will maintain a large shared mental model space as new people arrive and the old guard retires. Otherwise, its growth will correlate with a decline of argument rigor within its ranks.

Correlated compounding loops

A thing that became a bit more visible to me is this idea of correlated compounding loops. I used to picture feedback loops as these crisp, independent structures, but found that this is rarely the case in practice. More often than not, it feels like there are multiple compounding loops, and some of them seem to influence each other, be influenced by each other, or at least move in some sort of concordant dance. In other words, they are correlated. Such correlation can be negative or positive.

Like any complexity lens, this is a shorthand. When we see correlated compounding loops, we are capturing glimpses of some underlying forces, and as happens with many complex adaptive systems, we don’t yet understand their nature. All we can say is that we’re observing a bunch of correlated loops. To quickly conjure up an example, think of the many things impacted by the compounding loop of infections during the pandemic.

The thing that makes this shorthand even more useful is that we can now imagine a continuum of correlation between two compounding loops, with completely uncorrelated loops at one extreme and loops moving in perfect union at the other. Now we can look at a system and make some guesses about the correlation of the compounding loops within it.

It seems that there will be more correlated compounding loops in systems that are more complex. In other words, the more connected the system is, the less likely we are to find completely uncorrelated compounding loops. To some degree, everything influences everything.

There are some profound implications in this thought experiment. If this hypothesis is true, the truly strong and durable compounding loops will be incredibly rare in highly connected systems. If everything influences everything, every compounding loop has high variability, which weakens them all, asymptotically approaching perfect dynamic equilibrium. And every new breakthrough — a discovery of a novel compounding loop — will just teach the system how to correlate it with other bits of the system. In a well-connected system, the true scarce resource is a non-correlated compounding loop.
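
To make this a bit more tangible, here’s a minimal simulation sketch (mine, in Python with NumPy; every number is invented purely for illustration): two compounding loops with noisy growth rates, blended equally, at increasing levels of correlation between their fluctuations.

```python
# A toy model, not from the original riff: two compounding loops whose
# period-to-period growth rates are noisy. We dial the correlation of that
# noise and watch what it does to the compounded growth of a system that
# leans on both loops equally. All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
periods, mean_growth, volatility = 200_000, 0.05, 0.15

def combined_growth(correlation: float) -> float:
    """Geometric-mean per-period growth of an equal blend of two noisy loops."""
    cov = volatility**2 * np.array([[1.0, correlation], [correlation, 1.0]])
    rates = rng.multivariate_normal([mean_growth, mean_growth], cov, size=periods)
    blend = rates.mean(axis=1)                        # draw on both loops equally
    return float(np.expm1(np.mean(np.log1p(blend))))  # compounded growth per period

for rho in (0.0, 0.5, 0.9):
    print(f"correlation {rho:.1f}: {combined_growth(rho):+.2%} per period")

# Uncorrelated loops partially cancel each other's variability, so the blend
# compounds faster. Highly correlated loops share the full variability, and
# that variability taxes compounding: the benefit of discovering a "new" loop
# shrinks as its correlation with the existing ones grows.
```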

The miracle count

While working on a technical design this week, my colleagues and I dusted off an old framing: the miracle count. I’ve found it useful when considering the feasibility of projects.

Any ambitious project will likely have dependencies: a set of unsolved problems that are strung together into a causal chain. “To ship <A>, we need <B> to work and <C> to have <metric>.” More often than not, the task of shipping such a project will be more about managing the flow of these dependencies than about their individual effectiveness or operational excellence.

Very early in the project’s life, some of these dependencies might pop out as clearly outsized: ambitious projects in themselves. If I want to make an affordable flying saucer, the antigravity drive is a dependency, and it still needs to be invented. I call these kinds of dependencies “miracles” and their total number in the project the “miracle count.” Put simply, the miracle count is the number of unlikely events that need to happen for our project to succeed.
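
As a back-of-the-envelope illustration (mine, with hypothetical probabilities, not part of the original framing): even if each miracle on its own feels more likely than not, a short chain of them makes the whole project a long shot.

```python
# A rough sketch: treat each miracle as an independent unlikely event and
# multiply. The probabilities below are invented for illustration.
from math import prod

def project_odds(miracle_probabilities):
    """Chance that every miracle materializes, assuming independence."""
    return prod(miracle_probabilities)

# Three miracles that each feel "more likely than not" still leave the
# project at roughly a one-in-five chance of landing as planned.
print(f"{project_odds([0.6, 0.6, 0.6]):.0%}")  # 22%
```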

I don’t know about other environments, but in an engineering organization, having a miracle count conversation is useful. The miracle might be technological in nature (like that antigrav drive), it could be a matter of shifting priorities (“this team is already overloaded and we’re asking them to add one more thing to their list”), or any number of other things. Recognizing that a dependency is a “miracle” is typically a prompt to de-risk, to have a contingency plan. “Yeah, I wonder if that generalized cache project is a miracle. What will we do if it doesn’t go through?”

Miracle-spotting can be surprisingly hard in large organizations. Each team is aiming high, yet wants to appear successful, and thus manages the appearance of its progress as a confident stride to victory. I have at least a couple of war stories where a team didn’t recognize a miracle dependency and parked their own “hacky workaround” projects — only to grasp for them suddenly (“Whew! So glad this code is still around! Who was the TL?”) when the miracle failed to materialize. I’ve also seen tech leads go to the other extreme and treat every dependency as a miracle. This inclination tends to create self-contained mega-projects and to increase their own miracle-ness. In my experience, finding that balance is hard, and usually depends on the specific constraints of a dependency — something that a robust miracle count conversation can help you determine.

Effectiveness and certainty

I’ve been thinking about the limits to seeing, and this diagram popped into my head. It feels like there’s more here, but I am not there yet. I thought I’d write down what I have so far.

There’s something about the relationship between the degree of certainty we have in knowing something and our effectiveness at applying that knowledge. When we’ve just encountered an unknown phenomenon — be that a situation or a person — we tend to be rather ineffective in engaging with it. Thinking of it in terms of constructed reality, our model of this phenomenon generates predictions that are mostly errors: we think it will go this way, but it goes completely differently. “I don’t get it. I thought he’d be glad I pointed out the problem with the design.” Each new error contributes to improving the model, and through the interaction with the phenomenon, we improve our prediction rate and, with it, our certainty about our knowledge of the phenomenon. “Oh, turns out there’s another tech lead who actually makes all the calls on this team.” Here, our effectiveness at applying the knowledge seems to climb along with the improvements.

At some point, our model gets pretty good, and we reach our peak effectiveness, wielding our knowledge skillfully to achieve our aims. This is where many competence frameworks stop and celebrate success, but I’ve noticed that often, the story continues.

As my prediction error rate drops below some threshold, the model becomes valuable: the hard-earned experiential knowledge begins to act as its own protector. The errors that help refine the model are rapidly incorporated, while the errors that undermine the model begin to bounce off. Because of this, my certainty in the model continues to grow, but my effectiveness slides. I begin to confuse my model with my experiences, preferring the former to the latter. “She only cares about these two metrics!” — “What about this other time…” — “Why are we still talking about this?!” Turns out, my competence has a shadow. And this shadow will lull me into the sleep of perfect certainty… until the next prediction error cracks the now-archaic model apart and spurs me to climb that curve all over again.

For some reason, our brains really prefer fixed models — constructs that, once understood, don’t change or adapt over time. A gloomy way to describe our lifelong process of learning would be as the relentless push to shove the entire world into some model of whose prediction rate we are highly certain. And that might be okay. This might be the path we need to walk to teach ourselves to let go of that particular way of learning.

Open-endedness and bedrock

I’ve been inspired and captivated by Gordon Brander’s exploration of open-ended systems, and was reminded of a concept that we developed a while back when working on the Web Platform: the bedrock. My old colleague and friend Alex Russell and I wrote and talked a bit about it back then, and it still seems useful in considering open-endedness.

Generally, the way I want to frame bedrock is as an impenetrable barrier, the boundary at the developer surface of a platform. It acts as an attenuator of capabilities: some are exposed to the developers (the consumers of the platform) and others are hidden beneath. In this sense, bedrock is a tool, whether intentional or not. When intentional, it is usually a means of preserving some value that would be lost if fully exposed to developers, like the Web security model. When unintentional, it’s just where we decided to settle. For example, if I wanted to build my own developer surface that sits on top of multiple other platforms (see any cross-platform UI toolkit or — yes, the Web platform itself), the bedrock would likely be defined for me as the lowest common denominator of capabilities across these platforms, rather than something I intentionally choose.
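
Here’s a minimal sketch of that unintentional case (the platform names and capability sets are hypothetical): a cross-platform developer surface whose bedrock settles at whatever every underlying platform happens to support.

```python
# A toy illustration of an "unintentional bedrock": the exposed surface is the
# lowest common denominator of the underlying platforms' capabilities.
# Platform names and capability sets are made up.
platform_capabilities = {
    "ios":     {"render", "touch", "camera", "push"},
    "android": {"render", "touch", "camera", "push", "nfc"},
    "web":     {"render", "touch", "camera"},
}

# The bedrock settles at the intersection of what every platform can do.
bedrock_surface = set.intersection(*platform_capabilities.values())
print(sorted(bedrock_surface))  # ['camera', 'render', 'touch']
```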

Through this lens, the open-endedness of a system can be viewed as the degree to which the bedrock is used as the attenuation tool. When attenuation is minimal, both the developers of the platform and its consumers are likely working above the bedrock. As I write this, I am reminded of the early days of Firefox. Because it was written mostly in JavaScript and was easy to hack, tools like Greasemonkey sprouted from that fertile ground, with their own vibrant ecosystems. Sadly, the loss of value in the form of security vulnerabilities showed up pretty early, and with it came the push to move some capabilities under the bedrock.

When the attenuation is high, the bedrock becomes a more pronounced developer boundary: on one side are the developers of the platform, and on the other are its consumers. Platform developers rarely venture above the bedrock, and platform consumers treat the stuff beneath it as a mythical “thing that happens.” Ecosystems around platforms with high bedrock attenuation can still thrive and produce beautiful outcomes, but their generative potential is limited by the attenuation. For example, there are tons of dazzling Chrome themes out there, yet the bedrock attenuation is so high (the API is “supply pictures and colors”) that it’s unlikely that something entirely different can emerge on top of it.

Platforms that want to offer a more generative environment will tend to move capabilities in the other direction across the bedrock line, exposing more of them to the consumers of the developer surface. One thing to look out for here is whether the platform developers are coming along with these capabilities: do they mingle with the surface consumers, building on top of the same bits? If so, that would be a strong indicator that the bedrock attenuation is diminishing.

Growth, control, and tricycles

Riffing on Alex Komoroske’s excellent set of cards on compounding loops, there’s something really interesting about the relationship between compounding loops and properties of a theory of change. A compounding loop makes an excellent motor: every cycle produces a bit more value. At the same time, a rudder tends to dampen a compounding loop: just like with any OODA loop, we will need to zig and zag, introducing variability into the compounding process. And as Accounting 101 teaches us, variability diminishes compounding returns. 
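
A tiny worked example of that drag, with made-up numbers: two growth sequences with the same arithmetic average, one steady and one that zigs and zags the way a rudder forces you to.

```python
# Steady growth vs. zig-zag growth with the same +5% average per cycle.
# The zig-zag (the "rudder" at work) drags down the compounded result.
steady = [0.05] * 10          # +5% every cycle
zigzag = [0.20, -0.10] * 5    # +20%, then -10%; still a +5% average

def compound(rates):
    value = 1.0
    for r in rates:
        value *= 1.0 + r
    return value

print(f"steady: {compound(steady):.3f}")  # ~1.629
print(f"zigzag: {compound(zigzag):.3f}")  # ~1.469, despite the same average
```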

If my theory of change relies on the same compounding loop to serve both as a motor and a rudder, I will start experiencing a weird tension that I call the growth/control tension.

This tension arises from two opposing forces. There’s the force of growth, which comes from my wish for the compounding loop to continue acting as the motor in my theory of change. There’s also the force of control, which embodies my wish to use it as the rudder. This might be a really terrible analogy, but it’s kind of like riding one of those front-wheel pedal tricycles: choose between going fast and steering as little as possible, or turning and breaking the pedaling rhythm — you can’t do both. In a single-compounding-loop theory of change, trying to control impacts growth, and focusing on growth means losing some control.

Developer ecosystems are very commonly subject to this tension. To get more developers (growth), I want to listen to their needs and ship stuff that satisfies them. To move an ecosystem in a certain direction (control), I want to change what developers do, which means that some of them won’t like it. If I don’t recognize the growth/control tension, I might end up see-sawing from one extreme to the other, or worse yet, find myself paralyzed by indecision: do I take the developer sentiment hit, or do I stay in the past? I have done both, and neither is on my list of happy places.

If you’re lucky, you can diversify: there is another low-correlation compounding loop in the ecosystem that you can lean on to separate your motor and rudder from each other. Otherwise, you are stuck riding the tricycle, dynamically resolving the growth/control tension. It’s not so bad, but it does take a lot of practice.

Limits to seeing: capacity and attachment

Helping a colleague process tough feedback this week, I’ve been reaching for a framing to describe something subtle around the nature of the limits to seeing another’s perspective. This story tries to get at this subtlety.

When we find ourselves in a situation where a colleague or a friend says something that doesn’t make sense, we might be encountering one of the two obstacles in our way: the limit of capacity and the limit of attachment.

The limit of capacity to fully grok another perspective is fairly straightforward. Trying to explain derivatives to a three-year-old is an example of such a limit: the child is not yet capable of holding mental models of this complexity. Similarly, I could also be overloaded with other things. My favorite example here is an anecdote from a colleague of mine, who was masterfully conducting a status update meeting for senior leads. At the end of the update, one of the leads said: “This is all very cool, nice work! If you don’t mind… Can you tell us why you are doing this?” Leaders have full plates, and this project simply fell off, going beyond their collective holding capacity.

The second kind of obstacle is much trickier. With the limit of attachment, taking a perspective feels unsafe for some reason. I am attached to my view and intuitively want to defend it against anything that might change it. Either there’s a painful admission of some truth that’s hard to come to terms with, or an entire construction of the world might come undone if this perspective is taken. This limit has a very prominent marker: a sense of unease, a spike of emotional temperature in the conversation. “Whoa, this meeting just got weird.”

The distinction between these obstacles feels significant to me because the approaches to overcoming them are drastically different. For the limit of capacity, I typically look for ways to lower the capacity needed to hold the perspective I want to convey. Can I create a simpler, more accessible narrative? Perhaps connect it to something that’s already well-understood and habitual? Framing, describing, articulating, narrative-making are all fine tools for this job.

These same tools are futile and possibly harmful for the limit of attachment. When I am firmly attached to my perspective, these attempts to “better explain” will feel like attacks, like blades shredding the essence of my being. Until I myself bring my attachment to the foreground and reflect on it, I will remain stuck. The “myself” in the last sentence is key. In my experience, the limit of attachment can only be overcome from the inside. So if you’re having a “just got weird” meeting or email exchange with a colleague, it might be worth gently pinging them to see if they recognize bumping against their limit of attachment, giving them a moment to reflect and regroup. And be prepared to walk away if the ping goes unheard.

Triggered, intentional, and spontaneous coherence

I’ve been chatting about organizational coherence with a few folks, and this question intrigued me: what are the conditions that lead to coherence emerging? Sifting through my experience, I ended up with these three (still playing with the labels for them): triggered, intentional, and spontaneous. I also have this vague intuition that they are sequenced in relation to each other.

Imagine a team that finds itself in some existential crisis and must come together to overcome it. Here, the coherence is triggered: people align on the same goal to counteract the outside pressure. When I suddenly find the strength to jump that too-tall fence after being chased by a dog, my body is demonstrating triggered coherence. In response to a threat, I am capable of going beyond my imagined limits. Triggered coherence works the same way in teams, a dynamic popularized by the familiar story trope of a bunch of underachievers going the distance in extreme circumstances.

When a team is at the feel-good apex of that story, it might even look like a team in a state of intentional coherence. However, there’s an important distinction: triggered coherence is reactive, and intentional coherence is proactive. An organization that is capable of intentional coherence decides to cohere in pursuit of a goal. When I go to the gym despite the aches and that sweet temptation of skipping just this once, my body comes together in intentional coherence.

To spot intentional coherence, look for a mission, a sense of purpose to the action, not obviously attached to some perceived threat. The team pushes their limits intentionally, having enough confidence that together, they are greater than the sum of their individual capabilities.

At the top of the game, it might even feel like coherence is effortless within such a team, almost like the coherence is spontaneous rather than willful. However, as soon as the object of intention is captured (or is clearly within reach), it’s worth looking for signs of coherence dwindling. If it does, the coherence is likely more intentional than spontaneous.

What does spontaneous coherence look like? Typically, there’s minimal organization. Everything just kind of happens. People working in teams that exhibit spontaneous coherence are regularly surprised by the high quality and consistency of the outcomes they produce. “We just jammed around and I pitched in here and there, and whoa, this came together really nicely!” I have precious little experience working in spontaneous coherence environments, but there’s something about these environments that is phenomenally appealing. There’s none of the existential angst of the triggered coherence nor the teeth-gritting of the intentional coherence. It just feels like air, like “why wouldn’t I be doing this” — and I cherish every moment being part of such an environment.

My guess is that spontaneous coherence is also impermanent, and the overarching sense of joy and appreciation for the environment holds a tempting potential to become an intention in itself (“let’s keep doing this for as long as we can!”) — thus eventually sliding back into intentional coherence.

Orders of humility

Alex Komoroske and I had a great generative conversation today and as usual, I have a new framing to write down.

We often look for a humble attitude toward new information in those we collaborate with. It signals to us that they are willing to listen and shift their perspective, to see what we see — even if temporarily. We recognized that there might be two different kinds of this humility, seemingly stacked on top of one another: the first order and the second order.

The first order of humility feels like this general sense of curiosity, a hunger for new information, an excitement toward it. People with strong first-order humility are information sponges, asking more questions, pulling on threads, actively engaging in conversations, seeking insights and incorporating them into their understanding of the world. They have enough confidence in this understanding that they aren’t experiencing much angst in the presence of disconfirming evidence.

This confidence also conceals the limit of the first order of humility. Even though I might be eager to take your new idea for a spin, I only do so to enrich and reinforce my current understanding. In this order of humility, my understanding of the world remains largely immovable, a vessel that I am happy to pile my new insights into, sorting useful ones into one pile, and the useless ones into another.

At the second order of humility, this illusion is shattered. Usually through lived experience, a person who has acquired this kind of humility has watched that vessel turn into mush or disappear, break down, and eventually emerge as something completely different — taking all those piled-on useless insights and Copernican-shifting them into a whole new reality.

Lost is the confidence in the concreteness of the world, and with it, the ease of sorting insights into useful and useless. Disconfirming evidence is met with wonder and awe, as a possible precursor to another metamorphosis.

We also noticed that the orders of humility seem to correlate with the framing of horizontal and vertical development. Both are representative of the acceptance and appreciation of development. The first order of humility seems to correspond to a commitment to horizontal development (“learning makes you better”), and the second order to vertical development (“learning transforms who you are”).