Consistency, cohesion, and coherence

My colleague Micah introduced me to this framing of different degrees of organization and I found it rather useful. Recently, I shared the framing with my son and he came up with a pretty neat metaphor that I will try to capture here.

Imagine a box of gears. All gears are of different sizes and kinds. There are spur gears, bevel gears, herringbone, and their tooth spacing is all different. It’s a boxful of random junk. We call this state disorganized: entities are disjointed and aren’t meant to fit together.

Now, let’s imagine a different box. It is also full of gears, but here, all gears are of the same kind, and they all fit. It’s still just a pile of gears, but at least they are consistent. This state of consistency is our next degree of organization. The entities fit together, but aren’t connected in any way.

If we took the gears out of that second box and built a working gear system out of them, we would achieve the next degree of organization, the state of cohesion. Here, the entities have been organized into something that actually does something — we turn one gear and all others start turning with it. It’s amazing.

But what does this gear system do? This is where the story’s final degree of organization comes in. Running rigs of gears are cool, but when we build them to do something intentional — like changing the rotational speed or torque of a motor — we reach the state of coherence. In this state, the entities don’t just work together, they are doing so to fulfill some intention. The addition of intentionality is a focusing function. In the state of cohesion, we’d be perfectly fine with building a contraption that engages all the gears we have. When we seek coherence, we will likely discard gears that might fit really well, but don’t serve that intention.

We also noticed that the states aren’t necessarily a straight-line progression. Just picture a bunch of gears that barely fit, rigged to do something useful — thus skipping the consistency stage altogether.

Playing with this metaphor and developer surfaces (APIs, tools, docs, etc.) produces a handy set of examples. If my APIs are all over the place, each in a different language, style, and set of dependencies, we can safely call my developer surface disorganized. Making them all line up and match in some common style/spirit turns them consistent. If I go one step further and make the APIs easy to combine and build stuff with, I’ve taken my developer surface to the state of cohesion. Given how rare this is in real life, it’s a reason to celebrate already. But there’s one more state. My developer surface is coherent when the developers who use it produce outcomes that align with my intentions for the surface. If I made a UI framework with the intent to enable buttery-smooth end-user interactions, but all the users see is a bloated, janky mess — my developer surface could be consistent and cohesive, but it’s definitely not coherent.
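
To make the developer-surface version a bit more concrete, here is a minimal sketch in TypeScript. All names and shapes are hypothetical, not any real API: the two calls share one shape (consistency), and their outputs are designed to feed each other (cohesion).

```ts
// Hypothetical calls on an imaginary developer surface -- not a real API.
interface Result<T> {
  ok: boolean;
  value?: T;
  error?: string;
}

interface User {
  id: string;
  name: string;
}

interface Order {
  id: string;
  userId: string;
  total: number;
}

// Consistency: every call takes a single options object and returns a Result<T>,
// reporting errors the same way.
async function getUser(opts: { id: string }): Promise<Result<User>> {
  return { ok: true, value: { id: opts.id, name: "Ada" } }; // stubbed for the sketch
}

async function getOrders(opts: { userId: string }): Promise<Result<Order[]>> {
  return { ok: true, value: [{ id: "o1", userId: opts.userId, total: 42 }] }; // stubbed
}

// Cohesion: the output of one call meshes with the input of the next,
// like gears in a working rig.
async function totalSpend(userId: string): Promise<number> {
  const user = await getUser({ id: userId });
  if (!user.ok || !user.value) return 0;
  const orders = await getOrders({ userId: user.value.id });
  if (!orders.ok || !orders.value) return 0;
  return orders.value.reduce((sum, order) => sum + order.total, 0);
}
```

Coherence, notably, doesn’t show up in the code at all: it only appears in whether what developers build on top of these calls matches the intent behind the surface.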

Limits of attachment and capacity are interlinked

It felt both liberating and somehow odd to make a distinction between the limits of capacity and attachment. I’ve been thinking about that oddness and here’s an extra twist to the story.

The limits of attachment and capacity are interrelated. They are in this circular relationship that’s reminiscent of yin and yang. When I am struggling to grasp something, feeling overwhelmed, and generally experiencing the limit of capacity, it is usually the limit of attachment that is holding me back from gaining this capacity. Conversely, when I lash out in fear and frustration, trapped by my limit of attachment — it is usually the limit of capacity that prevents me from reframing and shifting my perspective to loosen my attachment.

Even more interestingly, the limit of attachment sometimes rises out of experiencing the limit of capacity, and vice versa. A few years back, I was the tech lead of a large team. As the team kept growing, I distinctly felt that I was losing track of everything that was going on. I was in over my head, hitting that limit of capacity. Meetings and syncs were overflowing my calendar, with the notes from that period of time turning increasingly terse and cryptic. One of the distinct fears — limit of attachment! — I remember from that time was “I will fail to produce a coherent direction for the team.” I was holding too firmly onto a certain way of leading the team, and as I became more overwhelmed, I instinctively tried to hold on even more firmly. So what did I do? I decided that the problem was somewhere else — it was the team that wasn’t organized right! I dove into drawing up plans for organizing teams and programs and all those other doc and chart artifacts that ultimately were not helpful — and likely the opposite. Experiencing the limit of capacity fed my limit of attachment — the fear of failing my team as their leader. Which in turn fed my limit of capacity with all the teamification work I created. The vicious cycle became so horrific and traumatizing that I ended up leaving the team.

This story has a happy ending. This experience was also the eye-opening moment I needed, my first glimpse into the nature of complexity, being subject to some unknown force with increasing recognition of this force’s existence — my first conscious subject-object shift. It also helped me see that when folks around me bump into their limits to seeing, they are likely facing both the limit of capacity and the limit of attachment at the same time. And when they do, they are standing at the doorstep of vertical development.

It’s a unicycle!

Thinking a bit more about correlated compounding loops, I would like to improve on my metaphor. I can’t believe I didn’t see it earlier, but it’s clearly not a tricycle — it’s a unicycle! Having never ridden one, I can only imagine that it takes a bit more balance and finesse to ride than a bicycle. So it’s settled then. It’s all about unicycles and bicycles from here on. Transportation of the future.

When the motor and the rudder rely on the same or highly correlated compounding loops, we have ourselves a unicycle. Conversely, we have a bicycle when the motor and the rudder use low- or non-correlated compounding loops. Bicycles tend to be more stable and unicycles more finicky.

For example, a tenured position is a solid bicycle. With the tenure secured, I can focus on steering toward the desired change, knowing that my motor will continue to provide the necessary power. Companies establishing research centers like PARC or Bell Labs are another example of bicycles: they create distance between the source of funding and the environment of change. This distance does not have to be large. Any buffer between the cash that’s coming in (motor) and the expenses dedicated to achieving desired outcomes (rudder) acts as a unicycle-to-bicycle conversion kit.

There’s still more to consider in this transportation metaphor. It feels like the notion of groundedness is important. Are both bicycle wheels on the ground — is the rudder experiencing the same environment as the motor? What is the impact of that motor/rudder correlation on the time horizon of the intended change? I am still chewing on these.

Complexity escape routes and listening to learn

I was teaching the workshop on complexity this week, an outgrowth of my Adventures In Complexity slides. One of the interesting ideas that I was emphasizing during the workshop was this notion that we humans tend to be rather uncomfortable in Complex space. We traced the two intuitive pathways out of this space as escaping to Complicated and escaping to Chaotic.

We escape to Complicated space through insufficient framing: converting a complex phenomenon into a “problem to be solved” as quickly as we possibly can. We escape to Chaotic space by escalating: instead of viewing a complex phenomenon as a problem to be solved, we choose to view it as a threat. This particular kind of escape is just as tempting as the first one — and perhaps even more so. How many difficult conversations did we turn into stupid, ultimately losing fights? How many emergencies did we create just to avoid sitting with the discomfort of complexity?

But the most interesting insight came when I was sharing some of my favorite tools from my complexity toolkit. I learned the Listening to Learn framework from Jennifer Garvey Berger and used it many times before this workshop. However, talking about the escape routes right next to it connected them in a novel way.

It seems that Listening to Win is closely correlated with the way we escape complexity by shifting to Chaotic space. “Winning” here is very much a confrontation with a distinct intent of containing a threat. In the same vein, Listening to Fix is a shoo-in for the steps we take to escape over to Complicated space: framing-shmaming, let’s fix this thing! The third way of listening, the eponymous Listening to Learn, is what encourages us to hold complexity and resist taking the escape routes. I was surprised and delighted to make this connection and can’t wait to incorporate it into my next workshop.

Settled and “not settled yet” framings

I was working on decision-making frameworks this week and had a pretty insightful conversation with a colleague, using the Cynefin framework as context. Here’s a story that popped out of it.

Many (most?) challenging decision-requiring situations seem to reside in this liminal space between Complex and Complicated quadrants. We humans generally dislike the unpredictability of the complex space, so we invest a lot of time and energy into trying to shift the context: to turn a complex situation into a complicated one. Especially in today’s business environments, we have a crapload of tools (processes, practices, metrics, etc.) to tackle complicated situations, and it just feels right to quickly get to the point where a problem becomes solvable. This property of solvability is something that is acquired as a result of transitioning through the Complex-Complicated liminal space. I use the word “framing” to describe this transition: we take a phenomenon that looks fuzzy and weird, then we frame it in terms of some well-established metaphors. Once framed, the phenomenon snaps into shape: it becomes a problem. Once a problem exists, it can be solved using those nifty business tools.

This transformation is lossy. Some subtle parts of the phenomenon’s full complexity become invisible once it becomes “the problem.” If I am lucky with my framing, these subtle parts will remain irrelevant. In the less happy case, the subtle parts will continue influencing the visible parts — the ones we see as “the problem.” We usually call these side effects. With side effects, the problem will appear to be resisting our attempts to solve it. No matter how much we try, our solutions will create new problems, new side effects to worry about.

In this story, it’s pretty evident that effective framing is key to making a successful Complex-Complicated transition. Further, it’s also likely that framing is an iterative process: once we encounter side effects, we are better off recognizing that what we believe is “the problem” might be a result of ineffective framing — and shifting back to complex space to find a more effective one.

My colleague had this really neat idea that, given the multitudes of framings and problems we collectively track in our daily lives, it might be worth tagging the problems according to the experienced effectiveness of their framing. If a problem is teeming with side effects, the framing has “not settled yet” — it’s the best framing we’ve got, but approach it lightly and do seek out novel ways to reframe the phenomenon. Decisions based on this framing are unlikely to stick or bear fruit. Conversely, settled framings are the ones that we haven’t had to adjust in a while and that consistently allow us to produce side-effect-free results. Here, decisions can be proceduralized and turned into best practices.
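
As a rough sketch of what that tagging could look like in practice (the shape, field names, and thresholds here are all invented for illustration, not something my colleague proposed):

```ts
// Hypothetical bookkeeping for framings. The "settled" / "not settled yet" call is
// made from how recently the framing needed adjustment and how many side effects
// it keeps generating. Thresholds are purely illustrative.
interface Framing {
  problem: string;
  lastAdjusted: Date;
  sideEffectCount: number;
}

function framingStatus(f: Framing, now: Date = new Date()): "settled" | "not settled yet" {
  const daysSinceAdjusted =
    (now.getTime() - f.lastAdjusted.getTime()) / (1000 * 60 * 60 * 24);
  return daysSinceAdjusted > 90 && f.sideEffectCount <= 1
    ? "settled"
    : "not settled yet";
}

framingStatus({
  problem: "onboarding takes too long",
  lastAdjusted: new Date("2021-01-15"),
  sideEffectCount: 4,
}); // "not settled yet" -- hold it lightly, keep looking for a reframe
```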

Rigor and shared mental model space

Alex Komoroske has another great set of new cards on rigor and here’s a little riff on them. The thing that caught my attention — and resonated with me — was the notion of faux rigor. Where does it come from?

Alex’s excellent framing of the iron triangle of argument trade-offs offers a hint. Time is the ever-present constraint, and that means that the Short corner tends to prevail. Given a 1-pager and a 30-pager, the choice is a no-brainer for a busy leader. So the battle for rigor now depends on the argument being self-evident. Here, I want to build the story around the concept of shared mental model space. A shared mental model space is the intersection of all mental models across all members of a team.

In a small, well-knit team that’s worked together for a long time, the shared mental model space is quite large. People speak in shorthand, and getting your teammate up to speed on a new idea is quite easy: they already reached most of the stepping stones that got you there. In this environment, we can still find rigorous arguments, because the Self-evident corner is satisfied by the expansive shared mental model space. 

As the team grows larger or churns in membership, the shared mental model space shrinks. Shorthands start needing expansion, semantics — definition, and arguments — longer and longer builds. With a smaller shared mental model space, the argument needs to be more self-contained. Eventually, rigor is sacrificed to the lowest common denominator of mental models. In a limited shared mental model space, only the most basic short and self-evident arguments can be made. Value good. Enemy bad.
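
Here’s a toy sketch of that shrinking, naively representing each person’s mental models as a set of labels (the team members and models are made up):

```ts
// The shared mental model space is the intersection of the models across the team.
function sharedSpace(team: Array<Set<string>>): Set<string> {
  return team.reduce(
    (shared, models) => new Set([...shared].filter((m) => models.has(m)))
  );
}

const ada = new Set(["event loop", "our build system", "the 2019 migration", "value good"]);
const bo = new Set(["event loop", "our build system", "value good"]);
const newHire = new Set(["event loop", "value good"]);

sharedSpace([ada, bo]).size;          // 3 -- shorthand still mostly works
sharedSpace([ada, bo, newHire]).size; // 2 -- every new member can only shrink the space
```

Nothing about this is rigorous, but it makes the monotonic part of the argument visible: adding a member can keep the intersection the same or make it smaller, never larger.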

This spells trouble for larger teams or teams with high turnover. Through this lens, it’s easy to see how they would struggle to maintain a high level of rigor in the arguments that are being presented and evaluated within them. And as the level of rigor declines, so will the organization’s capacity to make sound strategic decisions. After faux rigor becomes the norm, this norm itself becomes a barrier that traps the organization in existential strategic myopia.

Especially in situations when a small organization begins to grow, it might be a good investment to consider how it will maintain a large shared mental model space as new people arrive and the old guard retires. Otherwise, its growth will correlate with the decline of argument rigor within its ranks.

Correlated compounding loops

A thing that became a bit more visible for me is this idea of correlated compounding loops. I used to picture feedback loops as these crisp, independent structures, but found that this is rarely the case in practice. More often than not, it feels like there are multiple compounding loops and some of them seem to influence, be influenced, or at least act in some sort of concordant dance with each other. In other words, they are correlated. Such correlation can be negative or positive.

Like any complexity lens, this is a shorthand. When we see correlated compounding loops, we are capturing glimpses of some underlying forces, and as it happens with many complex adaptive systems, we don’t yet understand their nature. All we can say is that we’re observing a bunch of correlated loops. To quickly conjure up an example, think of the many things impacted by the compounding loop of infections during the pandemic.

The thing that makes this shorthand even more useful is that we can now imagine a continuum of correlation between two compounding loops: complete independence at one extreme and perfect union at the other. Now we can look at a system and make some guesses about the correlation of compounding loops within it.
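
Here’s a rough numerical sketch of that continuum. The growth rates, noise, and the correlation knob rho are all made up for illustration:

```ts
// Two compounding loops whose per-step growth blends a shared shock with a
// loop-specific one. rho = 0: fully independent loops. rho = 1: perfect union.
function simulate(rho: number, steps: number = 20): [number, number] {
  let a = 1;
  let b = 1;
  for (let i = 0; i < steps; i++) {
    const shared = (Math.random() - 0.5) * 0.1; // shock both loops feel
    const ownA = (Math.random() - 0.5) * 0.1;   // shock only loop A feels
    const ownB = (Math.random() - 0.5) * 0.1;   // shock only loop B feels
    a *= 1 + 0.05 + rho * shared + (1 - rho) * ownA;
    b *= 1 + 0.05 + rho * shared + (1 - rho) * ownB;
  }
  return [a, b];
}

simulate(0); // uncorrelated: each loop drifts on its own shocks
simulate(1); // perfect union: both loops rise and fall in lockstep
```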

It seems that there will be more correlated compounding loops in systems that are more complex. In other words, the more connected the system is, the less likely we are to find completely uncorrelated compounding loops. To some degree, everything influences everything.

There are some profound implications in this thought experiment. If this hypothesis is true, the truly strong and durable compounding loops will be incredibly rare in highly connected systems. If everything influences everything, every compounding loop has high variability, which weakens them all, asymptotically approaching perfect dynamic equilibrium. And every new breakthrough — a discovery of a novel compounding loop — will just teach the system how to correlate it with other bits of the system. In a well-connected system, the true scarce resource is a non-correlated compounding loop.

The miracle count

While working on a technical design this week, my colleagues and I brushed up an old framing: the miracle count. I found it useful when considering the feasibility of projects.

Any ambitious project will likely have dependencies: a set of unsolved problems that are strung together into a causal chain. “To ship <A>, we need <B> to work and <C> to have <metric>.” More often than not, the task of shipping such a project will be more about managing the flow of these dependencies than about their individual effectiveness or operational excellence.

Very early in the project’s life, some of these dependencies might pop out as clearly outsized: ambitious projects in themselves. If I want to make an affordable flying saucer, the antigravity drive is a dependency, and it still needs to be invented. I call these kinds of dependencies “miracles” and their total number in the project the “miracle count.” Put simply, the miracle count is the number of unlikely events that we need to happen for our project to succeed.
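
To put some entirely made-up numbers on it: if we naively treat the miracles as independent and give each one, say, a 30% chance of materializing, the project’s overall odds collapse quickly as the miracle count grows.

```ts
// Back-of-the-envelope odds of a project whose miracles all need to happen.
// Assumes the miracles are independent, which in practice they rarely are.
function projectOdds(miracleProbabilities: number[]): number {
  return miracleProbabilities.reduce((odds, p) => odds * p, 1);
}

projectOdds([0.3]);           // one miracle:    0.3
projectOdds([0.3, 0.3]);      // two miracles:   0.09
projectOdds([0.3, 0.3, 0.3]); // three miracles: 0.027
```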

I don’t know about other environments, but in the engineering organization, having a miracle count conversation is useful. The miracle might be technological in nature (like that antigrav drive), it could be a matter of shifting priorities (“this team is already overloaded and we’re asking them to add one more thing to their list”), or any number of things. Recognizing that a dependency is a “miracle” is typically a prompt to de-risk, to have a contingency plan. “Yeah, I wonder if that generalized cache project is a miracle. What will we do if it doesn’t go through?”

Miracle-spotting can be surprisingly hard in large organizations. Each team is aiming high, yet wants to appear successful, and thus manages the appearance of its progress as a confident stride to victory. I have at least a couple of war stories where a team didn’t recognize a miracle dependency and parked their own “hacky workaround” projects — only to grasp for them suddenly (“Whew! So glad this code is still around! Who was the TL?”) when the miracle failed to materialize. I’ve also seen tech leads go to the other extreme and treat every dependency as a miracle. This inclination tends to create self-contained mega-projects and increase their own miracle-ness. In my experience, finding that balance is hard, and usually depends on the specific constraints of a dependency — something that a robust miracle count conversation can help you determine.

Effectiveness and certainty

I’ve been thinking about the limits to seeing, and this diagram popped into my head. It feels like there’s more here, but I am not there yet. I thought I’d write down what I have so far.

There’s something about the relationship between the degree of certainty we have in knowing something and our effectiveness in applying that knowledge. When we’ve just encountered an unknown phenomenon — be that a situation or a person — we tend to be rather ineffective in engaging with it. Thinking of it in terms of constructed reality, our model of this phenomenon generates predictions that are mostly errors: we think it will go this way, but it goes completely differently. “I don’t get it. I thought he’d be glad I pointed out the problem with the design.” Each new error contributes to improving the model and, through the interaction with the phenomenon, we improve our prediction rate and with it, our certainty about our knowledge of the phenomenon. “Oh, turns out there’s another tech lead who actually makes all the calls on this team.” Here, our effectiveness in applying the knowledge seems to climb along with the improvements.

At some point, our model gets pretty good, and we reach our peak effectiveness, wielding our knowledge skillfully to achieve our aims. This is where many competence frameworks stop and celebrate success, but I’ve noticed that often, the story continues.

As my prediction error rate drops below some threshold, the model becomes valuable: the hard-earned experiential knowledge begins to act as its own protector. The errors that help refine the model are rapidly incorporated, while the errors that undermine the model begin to bounce off. Because of this, my certainty in the model continues to grow, but the effectiveness slides. I begin to confuse my model with my experiences, preferring the former to the latter. “She only cares about these two metrics!” — “What about this other time… “ — “Why are we still talking about this?!” Turns out, my competence has a shadow. And this shadow will lull me into the sleep of perfect certainty… until the next prediction error cracks the now-archaic model apart, and spurs me to climb that curve all over again.

For some reason, our brains really prefer fixed models — constructs that, once understood, don’t change or adapt over time. A gloomy way to describe our lifelong process of learning would be as the relentless push to shove the entire world into some model in whose prediction rate we are highly certain. And that might be okay. This might be the path we need to walk to teach ourselves to let go of that particular way of learning.

Open-endedness and bedrock

I’ve been inspired and captivated by Gordon Brander’s exploration of open-ended systems, and was reminded of a concept that we developed a while back when working on the Web Platform: the bedrock. My old colleague and friend Alex Russell and I wrote and talked a bit about it back then, and it still seems useful in considering open-endedness.

Generally, the way I want to frame bedrock is as an impenetrable barrier, the boundary at the developer surface of a platform. It acts as the attenuator of capabilities: some are exposed to the developers (the consumers of the platform) and others are hidden beneath. In this sense, bedrock is a tool, whether intentional or not. When intentional, it is usually the means of preserving some value that becomes lost if fully exposed to developers, like the Web security model. When unintentional, it’s just where we decided to settle. For example, if I wanted to build my own developer surface that sits on top of multiple other platforms (see any cross-platform UI toolkit or — yes, the Web platform itself), the bedrock will likely be defined for me as the lowest common denominator of capabilities across these platforms — rather than something I intentionally choose.

Through this lens, the open-endedness of a system can be viewed as the degree to which the bedrock is used as the attenuation tool. When attenuation is minimal, both developers of the platform and consumers are likely working above the bedrock. As I write this, I reminisce about the early days of Firefox. Because the browser was written mostly in JavaScript and was easy to hack, tools like Greasemonkey sprouted from that fertile ground, with their own vibrant ecosystems. Sadly, the loss of value in the form of security vulnerabilities showed up pretty early and with that, the push to move some capabilities under the bedrock.

When the attenuation is high, the bedrock becomes a more pronounced developer boundary: on one side are the developers of the platform, and on the other are its consumers. Platform developers rarely venture outside of the bedrock and platform consumers treat the stuff beneath the bedrock as a mythical “thing that happens.” Ecosystems around platforms with high bedrock attenuation can still be thriving and produce beautiful outcomes, but their generative potential is limited by the attenuation. For example, there are tons of dazzling Chrome themes out there, yet the bedrock attenuation is so high (API is “supply pictures and colors”) that it’s unlikely that something entirely different can emerge on top of it.
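
One crude way to picture the attenuation knob (the capability names and numbers below are invented for illustration):

```ts
// A toy inventory of platform capabilities. "Exposed" means above the bedrock,
// visible to consumers of the developer surface; the rest stays beneath it.
interface Capability {
  name: string;
  exposed: boolean;
}

const platform: Capability[] = [
  { name: "render a frame", exposed: true },
  { name: "layout and style", exposed: true },
  { name: "raw GPU command buffers", exposed: false },     // attenuated away
  { name: "cross-origin storage access", exposed: false }, // hidden to preserve the security model
];

// A very rough read on bedrock attenuation: the share of capabilities kept beneath it.
const attenuation = platform.filter((c) => !c.exposed).length / platform.length;
// 0.5 here; the closer to 1, the more pronounced the developer boundary becomes.
```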

Platforms that want to offer a more generative environment will tend to move capabilities in the other direction across the bedrock line, exposing more of them to the consumers of the developer surface. One thing to look out for here is whether the platform developers are coming along with these capabilities: do they mingle with the surface consumers, building on top of the same bits? If so, that would be a strong indicator that the bedrock attenuation is diminishing.