Archers, Captains, and Strategists

Talking with a colleague, I was trying to draw a distinction between the different kinds of questions people ask when looking for direction. A simple lens materialized. I hope it will be as useful for you as it has been for me.

Imagine a fun medieval-themed board game, where we all draw different cards. Based on the cards we draw, we want to know different things and want to see different parts of the overall picture. There are three archetypes: Archers, Captains, and Strategists.

When we draw the Archer card, we don’t really care about the larger picture or the depth of nuance within the situation. We just want to have clarity on what needs to be done. For Archers, the question is “Where do we shoot?” As an example, when I sign up for volunteer work, I tend to draw the Archer card. I just want to chip in, relying on others to organize me. Wash dishes? Okay. Clean tables? Sure. Stack chairs? You’ve got it. When I have the Archer card, my satisfaction comes from getting stuff done. 

When we draw the Captain card, we are asked to see enough of the larger picture to make sure that all those arrows not only hit the target, but that each round of our game progresses in service of some sort of intention. Captains lead. Stepping into a TL role is like drawing a Captain card: you are given a broad mandate of some sort, and it’s on you to figure out how to organize your colleagues’ collective capabilities to fulfill it. Captains ask the “What are we winning?” question. In my example of TLs, the clarity of that mandate is paramount. All their reasoning sits on top of it. If the mandate is loose, so are the winning conditions – which rarely leads to desired outcomes.

Occasionally, I get confused and, when given the Archer card, I try to act as a Captain. This can be somewhat stressful. When given a target, Captains aren’t content until they understand the problem being solved and reach their own conclusion that this is indeed the right target at which to aim. And if it’s not, it can be quite draining to see everyone around me blissfully shooting arrows in what I believe is the wrong direction. I am guessing this happens to you, too?

Finally, when we draw the Strategist card, we are asked to situate all the underlying intentions of Captains and Archers in the larger picture of the game. If we do indeed win, why is that significant? What happens next? What is the longer arc of this adventure? Strategists want to see it all. Strategists assume that targets will be chosen and rounds won or lost, skipping ahead to the effects of these moves on the larger environment. It’s the overall change in this environment that they are most interested in. Strategists discern a system of rules within the game and help Captains frame problems into mandates. The question Strategists ask is “What is the game?”

If I were to make such a game more life-like, I would employ the likes of that UNO Attack! shuffler, which tosses cards at us in handfuls. We’re always an Archer, a Captain, and a Strategist — and often, it’s hard to tell which card we’re currently holding. To add to the chaos, some of us lean toward Archer, and some Captain or Strategist, acting the archetype even if it’s different from the card we’re dealt. It’s a crazy game.

One of the many insights that this lens produced for me was that when communicating direction within an organization, it may be useful to structure it as a layering of these questions. We start with a brief answer to “Where do we shoot?”, then provide a broader “What are we winning?”, and close with the expansive “What is the game?” This way, when I am an Archer, I can quickly get my target list and go at it. When I am a Captain, I can dig a bit deeper and find clarity about my mandate. Last but not least, as a Strategist, I will appreciate the full rigor of exploring the system in which this particular direction is situated.

The problem understanding framework

With my apologies for taking a scenic route and sincere thanks for following along, I am happy to declare that we now have all the parts to return to that framework I started with. To give you a quick recap, the framework was my replacement for Cynefin and consisted of three problem classes: solved, solvable, and unsolvable.

And now, for the big reveal. Allow me to connect the problem classes to the cycles in the process of understanding. The “solved” problem class corresponds to the “apply” cycle, the “solvable” class to the “solve” cycle, and finally, the “unsolvable” class fits the “struggle” cycle. We apply solved problems, we solve solvable problems, and we struggle with unsolvable problems. Okay, maybe the reveal wasn’t as dramatic as I made it out to be.

I still don’t have a catchy name for it. Right now, I am going with a generic “problem understanding framework”, which is definitely not as cool as Cynefin or OODA.

When starting on this adventure, I wanted to construct a framework that had a few attributes that seemed important: ontological humility, modularity, and layering.

For me, the attribute of ontological humility meant that the framework must be rooted in the idea of constructed reality. Every problem is probably unsolvable. However, it might come with a really solid framing that makes it fit reasonably well into the solvable problem class. It might even come with a highly effective solution that elevates it into the class of solved problems. The problem’s current position within a class might shift, as our explorations of change indicate. The framework itself is just a framing and, as such, has blind spots and infinity-problems within it. We can see it as a bug, or just be humble enough to admit that the world around us is much more complex than any framework can capture.

When I say “modularity”, I mean the possibility – and the encouragement – to use and remix parts of the framework like LEGO bricks to fit a particular experience or challenge. You don’t need the whole thing. I also want to point out that the framework provides for reinterpretation and swapping out of its parts. If you have your own way to think about infinity-problems, please do replace the pre-built bits with it. Think of it as a bunch of micro-frameworks and mental models chilling contentedly in one happy house. The whole thing hangs together, but also works as individual pieces.

The third attribute, layering, provides a progression from pragmatic, surface usage to more in-depth and rigorous application. The problem classes are already useful for orienting – and it’s okay if this is the only layer that you need in a given situation. But if you want to dig deeper, I tried to layer concepts in a way that allows gradual exploration. There is a rigorous foundation under the three simple buckets. Each layer answers a different question, starting with a simple “where am I?” at the top layer, and progressing toward the forces that might be influencing me, their underlying dynamics, and why these dynamics emerge.

To give you a sense of how it’s all organized in my mind, I thought I’d put it all together in one mega-diagram.

The layers are at the top, arranged (left-to-right) from more concrete to more rigorous: starting with the pragmatic three problem classes, progressing to the process of understanding, then arriving at the learning loop, and finally revealing the predictive model fundamentals. The modules are at the bottom, placed along the spectrum of the models. Not gonna lie, it looks a bit daunting.

So wish me luck. Next, I’ll be playing with this framework and applying it in various situations. Let’s see where the process of understanding takes me. And of course, I’ll keep sharing any new learnings here.

Framing

When encountering an infinity-problem, we may have enough wherewithal to resist the urge to act on our caveman firmware. In such cases, we tend to employ a more sophisticated process to exit the “struggle” cycle. The typical name it goes by is framing: discerning a subset of the infinity-problem that is approximately the same, but does not touch infinity. Framing is a bit of a cop-out, a giving-up of sorts. It’s an admission that understanding infinity remains elusive. Framing is our way to convert a problem from one we cannot solve into one we can.

We perform this conversion by constraining the original problem. One very common technique for adding constraints is imposing a terminating condition. If we examine our instinctive “fight” response, we can spot a terminating condition: elimination of one of the participants. When we choose to fight, we convert a likely infinity-problem into a problem of winning. Shifting to this constrained problem still requires a bout of adversarial reciprocal adaptation, but only enough to reach the terminating condition.

Another way we constrain is by removing change from parts of the problem. Assuming things are constant feels so natural to us that we don’t even recognize it as the process of imposing constraints. Terminating conditions and removing change interlink with each other: of course the problem will go away permanently as soon as we win.

Yet another way to constrain infinity-problems is by drawing bounds. It just feels right when we put limits on what is possible and what is not. Yes, it is possible that I will get hit by an asteroid right now, but it is so unlikely that I would prefer not to consider it. Yes, it is possible that a deadly virus will cause a global pandemic, but it is so unlikely … waaaaait a minute. Human-erected bounds are all around us, and again, they combine with terminating conditions and presuming lack of change to create an environment that feels predictable. Games are a great illustration of such environments. From chess to Minecraft, games create spaces where the contact with infinity is microdosed enough to actually become fun.
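To make these three moves concrete, here’s a minimal sketch in Python. Everything in it is hypothetical – the `state`, `moves`, and `won` callables stand in for whatever the real problem offers – but it shows how a terminating condition, an assumption of constancy, and hard bounds turn an open-ended struggle into a finite search.

```python
# A toy illustration of framing an infinity-problem by constraining it.
# All names here are hypothetical; the point is the shape of the constraints.

def search_for_win(state, moves, won, max_depth=10):
    """Depth-first search for a winning sequence of moves.

    Three constraints tame the otherwise unbounded problem:
    1. Terminating condition: stop as soon as `won(state)` is true.
    2. Removing change: we assume the rules (`moves`, `won`) hold
       still for the duration of the search.
    3. Drawing bounds: `max_depth` declares deeper possibilities
       out of scope, however real they might be.
    """
    if won(state):                 # terminating condition
        return []
    if max_depth == 0:             # drawing bounds
        return None
    for move in moves(state):      # rules assumed constant
        path = search_for_win(move(state), moves, won, max_depth - 1)
        if path is not None:
            return [move] + path
    return None
```

Everything the constraints exclude doesn’t stop existing, of course – which is exactly where the next paragraph picks up.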

When we frame a problem by imposing constraints, we make a choice. We choose to ignore the parts of the problem that lie outside of the constraints. Once framed, these parts become the dark matter of the problem. Whether we want them or not, they continue to exist. Their existence manifests through a phenomenon we call “side effects.” By definition, every framing will have them. Some framings have more side effects, and others fewer. For example, if you and I are in a high-stakes meeting, and you say something that I disagree with, I might instinctively choose the “fight” framing and attempt to engage in fisticuffs right there and then. Conversely, I might choose to invest a few extra moments to consider the infinity-problem I am facing, and instead decide to examine how your statements might enrich my understanding of the situation. It’s pretty clear from these two contrasting approaches that one framing will have more negative side effects than the other (it’s the first one, if you’re still wondering). We often use the word “reframing” as the name for this seeking of a more effective framing.

So it seems that we’re better off when we view framing as a deliberate process. In relation to the process of understanding, it’s a meta-process: framing defines how we proceed with our understanding. Framings are squishy and vague early on, and solidify rapidly as the process goes on. By the time we reach the “solving” stage, framings serve as foundations we build our understanding upon. To emphasize this meta-ness of framing, I will further complicate our process diagram and embed a fractal copy of it (yay, infinity!) somewhere between the “struggle” and “solve” cycles. In this way, we perceive framing as its own process of understanding, with its own “novel”, “divergent”, “convergent”, and “routine” phases. And yes, I will blissfully ignore the notion of this meta-process also having its own meta-process for now. (Pop quiz: which constraining technique did I apply just now?) However, Anne Starr and Bill Torbert have an insightful exploration of that particular rabbit hole in Timely and Transforming Leadership Inquiry and Action: Toward Triple-loop Awareness, connecting awareness of this fractality of meta-processes with – what else? – Adult Development Theory. The main distinction from the larger process is that for the framing process, solution effectiveness measures the degree of side effects of the framing.

Recognizing when framing is happening and consciously shifting to this separate framing process is likely one of the most important skills one can develop. We come in contact with infinity every day. Every heated exchange with a loved one, every swing of the unseen polarity, every iron triangle (like the project management one) is us becoming aware of the infinity’s touch. A picture that comes to mind is that of a three-layered world, where the top is filled with the routine of compressed models we take entirely for granted, supported by the middle layer of framings that we’re still puzzling out. At the bottom of this world are the Lovecraftian horrors of infinity that churn endlessly, occasionally shaking the foundation of our process of understanding and waking us up to the possibility that every framing is just a story we tell ourselves to avoid staring into the infinity’s abyss. Those capable of diving into that abyss and enduring it long enough to gain a glimpse of a new framing are the ones who enable others to build worlds upon it.

Touching infinity

As we explore the process of understanding, it may not be immediately obvious why change isn’t conquerable, and why knowledge isn’t a finite resource, as the siren of modernism sweetly suggests. As far as infinity goes, there are infinite stories to convey it, and here’s but one of them. It’s an examination of a particularly interesting kind of change: reciprocal adaptation.

Adaptation is all around us, and is largely responsible for the never-ending change. For example, when I rest on a tree stump in the forest after a long hike, I may notice a fragrant flower bush abuzz with bees. I am seeing the effects of adaptation. Over the eons, flowers adapted to attract bees to solve their problem of pollination (my sincere apologies to any passing biology experts – I know too little of the subject to speak so confidently about it).

However, if I notice large yellow eyes examining me through the forest’s canopy, I would be experiencing another kind of adaptation. The predator is trying to build their own mental model of me. At that moment, I am its problem: the current nature-enjoying me as “what is” and the meal version of me that “ought to be”. Obviously, this makes the predator’s intent a problem for me – and thus engages me in reciprocal adaptation.

In a non-reciprocal adaptation, our understanding of the problem must include some hypotheses on how the phenomenon’s behavior changes over time. Even though this is already a pretty challenging task, we can choose to be careful, neutral observers of the phenomenon. With such commitment, we still have a chance of arriving at a model that produces an effective solution. For this kind of adaptation, the process of understanding looks like the one I described earlier.

Once we find ourselves in a reciprocal adaptation, things get rather hairy. Two or more entities see each other as problems – or at least, as parts of them. Each continuously develops a mental model of the problem that includes itself, the other, and their intention. In such situations, we are no longer neutral observers: every solution we try is used by the other parties to adjust their mental models, thus invalidating the models of them that we keep developing.

A pernicious fractal weirdness emerges. When you and I are locked in reciprocal adaptation, your intention is my problem, which means that my model of the problem now has to include your intention. Because I am part of your “what ought to be”, a mental model of me — how you model me — is now embedded in my model of you. In other words, not only do I need to model you, I also need to model how you model me. To produce an effective model, I also need to model how you model my modeling of you, and so on. And you have little choice but to do the same.
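If writing this down as code helps, here is a minimal sketch of that hall of mirrors. The `Model` class and the depth cutoff are my own hypothetical devices – the cutoff is precisely the kind of bound the framing section described, because the true recursion never terminates.

```python
from dataclasses import dataclass
from typing import Optional

# A toy rendering of reciprocal adaptation's nested models.

@dataclass
class Model:
    owner: str                              # who holds this model
    subject: str                            # who the model is about
    their_model_of_me: Optional["Model"]    # the nested reflection

def mutual_model(me: str, you: str, depth: int) -> Optional[Model]:
    """My model of you contains your model of me, which contains my
    model of you... truncated at `depth`, a bound we must impose."""
    if depth == 0:
        return None
    return Model(owner=me, subject=you,
                 their_model_of_me=mutual_model(you, me, depth - 1))

# Even three levels deep, the structure is dizzying to read.
print(mutual_model("me", "you", depth=3))
```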

In this hall of mirrors, despite all parties acquiring more and more diverse models, we never reach that satisfying solution effectiveness found in other situations. Every interaction between us rejiggers the nested dolls of our mental models, and so the process of understanding looks bizarre, with effectiveness wobbling unsteadily or hitting invisible asymptotes. The “convergent” stage keeps getting subverted back into “novel”, and the “routine” stage of the process of understanding never develops. Correspondingly, the effort is pegged at maximum, while our valence of feelings about the situation remains negative.

This under-developed learning cycle is something that happens to us anytime we touch infinity. We struggle and we feel out of our depth. To illustrate this in our ever-growing process diagram, we’ll add an extra short circuit from the “convergent” stage back to “novel”, splitting the “learn” cycle into two. We’ll name the outer part of it the “solve” cycle, since it does culminate in arriving at an effective solution.

Let’s call the shorter circuit the “struggle” cycle. I picked this name because inhabiting this cycle is stressful and unpleasant – the effort remains at maximum for prolonged periods of time, exhausting us. The force of homeostasis tends to rather dislike these situations. It’s literally the opposite of the “apply” cycle – lots of energy goes into it. A good marker of touching infinity is that sense of rising unease, progressing toward full-blown terror. My guess is that this is our embodied warning mechanism, honed by evolution to steer us clear of it.

When we’re in the “struggle” cycle, we gain one additional problem. You know, as if it weren’t enough to struggle with infinity, right? This additional problem stems from our intention to exit the cycle as quickly as possible. We even come pre-wired with a few solutions to break out of it: fight, flight, and freeze. As an aside, I described this same phenomenon differently in “Model flattening” a while back, but hey – infinity and its infinite stories. These built-in solutions are what helped our cave-dwelling ancestors survive, and we’re grateful for their contribution to humanity’s progress. However, they tend to work out rather poorly in the somewhat more nuanced situations we experience in the present day.

To end things on a more positive note… I kept describing reciprocal adaptation in almost exclusively adversarial terms. And there’s something to that. When we are part of someone else’s problem, there’s a decent chance we will feel at least a little bit threatened by it. However, I would be remiss not to mention the sunnier side of reciprocal adaptation: mutuality. Mutuality is a kind of reciprocal adaptation in which our intentions are aligned. We have the same “ought to be”. As you probably know, mutuality produces nearly opposite results. We no longer need to build a separate mental model of our partner in reciprocal adaptation. We can substitute it with ours. This substitution pattern scales, too! If I can reliably assume that a given number of people are “like me” (that is, have the same mental model as me), it feels like I gain superpowers. When we pool our efforts to solve a common problem, we can move mountains. Perhaps completely without merit, even infinity appears less infinite when we are surrounded by those who share our intention.

Model compression and us

Often, it almost seems like if we run the process of understanding long enough, we could just stay in the applying cycle and not have to worry about learning ever again. Sure, there’s change. But if we study the nature of change, maybe we can find its underlying causes and incorporate them into our models – thus harnessing the change itself? It seems that the premise of modernism was rooted in this idea.

If we imagine that learning is the process of excavating a resource of understanding, we can convince ourselves that this resource is finite. From there, we can start imagining that all we have to do is – simply – run everything through the process of understanding and arrive at the magnificent state where learning is more or less optional. History has been rather unkind to these notions, but they continue to hold great appeal, especially among us technologists.

Alas, when we combine technology with a large enough number of people, it seems that we unavoidably grow our dependence on the applying cycle. In organizations where only compressed models are shared, change becomes more difficult. There’s not enough mental model diversity within the ranks to continue the cycle of understanding. If such organizations don’t pay attention to the attrition of their veterans, the ones who knew how things worked and why, they find themselves in the Chesterton’s fence junkyard. At that point, their only options are to anxiously continue holding on to truisms they no longer comprehend, or to plunge back to the bottom of the stairs and re-learn, generating the necessary mental model diversity by grinding through the solution loop, all over again.

I wonder if the nadir of the hero’s journey is marked by suffering in part because the hero discovers first-hand the brittleness of model compression. Change is much more painful when most of our models are compressed.

At a larger scale, societies first endure horrific experiences and acquire embodied awareness of social pathologies, then lose that knowledge through compression as it is passed along to younger generations. Deeply meaningful concepts become monochrome caricatures, thus setting up the next generation to repeat the mistakes of their ancestors. More often than not, the caricatures themselves become part of the same pathology that the uncompressed models were learned to prevent.

In a highly compressed environment, we often experience the process of understanding in reverse. Instead of starting with learning and then moving on to applying, we start with the application of someone else’s compressed models and only then – optionally – move on to learning them. Today, a child is likely to first use a computer and only then understand how it works, more than likely never fully grasping the full extent of the mental model that goes into creating one. Our life can feel like an exploration of a vast universe of existing compressed models, with a faint hope of ever fully understanding them.

From this vantage point, we can even get disoriented and assume that this is all there is, that everything has already been discovered. We are just here to find it, dust it off, and apply it. No wonder the “Older is Better” trope is so resonant and prominent in fiction. You can see how this feeds back into the “excavating knowledge as a finite resource” idea, reinforcing the pattern.

In this way, pervasive model compression appears pretty trappy. Paradoxically, the brittle nature of highly compressed environments makes them less stable. The very quest to conquer change results in more – and more dramatic – change. To thrive in these environments, we must put conscious effort into mitigating the compression trap. We are called to strive to deepen our diversity of mental models and let go of the scaffolding provided by the compressed models of others.

Model compression

At the end of each journey in our process of understanding, we have an effective solution to the problem we were presented with. Here’s an interesting thing I am noticing. We still have a diverse, deeply nuanced mental model of the problem that we developed by cycling through the solution loop. However, we don’t actually need the full diversity of the model at this point. We found the one solution that we actually need when approaching the given problem.

This is a pivotal point at which our solution becomes shareable. To help others solve similar problems, we don’t need to bestow the full burden of our trials and errors upon them. We can just share that one effective solution. In doing so, we compress the model, providing only a shallow representation of it that covers just enough to describe the solution.
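In code, this move looks a lot like information hiding. Here’s a hedged sketch with made-up names: the full model remembers every trial from the solution loop, while the thing we share keeps only the winning move.

```python
# Hypothetical sketch: model compression as information hiding.

class FullModel:
    """The rich model built by cycling through the solution loop:
    every attempted solution and how well it worked."""
    def __init__(self):
        self.trials = {}            # solution -> observed effectiveness

    def record(self, solution, effectiveness):
        self.trials[solution] = effectiveness

    def best_solution(self):
        # Pick the trial that worked best.
        return max(self.trials, key=self.trials.get)

def compress(model):
    """Share only the effective solution; the diversity stays behind."""
    best = model.best_solution()
    return lambda: best

model = FullModel()
model.record("jiggle the cable", 0.2)
model.record("turn it off and on again", 0.95)
advice = compress(model)
print(advice())   # the recipient gets the answer, not the trials behind it
```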

This trick of model compression seems simple, but it ends up being nothing short of astounding. Let’s start with an example of simple advice, like that time when an expert showed me how to properly crack an egg and I almost literally felt the light bulb go off in my head. It would have taken me a lot of cycling through the solution loop to get anywhere close to that technique. Thanks to the compressed model transfer, I was able to bypass all of that trial and error.

Next, I invite you to direct your attention to the wonder of a modern toothbrush. An immeasurable number of separate solution loop iterations went into finding the right shape and materials to offer this compressed model of dental hygiene. To keep my teeth healthy, I don’t have to know any of that. I only need a highly compressed model: how to work the toothbrush. This ability to compound is what makes model compression so phenomenally important.

We live in a technological world. We are surrounded by highly compressed mental models that are themselves composed of other highly compressed models, recursing on and on. I am typing this little article on a computer, and if I stop to imagine an uncompressed mental model of this one device, from raw materials scattered unfound across the planet to the cursor blinking back at me, my mind boggles in awe. To type, I don’t have to know any of that. Despite us taking it for granted, our capacity to compress and share models might just be the single most important gift that humanity was given – aside from being able to construct these models, of course.

Model compression introduces a peculiar extra stage to the process of understanding. At this fifth stage, our solution effectiveness is high, flux is low, but our model diversity is low as well. When we acquire a compressed model – whether through technology or a story – we don’t inherit the rich diversity of the model. We don’t get the full experiential process of constructing it. We just get the most effective solution.

It feels like a reasonable deal, yet there is a catch. As we’ve learned earlier, things change.

When my solution is at this newly discovered “compressed” stage, a new change will expose the stage’s brittleness: I don’t have the diversity of the model necessary to continue climbing the stair steps of understanding. Instead, it appears that I need to start problem-solving from scratch. This does make intuitive sense, and the compounding of compressed models makes it even more apparent. When a modern phone suddenly stops working, we have only a couple of different things we can try to resuscitate it: plug in the charger, and/or maybe hold down the power button and hope it comes back. If it doesn’t, the vastness of crystallized model compression makes it as good as a pebble. Chuck it into a drawer or into a lake – not much else can happen here.

Lucky for us, this phenomenon of compressed models being brittle in the face of change is a problem in itself – which means that we can aim our solving ability at it. If we’re really honest about it, software engineering is not really about writing software. It’s about writing software that breaks less often and, when it does, breaks in graceful ways. So we’ve come up with a neat escape route out of this particular predicament. If my toothbrush breaks or wears out, I just replace it with a new one from the five-pack in which they usually come. If my laptop stops working, I take it to a “genius” to have it fixed. Warranties, redundancies, and repair facilities – all of these solutions rely on the presence of someone else possessing – and maintaining! – their diversity of the mental model for me to lean on.

This shortcut works great in so many cases that I probably need to draw a special arrow on our newly updated diagram of the process of understanding. Two distinct cycles emerge: the already-established cycle of learning, and the applying cycle, where I can only use compressed models obtained through learning – even if I didn’t do the learning myself! Both are available to us, but the applying cycle feels much more (like orders of magnitude more) economical to our force of homeostasis. As a result, we constantly experience a gravitational pull toward this cycle.

Change

So far, I have carefully avoided the topic of change, presenting my problem-solving realm in a delightfully modernist manner. “See phenomenon? Make a model of it! Bam! Now we’re cooking with gas.”

Alas, despite its wholesome appeal, this picture is incomplete. Change is ever-present. As the movie title says, everything, everywhere, all at once – is changing, always. Some things change incomprehensibly quickly and some change so slowly that we don’t even notice the change. At least, at first. And this ever-changing nature of the environment around us presents itself as its own kind of force.

While the force of homeostasis is pushing us toward routine, the force of change is constantly trying to upend it. As a result of these forces dancing around each other, our problems tend to walk the awkward gait of punctuated equilibrium: an effective solution appears to have settled down, then after a while, a change unmoors it and the understanding process repeats. The punctuated equilibrium pattern appears practically everywhere, indicating that this might be another general pattern that falls out of the underlying processes of mental modeling.

Throughout this repeating sequence, the flux and effectiveness components wobble up and down, just like we expect them to. However, something interesting happens with the model diversity: it continues to grow in a stair-step pattern.

If you’ve read my stories before, you may recognize the familiar stair-step shape from my ongoing fascination, adult development theory (ADT). It seems to rhyme, doesn’t it? I wonder if the theory itself is a story that is imposed upon a larger, much more fractally manifesting process of mental modeling. The ADT stages might be just a slice of it, discerned by a couple of very wise folks and put into a captivating narrative.

Every revolution of the process of understanding adds to our model, making us more capable of facing the next round of change. Sometimes this process is just refining the model. Sometimes it’s a transformational reorganization of it. This is how we learn.

Moreover, this might be how we are. This story of learning is such a part of our being that it is deeply embedded into culture and even has a name: the hero’s journey. The call to adventure, the reluctance, the tribulations, and facing the demons to finally reveal the boon and bring it back to my people – it is a deeply emotional description of the process of understanding. And often, it has the wishful “happily ever after” bookend – because this would be the last change ever, right? It’s another paradox. It seems that we know full well that change is ever-present, yet we yearn for stability.

For me, this rhymes with the notion of Damasio’s homeostasis. Unlike the common belief that homeostasis is about equilibrium, in The Strange Order of Things, he talks about how, from our perspective, homeostasis is indeed about reaching a stable state… and then leaning a bit forward to ensure flourishing. It’s like our embodied intuition accepts the notion of change and prepares us for it, despite our minds continuing to weave stories of eternal bliss.

Life of a solution

Looking at the framework in the previous piece, I am noticing that the components of the tripartite loop (aka the solution loop – apologies for not naming it earlier) form an interesting causal relationship. Check it out. Imagine that for every problem, there’s this process of understanding, or a repeated cycling through the loop. As this cycling goes on, the causality manifests itself.

Rising flux leads to rising solution diversity. This makes sense, right? More interesting updates to the model provide a larger space for possible predictions. Rising solution diversity leads to rising effectiveness, since more predictions create more opportunities for finding a solution that results in the intended outcome. Finally, rising effectiveness leads to falling flux – the more effective the solution, the fewer interesting updates to the model we are likely to see. Once flux subsides past a certain point, we can attest that the process of problem understanding has run its course. We now have a model of the phenomenon, ourselves, and our intention that is sufficiently representative to generate a reliably effective solution. We have understood the problem.
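To see this causal chain in motion, here is a toy simulation. The update rules and coefficients are entirely made up; they merely encode “flux feeds diversity, diversity feeds effectiveness, effectiveness drains flux” and let the stages fall out on their own.

```python
# A made-up toy model of the solution loop's causal chain:
# flux -> diversity -> effectiveness -> (negative) flux.

flux, diversity, effectiveness = 1.0, 0.1, 0.1

for step in range(12):
    diversity += 0.3 * flux                      # flux enriches the model
    effectiveness += 0.2 * diversity * (1 - effectiveness)
    flux = max(0.0, flux - 0.5 * effectiveness)  # effective solutions calm flux
    print(f"step {step:2}: flux={flux:.2f}  "
          f"diversity={diversity:.2f}  effectiveness={effectiveness:.2f}")
```

Run it and you can watch flux collapse, diversity plateau into a stair step, and effectiveness keep climbing – the same arc as the four stages described next.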

I am realizing that I can capture this progression in roughly four stages. At the first stage, effectiveness is low and diversity is low, with flux rapidly rising. This is the typical “oh crap” moment we all experience when encountering a novel phenomenon that is misaligned with our intention. Let’s call this stage “novel,” and assign it the oh-so-appropriate virus emoji.

Rising flux pushes us forward to the next stage, which I will call “divergent”. Here, our model of the problem is growing in complexity, incorporating the various updates brought in by flux. This stage is less chaotic than the one before, but it’s usually even more uncomfortable. We are putting in a lot of effort, but the mental models remain squishy and there are few well-known facts. Nearing the end of the stage, there’s a sense of cautious excitement in the air. While the effectiveness of our solutions is still pretty low, we are starting to see a bit of a lift: all of that model enrichment is beginning to produce intended outcomes. Soon after, the next stage kicks in.

The convergent stage sees a continued, steady rise of effectiveness. Correspondingly, flux starts to ease off, indicating that we have the model figured out, and now we’re just looking for the most effective solution. This stage feels great for us engineering folks. Constraints appear to have settled in their final resting places. We just need to figure out the right path through the labyrinth. Or the right pieces of the puzzle. Or the right algorithm. We’ve got it.

After a bit more cycling of the loop, we finally arrive at the routine stage: the much-desired steady state of understanding the problem well enough for it to become routine, where solving it is more of a habit than a bout of strenuous mental gymnastics. The problem has become boring.

The progression from novel to routine is something that every problem strives to go through. Sometimes it plays out in seconds. Sometimes it takes much longer. However, my guess is that this process isn’t something that we can avoid when presented with problems. It appears to be a general sequence that falls out of how our minds work. I want to call the pressure that animates this sequence the force of homeostasis. This force propels us inexorably toward the “routine” stage of the process, where the ongoing investment of effort is at its lowest value. Our bodies and our minds are constantly seeking to reach that state of homeostasis as quickly as possible, and this search is what powers this progression.

A Solution

As we learned earlier, our understanding of a problem is a model that includes us, our intention, and the phenomenon that is the subject of it. A solution, then, is a prediction, based on that understanding, that resolves the problem’s intention by aligning the state of the phenomenon with it.

Because the problem’s model includes us, the solution often manifests as a set of actions we take. For example, for my attempt to repel that mischievous bunny from the previous piece, one solution might look like a list: a) grab a tennis ball, b) aim at the tree nearby, c) throw the ball at the tree with the most force I can muster. However, solutions can also be devoid of our actions, as in that old adage: “if you ignore a problem long enough, it will go away on its own”.

Note that according to the definition above, a solution relies on the model, but is distinct from it. The same model might have multiple solutions. Additionally, a solution is distinct from the outcome. Since I defined it as a prediction, a solution is a peek into the future. And as such, it may or may not pan out. These distinctions give us just enough material to construct a simple framework for reasoning about solutions.

Let’s see… we have a model, a solution (aka prediction), and the outcome. All three are separate pieces, interlinked. Yay, time for another triangle! Let’s look at each edge of this triangle.

When we study the relationship between solution and outcome, we arrive at the concept of solution effectiveness, a sort of hit/miss scale for the solution. Solutions that result in our intended outcomes are effective. Solutions that don’t are less so. (As an aside, notice how the problem’s intention manifests in the word “intended”.) Solution effectiveness appears to be fairly easy to measure: just track the rate of prediction errors over time. The lower the rate, the more effective the solution. We are blessed to be surrounded by a multitude of effective solutions. However, there are also solutions that fail, and to glimpse possible reasons why that might be happening, we need to look at the other sides of our triangle.
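If you wanted to operationalize that measurement, a minimal (and entirely hypothetical) version might track predictions against outcomes over a sliding window:

```python
from collections import deque

# Hypothetical sketch: solution effectiveness as a windowed hit rate.

class EffectivenessTracker:
    def __init__(self, window=50):
        self.hits = deque(maxlen=window)   # True = intended outcome

    def record(self, predicted, actual):
        self.hits.append(predicted == actual)

    def effectiveness(self):
        """Share of recent predictions that matched their outcomes;
        1.0 means no prediction errors within the window."""
        return sum(self.hits) / len(self.hits) if self.hits else 0.0

tracker = EffectivenessTracker(window=3)
tracker.record(predicted="bunny flees", actual="bunny flees")
tracker.record(predicted="bunny flees", actual="bunny stares back")
print(tracker.effectiveness())             # 0.5 -- time to enrich the model
```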

The edge that connects solution and model signifies the possibility that our mental model of the problem contains an effective solution, but we may not have found it yet. Some models are simple, producing very few possible solutions. Many are complicated labyrinths, requiring skill and patience to traverse. When we face a problem that does not yet have an effective solution, we tend to examine the full variety of possible solutions within the model: “What if I do this? What if we try that?” When we talk about “finding a solution,” we usually describe this process. To firm this notion up a bit, a model of the problem is diverse when it contains many possible solutions. Solution diversity tends to be interesting only when we are still looking to find a solution that’s more effective than what we currently have. Situations where the solution is elusive, yet the model’s solution diversity is low, can be rather unfortunate – I need to find more options, yet the model doesn’t give me much to work with. In such cases, we tend to look for ways to enrich the model.

This is where the final side of the triangle comes in. This edge highlights the relationship between the model and the outcome. With highly effective solutions, this edge is pretty thin, maybe even appearing non-existent. The lack of prediction errors means that our model represents the phenomenon accurately enough. However, when the solution fails to produce the intended outcome, this edge comes to life: prediction errors flood in as input for updating the model. If we treat every failure to attain the intended outcome as an opportunity to learn more about the phenomenon, our model becomes more nuanced and, subsequently, increases its solution diversity – which in turn lets us find an effective solution, completing the cycle. This edge of the triangle represents the state of flux within the model: how often and how drastically the model is being updated in response to the stream of solutions that failed. By calling it “flux”, I wanted to emphasize the updates that lead to “interesting” changes in the model: a lack of prediction error is also a model update, but it’s not going to increase the model’s diversity. Outcomes that leave us stunned and unsure of what the heck is going on are far more interesting.

Wait. Did I just reinvent the OODA loop? Kind of, but not exactly. Don’t get me wrong, I love the Mad Colonel’s lens, but this one feels a bit different. Instead of enumerating the phases of the familiar circular solution-finding process, our framework highlights its components, the relationships between them, and their attributes. And my hope is that this shift will bring new insights about problems, solutions, and us in their midst.

Rubber duck meetings

When I am looking for new insights, a generative conversation with colleagues is hard to beat in terms of quality of output. When I look back at what I do, a large chunk of my total effort is invested into cultivating spaces for generative conversations. It seems deceptively easy (“Let’s invite people and have them talk!”), but ends up being rather tricky – an art more than a technique. My various chat spaces are littered with tombstones of failed generative spaces, with only a precious few attempts actually bearing fruit. Let’s just say I am learning a lot.

One failed outcome of trying to construct a generative space is what I call the “rubber duck meeting”. The key dynamic that contributes to this outcome is the gravity well of perceived power. For example, a manager invites their reports to partake in a freeform ideation session. At this session, the manager shares their ideas and walks the team through them, or reviews someone else’s idea and brainstorms around it. There is some participation from the others, but if we stand back, it’s pretty clear that most of the generative ideation – and talking – is done by the manager.

Now, a one-person ideation session is not a bad thing. For programmers, it’s a very common technique for finding our way out of a bug. It even has a name: rubber duck debugging. The idea is simple: pretend you’re explaining the problem to someone (use a rubber ducky as an approximation if you must) and hope that some new insights will come dislodged from your network of mental models in the process.

The problem with the rubber duck meeting is that everyone else is bored out of their minds and often frustrated. The power dynamic in the room raises the stakes of participation for everyone but the manager. No matter how earnestly we try to participate, even a subtle gravity well inexorably shifts the meeting toward monologue (or a dialog between two senior peers). The worst part? Unless these leaders make a conscious effort to reduce the size of their gravity well, they don’t notice what’s happening. They might even be saying to themselves: “This is going so well!” and “Look at all these ideas being generated!” and “I am so glad we’re doing this!” – without realizing that these are all their own ideas and no new insights are coming in. They might as well be talking to a rubber duck. I know this because I have led such meetings. And only much later wondered: wait, was it just me thinking out loud all this time?

Now, about that “consciously reducing the size of the gravity well”? I don’t even know if it’s possible. I try. My techniques are currently somewhere around “just sit back and let the conversation happen” and “direct attention to other folks’ ideas”. The easiest way to reduce the rank-based power dynamics in a meeting seems to be inviting peers, though this particular tactic isn’t great either: the vantage points are roughly similar, and so the depth of insights is more limited.

I kept looking for ways to finish this bit on a more uplifting note. So here’s one: when you do find that generative space where ideas are tossed around with care, hang onto it and celebrate your good fortune. For you struck gold.