A problem

To give the newly born decision-making framework a more solid grounding, we need to understand what a problem is. Let’s begin with a definition. A problem is an imposition of our intention on a phenomenon.

I touched on this notion of intention a bit in one of the Jank in Teams pieces, but here’s a recap. Our mental models generate a massive array of predictions, and it appears that we prefer some of these predictions to others. The union of these preferred predictions manifests as our intention. When we observe any phenomenon, we can’t help but impose our intention on it. The less the predicted future state of the phenomenon aligns with our intention, the more of a problem it is.

For example, suppose I am growing a small garden in my backyard. I love plants and they are amazing, but if they aren’t the ones that I intended to grow on my plot, they are a problem. Similarly, if I am shown a video of a cute bunny eating a carrot, I would not see the events documented by the videographer as a problem. Unless, that is, I am told that this video was just recorded in my garden. At this very instant, the fluffy animal becomes a harvest-destroying pest – and a problem.

I like this definition because it places problems in the realm of subjectivity. To become problems, phenomena need to be subject to a particular perspective. A phenomenon is a problem only if we believe it is a problem. Even world-scale, cataclysmic events like climate change are only a problem if our preferred future includes a thriving humanity and life as we know it. I also like how the definition incorporates intention and thus a desire to impose our will on a phenomenon. When we decide that something is problematic, we reveal our preferences for its future state.

Framing problems as a byproduct of intentionality also allows us to play with the properties of intention to see how they shift the nature of a problem. Looking at the discussion of the definition above, I can name a couple of such properties: the strength of intention and the degree of alignment. Let’s draw – you know it! – a 2×2, a tool that represents the continuous spectrum of these property values by their extremes. The vertical axis will be the degree of alignment between the current state of the phenomenon and our intention imposed on it. The horizontal axis will represent the strength of our intention.

In the top-left quadrant, we’re facing a disaster. The combination of strong intention and poor alignment means that we view the phenomenon as something pretty terrible and looming large. Strong intention tends to have this quality: the more important it is for a phenomenon to be in a certain state, the more urgent and pressing the problem will feel to us. Another way to think of the strength of intention is how existential the fulfillment of this intention is for us. If I need my garden to survive through the winter, it being overrun by a horde of ravenous bunnies will definitely fit into this quadrant.

Moving clockwise, the alignment is still poor, but our intention is not that strong. This quadrant is a mess. This is where we definitely see that things could be better, but we keep not finding time on our schedule to deal with the situation. Problems in this quadrant can still feel large in scope, indicating that the predicted future state of the phenomenon is far from the state we intend it to have. It’s just that we don’t experience the same existential dread when we survey them. Using that same garden as an example, I might not like how I planted the carrots in meandering, halting curves, but that would be a mess rather than a disaster.

The bottom-right quadrant is full of quirks. The degree of intention misalignment is small, and the intention itself is weak. Quirks aren’t necessarily problems. They can even be a source of delightful reflection, like that one carrot sticking out of the row, seemingly trying to escape its kin.

The final quadrant is the bread and butter of software engineers. The phenomenon’s state is nearly aligned with our intention, but the strength of our intention makes even a tiny misalignment a problem. This is the bug quadrant. Fixing bugs is a methodical process of addressing relatively small but important problems within our code. After all, if the bug is large enough, it is no longer a bug, but a problem from the quadrant above – a disaster.
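For the programmers among us, here is a minimal TypeScript sketch of that 2×2 as a lookup. The names (ProblemReading, classify) and the 0.5 cutoffs are mine, purely for illustration – the essay treats both axes as continuous spectrums, not booleans.

```typescript
// A toy classification along the two axes: how well the predicted state of a
// phenomenon aligns with our intention, and how strongly we hold that intention.
type Quadrant = "disaster" | "mess" | "quirk" | "bug";

interface ProblemReading {
  alignment: number;         // 0 = nowhere near our intention, 1 = exactly as intended
  intentionStrength: number; // 0 = barely care, 1 = existential
}

function classify({ alignment, intentionStrength }: ProblemReading): Quadrant {
  const misaligned = alignment < 0.5;
  const strong = intentionStrength >= 0.5;
  if (misaligned && strong) return "disaster"; // bunnies overrunning the garden I need for winter
  if (misaligned && !strong) return "mess";    // carrots planted in meandering, halting curves
  if (!misaligned && !strong) return "quirk";  // that one carrot trying to escape its kin
  return "bug";                                // tiny misalignment, strongly held intention
}

console.log(classify({ alignment: 0.1, intentionStrength: 0.9 })); // "disaster"
```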

Mental models

I use the term “mental models” a lot, and so I figured – hey, maybe it’s time to do some semantic disambiguation and write down everything I learned so far about them?

When I say “mental model,” I don’t just mean a clean abstraction of “how a car works” or “our strategy” – even though these are indeed examples of mental models. Instead, I expand the definition, imagining something squishy and organic and rather hard to separate from our own selves. I tend to believe that our entire human experience exists as a massive interconnected network of mental models. As I mentioned before, my guess is that our brains are predictive devices. Without our awareness, they create and maintain that massive network of models. This network is then used to generate predictions about the environment around us. Some of these models indeed describe how cars work, but others also help me find my way in a dark room, solve a math problem, or prompt the name of the emotion I am feeling in a given moment. Mental models are everything.

Our memories are manifestations of mental models. The difference between the remembering self and the experiencing self lies in the process of incorporating our experiences into our mental models. What we remember is not our experiences. Instead, we recall the reference points of the environment in that vast network of models – and then we relive the moment within that network. Our memories play back a story with the setting and the cast of characters defined by our mental models.

This playback experience is not always like that black-and-white flashback moment in a movie. Sometimes it shows up as an annoying earworm, or sweat on our palms in anticipation of a stressful moment, or just a sense of intuition. Mental models are diverse. They aren’t always visual or clothed in rational thought, or even conscious. They usually include sensory experiences, but most definitely, they contain feelings. Probably more accurately, feelings are how our mental models communicate. A “gut feeling” is a mental model at work. Feelings tell us whether the prediction produced by a mental model is positive (feels good) or negative (feels bad), so that’s the most important information to be encapsulated in the model. Sometimes these feelings are so nuanced and light that we don’t even recognize them as feelings – “I like this idea!” or “Hmm, this is weird, I am not sure I buy this” – and sometimes the feelings are touching-the-hot-plate visceral. Rational thinking is us learning how to spelunk the network of mental models to understand why we are feeling what we’re feeling.

One easy way to think of this network is as a massive, parallel computer that is always running in the background, whether we’re asleep or awake. There are always predictions being made and evaluated. Unlike computers, our models aren’t structurally fixed. As we grow up, our models evolve, not just by getting better, but also in the means by which they are created and organized. We can see this plainly by examining our memories. I may remember a painful experience from the past as a “terrible thing that happened to me” at first, and then, after living for a while, that “terrible thing” somehow transforms into “a profound learning moment.” How did that happen? The mental model didn’t sit still. The bits and pieces that comprised the context of the past experience have grown along with me, and shifted how I see my past experience.
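Purely as a playful rendering of that “massive, parallel computer” analogy, here is a toy TypeScript sketch in which a model produces predictions that carry a felt valence and rewires itself as experience accumulates. All of the names (MentalModel, predict, incorporate) and the valence numbers are mine, standing in for something far squishier than code.

```typescript
// A toy rendering of the analogy: models keep producing predictions that carry
// a felt valence, and experience reshapes the models themselves, not just
// their outputs.
interface Prediction {
  description: string;
  valence: number; // feels good (> 0) or feels bad (< 0)
}

interface MentalModel {
  name: string;
  predict(context: string): Prediction;
  incorporate(experience: string): MentalModel; // returns an evolved model
}

// A painful memory that, revisited after more living, turns into a learning moment.
const painfulMemory: MentalModel = {
  name: "a terrible thing that happened to me",
  predict: (context) => ({
    description: `avoid anything that resembles ${context}`,
    valence: -0.8, // touching-the-hot-plate negative
  }),
  incorporate(experience) {
    // The model doesn't sit still: the context around it grows and shifts.
    return {
      ...this,
      name: `a profound learning moment (after ${experience})`,
      predict: (context) => ({
        description: `notice what ${context} can teach me`,
        valence: 0.4, // the same memory, now with a different felt quality
      }),
    };
  },
};

const grownModel = painfulMemory.incorporate("a few more years of living");
console.log(grownModel.name);
console.log(grownModel.predict("a similar setback"));
```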

Conversely, if a memory of mine hasn’t changed over time, it’s probably worth examining. Large connected networks are notoriously prone to clustering. The seemingly kooky idea of the “whole self” is probably rooted in this notion that mental models are in need of gardening and deliberate examination. When I react to something in a seemingly childish way, it is not a stretch to consider: maybe the model I was relying on in that moment indeed remained unexamined since childhood? And if so, there’s probably a cluster within my network of mental models that still operates on an environment drawn by a three-year-old’s crayon. This examination is a never-ending process. Our models are always inconsistent, sometimes a little, and sometimes a lot.

When I see a leader ungraciously lose their cool in a public setting, the thought that comes to mind is not whether their behavior is “right” or “wrong,” but rather that I’ve just been witness to a usually hidden, internal struggle of inconsistent mental models.

Our models never get simpler. I may discover a framing that opens up new room in a previously constrained space, allowing me to find new perspectives. Others around us start out as simple placeholders in our models, eventually growing into complex models themselves, models that recurse, capturing how these others think of us and perhaps even how they might think we think of them (nested models!). Over time, the network of models grows ever more complex and interconnected. At the same time, our models seamlessly change their dimensionality. Fallback fluidly influences the nuance of model complexity, and thus – the predictions that come up. Fallback is a focusing function. If my body believes I am in crisis, it will rapidly flatten the model, turning a nuanced situation into a simple “just punch this guy in the face!” directive — often without me realizing what happened.

I am guessing that every organism has a kind of mental model network within it. Even the simplest single-cell organisms contract when poked, which indicates that there’s a — very primitive, but still — predictive model of the environment somewhere on the inside. It is somewhat of a miracle to see that humans have learned to share mental models with such efficiency. For us, sharing mental models is no longer limited to a few behaviors. We can speak, write, sing, and dance stories. Stories are our way to connect with each other and share our models, extending already-complex networks way beyond the boundary of an individual mind. When we say “a story went viral,” we’re describing the awe-inspiring speed at which a mental model can be shared. Astoundingly, we have also learned to crystallize shareable mental models through this phenomenon we call technology. Because that’s what all of our numerous aids and tools and fancy gadgets are: the embodiments of our mental models.

This is what I mean when I say “mental models.” It may seem a bit useless to take such a broad view. After all, if I am just talking about leadership, engineering, or decision-making, it’s very tempting to stick to some narrower definition. Yet at the same time, it is usually the squishy bits of the model where the trickiest parts of making decisions, leading, or engineering reside. Ignoring them just feels like… well, an incomplete mental model.

Solved, solvable, unsolvable

I have been noodling on a decision-making framework, and I am hoping to start writing things down in a sequence, Jank-in-Teams style. You’ve probably seen glimpses of this thinking process in my posts over the last year or so, but now I am hoping to put it all together into one story across several short essays. I don’t have a name for it yet.

The first step in this adventure is quite ambitious. I would like to offer a replacement for the Cynefin framework. Dear Cynefin, you’ve been one of the highest-value lenses I’ve learned. I’ve gleaned so many insights from you, and from describing you to my friends and colleagues. I am not leaving you behind. I am building on top of your wisdom.

This proposed framework is no longer a two-by-two. Instead, it starts out as a layer cake of problem classes. Let us begin the story with their definitions.

At the top is the class of solved problems. Solved problems are very similar to those residing in Obvious space in Cynefin: the problems that we no longer consider problems per se, since there’s a reliable, well-established solution to them. Interestingly, the solution does not have to be deeply understood for the problem to count as solved. Hammering things became a solved problem way before the physics that make a hammer useful were discerned.

Then, there is the class of solvable problems. Cynefin’s Complicated space is a reasonable match for this class. As the name implies, solvable problems don’t yet have solutions, but we have a pretty good idea of what they will look like when solved. From puzzles to software releases, solvable problems are all around us, and as a civilization, we’ve amassed a wealth of approaches to solving them.

The final class of problems loosely corresponds to Complex space in Cynefin. These are the unsolvable problems. Unsolvable problems are just that: they have no evident solution. At the core of all unsolvable problems is a curious adaptive paradox: if the problem keeps adapting to your attempts at solving it, the solution will remain just out of reach. I wonder if this is why games like chess usually have a limited number of pieces and a clear victory condition. If the opponents are evenly matched, there must be some limit to make this potentially infinite game finite. Another way of thinking about unsolvable problems is that they are trying to solve you just as much as you’re trying to solve them.

You may notice that there is no corresponding match for Cynefin’s Chaotic space in this list. When describing Chaotic space, I’ve long recognized the presence of a clear emotional marker (disaster! emergency!) that seemed a bit out of place compared to how I usually described the other spaces. So, in this framework, I decided to make that marker orthogonal to the class of the problem. But let’s save this bit for later.
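To make the layer cake – and the orthogonal crisis marker – concrete, here is a minimal TypeScript sketch. The names (ProblemClass, Situation, inCrisis) are mine, not part of Cynefin or of this framework; the only point is that the class of a problem and the crisis marker vary independently.

```typescript
// A minimal sketch of the layer cake: three classes of problems, with the
// crisis marker kept orthogonal rather than treated as a fourth class.
type ProblemClass = "solved" | "solvable" | "unsolvable";

interface Situation {
  description: string;
  problemClass: ProblemClass;
  inCrisis: boolean; // the emotional marker (disaster! emergency!), independent of the class
}

const examples: Situation[] = [
  { description: "hammering a nail", problemClass: "solved", inCrisis: false },
  { description: "shipping a software release", problemClass: "solvable", inCrisis: false },
  { description: "an adaptive problem that keeps shifting", problemClass: "unsolvable", inCrisis: false },
  // The same solvable problem, now wrapped in a crisis:
  { description: "shipping that release the night before launch", problemClass: "solvable", inCrisis: true },
];

for (const s of examples) {
  console.log(`${s.description}: ${s.problemClass}${s.inCrisis ? " (crisis!)" : ""}`);
}
```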

The interesting thing about all three classes is that they form a spectrum, which I loosely grouped into three bands. Obviously, I tend to think in threes, so it’s nice and comfy for me to see the spectrum in such a way. But more importantly, each class appears to have a different set of methods and practices associated with it. You may already know this from our studies of Cynefin. Just think of how the effective approaches in Complex space differ from those in Complicated, and how both differ from those in Obvious.

Still, it is also pretty clear that the transitions between these classes are fuzzy. As my child self was learning to tie shoes, the problem slowly traversed the spectrum. First, the tricky bendy laces that kept trying to escape my grasp (oh noes, unsolvable!?) became more and more familiar, while tying the crisp Bunny Ears knot, despite being clearly and patiently explained, remained a challenge (wait, solvable!). Then, this challenge faded, and tying shoes became an unbreakable habit (yay, solved). This journey across the problem class layers is a significant part of the framework, and something I want to talk about next.