Behavior over time graphs and ways to influence

I was geeking out over behavior-over-time graphs (BOTG) this week and found this neat connection to bumpers, boosts, and tilts. My colleague Donald Martin first introduced me to BOTG and they fit right in, given my fondness for silly graphs.

The idea behind BOTG is simple: take some measure of value and draw a graph of its imagined behavior over time. Does the line go up? Does it go down? Does it tepidly mingle at about the same level? To make it even more useful, we can use BOTG for predictions: draw the “today” vertical line on the graph, splitting the space into “past” and “future.” Now, we can use it to convey our sense of how things were going before, and predict what happens next.

Now, let’s add another twist to this story: the goals. Usually, if we care about a value, we have some notion of a goal in relation to this value. Let’s draw this goal as a horizontal line at the appropriate level. If our goal is reaching a certain number of daily active users, we can capture this notion by drawing our BOTG squiggle so that it crosses the goal line in the “future” part of the graph.
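
If you’d rather sketch than imagine, a few lines of Python will do. Everything here is invented – the S-curve, the goal level, the units – the point is only the shape:

```python
# A minimal behavior-over-time graph: an imagined value curve,
# a "today" line splitting past from future, and a goal line.
# All numbers are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 12, 200)                # months; "today" sits at month 6
value = 40 + 30 / (1 + np.exp(-(t - 7)))   # an imagined S-curve of daily active users

plt.plot(t, value, label="daily active users (imagined)")
plt.axvline(x=6, linestyle="--", color="gray", label="today")
plt.axhline(y=60, linestyle=":", color="red", label="goal")
plt.xlabel("time (months)")
plt.ylabel("value")
plt.legend()
plt.show()                                 # the squiggle crosses the goal in the "future"
```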

It turns out that by considering how we’ve drawn this intersection of the goal and BOTG lines, we can determine the type of influence that might be effective in making our prediction come true.

If our BOTG curve needs to touch and stick to — or asymptotically approach — the goal line, we are probably talking about a bumper. There is some force that needs to keep that curve at a certain level, and that’s what bumpers are for. For instance, if I want to keep the size of the binary of my app at a certain level, I will likely need to employ team processes that enforce some policy about only landing changes that keep us under that magic number.

If we picture the curve as temporarily crossing the goal line, we are probably looking at a boost. This is literally the “we just need to get it across the line” case. A good example here is the intense march toward a release that many software engineering teams experience. There are some criteria that are determined as requirements for shipping, and, spurred by the boost, the team works their hearts out trying to meet them. A common effect of a boost is the slide back across the line after the team ships, relaxing and resting ahead of another round of shipping.

Last but not least, the curve that permanently crosses the goal line and never looks back is likely a marker of a tilt. Here, the goal line is just a checkpoint. Did we reach N daily active users? Great. Now let’s go for N x 2. When such ambition is part of the prediction, we are likely looking for some constant source of compounding energy. A good question to ask here is — where will it come from?

One of the common mistakes that I’ve seen leads make is confusing the outcomes of boosts with those of tilts. Both offer gains. Boosts feel faster and provide that satisfying thrill of accomplishment, but they are at best temporary. Tilts are slower, but their advances are more lasting. So when leaders employ a boost and expect the curve to just stay over the goal line, they are in for an unpleasant surprise. Early in my tenure on the Chrome team, I organized a task force to reduce the number of failing tests (shout out to my fellow LTTF-ers!), a small, scrappy band of engineers dedicated to fixing failing tests. At one point, I reported that we had brought the number of failures down from a couple of thousand to just 300! Trust me, that was an amazing feat. I am still in awe of us being able to get there. Unfortunately, my strategy – organizing a task force – was that of a boost. The spoils of the hard-won victory lasted a week after the task force disbanded. For a sobering reference, that list of failing tests currently clocks in at 7,623 lines.

See if BOTGs with goals can help you puzzle out a strategy for that difficult next endeavor you’re facing. Use them to clearly capture your predictions – and perhaps glimpse which method of influence might be needed to make them a reality.

Normative, Informative, and Generative Voices

I’ve been thinking about how to convey the style of writing that I’ve learned while writing here, and this lens materialized. And yes, once again, the distinctions come in threes. As Nicklas Berild Lundblad suggested, I might be suffering from triangulism… and I like it.

The normative voice spurs you to action. Normative voice aims to exert control, which may or may not be something you desire. For example, signs in public places tend to be written in a normative voice. Objectives, team principles, political slogans, and codes of conduct – all typically have that same quality. Normative voice usually conveys some intention, whether veiled or not.

The informative voice is not here to tell you what to do. Informative voice goes “here’s a thing I see” or “here is what I am doing.” Informative voice does not mean to impose an intention – it just wants to share what it’s seeing. As such, the informative voice tends to come across as aloof and unemotional, like that of a detached observer.

Given that our primary medium of communication is of human origin and thus deeply rooted in feelings, it is incredibly challenging to separate normative and informative voices. I might believe that I am writing something in an informative voice, but employ epithets and turns of phrase that betray my attachment. Similarly, I could be voicing something that looks informative, but actually intends to control you – aka the ancient art of manipulating others. Let’s admit it, my teen self saying “Mom, all kids have cool <item of clothing> and I don’t” was not an informative voice, no matter how neutrally presented. Another good sign of a “conveyed as informative, yet actually normative” voice is the presence of absolute statements in the language, or subtly – or not! – taking sides in what we describe.

Conversely, I might be trying to write normative prose, yet offer no firm call to action or even a sense of intention. My experience is that this kind of “conveyed as normative, yet actually informative” voice is commonly an outcome of suffering through a committee-driven wordsmithing process. “I swear, this mission statement was meant to say something. We just can’t remember what.” – oh lordy, forgive me for I have produced my share of these.

Within this lens, the constant struggle between the two – and the struggle to untangle the two – might seem rather unsatisfying and hopeless. Conveniently, I have a third voice that I’ve yet to introduce.

The generative voice accepts the struggle as a given and builds on that. Generative voice embraces the resonance-generating potential of the normative voice and the wealth of insights cherished by the informative voice. Yet at the same time, it aims to hold the intention lightly while still savoring the richness of feelings conveyed by the language. Generative voice is the voice that spurs you to improvise, to jam with ideas, to add your own part to the music of thinking.

This is the language that I aim for when writing these little essays. For example, I use the words “might” and “tends to” to indicate that these aren’t exact truths, and I don’t intend to hold these firmly. I try to explore every side of the framing with empathy, inhabiting each of the corners for a little while. But most significantly, I hold out hope that after reading these, you feel invited to play with the ideas I conveyed, to riff on these, departing from the original content into places that resonate more with you. When speaking in the generative voice, I primarily care about catalyzing new insights for my future self – and you. And in doing so, I am hopeful that I am helping us both find new ways to look at the challenges we’re facing.

The value triangle

My colleagues and I were chatting about the idea of a “good change,” and a lens popped into my head, along with the name: the value triangle. I swear it was an accident.

When a team is trying to discern whether a change they are imparting on their product (and thus the world) is “good,” it’s possible that their conversation is walking the edges of a triangle. This triangle is formed by three values: value to business, value to the user, and value to the ecosystem. 

When something is valuable to business, this usually means that the team benefits from this change. When something is valuable to the user, it is usually the user who perceives the change as desirable in one way or another. The third corner is there to anchor the concept of a larger system: does the change benefit the whole surrounding environment that includes the business and all current and potential users? A quick aside: usually, when we talk about this last corner, we say things like “thinking about long-term effects.” This is usually true – ecosystems tend to move at a slower clip than individual users. However, it helps to understand that the “long” here is more of a side effect of the scope of the effects, rather than a natural property of the change.

Anyhow, now that we have visualized this triangle, I am going to sheepishly suggest that a “good” change is an endeavor that somehow creates value in all three corners. To better illustrate, it might be useful to imagine what happens when we fail to meet this criterion.

Let’s start with situations when we only hit one of the three. If our change only produces value for our business, we’re probably dealing with something rather uncouth and generally frowned upon. Conversely, if we only produce value for our users, we’re probably soon to be out of business. And if we are only concerned about the ecosystem effects, it’s highly likely we’re not actually doing anything useful.

Moving on to hitting two out of three, delivering a combination of user and business value will feel quite satisfying at first and will fit right in with a lot of things we humans have done since the Industrial Age. Unfortunately, without considering the effects of our change on the surrounding ecosystem, the all-too-common outcome is an environmental catastrophe – literal or figurative. Moving clockwise in our triangle, focusing on only producing value for users and the ecosystem yields beautiful ideas that die young of starvation. The third combination surprised me. I’ve been looking for something that fits the bill, and with a start, realized that I’ve lived it. The intricately insane web of Soviet bureaucracy, designed with the purpose of birthing a better future for humanity, captured tremendous amounts of value while explicitly favoring the “good of the many” over that of an individual. For a less dramatic example, think of a droll enterprise tool you used recently, and the seeming desire of the tool to ignore or diminish you.

It does seem like hitting all three will be challenging. But hey, if we’re signing up to do “good,” we gotta know it won’t be a walk in the park. At least, you now have this simple lens to use as a guide.

Lenses, framings, and models

At a meeting this week, I realized that I use the terms “lens,” “framing,” and “model” in ways that hold deep meaning to me, but I am not certain that this meaning is clear to others. So here’s my attempt to capture the distinctions between them.

The way I see these, the lens is the most elemental of the three. A lens is a resonant, easy-to-remember (or familiar) depiction of some phenomenon that offers a particular way of looking at various situations. Kind of like TV commercial jingles, effective lenses are catchy and brief. They usually have a moniker that makes them easy to recall, like “Goodhart’s Law” or “Tuckman’s stages of group development” or “Cynefin.” Just like their optical namesakes, lenses offer a particular focus, which means that they also necessarily distort. As such, lenses are subject to misuse if held too ardently. With lenses, the more the merrier, and the more lightly held, the better. Nearly everything can be turned into a lens. The prompt “How can this be a lens?” tends to yield highly generative conversations. For fun, think of a fairy tale or a news story and see how it might be used as a lens to highlight some dynamic within your team. Usually, while names and settings change, the movements remain surprisingly consistent.

Framings are a bit more specialized. They are an application of one or more lenses to a specific problem space. For example, when I am devising a strategy for a new team, I might employ Tuckman’s stages to describe the challenges the team will face in the first year of its existence. Then, I would invoke Cynefin to outline the kinds of problems the team will need to be equipped to solve, rounding out with Goodhart’s Law to reflect on how the team will measure its success. When applied effectively, framings turn a vague, messy problem space into a solvable problem. To take me there, framings depend on the richness of the collection of lenses available to me. If these are the only three lenses I know, I will quickly find myself out of my depth in my framing efforts: everything I come up with will limp with a particular Tuckman-Cynefin-Goodhart gait.

Finally, models are networks of causal relationships that form within my framings. The problem, revealed by my framing exercise, might yet be intractable. While I can start forming hypotheses, I still have little sense of how many miracles each will take. This is where models help. Models allow me to reason about the amount of effort each of my hypotheses will require. Because each of the hypotheses is a causal chain of events, models help uncover links in these chains that are superlinear.

Getting back to our team planning example, the first four of Tuckman’s stages are a neat causal sequence and might lead us to conclude that the process we’re dealing with is linear and thus easily scheduled. However, if we study the network of causal relationships closer, we might see that it isn’t. The team’s storming phase can tip the team’s environment into complex Cynefin space and thus extend the duration of the storming phase. Or, the arrival at the norming stage might make the team susceptible to over-relying on its metrics to steer, triggering Goodhart’s law, eventually leading to a slide into chaotic Cynefin space, setting the team all the way back to forming.

The nonlinearity does not need to be surprising. Once we see it in our models, the conversation elevates from just looking at possible solutions to evaluating their effectiveness. Framings give us a way to see solvable problems. Models provide us with insight on how to realistically solve them.

Bumpers, Boosts, and Tilts

A discussion late last year about different ways to influence organizations led to this framing. The pinball machine metaphor was purely accidental – I don’t actually know that much about them, aside from that one time when we went to a pinball machine museum (it was glorious fun). The basic setup is this: we roughly bucket different ways to influence as bumpers, boosts, and tilts.

Bumpers are hard boundaries, bright lines that are not to be crossed. Most well-functioning teams have them. From security reviews to go/no-go meetings to any sort of policy-enforcing process, these are mostly designed to keep the game within bounds. They tend to feel like stop energy – and for a good reason. They usually encode hard-earned lessons of pain and suffering: our past selves experienced them so we don’t have to. Bumpers are usually easy to see, and even if hidden, they make themselves known with vigor whenever we hit them. By their nature, bumpers tend to be reactive. Though they will help you avoid undesired outcomes, they aren’t of much use in moving toward something – that is the function of boosts.

Boosts propel. They are directional in nature, accelerating organizations toward desired outcomes. Money commonly figures into the composition of a boost, though just as often, boosts can vibe with a sense of purpose. An ambitious, resonant mission can motivate a team; so can an exciting new opportunity or a fresh start. Boosts require an investment of energy, and sustaining a boost can be challenging. The sparkle of big visions wears off, new opportunities grow old, and bonuses get spent. Because of that, boosts are highly visible when new energy is invested into them, and eventually fade as this energy dissipates. For example, many organizational change initiatives have this quality.

Finally, tilts change how the playing field is leveled. They are often subtle in their effects, relying on some other constant force to do the work. Objects on a slightly slanted floor will tend to slide toward one side of the room, gently but inexorably driven by gravity. In teams, tilts are nearly invisible by themselves. We can only see their outcomes. Some tilts are temporary and jarring, like the inevitable turn to dramatic events in the news during a meeting. Some tilts are seemingly permanent, like the depressing slide toward short-term wins in lieu of long-term strategy… or coordination headwinds (Woo hoo! Hat trick!! Three mentions of Alex’s excellent deck in three consecutive stories!) Despite their elusive nature, tilts are the only kind of influence capable of a true 10x change. A well-placed gentle tilt can change paradigms. My favorite example of such a tilt is the early insistence of the folks who designed the protocols and formats that undergird the Internet that these be exchanged as RFCs, resulting in the openness of most of the important foundational bits of the complex, beautiful mess that we all love and use. However, most often, tilts are unintentional. They just don’t look interesting or useful to mess with.

Any mature organization will have a dazzling cocktail of all three of these. If you are curious about this framing, consider: how many boosts in your team are aimed at the bumpers? How many boosts and bumpers keep piling on because nobody has looked at the structure of tilts? How many efforts to 10x something fail because they were designed as boosts? Or worse yet, bumpers?

Silly math

In Jank in Teams, I employed a method of sharing mental models that I call “silly math.” Especially in surroundings that include peeps who love (or at least don’t hate) math, these can serve as a simple and effective way to communicate insights.

For me, silly math started with silly graphs. If you ever worked with me, you would have found me at least once trying to draw one to get a point across. Here I am at BlinkOn 6 (2016! – wow, that’s a million years ago) in Munich talking about the Chrome Web Platform team’s predictability efforts and using a silly graph as an illustration. There are actually a couple of them in this talk, all drawn with love and good humor by yours truly. As an aside, the one in Munich was my favorite BlinkOn… Or wait, maybe right after the one in Tokyo. Who am I kidding, I loved them all.

Silly graphs are great, because they help convey a sometimes tricky relationship between variables with two axes and a squiggle. Just make sure not to get stuck on precise units or actual values. The point here is to capture the dynamic. Most commonly, time is the horizontal axis, but it doesn’t need to be. Sometimes, we can even glean additional ideas from a silly graph by considering things like the area under the curve, or first and second derivatives. Silly graphs can help steer conversations and help uncover assumptions. For example, if I draw a curve that has a bump in the middle to describe some relationship between two parameters – is that a normal distribution that I am implying? And if the curve bends, where do I believe the nonlinearity comes from?

Silly math is a bit more recent, but it’s something I enjoy just as much. Turns out, an equation can sometimes convey an otherwise tricky dynamic. Addition and subtraction are the simplest: our prototypical “sum of the parts.” Multiplication and division introduce nonlinear relationships and make things more interesting. The one that I find especially fascinating is division by zero. If I describe growth as effort divided by friction, what happens when friction evaporates? Another one that comes in handy is multiplication of probabilities. It is perfectly logical and still kind of spooky to see a product of very high probabilities produce a lower value. Alex Komoroske used this very effectively to illustrate his point in the slime mold deck (Yes! Two mentions of Alex’s deck in two consecutive pieces! Level up!) And of course, how can we forget exponential equations to draw attention to compounding loops?! Basic trigonometry is another good vehicle for sharing mental models. If we can sketch out a triangle, we can use the sine, cosine, or tangent to describe things that undulate or perhaps rise out of sight asymptotically. In the series, I did this a couple of times when talking about prediction errors and the expectation gradient.
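
Here’s some of this silly math in runnable form – a quick Python sketch with numbers made up on the spot, since only the dynamics matter:

```python
# Multiplying high probabilities: ten steps, each 95% likely to succeed...
p_all = 0.95 ** 10
print(f"chance that all ten steps succeed: {p_all:.2f}")  # ~0.60 -- spooky!

# Growth as effort divided by friction: watch what happens as friction shrinks.
effort = 10.0
for friction in (2.0, 1.0, 0.1, 0.01):
    print(f"friction {friction:5}: growth = {effort / friction:7.1f}")

# And an exponential: a modest 5% gain per cycle quietly compounds.
value = 1.0
for _ in range(50):
    value *= 1.05
print(f"after 50 compounding cycles: {value:.1f}x")  # ~11.5x
```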

Whatever math function you choose, make sure that your audience is familiar with it. Don’t get too hung up on details. It is okay if the math is unkempt and even wrong. The whole point of all this is to rely on the existing shared mental model space of math as a bridge, conveying in a simple formula something that might otherwise take a bunch of words.

How to make a breakthrough

The title is a bit tongue-in-cheek, because I am not actually providing a recipe. It is more of an inkling, a dinner-napkin doodle. But there’s something interesting here, still half-submerged, so I am writing it down. Perhaps future me – or you! – will help make it the next step forward.

Ever since my parents bought me an MK 54, I knew that programming was my calling. I dove into the world of computers headfirst. It was only years later that I had my formal introduction to the science of it all. One of the bigger moments was the discovery of the big O notation. I still remember how the figurative sky opened up and the angels started singing: so that’s how I talk about that thing that I kept bumping into all this time! The clarity of the framing was profound. Fast programs run in sublinear time. Slow programs run in superlinear time. If I designed an algorithm that turns an exponential-time function into constant time, I found a solution to a massive performance problem – even if I didn’t realize it existed in the first place. I’ve made a breakthrough. Suddenly, my code runs dramatically faster, consuming less power. Throughout my software engineering career, I’ve been learning to spot places in code where superlinearity rules and to exorcize it. And curiously, most of them will hide a loop that compounds computational bandwidth in one way or another.
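
Here’s that pattern in miniature, in Python. The naive Fibonacci below hides exactly such a compounding loop – every call spawns two more – and a small cache takes the loop out of the equation:

```python
from functools import lru_cache

# Naive recursion compounds: every call spawns two more, roughly O(2^n) work.
def fib_slow(n: int) -> int:
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# A cache breaks the compounding loop: each subproblem is solved once, O(n).
@lru_cache(maxsize=None)
def fib_fast(n: int) -> int:
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(90))   # returns instantly; fib_slow(90) would outlive us all
```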

I wonder if this framing can be useful outside of computer science. Considered very broadly, the big O notation highlights the idea that behind every phenomenon we view as a “problem” is a superlinear growth of undesired effects. If we understand the nature of that phenomenon, we can spot the compounding loop that leads to the superlinearity. A “breakthrough” then is a change that somehow takes the compounding loop out of the equation.

For example, let’s reflect briefly on Alex Komoroske’s excellent articulation of coordination headwinds. In that deck, he provides a crystal clear view of the superlinear growth of coordination effort that happens in any organization that aims to remain fluid and adaptable in the face of a challenging environment. He also sketches out the factors of the compounding loop underneath – and the undesired effects it generates. Applied to this context, a breakthrough might be the introduction of a novel way to organize, in which an increase in uncertainty, team size, or culture of self-empowerment results in meager, sublinear increases in coordination effort. Barring such an invention, we’re stuck with rate-limiting: managing nonlinearity by constraining the parameters that fuel the compounding loop of coordination headwinds.

Though we can remain sad about not yet having invented a cure for coordination headwinds, we can also sense a distinct progression. With Alex’s help, we moved from simply experiencing a problem to seeing the compounding loop that’s causing it. We now know where to look for a breakthrough – and how to best manage until we find it. Just like software engineers do in code, we can move from “omg, why is this so slow” to “here’s the spot where the nonlinear growth manifests.”

It is my guess that breakthroughs are mostly about finding that self-consistent, resonant framing that captures the nature of a phenomenon in terms of a compounding loop. Once we are able to point at it and describe it, we can begin doing something about it. So whether you’re struggling with an engineering challenge or an organizational one, try to see if you can express its nature in terms of big O notation. If it keeps coming up linear or sublinear, you probably don’t have the framing right. Linear phenomena tend to be boring and predictable. But once you zero in on a framing that lights up that superlinear growth, it might be worth spending some time sketching out the underlying compounding loop, causality and factors and all. When you have them, you might be close to making a breakthrough. 

Jankless

What would it be like to work in a team that experiences no jank? Do you have a reference point, perhaps a memory of the time when your organization’s flow felt like a flawless jazz session? Or maybe a picture of some brighter future? If you do, I’d like to tune into the yearning for that moment and bring this series to its close. Let’s imagine ourselves jankless.

Not to be flip about it, but a sure way to eliminate jank is to remove intention. When we are perfectly content with the environment around us, the “what is” and “what should be” are the same. Our expectation gradient is zero. Frankly, this is never true for us humans: our aim is always a bit off that perfect Zen spot. We always want something, and even wanting to be in the Zen spot is an intention. So there’s that. 

However, there’s something in that idyllic absence of intention that can serve as our guidelight. What is our level of attachment to our intentions? If our organizational objectives feel existential, we might be subject to the trove of aversions and cravings we’ve accumulated in the models of our environment. The compounding loops we’ve talked about earlier are always at work, and it’s on us to make them object. Let’s go through each step of the OODA loop and see what tools and practices might help us do that. The common tactic we’ll use is similar to a technique in sailing, in which the crew leans out of the boat to decrease its roll. With compounding loops always present, we want to keep carefully counterbalancing them.

When we Observe the environment, the fit/filter cycle is the one to keep an eye on. As we examine our organization, here are some questions we can ask ourselves:

  • What are the team’s processes for understanding the environment? If they are centralized and highly operationalized, they are likely subject to filtering.
  • Do we have a way to measure our prediction error? How well are we equipped to look at the mistakes we made? How well are our processes guiding us to incorporate them into our model of the environment?
  • Are there norms around making sure that multiple perspectives are considered? Are divergent perspectives cherished?
  • How fixed are the metrics? How well-understood are they? Well-settled metrics are a good way to spot the work of the fit/filter cycle. The environment is always in flux, and metrics that don’t evolve tend to become meaningless over time.
  • Does the organization deal with the reality of blind spots? Does the team deny their existence? Are there practices to assess their state and maybe even dig into them?

As we Orient, examining our prediction error and updating our model of “what is” to reduce it, we contend with all three cycles. Here, the biggest bang for the buck is likely in focusing on the care with which we construct the model. 

To make things interesting, the collective model of the environment is rarely legible in an organization. If I went looking for it, I would not find a folder labeled: “The model of our environment. Update on every OODA cycle.” Instead, organizations tend to model the environment through the totality of their structure and people within it. Norms that people have, incentives, principles, and regulations that they adhere to, connections they keep, practices they maintain – all are part of the model. To dance with the compounding loops, we want to bring the notion of the shared mental model to the forefront:

  • Do the team and its leadership grasp the idea of a shared mental model? Do they recognize that Conway’s law is largely about shared mental models?
  • Are there practices and norms to maintain and expand the shared mental model? How do team leads invest into ensuring that everyone on the team roughly sees the same picture of the environment?
  • Are there means to estimate the consistency of the shared mental model across the organization? Are there markers in place to signal when the consistency is low? 
  • Are there boundaries around the shared mental model, with some people having no access to it? Having boundaries isn’t necessarily a bad thing, but not knowing why these boundaries exist is a sign they were put in place by cravings/aversions.
  • Do we hold “what is” and “what should be” models separately? Do we have a way to sense the amount of wishful thinking that creeps into the “what is” model, like instances of “solutions looking for problems?”

While Deciding, we hold and update the “what should be” model, picking the best choice to steer toward it. We are once again buffeted by the full force of all three compounding loops. The prerequisite is the quality of the model we constructed at the previous step. If the quality of the model is low, decisions become much harder to make:

  • When we make decisions, do we typically have a rich and diverse set of options to mull over? If not, this might be a sign that our Orient step needs a bit of extra TLC.

Provided that we did a decent job of that while Orienting, we can zoom in on discerning intention. A common marker of adulthood is “knowing what you want and what you don’t want.” A social commentary on how rare this is aside, it is quite easy to lose sight of that in a team. We rarely hold one unified collective intention; there are usually many, often in tension with each other. The strength of this tension is controlled by the mass of cravings and aversions we accumulate:

  • Does your organization have a way to explore its own cravings and aversions? Are there means by which you and your teammates can have a candid conversation about the organization’s unspeakables and comfort objects?
  • Do we have a clear sense of our stated intention? Does it span longer than the rate of most of our OODA loops? Do we have a way to communicate it? Do we have a way of knowing how well it is shared across the team?
  • Does the team have a sense of how far its stated intentions diverge from the ones it manifests? Do we deny the divergence exists? Do we accept its existence? Do we have practices to spot instances of it?
  • Do we have a productive way to talk about the effects of model flattening? Do we shun the subject altogether? Do we push it into the corner of “well-being,” alongside other squishy topics like “team culture?”

Finally, when we Act, we mobilize and expend the energy we estimated and commit to the option we picked in the Deciding step. This is where the rubber meets the road and the next round of prediction errors is discovered. The outcomes here are born out of the previous steps, so the questions that arise here mostly echo the earlier ones:

  • Do we have a way to assess how our effort estimates and the actual effort compare over time? Are our operational practices aware that these gaps are effects of model flattening? Does our assessment of prediction error account for them?
  • How well do we do in sustaining our commitment to the decisions we made? Does the organization understand that wavering in commitment could indicate high divergence of stated and manifested intention? Are there processes in place to feed this insight back into the future Decide steps?

These are just some of the questions that came to my mind when considering jank in teams. Answering them can easily fill a book. And now I am curious. What questions did I miss? What possible answers popped into your head? What other compounding loops did you spot? If you are struggling with jank in your organization, did this series open up a new space of opportunities for you? It certainly did for me. When I set out to write a series of articles on the OODA loop, I had a somewhat different direction in mind – but reasoning through it with you helped me take it to this much more satisfying point. Thank you.

The compounding effects of OODA

Because we are all human (future AI, if you’re reading this, you can chuckle at this naive assumption), these sources of macro-jank will be present in any human organization. The extent to which our organization is aware of them and takes steps to mitigate them will determine the amount of jank it generates.

Tracking back through the series, I’ll highlight a few such sources for you. I’ll name them the aversion cycle, the craving cycle, and the fit/filter cycle.

The aversion cycle is the shortest and most brutal of the three. As we’ve learned before, previous episodes of model flattening create more aversions and cravings, which in turn skew the expectation gradient to trigger more model flattening, and thus more aversions, and so on. While not directly contributing to jank, it can make quick work of our models, turning them into landscapes of extremes – and that’s a reliable recipe for macro-jank. Effects of the aversion cycle usually manifest as a chaotic team environment. Everyone is either fighting fires or in a firefight. There are secret unspeakable topics and bizarre comfort blankets, low tolerance for disagreement, and high-contrast, slogan-like communication (“This one is a do-or-die for us!”), sprinkled with a general sense of sleepwalking.

Its spiritual twin, the craving cycle, is a bit longer, with model flattening generating cravings that in turn result in a higher prediction error rate, speeding up the perceived clock speed and generating jank. Jank hikes up the expectation gradient, which in turn triggers model flattening, reinforcing cravings or creating new aversions. The craving cycle tends to have an entrenching effect: organizations sticking to their old practices despite them repeatedly proving ineffective, with a prevailing sense of resistance to change and an inescapable whiff of obsolescence.

The fit/filter cycle is the most moderate of the three. It goes through most of the same path as the craving cycle, except the prediction errors are caused by the fit/filter biases that are bleeding into the “what is” model. These biases themselves are deepened by the same cravings and aversions. Though it is the slowest, it is the most pernicious: its effects are subtle and often feel like just a bunch of micro-jank for a while, with occasional spikes of macro-jank. The perception of everything moving too fast, never having enough time to “step back and look at the big picture,” reports of metrics blind spots, having a suspicion that something is off yet being too mired in the minutiae to do something about it – these are all common symptoms of this cycle. However, the largest contribution of the fit/filter cycle is in serving as the onramp for the others. Since all three cycles coexist, they feed off each other, taking turns grabbing the attention of the organization’s leadership.

I hope that after reading this, you can reflect on the story of your team and discern the presence of these cycles. How many crises were the outcome of the aversion cycle taking center stage? How many change efforts were stymied by the craving cycle? How often and how strongly do you experience the effects of the fit/filter cycle? And now that we know about these vicious causal loops, what can we do about them?

Oodles of OODA loops

Our discovery of the inner OODA loop was cool, but I bet you’re thinking… just one other loop? That seems fishy. There’s got to be more, right? Throughout the story, I’ve been blithely jumping back and forth between the individual and collective OODA loops, and that was another hint. An organization runs an OODA loop, and so does each person in it. Individually, we also have more than one thing going – and all these add up. Usually, at this point in a typical OODA loop learning journey, we would point at this abundance of loops and start stacking them up neatly or nesting them into a concentric-looking diagram. However, my experience is that OODA loops are a bit more organic. They tangle and jive. Some just a little, others quite a bit. Some cycle unaware of each other, others arrange into intricate dependency barnacles. Some are short and savage. Some are long and gentle. All of it is happening at once, in one massive writhing mess.

In this jungle, there are multitudes of models, both “what is” and “what should be,” each with their own intention, within individuals, across individuals, thoroughly mixed in any given group of them. Though trying to reason about OODA loops as if they were perfectly arranged in a structure is tempting, it is rarely effective. Even just trying to enumerate them in an actual org feels like running into the coastline paradox: we eventually drop our pencils in awe that anything is working at all. OODA loops aren’t meant to be tabulated, and they will evade our attempts to do so. I can catch sight of one or two, but eventually, I have to treat the rest as “the environment” – and that can get frustrating, especially when trying to apply OODA loop insights in larger teams and organizations.

The good news is that we might have a secret decoder ring for this puzzle. Over the course of this series, we spotted a bunch of moving parts and their causal relationships within an OODA loop. And despite the fact that the exact configuration of the loops in our team will continue to flummox us, we can reason about them as a whole. Think of a murmuration of birds: it can look incredibly complex (and stunningly beautiful), but is actually rooted in a few simple rules. And I have an inkling that we found a few of these rules during our adventure.

First, we have to give up on micro-jank. When looking at OODA loops in aggregate, we simply can’t sense it. At that level, micro-jank is just noise – something that we only notice once it accrues past a certain point. However, if we are careful, we can spot the sources of macro-jank. They usually look like causal relationships forming a vicious cycle – one thing causes more of another thing, which in turn causes more of the first thing. These are also known as compounding loops, and if you are living in contemporary times, you are well familiar with their effects. The same way COVID-19 is doing the “smash and grab” with our holiday plans, compounding loops tend to sneak up on people: a thing that looks like nothing at first rapidly balloons into a big deal. If we can discern the underlying causal loop behind these dramatic effects, we can do something about them before they smack us in the face.
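
To watch a compounding loop sneak up in miniature, here’s a toy sketch in Python. The coefficients are arbitrary; the flat-then-ballooning shape of the growth is the point:

```python
# A toy compounding loop: more of A causes more of B, which causes more of A.
a = b = 1.0
for step in range(1, 31):
    a, b = a + 0.2 * b, b + 0.2 * a  # simultaneous update of both quantities
    if step in (5, 10, 20, 30):
        print(f"step {step:2}: a = {a:7.1f}")
# Prints roughly 2.5, 6.2, 38.3, 237.4 -- "nothing at first" becomes a big deal.
```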