Performance Management Blindspot

Reflecting on Google’s recent perf season, I came up with a framing that I decided to capture here. If you’re grappling with the last cycle’s feedback, I hope it adds some clarity to your next steps. And if by any chance you’re developing a performance management system for your organization, it might yield some insights into your design.

As I was reflecting on the problem space, two distinct forces caught my eye. One of them is the force of the rubric, and the other – the force of the value. The performance management processes that I am most familiar with all share the same trait: there’s a rubric by which the employee is evaluated. The rubric defines some properties of individual performance, usually broken into categories for easier evaluation. The employee’s actions and outcomes are compared against the rubric to determine their level of performance. In effect, the rubric defines the metric of individual performance, and is usually connected to compensation. The force of the rubric emerges as employees try to conform to the rubric to improve their levels of compensation.

On the other hand, the force of the value is a bit more challenging to capture as a metric, but easier to feel intuitively. Does an employee actively provide value to the organization? Or are they just sticking around, employing a minimum share of their capacity? Though “actively providing value” can be vague, it is fairly easy to discern the force of the value emerging in pretty much any team. For example, there are individuals whose mere presence on any team seems to improve its collective velocity. Teams clump around those people and become, well, teams. Some individuals might not serve as team glue, but rather generate insights and framings that entirely change how a problem is viewed, making it more tractable. Some have a gift for holding a clear-eyed picture of a long-term strategy when everyone else on the team is lost in the minutiae. Value comes in many shapes, but while working with others, we can almost always intuitively tell when it’s there. When organizations speak of “attracting or retaining talent,” they are manifesting the force of the value.

It is a performance management system designer’s dream that the two forces are perfectly aligned. Unfortunately, this is just a dream. To explore what happens in the cracks between the two, let’s draw a two-by-two. On the horizontal axis, we have the force of the value, and we’ll loosely designate one extreme as “high value” and the other as “low value.” On the vertical, we’ll place the force of the rubric, with “fits the rubric” and “doesn’t fit the rubric” as opposites. With the four quadrants in place, let’s explore them one by one.

The top-right quadrant is the easiest: the organization’s recognition of value is spot on. Most bits of value that the individual provides fit into the rubric. We are living the dream. Moving counterclockwise, we make a stop at the “you need to shape up” quadrant. Here, the employee is not providing value and the rubric accurately reflects that. Again, this is working as intended. Here, a bad rating or tough feedback means that the employee needs to decide: “will I change what I am doing to provide more value to my organization?” If the answer is “yes,” the rubric handily provides the checklist of things to improve.

Continuing our tour of the space, things get funky in the next, bottom-left quadrant. The individual doesn’t fit the rubric or provide value to the organization. For example, suppose that I am working in an engineering organization, yet spend most of my time growing tomatoes in my garden. Tomatoes are glorious, but unless there’s some business connection (perhaps this is a tomato-gardening app team?), the value/rubric fit is low. At this point, the employee likely needs to consider a different kind of change. Do they start conforming to the rubric? Or are they perhaps being called toward another career?

The last, bottom-right quadrant is the most interesting one. The value is clearly high, but the employee’s work does not conform to the rubric. This is the performance management blindspot. The organization can’t see the value provided by the individual it employs. It might sense it in other ways – like the team falling apart after this individual leaves it – but it can’t map it to the rubric. For the individual, this is the least fun place to be. Here’s how it usually feels: “I can clearly see that I am doing great work, and everybody around me sees that I am doing great work, but the perf signals I get are kind of meh, with feedback that is at best incoherent or, worse, harmful to the actual work I am doing.” Peeps stuck in this quadrant find themselves torn. The question they ask themselves is: “how do I change what I do to fit into the rubric while not clobbering the value I already provide?” Some are stuck in the “retention limbo,” where the organization is trying to keep them despite seemingly not knowing what to do with them. Some are asked to conform to the rubric or else. Some are deemed “tomato gardeners” and quietly managed out, to the team’s chagrin. One of my friends suffered this fate recently, despite being probably the only person on the team who deeply understood the long-term arc of its strategy. It’s not a great outcome for the team, either. Invisible value is usually only grasped long after it’s lost – and by then, it’s too late.

If you have a suspicion that you’re in that quadrant, it might be worth having a conversation with your manager and checking if they see the same thing. If they do, they might be able to help you navigate the limbo – the performance management blindspot is a real thing, and most managers I know are aware of it. If they don’t, it might be time to look for another place. But most of all, hang in there. It sucks to be stuck in this spot. You are amazing and your gifts are precious – even if this particular organization can’t see it.

Cravings and Aversions

Though the immediate effects of model flattening are already pretty dramatic, its largest contributions to jank are longer-term. While model flattening is a temporary phenomenon, our experiences of it are not. We remember them. Put in terms of the little framework we’ve been developing, the model of our environment is updated with these weird wibble-wobble outcomes. They are at times awesomely awesome and at times awesomely horrifying, and the bluntness of model flattening leaves deep marks.

Each of these remembered experiences skews our sense of the expectation gradients. When we encounter a similar situation in the future, these deep marks influence how we evaluate it. I’ve been thinking about how to express this process visually, and this morning, the framing finally clicked into place. Yes, it’s terrible math magic time!

Imagine that there’s some baseline expectation gradient evaluation that we would perform in a situation we’re not familiar with. Now, we can visualize a relationship between this baseline and our actual evaluation. If this is indeed an entirely new situation, the relationship line will be a simple diagonal in a graph with the baseline and actual gradients as its axes.

The long-term effect of model flattening will manifest itself as the diagonal bending upward or downward. After a traumatic experience, we will tend to overestimate the expectation gradient in similar situations. Our model will inform us that we can’t actually cope with that situation. This will feel like an aversion: a pull away from the experience. I was once introduced to a team lead. Before the meeting, their colleague said: “Oh, and please don’t mention [seemingly innocuous project], it will sour the mood.” Back then, I just went “okay, sure” – but it stuck with me. What is this crater of aversion, so deep that it necessitated a special warning?

Bending in the other direction, there are cravings. If model flattening resulted in a miraculous breakthrough, our evaluation of the expectation gradient will skew to underestimate it in similar situations. We’ll be pushed toward these kinds of experiences, tending to seek them out, because our model will suggest that these situations are a piece of cake. And yes, a piece of cake is an example of a craving. A familiar process or tool that saved the team’s collective butt from some figurative tiger long ago is another example of a craving.

To capture this bending in one variable, I am going to reach for an exponent. Let’s call it the gradient skew. The clean diagonal line is then a skew exponent equal to one. A skew larger than one expresses an aversion, and a skew between zero and one expresses a craving.
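For those who prefer code to curves, here’s a minimal sketch of the gradient skew. The function name and the sample numbers are mine, and I’m assuming gradient values of one or more, so that a larger exponent bends the line upward as described:

```typescript
// A toy model of the gradient skew. Assumes gradients >= 1 so that
// the exponent bends the diagonal in the direction described above.
function evaluateGradient(baseline: number, skew: number): number {
  // skew === 1: the clean diagonal -- evaluation matches baseline.
  // skew > 1: aversion -- we overestimate the gradient.
  // 0 < skew < 1: craving -- we underestimate it.
  return Math.pow(baseline, skew);
}

console.log(evaluateGradient(2, 1));   // 2     -- unfamiliar situation, no bend
console.log(evaluateGradient(2, 1.6)); // ~3.03 -- aversion: feels steeper than it is
console.log(evaluateGradient(2, 0.5)); // ~1.41 -- craving: feels like a piece of cake
```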

Now, it is fairly easy to see how cravings and aversions mess with our required energy output estimates. An aversion will lead us to overestimate the output, triggering model flattening early and forming a vicious cycle: more model flattening will lead to more deep marks, compounding into more aversions. A craving will lead us to grossly underestimate the effort, resulting in prediction errors that accelerate the model clock and trigger macro jank. Since macro jank is itself an unpleasant experience, this feeds back into model flattening and more aversion-forming.

Over a long-enough period of time, the sheer number of cravings and aversions, collected within the model, is staggering. The model stops being the model of the environment per se, and instead becomes the map of cravings and aversions. Like relativistic gravity, this map will tug and pull a team or an individual along their journey. This journey will no longer be about the original or stated intention, but rather about making it to the next gravity well of a craving, tiptoeing around aversions. Within an organization that’s been around for a while, unless we regularly reflect on our cravings and aversions, chances are we’re in the midst of that particular kind of trip.

Model flattening

Before we move on from our discovery of the inner OODA loop, I want to talk about a phenomenon that plays a significant role in our lives and in the amount of jank we produce. I struggled to capture it succinctly, and here’s my current best effort. I call this phenomenon model flattening.

If we look into the strategies that the inner OODA loop applies in its Decide step, we can loosely identify three, each of non-linearly increasing severity, neatly following that expectation gradient tangent curve.

At the lower end of the curve, the inner OODA loop yields all of the resources to whatever else might need them. 

As the gradient approaches the kink in the curve, the belt begins to tighten. Sensing the approach of the asymptote, the strategy shifts to mobilization. Cutting down anything that might consume resources, our body acts as a ruthless bureaucrat, using a set of powerful tools to make that happen. When this strategy is employed, it almost feels like we are taken over by something else. We know this sensation as the amygdala hijack. “Yeah, buddy. I saw you drive, and that was cool, but it’s time for the pros to intervene. Moooove!”

Further beyond, the body recognizes that the asymptote territory was reached and shifts into the “freeze” mode, flopping onto the ground and basically waiting for danger to pass. There’s no way to create infinite output to overcome impossible challenges, so we cleverly evolved a shutdown function.
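If it helps to see the three strategies side by side, here’s a cartoonishly simplified sketch. The names and thresholds are made up, and the real machinery is of course nothing like a two-branch conditional:

```typescript
type Strategy = "yield" | "mobilize" | "freeze";

// A cartoon of the inner OODA loop's Decide step. KINK marks where the
// tangent curve bends upward; ASYMPTOTE marks where the required output
// becomes effectively infinite. Both values are invented.
function decideStrategy(expectationGradient: number): Strategy {
  const KINK = 1.0;
  const ASYMPTOTE = 10.0;
  if (expectationGradient < KINK) return "yield";         // release resources
  if (expectationGradient < ASYMPTOTE) return "mobilize"; // amygdala hijack
  return "freeze";                                        // the shutdown function
}
```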

If you know me, you were probably expecting me to inevitably stir Adult Development Theory (ADT) concepts into this stew. You were right.

Very briefly, ADT postulates that through our lives, we all traverse a stair-step-like progression of stages. With each stage including and transcending the previous one, we become capable of seeing more, and of creating and holding more subtle models of the environment. In the context of this narrative, fallback is the short-term reversal of this process, where we rapidly lose access to the full complexity of our models.

Fallback might be a great way to express how our inner OODA loop achieves resource mobilization. Like that thermal control system for microprocessors, it has first dibs on throttling resources. However, while the microprocessor is just getting its clock speed reduced, the human system does something a little more interesting: it flattens our model of the environment.

With each progressive strategy, the bureaucrat in charge closes more doors in the metaphorical house of our mind, smashing the delicate filigree of our models into a flatland. As we experience it, this flattening feels like a simplification of our environment. Our surroundings become more cartoon-like, having fewer details and moving parts. Only the things that the inner OODA loop judges to bear on our immediate survival are left within the model. Those connections are strengthened and drawn with thicker lines, and the others are ignored. As a result, the number of imaginable alternatives shrinks. Our OODA loops collapse into OO or DA. You already know what happens next.

The effect on jank is somewhat different from the one we’ve seen in overheating phones. Sometimes, this flattening will result in the Action that we need. Sometimes, it will do the opposite. The flattening can save my butt in a tiger encounter, and it can also ruin a delicate conversation. Model flattening is a blunt instrument, and in fluid, ambiguous environments, it is probably the most significant source of prediction errors, and subsequently – jank. Unless your job involves evading actual tigers, model flattening is likely working against you.

The OODA inside

Because of the way we humans are wired, the expectation gradient is not a neutral measurement. For some reason, when we perceive a tiger eyeing us voraciously, our bodies immediately start pumping out adrenaline and otherwise prepare us to scale that gradient wall. In many ways, we literally transform into a different being. A thoughtful and kind individual is replaced by an instrument of survival driven by animal-like instincts.

But… who is doing the replacement? (Are you ready for the big reveal?) It would appear that we have another OODA loop, operating inside of us. Our body is running its own game, regardless of ours and with or without our awareness of it. Its intention is focused squarely on meeting demands of the expectation gradient.

This inner OODA loop is fairly primitive. It knows nothing about our aspirations. It cares very little about the intentions we form and write down in bold letters in decks and strategy 5-pagers. All it does is watch the gradient, trying to discern the gap between our current energy output and what the gradient says it should be, and to close that gap as expediently as possible. Somewhere a long time ago, the evolutionary processes took us toward the setup where our unconscious mind is constantly and repeatedly asking this question: “How does the expectation gradient slope look right now, and how much of my total energy do I need to mobilize to scale it?”

For what it’s worth, such a two-loop setup is not uncommon. For example — you probably guessed where I am going — rendering graphics is a fairly computationally expensive process, and as such, makes processors heat up. To avoid overheating, most modern microprocessors have a tiny built-in system called “thermal control”… and it cycles its own OODA loop!

The thermal control loop is ignorant of rendering. It simply checks the processor’s temperature, and if the temp is above a certain value, takes an action of slowing down the processor’s clock. As a result, the rendering pipeline suddenly moves a lot slower and can no longer fit into the frame budget, producing jank.
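Sketched in code, the loop might look something like this. The thresholds and the throttling policy are invented, and real thermal control is considerably more sophisticated, but note what’s absent: any notion of rendering, frames, or user experience.

```typescript
// A toy thermal control loop. All numbers are made up.
const MAX_TEMP_C = 85;
const BASE_CLOCK_MHZ = 2400;

let clockMhz = BASE_CLOCK_MHZ;

function thermalControlCycle(currentTempC: number): void {
  if (currentTempC > MAX_TEMP_C) {
    // Decide-Act: throttle. The rendering loop upstairs suddenly
    // finds its frame budget blown -- and produces jank.
    clockMhz = Math.max(clockMhz * 0.75, 600);
  } else {
    // Recover toward the base clock once things cool down.
    clockMhz = Math.min(clockMhz * 1.1, BASE_CLOCK_MHZ);
  }
}
```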

It seems like a good thing, but more often than not, the result is deeply unsatisfying. The two loops are playing two different games and step on each other’s toes, forming the familiar sawtooth pattern of jank. In consumer hands, such a device seems downright menacingly janky. The brief periods of responsiveness feel like a taunt, as if the device is actually messing with us. Back in the Chrome team, we spent a bunch of time testing the performance of mobile phones, and many of those phones suffered this malady. As one of my colleagues quipped: “This is an excellent phone … as long as it’s sold with an ice pack.”

Similarly, our inner OODA loop is doing its thing, and the model of its environment is limited to the expectation gradient it periodically checks. Given that the expectation gradient is just a guess and often wrong, it’s no wonder that the inner, unconscious OODA loop ends up fighting with the conscious OODA loop we’re running, producing remarkable levels of macro jank.

From the perspective of the conscious OODA loop, this feels like a rug being periodically pulled from under us. I wanted to lose a few pounds… So what am I doing eating a Snickers bar in the pantry? I decided to work heads-down on a proposal today … So why am I watching random YouTube videos? We wanted our team to innovate daringly…  So what are we doing arguing about the names of the fields in our data structures? Ooh, a new Matrix movie preview… Stop it!

We might believe that we understand our intentions. We might even believe that we have a clear-eyed view of our “what should be.” Unfortunately, our simple-minded yet highly effective inner OODA loop, honed by eons of evolution, also has intentions. And these intentions, whether we want them or not, are woven deeply into the story of our actual “what should be.”

The expectation gradient

The conflation of “what is” and “what should be” is not the only way in which our intentions impact our prediction error rate. Another source is the intentions that we’re unaware of. To better understand what happens, we are going on another side adventure. And yes, we might even get to cast trigonometry spells again. But first, let’s talk about expectation gradients.

If we view the prediction error rate as a measure of the accuracy of our predictions after the fact, the expectation gradient is our forecasting metric. An easy way to grok it is to visualize ourselves standing on a trail and looking ahead, trying to guess the gradient of the incline. Is there a hill up ahead, or is it nice and flat? Or perhaps a wall that we can’t scale? The gradient of the path ahead foretells the effort we’ll need to put into moving forward.

In a similar vein, the expectation gradient reflects our sense of the difference between our models of “what is” and “what should be.” It is our estimate of the steering effort: how much energy we will need to invest to turn “what is” into “what should be.” A gentle slope of the gradient reflects low estimated effort, and as the estimate grows, the slope becomes steeper. If I find myself in a forest, facing a hungry tiger, I am experiencing a very steep gradient. Sitting in a comfortable chair while sipping eggnog (it is that time of the season!) contentedly and writing, however — that’s the definition of a gentle gradient slope for me.

With our trig hat on, we can picture the expectation gradient as the angle of a triangle. The adjacent side is the distance between “what is” and “what should be” (or a fraction thereof), and the opposite side is the measure of the required energy that we need to muster to steer the environment from “what is” to “what should be.”

The opposite-to-adjacent relationship is the tangent of the angle. And when we deal with tangents, we face impossibilities: there is an asymptote built into that little arrangement. The tangent curve starts out slow, but then zooms into the sky, demanding a required output that we can never quite produce.
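In symbols (my notation, nothing canonical): if d is the distance between “what is” and “what should be”, and θ is the expectation gradient angle, the required energy is d · tan(θ). A tiny sketch:

```typescript
// The required-energy estimate as a tangent. As angleRadians approaches
// Math.PI / 2, the estimate blows up: the asymptote, the territory of
// the impossible.
function requiredEnergy(distance: number, angleRadians: number): number {
  return distance * Math.tan(angleRadians);
}

console.log(requiredEnergy(1, Math.PI / 6));    // ~0.58 -- gentle slope, eggnog territory
console.log(requiredEnergy(1, Math.PI / 2.01)); // ~128  -- hello, tiger
```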

I quite like this framing, because it feels pretty intuitive. The curve practically begs to be broken down into three distinct sections: the section before the kink, where we’re reasonably certain that we can achieve our goal; the middle section, where we are uncertain of the outcome; and the asymptote – the section in which we’re pretty certain that our goal is unachievable.

Looking at “dancing with delusion” from the previous piece through the lens of the expectation gradient, it’s all about convincing the team that the road ahead mostly stays out of the third section, stretching the “uncertain” part a bit longer.

OODA and Intention

The ability to make predictions is an astounding quality of retained-mode systems. Unlike the imaginary immediate-mode beings, we can start seeing what might be. And we humans are blessed/cursed with the ability to go one step beyond that: we can imagine alternatives. We create multitudes of “what might bes”. This is where the true significance of the Decide step becomes evident — we need to choose among the many things that “might be” to pick the Action that will do … what?

Turns out, we have preferences. We want some alternatives more than others. In our minds, the possible futures aren’t equal. A way to think of it is that we have a preference toward a certain state of the environment that we are imagining, as compared to our perceived current state. Instead of having just one model of the environment, we carry two: “what is” and “what should be.” The sum of our preferences manifests as intention, or our desire to move “what is” toward “what should be.”  

Every team is born and continues to exist around some objective. The objective itself may change over time, but it is the presence of this objective that holds the team together. This team, with some understanding of the environment —  “what is” — sets out on a journey of applying intention, to influence the environment toward some state, or “what should be.” From this perspective, the OODA loop is about steering — shifting the environment into some desired state.

Whoa, this is kind of a big insight, isn’t it? We’ve been walking around the OODA loop for a bit now, and — boom! — here we arrive at this moment. What is the point of the OODA loop if not saying: “Hey environment! I have some ideas about you. Let’s dance.” Within an OODA loop, intentionality is always present. There is no need for the OODA loop without it.

This whole steering business doesn’t come without downsides. Most individuals and certainly many teams don’t have the “what is” and “what should be” models clearly separated. One of my friends has a habit of pointing out the distinction, occasionally blowing people’s minds. This lack of separation commonly leads to the “what should be” model influencing the Observe and Orient steps. When Observing, we tend to filter out the things that shouldn’t be there, and when Orienting, to fit in the things that should be.

Guess what that does to our prediction error rate? That’s right. The more the “what should be” model bleeds into our “what is” model, the higher the error rate.

We live in a world where it’s hard not to notice the instances of conflation of “what is” and “what should be.” From conspiracy theories to magnetic personalities creating “reality distortion fields” around them, to filter bubbles, we are surrounded by them. We even have a term to describe some of these instances: cognitive biases. 

At the same time, this conflation can be productive. A team believing that they can ship a product may indeed ship a product, despite the overwhelming odds. Had they not mixed their “what is” and “what should be,” they would have seen right through their silly naivete at the start and folded early. Lucky for them, the environments are steerable – they can shift to “what should be” under certain conditions. The trick of any team lead is to recognize and hold the delicate balance between productive and unproductive blending of “what is” and “what should be.” Another friend of mine calls this balancing “dancing with delusion,” and I love how well it captures the nature of the process.

OO and DA

As it usually happens, we find ourselves in a conundrum. When managing jank, do we focus on the accuracy of our predictions, or do we try to stay on pace with the clock? There does not seem to be a good answer — and trust me, “both” rarely feels helpful in the middle of the OODA cycle. It’s an iron triangle of seemingly impossible constraints. With our current capacity held constant, we have to pick one of the other two: time or accuracy.

These two choices present two different configurations for the OODA loop: I’ll call them the OOda loop and the ooDA loop (note the capitalization).

Leaning toward the OOda loop, we spend most of our budget trying to perfect the model, favoring the Observe-Orient steps. We try to “consider all possibilities” and “look at the whole picture” when leaning toward this side of the spectrum. We hesitate to engage, hoping that the nature of the environment will reveal itself to us if we just keep our eyes peeled.

In the extreme, this configuration turns into the OO loop. We are subject to our “flight” instinct. We zoom out as wide as possible, trying to find ways out of the situation we’re currently in, gripped by the anxiety that comes with trying to consume the entirety of the environment. Everyone and everything is a potential threat, and every part of the environment hides nasty surprises. Every possible action looks like a wrong move. There is no escape.

This configuration produces jank that is immediately visible; it is rarely micro jank. Skipping a move is a big deal — and also a form of action. To collect more information about the environment with each iteration of the cycle, we need to act. Missing our opportunity to do so reduces the effectiveness of our Observe-Orient steps. Despite our best and widest stares at the world, we are passive participants, and our learning is limited to what can be seen. “Analysis paralysis” is a common description of a team that is veering too hard onto this side.

In the ooDA loop, we forget — or willfully ignore — that the model might not be accurate. We concentrate our energy on the Decide-Act part of the process. If someone is calling for “bias toward action,” they are probably looking to move closer to this configuration. We lose sight of our model being just a fanciful depiction of the environment. It feels like “we’ve got it,” we finally “figured it out,” and now it’s time to seize the moment. All we have to do is “create order from chaos.”  

At the very extreme is the DA loop, when we’re driven entirely by our “fight” instinct. Here, our vision tunnels, and we only see simplified caricatures of the environment. A driver who just cut us off in traffic is a “stupid moron.” A colleague who said something we don’t agree with in a key meeting — a “backstabber.”

The ooDA configuration feels good at first. Asserting that the environment is “solved,” we gain a sense of certainty and confidence. Unfortunately, our prediction error rate tends to compound, because the model is being neglected — with each new cycle, we plow farther and farther away from reality. This compounding results in exponential growth in jank. We already know how this ends. From inside the organization, DA feels like one fire after another, sudden and unexpected. When teams are caught in constant fire-fighting and struggling to get out of one mess, then falling straight into another — chances are, they are favoring the ooDA loop’s end of the spectrum.

Neither of these extremes is a pleasant place to be, so organizations rarely spend time sitting in any of them. Instead, they lurch from one end to the other. The analysis paralysis gives way to “time for decisive action,” which is followed by “need to regroup and reassess” and so on. And in the process, teams pipe out jank like the smokestacks of the industrial revolution.

Individually, we all have our go-to OODA configuration as well. It is helpful to know our biases. For example, my first instinct is to shift to OOda, often in unproductive ways. Some folks I know prefer the more Leeroy Jenkins style of ooDA. Recognizing how we might react in various situations helps us collaborate and reduces the collective lurching from one extreme to another.

Prediction errors and jank

It seems that the retained mode is our way to compensate for our limited capacity to receive and process information about the environment. The implicit hypothesis behind retained-mode setups is that we can make predictions based on the model we’ve constructed so far. As we Decide-Act, most of these predictions will pan out, but some will generate prediction errors: evidence of incongruence between the model and the environment. We can then treat these errors as fodder to chew on in the Observe-Orient steps of our OODA cycle. Our rate of prediction errors for each cycle tells us how well we’re playing this whole OODA game.
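For concreteness, here’s one toy way to operationalize that rate. This is my own simplification, not anything rigorous:

```typescript
// A toy prediction error rate for one OODA cycle: the fraction of the
// cycle's predictions that the environment contradicted. 0 means the
// model nailed it; 1 means it was wrong about everything.
function predictionErrorRate(predictionsCameTrue: boolean[]): number {
  if (predictionsCameTrue.length === 0) return 0;
  const errors = predictionsCameTrue.filter((cameTrue) => !cameTrue).length;
  return errors / predictionsCameTrue.length;
}
```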

Let’s see if we can add the concept of prediction errors to our framework. One way to visualize how well the model represents the environment is to play on the idea of detaching from reality. You know, when we daydream about things at the stove, forget to turn down the heat, and burn our green beans (not that it ever happened to me). At that moment, our framework’s timelines come askew, with the environment’s timeline proceeding in one direction, and our model’s going in a slightly different one, at an angle.

Now, let’s say that the angle is informed by the amount of prediction error generated during this OODA cycle. Allow me to channel my inner high schooler and do some arcane trigonometry: a triangle formed by the environment’s direction and the model’s direction, with the angle between them (the adjacent-hypotenuse angle) standing in for the prediction error rate (kudos to my son for helping me remember all this nonsense).

There’s something very important about this relationship. With the environment clock continuing to tick at the constant rate, higher prediction errors will introduce a time dilation effect within the model: the clock will appear to be speeding up, leaving less space for the OODA loop to cycle! And what does that likely mean for us? Yup — more jank.
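One way to cash out this metaphor in arithmetic (my own invention, so hold it loosely): treat the error rate as an angle between the two timelines, and watch what happens to the model’s clock as the angle grows.

```typescript
// Toy arithmetic for the time dilation effect. The mapping from error
// rate to angle is invented: a rate of 0 keeps the timelines parallel,
// and a rate of 1 puts them at right angles.
function modelClockSpeedup(errorRate: number): number {
  const angle = errorRate * (Math.PI / 2);
  return 1 / Math.cos(angle); // how much faster the model clock feels
}

console.log(modelClockSpeedup(0));    // 1     -- timelines aligned, no dilation
console.log(modelClockSpeedup(0.5));  // ~1.41 -- time is leaking away
console.log(modelClockSpeedup(0.99)); // ~64   -- the clock spins like a top
```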

I will now take a tiny leap of faith here and correlate prediction errors and jank. Here it is: the higher our prediction error rate, the more incidents of jank we will experience. It seems that if we have a really awesome model that generates absolutely no prediction errors, we’ll have no jank. We’ll be like that youthful Keanu at the end of The Matrix, folding one of our hands behind our back, suddenly bored with the pesky Agent Smith. Conversely, if our model generates only prediction errors, it’s going to be all jank, all the time. We’ll feel like Agent Smith in that same scene.

So it is likely that anytime we’re experiencing jank, we’re also experiencing a troubling prediction error rate. Micro jank will come from a relatively small rate, and macro jank — from the angle approaching 90 degrees (π/2 for you trig snobs), when the model clock is spinning like a top.

In either situation, especially when we feel like we have no time to react, it might be a good idea to reflect on how well we understand our environment — and most importantly, whether we’re aware that we only operate on the model of it. 

One of the most common mistakes organizations make is confusing high rates of prediction error in their models for the environment raging against them. If you’ve ever had a fight with a loved one, and were humbled by recognizing how your assumptions took you there, this must resonate. With all the jank we produce and are surrounded by daily, and the enormous pile of prediction errors it must represent, do you ever wonder how much slower the environment’s actual clock is compared to the one we perceive? And the untapped potential that the difference between them represents?

The model underneath

It will probably not come as a surprise to you that we humans are a retained-mode bunch. It’s cool to imagine ourselves as immediate-mode beings: everything in the world around us would be brand new! For every cycle of our OODA loop, nothing would be retained. Talk about living in the present.

Alas — or fortunately, it’s hard to tell — we aren’t like that. It would totally suck if, for every situation, we needed to relearn everything from scratch. We can only learn a tiny bit from each iteration of the OODA loop. Our strength, individual and collective, is in harnessing the retained mode. For example, when we look around the room, we can only see what’s in front of us. Yet we retain details of the room that aren’t in our direct eyesight, and can reason about them. We can reach for a glass of water without looking at it. This is our model being put to work. Every cycle makes the model a bit richer and more nuanced, helping us not just visualize things that we’re not seeing directly, but also make predictions about what happens to them in the immediate future.

When I first learned about the OODA loop, I naively presumed that all steps in the process operate directly on the environment. I observe the environment, I orient within it, I decide on what to do, and then I act on it. It wasn’t until later, after I learned about the concept of constructed reality, that a different understanding of the OODA process emerged.

Aside from the first step, the OODA loop operates on the model of the environment, rather than directly on it.  This can be amazing, allowing us to connect our hockey stick with the puck for that awesome from-behind pass that sets the stands afire. It can also be a lot less awesome, because our models aren’t always representative of the environment. I reach for a glass — and accidentally poke it with my thumb, spilling the water. The model lied. 

Put differently, most steps in OODA occur in a mirror world of the environment that we created in our minds. If the mirror is clear, our actions proceed as intended. If it’s one of those funhouse mirrors, your guess is as good as mine. Our models are the sources of both our clairvoyance and our blindness.

Whether we want it or not, the OODA loop serves two interrelated purposes: one is to produce an action between two ticks of the environment’s clock; the other is to update the model of our environment and keep it accurate. How well we manage to perform both tasks is reflected in how much jank we produce.

Retained and immediate mode

At the core of the OODA loop is the concept of a model. To create space for exploring it in depth, we’ll make a tiny little digression back into — you guessed it! — graphics rendering technology.

With my apologies to my colleagues — who will undoubtedly make fun of me for such an incredibly simplified story — everything you see on digital screens comes from one of two modes of rendering: the immediate mode or the retained mode.

The immediate mode is the least complicated of the two. In this mode, the entirety of the screen is rendered from scratch every time. Every animation frame (remember those from the jank chapter?) is produced anew. Every pixel of output is brand new for each frame.

You might say: yeah, that seems okay — what other way could there be? Turns out, the immediate mode can be fairly expensive. “Every pixel” ends up being a lot of pixels, and it’s hard to keep track of them, let alone orchestrate them into user interfaces. Besides, many pixels on the screen stay the same from frame to frame. So clever engineers came up with a different mode.

In retained mode, there exists a separate model of what should be presented on screen. This model is usually an abstraction (a data structure, as engineers might call it) that’s easy to examine and tweak, and it is retained over multiple frames (hence the “retained” in the name). Such a setup allows for partial changes: find and update only the parts of the model that need to change, and leave the rest the same. So, when we want a button to turn a different color, the only part that has to be changed is the one representing the button’s color.
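To make the contrast concrete, here’s a toy sketch of the two modes. The drawing primitives are imaginary stand-ins for a real graphics API:

```typescript
// The drawing primitives are imaginary.
declare function clearScreen(): void;
declare function draw(item: Button): void;

interface Button {
  color: string;
  label: string;
}

// Immediate mode: everything is re-rendered from scratch, every frame.
function renderImmediate(buttons: Button[]): void {
  clearScreen();
  for (const button of buttons) {
    draw(button); // every pixel is brand new
  }
}

// Retained mode: the model persists across frames; only changes apply.
const retainedModel: Button[] = [{ color: "gray", label: "OK" }];

function recolorButton(index: number, color: string): void {
  retainedModel[index].color = color; // partial change to the model
  draw(retainedModel[index]);         // redraw only the part that changed
}
```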

Both modes have their advantages and disadvantages. The immediate mode tends to need more effort and capacity to pay attention to the deluge of pixels, but it also offers a fairly predictable time-to-next-frame: if I can handle all these pixels for this frame, I can do so for the next frame. The retained mode can offer phenomenal savings of effort and do wonders when we have limited capacity. However, it also yields a “bursty” pattern of activity: for some frames, there’s no work to be done, while for others, the whole model needs to be rejiggered, causing us to blow the frame budget and generate jank.

This trade-off between unpredictable burstiness and potential savings of effort is at the crux of most modern UI framework development. The key ingredient in this challenge is designing how the model is represented. How do elements of the screen relate to each other? What are the possible changes? How to make them inexpensive? How to remain flexible when new kinds of changes emerge?

The story of the Document Object Model (DOM) can serve as a dramatic illustration. Born as a way to represent documents in the early days of the Web, DOM has a strong bias toward the then-common metaphor of print pages: it’s a hierarchy of elements, starting with the title, body, headings, etc. As computing moved on from pages toward more interactive, fluid experiences, this bias became one of the greatest limiting factors in the evolution of the Web. Millennia — hell, probably eons — of collective brain-racking have been invested into overcoming these biases, with mixed results. Despite all the earnest effort, jank is ever-present on the Web. Unyieldingly, the original design of the model keeps bending the arc of the story toward the 1990s, generating phenomenal friction in the process.

In a weird poetic way, the story of DOM feels like the story of humanity: the struggle to overcome the limitations imposed by well-settled truths that are no longer relevant.