It seems that the retained mode is our way to compensate for our limited capacity to receive and process information about the environment. The implicit hypothesis behind retained-mode setups is that we can make predictions based on the model we’ve constructed so far. As we Decide-Act, most of these predictions will pan out, but some will generate prediction errors: evidence of incongruence between the model and the environment. We can then treat these errors as fodder to chew on in the Observe-Orient steps of our OODA cycle. Our rate of prediction errors for each cycle tells us how well we’re playing this whole OODA game.
Let’s see if we can add the concept of prediction errors to our framework. One way to visualize a model drifting away from the environment it represents is to play on the idea of detaching from reality. You know, when we daydream about things at the stove, forget to turn down the heat, and burn our green beans (not that it ever happened to me). At that moment, our framework’s timelines go askew, with the environment’s timeline proceeding in one direction and our model’s going in a slightly different one, at an angle.
Now, let’s say that the angle is informed by the amount of prediction error generated during this OODA cycle. Allow me to channel my inner high schooler and do some arcane trigonometry: picture a triangle where the environment’s direction is the adjacent side, the model’s direction is the hypotenuse, and the angle between them is set by the prediction error rate (kudos to my son for helping me remember all this nonsense).
There’s something very important about this relationship. With the environment clock continuing to tick at a constant rate, higher prediction errors will introduce a time dilation effect within the model: to keep pace with one tick along the adjacent side, the model has to cover the longer hypotenuse, so its clock will appear to be speeding up, leaving less room for the OODA loop to cycle! And what does that likely mean for us? Yup, more jank.
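If you’d rather see the geometry as code than as a diagram, here’s a minimal sketch of one way to read it. The linear mapping from error rate to angle is an assumption I’m making purely for illustration; the only thing taken from the triangle above is the adjacent-hypotenuse relationship.

```python
import math

def model_clock_rate(prediction_error_rate: float) -> float:
    """Apparent speed of the model's clock relative to the environment's.

    Assumes, for illustration only, that the prediction error rate
    (0.0 to 1.0) maps linearly onto the angle between the environment's
    timeline (the adjacent side) and the model's timeline (the
    hypotenuse). To keep pace with one tick of the environment's clock,
    the model has to traverse the hypotenuse, which is 1 / cos(angle)
    long, so its clock appears to run that much faster.
    """
    angle = prediction_error_rate * math.pi / 2  # no errors -> 0, all errors -> pi/2
    return 1 / math.cos(angle)

# A perfect model: the clocks agree, no time dilation.
print(model_clock_rate(0.0))   # 1.0
# A middling model: the model clock runs about 41% fast (micro jank territory).
print(model_clock_rate(0.5))   # ~1.41
# Near-total prediction failure: the clock spins like a top (macro jank).
print(model_clock_rate(0.99))  # ~63.7
```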
I will now take a tiny leap of faith here and correlate prediction errors and jank. Here it is: the higher our prediction error rate, the more incidents of jank we will experience. It seems that if we have a really awesome model that generates absolutely no prediction errors, we’ll have no jank. We’ll be like that youthful Keanu at the end of The Matrix, folding one hand behind our back, suddenly bored with the pesky Agent Smith. Conversely, if our model generates only prediction errors, it’s going to be all jank, all the time. We’ll feel like Agent Smith in that same scene.
So anytime we’re experiencing jank, we’re likely dealing with a troubling prediction error rate. Micro jank will come from a relatively small rate, and macro jank from when the angle approaches 90 degrees (π/2 for you trig snobs) and the model clock is spinning like a top.
In either situation, and especially when we feel like we have no time to react, it might be a good idea to reflect on how well we understand our environment, and most importantly, whether we’re aware that we only ever operate on a model of it.
One of the most common mistakes organizations make is confusing high rates of prediction error in their models for the environment raging against them. If you’ve ever had a fight with a loved one and were humbled by recognizing how your assumptions took you there, this must resonate. With all the jank we produce and are surrounded by daily, and the enormous pile of prediction errors it must represent, do you ever wonder how much slower the environment’s actual clock is compared to the one we perceive? And the untapped potential that the difference between them represents?