Model compression and us

Often, it almost seems like if we run the process of understanding long enough, we could just stay in the applying cycle and not have to worry about learning ever again. Sure, there’s change. But if we study the nature of change, maybe we can find the underlying causes of it and incorporate it into our models – thus harnessing the change itself? It seems that the premise of modernism was rooted in this idea. 

If we imagine that learning is the process of excavating a resource of understanding, we can convince ourselves that this resource is finite. From there, we can start imagining that all we have to do is – simply – run everything through the process of understanding and arrive at the magnificent state where learning is more or less optional. History has been rather unkind to these notions, but they continue to hold great appeal, especially among us technologists.

Alas, when we combine technology with a large enough number of people, it seems that we unavoidably grow our dependence on the applying cycle. In organizations where only compressed models are shared, change becomes more difficult. There’s not enough mental model diversity within the ranks to continue the cycle of understanding. If such organizations don’t pay attention to the attrition of their veterans – the ones who knew how things worked and why – they find themselves in the Chesterton’s fence junkyard. At that point, their only options are to anxiously continue holding on to truisms they no longer comprehend or to plunge back to the bottom of the stairs and re-learn, generating the necessary mental model diversity by grinding through the solution loop cycle all over again.

I wonder if the nadir of the hero’s journey is marked by suffering in part because the hero discovers first-hand the brittleness of model compression. Change is much more painful when most of our models are compressed.

At a larger scale, societies first endure horrific experiences and acquire embodied awareness of social pathologies, then lose that knowledge through compression as it is passed along to younger generations. Deeply meaningful concepts become monochrome caricatures, thus setting up the next generation to repeat the mistakes of their ancestors. More often than not, the caricatures themselves become part of the same pathology that their uncompressed models were developed to prevent.

In a highly compressed environment, we often experience the process of understanding in reverse. Instead of starting with learning and then moving on to applying, we start with the application of someone else’s compressed models and only then – optionally – move on to learning them. Today, a child is likely to first use a computer and only later understand how it works, more than likely never grasping the full extent of the mental model that goes into creating one. Our life can feel like an exploration of a vast universe of existing compressed models, with only a faint hope of ever fully understanding them.

From this vantage point, we can even get disoriented and assume that this is all there is, that everything has already been discovered. We are just here to find it, dust it off, and apply it. No wonder the “Older is Better” trope is so resonant and prominent in fiction. You can see how this feeds back into the “excavating knowledge as a finite resource” idea, reinforcing the pattern.

In this way, pervasive model compression appears pretty trappy. Paradoxically, the brittle nature of highly compressed environments makes them less stable. The very quest to conquer change results in more – and more dramatic – change. To thrive in these environments, we must put conscious effort into mitigating the compression’s trap. We are called to strive to deepen our diversity of mental models and let go of the scaffolding provided by the compressed models of others.

Model compression

At the end of each journey in our process of understanding, we have an effective solution to the problem we were presented with. Here’s an interesting thing I am noticing. We still have a diverse, deeply nuanced mental model of the problem that we developed by cycling through the solution loop. However, we don’t actually need the full diversity of the model at this point. We found the one solution that we actually need when approaching the given problem.

This is a pivotal point at which our solution becomes shareable. To help others solve similar problems, we don’t need to bestow the full burden of our trials and errors upon them. We can just share that one effective solution. In doing so, we compress the model, providing only a shallow representation of it that covers just enough to describe the solution.

This trick of model compression seems simple, but it ends up being nothing short of astounding. Let’s start with an example of simple advice, like that time when an expert showed me how to properly crack an egg and I almost literally felt the light bulb go off in my head. It would have taken me a lot of cycling through the solution loop to get anywhere close to that technique. Thanks to the compressed model transfer, I was able to bypass all of that trial and error.

Next, I invite you to direct your attention to the wonder of a modern toothbrush. Immeasurable amounts of separate solution loop iterations went into finding the right shape and materials to offer this compressed model of dental hygiene. To keep my teeth healthy, I don’t have to know any of that. I only need to have a highly compressed model: how to work the toothbrush. This ability to compound is what makes model compression so phenomenally important.

We live in a technological world. We are surrounded by highly compressed mental models that are themselves composed of other highly compressed models, recursing on and on. I am typing this little article on a computer, and if I stop to imagine an uncompressed mental model of this one device, from raw materials scattered unfound across the planet to the cursor blinking back at me, my mind boggles in awe. To type, I don’t have to know any of that. Despite us taking it for granted, our capacity to compress and share models might just be the single most important gift that humanity was given – aside from being able to construct these models, of course.

Model compression introduces a peculiar extra stage to the process of understanding. At this fifth stage, our solution effectiveness is high, flux is low, but our model diversity is low as well. When we acquire a compressed model – whether through technology or a story – we don’t inherit the rich diversity of the model. We don’t get the full experiential process of constructing it. We just get the most effective solution.

It feels like a reasonable deal, yet there is a catch. As we’ve learned earlier, things change.

When my solution is at this newly discovered “compressed” stage, a new change will expose this stage’s brittleness: I don’t have the diversity of the model necessary to continue climbing the stair steps of understanding. Instead, it appears that I need to start problem-solving from scratch. This does make intuitive sense, and the compressed model compounding makes this even more apparent. When a modern phone suddenly stops working, we have only a couple of things we can try to resuscitate it: plug in the charger and/or maybe hold down the power button and hope it comes back. If it doesn’t, the vastness of crystallized model compression makes it as good as a pebble. Chuck it into a drawer or into a lake – not much else can happen here.

Lucky for us, this phenomenon of compressed models being brittle in the face of change is a problem in itself – which means that we can aim our solving ability at it. If we’re really honest about it, software engineering is not really about writing software. It’s about writing software that breaks less often and, when it does, breaks in graceful ways. So we’ve come up with a neat escape route out of this particular predicament. If my toothbrush breaks or wears out, I just replace it with a new one from the five-pack in which they usually come. If my laptop stops working, I take it to a “genius” to have it fixed. Warranties, redundancies, and repair facilities – all of these solutions rely on the presence of someone else possessing – and maintaining! – their diversity of the mental model for me to lean on.

This shortcut works great in so many cases that I probably need to draw a special arrow on our newly updated diagram of the process of understanding. There are two distinct cycles that emerge: the already-established cycle of learning, and the applying cycle, where I can only use compressed models obtained through learning – even if I didn’t do the learning myself! Both are available to us, but the applying cycle feels much more (like orders of magnitude) economical to our force of homeostasis. As a result, we constantly experience the gravitational pull toward this cycle.


So far, I carefully avoided the topic of change, presenting my problem-solving realm in a delightfully modernist manner. “See phenomenon? Make a model of it! Bam! Now we’re cooking with gas.”

Alas, despite its wholesome appeal, this picture is incomplete. Change is ever-present. As the movie title says, everything, everywhere, all at once – is changing, always. Some things change incomprehensibly quickly and some change so slowly that we don’t even notice the change. At least, at first. And this ever-changing nature of the environment around us presents itself as its own kind of force.

While the force of homeostasis is pushing us toward routine, the force of change is constantly trying to upend it. As a result of these forces dancing around each other, our problems tend to walk the awkward gait of punctuated equilibrium: an effective solution appears to have settled down, then after a while, a change unmoors it and the understanding process repeats. The punctuated equilibrium pattern appears practically everywhere, indicating that this might be another general pattern that falls out of the underlying processes of mental modeling.

Throughout this repeating sequence, the flux and effectiveness components wobble up and down, just like we expect them to. However, something interesting happens with the model diversity: it continues to grow in a stair-step pattern.

If you’ve read my stories before, you may recognize the familiar stair-step shape from my ongoing fascination, adult development theory (ADT). It seems to rhyme, doesn’t it? I wonder if the theory itself is a story imposed upon a larger, much more fractally manifesting process of mental modeling. The ADT stages might be just a slice of it, discerned by a couple of very wise folks and put into a captivating narrative.

Every revolution of the process of understanding adds to our model, making us more capable of facing the next round of change. Sometimes this process is just refining the model. Sometimes it’s a transformational reorganization of it. This is how we learn.

Moreover, this might be how we are. This story of learning is such a part of our being that it is deeply embedded into culture and even has a name: the hero’s journey. The call to adventure, the reluctance, the tribulations, and facing the demons to finally reveal the boon and bring it back to my people – it is a deeply emotional description of the process of understanding. And often, it has the wishful “happily ever after” bookend – because this would be the last change ever, right? It’s another paradox. It seems that we know full well that change is ever-present, yet we yearn for stability.

For me, this rhymes with the notion of Damasio’s homeostasis. Contrary to the common belief that homeostasis is about equilibrium, in The Strange Order of Things he talks about how, from our perspective, homeostasis is indeed about reaching a stable state… and then leaning a bit forward to ensure flourishing. It’s like our embodied intuition accepts the notion of change and prepares us for it, despite our minds continuing to weave stories of eternal bliss.

Life of a solution

Looking at the framework in the previous piece, I am noticing that the components of the tripartite loop (aka the solution loop, apologies for naming it earlier) form an interesting causal relationship. Check it out. Imagine that for every problem, there’s this process of understanding, or a repeated cycling through the loop. As this cycling goes on, the causality manifests itself.

Rising flux leads to rising solution diversity. This makes sense, right? More interesting updates to the model will provide a larger space for possible predictions. Rising solution diversity leads to rising effectiveness, since more predictions create more opportunities for finding a solution that results in the intended outcome. Finally, rising effectiveness leads to falling flux — the more effective the solution, the fewer interesting updates to the model we are likely to see. Once flux subsides past a certain point, we attest that the process of problem understanding has run its course. We now have a model of the phenomenon, ourselves, and our intention that is sufficiently representative to generate a reliably effective solution. We understood the problem.
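The causal chain above can be sketched as a toy simulation. This is purely illustrative – the state variables come from the text, but the update rules and coefficients are invented for the sketch, not something the framework prescribes:

```typescript
// Toy sketch of the solution loop's causal chain:
// flux enriches model diversity, diversity lifts solution effectiveness,
// and effectiveness in turn damps flux. Coefficients are invented.

type LoopState = { flux: number; diversity: number; effectiveness: number };

function step(s: LoopState): LoopState {
  // Rising flux leads to rising solution diversity.
  const diversity = s.diversity + 0.3 * s.flux;
  // Rising diversity leads to rising effectiveness (bounded at 1).
  const effectiveness = Math.min(
    1,
    s.effectiveness + 0.2 * diversity * (1 - s.effectiveness)
  );
  // Rising effectiveness leads to falling flux.
  const flux = s.flux * (1 - 0.8 * effectiveness);
  return { flux, diversity, effectiveness };
}

let s: LoopState = { flux: 1, diversity: 0.1, effectiveness: 0.05 };
for (let i = 0; i < 50; i++) s = step(s);
// After enough cycles: effectiveness is high, flux has subsided,
// and diversity has plateaued - the problem is "understood".
```

Running the loop reproduces the narrative: flux spikes first, diversity climbs on its back, effectiveness rises last, and its rise is what finally quiets the flux.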

I am realizing that I can capture this progression in roughly four stages. At the first stage, the effectiveness is low and diversity is low, with flux rapidly rising. This is the typical “oh crap” moment we all have when encountering a novel phenomenon that is misaligned with our intention. Let’s call this stage “novel,” and assign it the oh-so-appropriate virus emoji.

Rising flux pushes us forward to the next stage that I will call “divergent”. Here, our model of the problem is growing in complexity, incorporating the various updates brought in by flux. This stage is less chaotic than the one before, but it’s usually even more uncomfortable. We are putting in a lot of effort, but the mental models remain squishy and there are few well-known facts. Nearing the end of the stage, there’s a sense of cautious excitement in the air. While the effectiveness of our solutions is still pretty low, we are starting to see a bit of a lift: all of that model enrichment is beginning to produce intended outcomes. Soon after, the next stage kicks in. 

The convergent stage sees continued, steady rise of effectiveness. Correspondingly, flux starts to ease off, indicating that we have the model figured out, and now we’re just looking for the most effective solution. This stage feels great for us engineering folks. Constraints appear to have settled in their final resting places. We just need to figure out the right path through the labyrinth. Or the right pieces of the puzzle. Or the right algorithm. We’ve got it.

After a bit more cycling of the loop, we finally arrive at the routine stage, the much desired steady state of understanding the problem well enough for it to become routine, where solving a problem is more of a habit rather than a bout of strenuous mental gymnastics. The problem has become boring.

The progression from novel to routine is something that every problem strives to go through. Sometimes it plays out in seconds. Sometimes it takes much longer. However, my guess is that this process isn’t something that we can avoid when presented with problems. It appears to be a general sequence that falls out of how our minds work. I want to call the pressure that animates this sequence the force of homeostasis. This force propels us inexorably toward the “routine” stage of the process, where the ongoing investment of effort is at its lowest value. Our bodies and our minds are constantly seeking to reach that state of homeostasis as quickly as possible, and this search is what powers this progression.

A Solution

If we are looking at a problem – and, as we learned earlier, our understanding of a problem is a model that includes us, our intention, and the phenomenon that is its subject – then a solution is a prediction, based on that understanding, that resolves the problem’s intention, aligning the state of the phenomenon with it.

Because the problem’s model includes us, the solution often manifests as a set of actions we take. For example, for my attempt to repel that mischievous bunny from the previous piece, one solution might look like a list: a) grab a tennis ball, b) aim at a nearby tree, c) throw the ball at the tree with the most force I can muster. However, solutions can also be devoid of our actions, like in that old adage: “if you ignore a problem long enough, it will go away on its own”.

Note that according to the definition above, a solution relies on the model, but is distinct from it. The same model might have multiple solutions. Additionally, a solution is distinct from the outcome. Since I defined it as a prediction, a solution is a peek into the future. And as such, it may or may not pan out. These distinctions give us just enough material to construct a simple framework to reason about solutions.

Let’s see… we have a model, a solution (aka prediction), and the outcome. All three are separate pieces, interlinked. Yay, time for another triangle! Let’s look at each edge of this triangle.

When we study the relationship between solution and outcome, we arrive at the concept of solution effectiveness, a sort of hit/miss scale for the solution. Solutions that result in our intended outcomes are effective. Solutions that don’t are less so. (As an aside, notice how the problem’s intention manifests in the word “intended”.) Solution effectiveness appears to be fairly easy to measure: just track the rate of prediction errors over time. The lower the rate, the more effective the solution. We are blessed to be surrounded by a multitude of effective solutions. However, there are also solutions that fail, and to glimpse possible reasons why that might be happening, we need to look at the other sides of our triangle.
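That measurement idea can be made concrete with a small sketch. Everything here is hypothetical – the class, its name, and the sliding-window choice are my own illustration of “track the rate of prediction errors over time”, not a prescribed mechanism:

```typescript
// Minimal sketch: solution effectiveness as a hit/miss rate over a
// sliding window of recent outcomes. All names are invented.

class EffectivenessTracker {
  private outcomes: boolean[] = [];

  constructor(private windowSize: number = 20) {}

  // Record whether the solution produced the intended outcome.
  record(intendedOutcomeAchieved: boolean): void {
    this.outcomes.push(intendedOutcomeAchieved);
    // Forget outcomes older than the window, so effectiveness
    // reflects the recent prediction-error rate.
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
  }

  // Effectiveness = 1 - (recent prediction-error rate).
  effectiveness(): number {
    if (this.outcomes.length === 0) return 0;
    const hits = this.outcomes.filter(Boolean).length;
    return hits / this.outcomes.length;
  }
}

const tracker = new EffectivenessTracker(10);
[true, true, false, true, true].forEach(o => tracker.record(o));
// tracker.effectiveness() is 4/5 = 0.8 here: one miss in five tries.
```

The sliding window is one possible design choice: it lets the measure respond when change arrives and a once-effective solution starts missing.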

The edge that connects solution and model signifies the possibility that our mental model of the problem contains an effective solution, but we may not have found it yet. Some models are simple, producing very few possible solutions. Many are complicated labyrinths, requiring skill and patience to traverse. When we face a problem that does not yet have an effective solution, we tend to examine the full variety of possible solutions within the model: “What if I do this? What if we try that?” When we talk about “finding a solution,” we usually describe this process. To firm this notion up a bit, a model of the problem is diverse when it contains many possible solutions. Solution diversity tends to be interesting only when we are still looking for a solution that’s more effective than what we currently have. Situations where the solution is elusive, yet the model’s solution diversity is low, can be rather unfortunate – I need to find more options, yet the model doesn’t give me much to work with. In such cases, we tend to look for ways to enrich the model.

This is where the final side of the triangle comes in. This edge highlights the relationship between the model and the outcome. With highly effective solutions, this edge is pretty thin, maybe even appearing non-existent. Lack of prediction errors means that our model represents the phenomenon accurately enough. However, when the solution fails to produce the intended outcome, this edge comes to life: prediction errors flood in as input for updating the model. If we treat every failure to attain the intended outcome as an opportunity to learn more about the phenomenon, our model becomes more nuanced, and subsequently, increases its solution diversity – which in turn lets us find an effective solution, completing the cycle. This edge of the triangle represents the state of flux within the model: how often and how drastically is the model being updated in response to the stream of solutions that failed? By calling it “flux”, I wanted to emphasize the updates that lead to “interesting” changes in the model: lack of prediction error is also a model update, but it’s not going to increase its diversity. However, outcomes that leave us stunned and unsure of what the heck is going on are far more interesting.

Wait. Did I just reinvent the OODA loop? Kind of, but not exactly. Don’t get me wrong, I love the Mad Colonel’s lens, but this one feels a bit different. Instead of enumerating the phases of the familiar circular solution-finding process, our framework highlights its components, the relationships between them and their attributes. And my hope is that this shift will bring new insights about problems, solutions, and us in their midst.

Rubber duck meetings

When I am looking for new insights, a generative conversation with colleagues is hard to beat in terms of quality of output. When I look back at what I do, a large chunk of my total effort is invested into cultivating spaces for generative conversations. It seems deceptively easy (“Let’s invite people and have them talk!”), but ends up being rather tricky – an art more than a technique. My various chat spaces are littered with tombstones of failed generative spaces, with only a precious few attempts actually bearing fruit. Let’s just say I am learning a lot.

One failed outcome of trying to construct a generative space is what I call the “rubber duck meeting”. The key dynamic that contributes to this outcome is the gravity well of perceived power. For example, a manager invites their reports to partake in a freeform ideation session. At this session, the manager shares their ideas and walks the team through them, or reviews someone else’s ideas and brainstorms around them. There is some participation from the others, but if we stand back, it’s pretty clear that most of the generative ideation – and talking – is done by the manager.

Now, a one-person ideation session is not a bad thing. For programmers, it’s a very common technique to find our way out of a bug. It even has a name: rubber duck debugging. The idea is simple: pretend like you’re explaining the problem to someone (use a rubber ducky as an approximation if you must) and hope that some new insights will come dislodged in your network of mental models in the process.

The problem with the rubber duck meeting is that everyone else is bored out of their minds and often frustrated. The power dynamic in the room raises the stakes of participation for everyone but the manager. No matter how earnestly we try to participate, even a subtle gravity well inexorably shifts the meeting toward monologue (or a dialog between two senior peers). The worst part? Unless these leaders make a conscious effort to reduce the size of their gravity well, they don’t notice what’s happening. They might even be saying to themselves: “This is going so well!” and “Look at all these ideas being generated!” and “I am so glad we’re doing this!” – without realizing that these are all their ideas and no new insights are coming in. They might as well be talking to a rubber duck. I know this because I have led such meetings, and only much later wondered: wait, was it just me thinking out loud all this time?

Now, about that “consciously reducing the size of the gravity well”? I don’t even know if it’s possible. I try. My techniques are currently somewhere around “just sit back and let the conversation happen” and “direct attention to other folks’ ideas”. The easiest way to reduce rank-based power dynamics in a meeting seems to be inviting peers, though this particular tactic isn’t great either: the vantage points are roughly similar, and so the depth of insights is more limited.

I kept looking for ways to finish this bit on a more uplifting note. So here’s one: when you do find that generative space where ideas are tossed around with care, hang onto it and celebrate your good fortune. For you struck gold.

Minimum viable legibility

Having seen folks struggle through another Google perf season and supporting them through it, I figured I’d write down some of the new things I learned. Specifically, I wanted to focus on the quadrant of performance management blind spot I talked about a while back.

If you do find yourself in the performance management blind spot, congratulations and my condolences. Congrats, because you are probably doing something interesting and unusual enough that it is not (yet!) captured by the organization’s rubric. Condolences, because you need to do some extra legwork to show that you do indeed provide value to the organization.

Think of it as the minimum viable legibility: the additional effort of making your work recognizable as valuable when viewed through the lens of the performance management rubric, but not so much effort that it changes the nature of the work. Minimum viable legibility is kind of like a tax, and there are probably lots of different ways to pay it. Based on my experience, it’s a combination of these three ingredients: sponsorship, a network of support, and the “big idea”. Depending on the nature of the rubric, each of these may offer varying degrees of effectiveness.

Sponsorship is the situation in which an executive vouches for your work. They expend their own political capital to state that the work is important. If you can find a sponsor like that, you’re usually set. Things to look out for are the reserves of said political capital over time, the commitment to follow through, and the amount of cover given. There also may be additional logistics. For example, you might need to bug them regularly or connect them to the right places in the perf process if they are not in your reporting chain.

Less effective than sponsorship, the network of support may still work reasonably well in environments with peer-driven performance management systems. Kind of like a more distributed variant of sponsorship, the trick here is to grow a robust network of peers (the broader the better) who understand your work and are willing to provide peer feedback on it. It helps significantly if they can articulate well why they see your work as important, so you may have to invest some time into helping them do that – in addition to maintaining the network. When your manager’s sponsorship cover is limited, the network of support can really come through in a pinch.

Finally, sometimes effective, but also fraught with peril, the “big idea” refers to connecting your work to some important — and usually new — initiative. For example, if your organization suddenly sees the need to build a special kind of wooden spoon after focusing on sporks for the last decade, tying your work to the company’s Wooden Spoon OKR might be a tactic to try. The tactic only works when the connection is clear and not dubious, and in combination with other ingredients. Otherwise, it might backfire and actually do harm to your attempts at establishing legibility, becoming a MacGuffin-like distraction.

The unfortunate news is that this is not a reliable recipe. I’ve talked with folks whose extensive and enthusiastic support networks ended up amounting to very little. I know peeps who were trapped several layers of management deep in sponsorship deficit. In such situations, there’s very little that can be done to establish the minimal amount of legibility necessary. Blind spots are tough. However, if you truly believe that you’re doing good work that’s invisible, please give boosting minimum viable legibility a try.

Cheesecake and Baklava

I have been reading Alex Komoroske’s Compendium cards on platforms, and there’s just so much generative thinking in there. There’s one concept that I kept having difficulty articulating for a while, and here’s my Nth attempt at defining it. It’s about the thinness and thickness of layers.

Given that layers have vectors of intentions, we could imagine that the extent to which these intentions are fulfilled is described by the length of the vector. Some layers will have super-short vectors, while others will have quite protracted ones. To make this an easier visual metaphor, we can imagine that layers with longer intention vectors are thicker than layers with shorter vectors of intention.

For example, check out the 2D Canvas API. A long time ago, back when I was part of the WebKit project, I was amazed to discover that Canvas’ Context API was basically a wrapper around the underlying platform API, CGContext. Since then, the two APIs have moved apart, but even now you can still see the resemblance. If we look at these two adjacent layers, the intention of this particular Web platform API was perfectly aligned with the underlying platform API’s intention, and the length of the vector was diminutively tiny — being literally a pass-through. If you wanted to make graphics on the Web, this was as close to the metal as you could get, illustrating our point of thinner layers yielding shorter intention vectors.
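To make the thin-layer idea concrete, here is a caricature of a pass-through wrapper. The names and shapes are hypothetical – the real plumbing between CanvasRenderingContext2D and the platform’s graphics context is far more involved – but the structure illustrates what a diminutively short intention vector looks like in code:

```typescript
// A caricature of a "thin" layer: the wrapper adds almost no intention
// of its own, delegating every call straight to the layer below.
// Interface and class names are invented for illustration.

interface UnderlyingGraphicsContext {
  setFillColor(color: string): void;
  fillRect(x: number, y: number, w: number, h: number): void;
}

class ThinCanvasContext {
  constructor(private platform: UnderlyingGraphicsContext) {}

  // Each member is a pure pass-through: the upper layer's intention
  // coincides with the lower layer's, so the wrapper stays paper-thin.
  set fillStyle(color: string) {
    this.platform.setFillColor(color);
  }

  fillRect(x: number, y: number, w: number, h: number): void {
    this.platform.fillRect(x, y, w, h);
  }
}
```

A thick layer like CSS, by contrast, would sit atop the same `UnderlyingGraphicsContext` but interpose its own declarative syntax, cascade resolution, and layout before any `fillRect` ever happens.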

To compare and contrast, let’s look at Cascading Style Sheets. It’s fairly easy to see the intention behind CSS. It strives toward this really neat concept of separating content of a Web document from its presentation. When I first learned about CSS, I was wholly enamored—and honestly, still am—with such a groundbreaking idea. I even wrote an article or two or three (Hello, 2005! I miss your simplicity) about best practices for content/presentation separation.

We can also see that this vector of intention extends much farther than that of the 2D Canvas API. Especially from the vantage point of a WebKit engineer, it seems like CSS took something as simple (and powerful) as CGContext and then charged daringly forward, inventing its own declarative syntax and a sophisticated model for turning these declarations into rendering, including deeply nuanced bits like formatting and laying out text. CSS is a much thicker layer. It’s the whole nine yards.

The question that inevitably arises for aspiring platform designers is “which kind is better?” Why would one decide to design their layer thin or thick? It’s a good question. Now that we’ve learned about pace layers, we can seek insights toward answering it through a thought experiment. Let’s pretend to be designing a platform in two alternate realities of extremes. In the first, we boldly decide to go for the cheesecake approach: a single layer that has one vector of intention, fulfilled as completely as possible. In the second, just as boldly, we choose the baklava approach: our layers are countless and as thin as possible, just like filo dough. Anybody hungry yet?

Applying the pace layer dynamic to both alternatives, we can see that the baklava platform is better equipped to withstand it: the multiple layers can choose their pace and change independently of each other. Poor cheesecake will have a harder time. The single layer will inevitably experience a sort of shearing force, pushing its upper part to change faster, with the bottom part staying relatively still. If I were the poetic kind, I would say something like: “this is how layers are born – in the struggle with the shearing force of innovation.” But even if I weren’t, I can describe it as a pretty significant source of suffering for developers. Especially in monolith repositories, where everyone can depend on everyone and dependencies are rarely curated for proper layering (visualize that dense mass of sugar-laden dairy), developers will keep finding themselves battered by these forces, sometimes even without realizing that they are sensing the rage of pace layers struggling to emerge. Using CSS as the reference point, I remember having conversations with a few JavaScript framework developers who were so fed up with the inability to reach into the CSS monolith that they contemplated — and some partially succeeded! — rolling their own styling machinery using inline styles. There’s no good ending to that particular anecdote.

Surprisingly, baklava will experience a different kind of force, caused by the same pace layer dynamic. If I design a platform that consists of a multitude of thin layers, I am now responsible for coordinating the moving of the ladder across them. As you can imagine, the costs of such an enterprise will accrue very quickly. I was once part of a project that intended to design a developer platform from scratch. As part of the design, several neatly delineated baklava-like layers were erected, with some infrastructure to coordinate them as they were developed. Despite having no external developers and still being at an early-days stage, the project rapidly developed a fierce bullwhip effect akin to the one in the infamous beer game, threatening the sanity of engineers working at the higher layers. Intuitively, it felt right that we would design a well-layered developer surface from the ground up and keep iterating within those boundaries. It just turned out that there are many different ways in which this layering could happen, and picking one way early on can quickly lead to unhealthy outcomes. Layers accrete. Imagining that they can be drawn from whole cloth is like planning to win a lottery.

Well, great. Our baklava keeps wanting to turn into cheesecake, and cheesecake keeps wishing to be baklava. Is there a golden middle, the right size for your layer? Probably. However, I have not found a way to determine it ahead of time. Platform engineering is not a static process. It is a messy dynamic resolution of layer boundaries based on developers interacting with your code — and dang it, the “right answer” to layering also changes over time. API layering is inherently a people problem. Despite our wishes to get it “right this time”, my experience is that until there is a decent number of developers trying to build things with my APIs, it is unknowable for me what “right” looks like.

When gearing up for shipping developer surfaces, bring lots of patience. The boundaries will want to move around, evolve, and reshape across the pace layer continuum. Layers will want to become thinner and then change their minds and grow a little cheesecake girth. Getting frustrated about that or trying to keep them still is a recipe for pain. Platform engineering is not about getting the API design right. It’s about being able, willing, and ready to dance with it.