Cheesecake and Baklava

I have been reading Alex Komoroske’s Compendium cards on platforms, and there’s just so much generative thinking in there. There’s one concept that I kept having difficulty articulating for a while, and here’s my Nth attempt at defining it. It’s about the thinness and thickness of layers.

Given that layers have vectors of intentions, we could imagine that the extent to which these intentions are fulfilled is described by the length of the vector. Some layers will have super-short vectors, while others’ are quite protracted. To make this an easier visual metaphor, we can imagine that layers with longer intention vectors are thicker than layers with shorter vectors of intention.

For example, check out the 2D Canvas API. A long time ago, back when I was part of the WebKit project, I was amazed to discover that Canvas’ Context API was basically a wrapper around the underlying platform API, CGContext. Since then, the two APIs have drifted apart, but even now you can still see the resemblance. If we look at these two adjacent layers, the intention of this particular Web platform API was perfectly aligned with the intention of the underlying platform’s API, and the length of the vector was diminutively tiny — it was literally a pass-through. If you wanted to make graphics on the Web, this was as close to the metal as you could get, illustrating our point that thinner layers yield shorter intention vectors.
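To make that thinness concrete, here is a minimal sketch in browser TypeScript. The Canvas calls are real; the Core Graphics names in the comments are the approximate counterparts they wrapped in early WebKit, offered as a rough pairing rather than an exact one-to-one table.

```ts
// A minimal sketch: each Canvas 2D call maps almost directly onto a
// Core Graphics call (approximate pairing, for illustration only).
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

ctx.fillStyle = "rebeccapurple"; // ~ CGContextSetFillColorWithColor
ctx.fillRect(10, 10, 80, 40);    // ~ CGContextFillRect

ctx.beginPath();                 // ~ CGContextBeginPath
ctx.moveTo(10, 60);              // ~ CGContextMoveToPoint
ctx.lineTo(90, 60);              // ~ CGContextAddLineToPoint
ctx.stroke();                    // ~ CGContextStrokePath
```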

To compare and contrast, let’s look at Cascading Style Sheets. It’s fairly easy to see the intention behind CSS. It strives toward this really neat concept of separating content of a Web document from its presentation. When I first learned about CSS, I was wholly enamored—and honestly, still am—with such a groundbreaking idea. I even wrote an article or two or three (Hello, 2005! I miss your simplicity) about best practices for content/presentation separation.

We can also see that this vector of intention extends much farther than that of the 2D Canvas API. Especially from the vantage point of a WebKit engineer, it seems like CSS took something as simple (and powerful) as CGContext and then charged daringly forward, inventing its own declarative syntax, a sophisticated model for turning these declarations into rendering, including deeply nuanced bits like formatting and laying out text. CSS is a much thicker layer. It’s the whole nine yards.

The question that inevitably arises for aspiring platform designers is “which kind is better?” Why would one decide to design their layer thin or thick? It’s a good question. Now that we’ve learned about pace layers, we can seek insights toward answering it through a thought experiment. Let’s pretend to be designing a platform in two alternate realities of extremes. In the first, we boldly decide to go for the cheesecake approach: a single layer that has one vector of intention, fulfilled as completely as possible. In the second, we, just as boldly, choose the baklava approach: our layers are countless and as thin as possible, just like in filo dough. Anybody hungry yet?

Applying the pace layer dynamic to both alternatives, we can see that the baklava platform is better equipped to withstand it: the multiple layers can choose their pace and change independently of each other. Poor cheesecake will have a harder time. The single layer will inevitably experience a sort of shearing force, pushing its upper part to change faster while the bottom part stays relatively still. If I were the poetic kind, I would say something like: “this is how layers are born – in the struggle with the shearing force of innovation.” But even if I’m not, I can describe it as a pretty significant source of suffering for developers. Especially in monolith repositories, where everyone can depend on everyone and dependencies are rarely curated for proper layering (visualize that dense mass of sugar-laden dairy), developers will keep finding themselves battered by these forces, sometimes without even realizing that they are sensing the rage of pace layers struggling to emerge. Using CSS as the reference point, I remember having conversations with a few JavaScript framework developers who were so fed up with the inability to reach into the CSS monolith that they contemplated — and some partially succeeded! — rolling their own styling machinery using inline styles. There’s no good ending to that particular anecdote.
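As a small illustration of what that workaround looks like in practice, here is a hedged sketch (a hypothetical helper over real DOM APIs) of the “roll your own styling machinery” move: bypassing stylesheets entirely and writing inline styles straight onto elements.

```ts
// A hedged sketch of bypassing the CSS layer: instead of assigning a class
// and letting a stylesheet do the work, a framework writes inline styles
// directly onto the element. (applyButtonStyles is a hypothetical helper;
// the DOM APIs are real.)
function applyButtonStyles(el: HTMLElement): void {
  el.style.padding = "8px 16px";
  el.style.borderRadius = "4px";
  el.style.background = "#3367d6";
  el.style.color = "white";
}

const button = document.createElement("button");
button.textContent = "Click me";
applyButtonStyles(button); // styling machinery without touching the CSS monolith
document.body.appendChild(button);
```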

Surprisingly, baklava will experience a different kind of force, caused by the same pace layer dynamic. If I design a platform that consists of a multitude of thin layers, I am now responsible for coordinating the moving of the ladder across them. As you can imagine, the costs of such an enterprise will accrue very quickly. I was once part of a project that intended to design a developer platform from scratch. As part of the design, several neatly delineated baklava-like layers were erected, with some infrastructure to coordinate them as they were developed. Despite having no external developers and still being at an early-days stage, the project rapidly developed a fierce bullwhip effect akin to the one in the infamous beer game, threatening the sanity of engineers working at the higher layers. Intuitively, it felt right that we would design a well-layered developer surface from the ground up and keep iterating within those boundaries. It just turned out that there are many different ways in which this layering could happen, and picking one way early on can quickly lead to unhealthy outcomes. Layers accrete. Imagining that they can be drawn from whole cloth is like planning to win a lottery.

Well, great. Our baklava keeps wanting to turn into cheesecake, and cheesecake keeps wishing to be baklava. Is there a golden middle, the right size for your layer? Probably. However, I have not found a way to determine it ahead of time. Platform engineering is not a static process. It is a messy dynamic resolution of layer boundaries based on developers interacting with your code — and dang it, the “right answer” to layering also changes over time. API layering is inherently a people problem. Despite our wishes to get it “right this time”, my experience is that until there is a decent number of developers trying to build things with my APIs, it is unknowable for me what “right” looks like.

When gearing up for shipping developer surfaces, bring lots of patience. The boundaries will want to move around, evolve, and reshape across the pace layer continuum. Layers will want to become thinner and then change their minds and grow a little cheesecake girth. Getting frustrated about that or trying to keep them still is a recipe for pain. Platform engineering is not about getting the API design right. It’s about being able, willing, and ready to dance with it.

Understanding

Once the notion of a problem is sufficiently semantically disambiguated, we can proceed toward the next marker on our map: the concept of “understanding”. Ambitious, right?! There will be two definitions in this piece, one building on the other. I’ll start with the first one. Understanding of a phenomenon refers to our ability to construct a mental model of it that makes reasonably accurate predictions about the phenomenon’s future state.

This definition places the presence of a mental model at the core of understanding. When we say that we understand something, we are conveying that a) we have a mental model of this something and b) more often than not, this model behaves like that actual something. The more accurate our predictions, the better we understand a phenomenon. At the extreme end of the spectrum of understanding is a model that is so good at making predictions that we literally don’t need to ever observe the phenomenon itself – the model acts as a perfect substitute. Though this particular scenario is likely impossible, there are many things around us that come pretty close. I don’t have to look at the stairs when I am climbing them. If I want to scratch my chin, I don’t need to carefully examine it: I just do it, sometimes automatically. In these situations, the rate at which the model makes prediction errors is low enough for us to assume that we understand the phenomenon. Of course, that makes rare prediction errors much more surprising, casting doubt on such assumptions.

Conversely, when we keep failing to predict what’s going to happen next with a thing we’re observing, we say that we don’t understand it. Our mental model of it is too incorrect, incomplete, or both, producing a high prediction error rate. Another related notion here is “legibility”. When we say something is legible, we tend to imply that we find it understandable, and vice versa, when we say that something is illegible, our confidence in understanding it is low. Think of legibility as a first-order derivative of understanding: it is our prediction of whether we can construct a low-prediction error rate model of the phenomenon.

Rolling along with this idea of predicting the predictability of a mental model, I’d like to bring another definition to this story and define “understanding of a problem.” I will do this in a way that may seem like sleight of hand, but my hope here is to both provide a usable definition and illuminate a tiny bit more of the abyssal depths of the nature of understanding. Here goes. Understanding a problem is understanding a phenomenon that includes us, the phenomenon that is the subject of our intention, and our intention imposed on it. See what I did there? It’s a definition turducken. Instead of producing something uniquely artisanal and hand-crafted, I just took my previous definition and stuffed it with new parameters! Worse yet, these parameters are just components of my previous definition of a problem: me, my intention, and the thing on which I impose that intention. However, I believe this is, as we say in software engineering, “working as intended”.

First, the definition provides a useful mental model of how we understand problems. We need to understand the problematic phenomenon and we need to understand the different ways we can influence it to make it less problematic. If I want a tennis ball to smack into a tree in my backyard to scare away the bunny eating my carrots — obviously I don’t want to harm the cute bunny! — I need to know how I can make that happen. Just like we’ve seen with legibility, understanding of a problem is a first-order derivative of understanding of a phenomenon. It’s the understanding of agency: how do I and the darned thing interact and what are my options for shifting it toward some future state that I intend for it?

Second, this definition exposes something quite interesting: once we see something as a problem, we entangle ourselves with it. If something is a problem, our understanding of it always includes understanding – a predictive model! – of ourselves. This model doesn’t have to be complete. For example, to throw a ball at a tree, I don’t need to have a deep understanding of my inner psyche. I just need to understand how my arm throws a ball, as well as how far and how accurately I can throw it.

Third, we can see that, under the influence of our intention, phenomena appear to form systems: interlinked clusters of mental models that are entangled with each other. And it’s usually our mental models of ourselves that sit at the center of these entanglements, holding all of the clusters together and forming the network of mental models that I mentioned a few articles back. This might not seem profound to you, but it was a pretty revelatory learning for me. Our intentions are what establish our ever-more-complex network of mental models. Put differently, without us preferring some outcomes over others, there is no need for mental modeling.

There’s probably another force at play here: intention could also be our models influencing us. When we construct a model, we have two ways to interpret prediction errors. One is to treat them as information to incorporate into the model. Another is to treat them as a manifestation of a problem, a misalignment between the environment (mistaking the environment for our model of it) and our intention. Can we reliably tell these apart? It is possible that many of our intentions are just our unwillingness to incorporate the prediction error.

Finally, the “incomplete model” bit earlier is a hint that understanding is a paradox. The pervasive interconnectedness that we encounter in trying to understand the world around us reveals that there’s rarely such a thing as a single phenomenon of which to construct a mental model. If we attempt to model the whole thing, we run into the situation that I call Sagan’s Pie: “If you wish to make an apple pie from scratch, you must first invent the universe.” A recognition that we ourselves are part of this universe flings us toward the asymptote of understanding. So, our ability to understand some things depends on us choosing not to understand other things, by drawing distinctions and breaking phenomena down into parts of the whole — and intentionally remaining ignorant of some. We strive to understand by choosing not to.

To clear the fog of philosophy somewhat, let’s distill this all into a couple of takeaways that we’ll stash for later use in this adventure:

  • Understanding is iterative mental modeling, informed by prediction errors
  • Problem understanding is about modeling ourselves in relation to the problem
  • Problem understanding is also — and perhaps more so — about what we choose not to understand

A problem

To give the newly born decision-making framework a more solid grounding, we need to understand what a problem is. Let’s begin with a definition. A problem is an imposition of our intention on a phenomenon.

I touched on this notion of intention a bit in one of the Jank in Teams pieces, but here’s a recap. Our mental models generate a massive array of predictions, and it appears that we prefer some of these predictions to others. The union of these predictions manifests as our intention. When we observe any phenomenon, we can’t help but impose our intention on it. The less the predicted future state of the phenomenon aligns with our intention, the more of a problem it is.

For example, suppose I am growing a small garden in my backyard. I love plants and they are amazing, but if they aren’t the ones that I intended to grow on my plot, they are a problem. Similarly, if I am shown a video of a cute bunny eating a carrot, I would not see the events documented by the videographer as a problem. Unless, that is, I am told that this video was just recorded in my garden. At this very instant, the fluffy animal becomes a harvest-destroying pest – and a problem.

I like this definition because it places problems in the realm of subjectivity. To become problems, phenomena need to be subject to a particular perspective. A phenomenon is a problem only if we believe it is a problem. Even world-scale, cataclysmic events like climate change are only a problem if our preferred future includes a thriving humanity and life as we know it. I also like how it incorporates intention and thus a desire to impose our will on a phenomenon. When we decide that something is problematic, we reveal our preferences about its future state.

Framing problems as a byproduct of intentionality also allows us to play with the properties of intention to see how they shift the nature of a problem. Looking at the discussion of the definition above, I can name a couple of such properties: the strength of intention and the degree of alignment. Let’s draw – you know it! – a 2×2, a tool that represents the continuous spectrum of these property values by their extremes. The vertical axis will be the degree of alignment between the current state of the phenomenon and our intention imposed on it. The horizontal axis will represent the strength of our intention.

In the top-left quadrant, we’re facing a disaster. A combination of strong intention and poor alignment means that we view the phenomenon as something pretty terrible and looming large. The presence of a strong intention tends to have this quality. The more important it is for a phenomenon to be in a certain state, the more urgent and pressing the problem will feel to us. Another way to think of the strength of intention is how existential the fulfillment of this intention is for us. If I need my garden to survive through the winter, it being overrun by a horde of ravenous bunnies will definitely fit into this quadrant.

Moving clockwise, the alignment is still poor, but our intention is not that strong. This quadrant is a mess. This is where we definitely see that things could be better, but we keep not finding time on our schedule to deal with the situation. Problems in this quadrant can still feel large in scope, indicating that the predicted future state of the phenomenon is far from the state we intend it to have. It’s just that we don’t experience the same existential dread when we survey them. Using that same garden as an example, I might not like how I planted the carrots in meandering, halting curves, but that would be a mess rather than a disaster.

The bottom-right quadrant is full of quirks. The degree of intention misalignment is small, and the intention is weak. Quirks aren’t necessarily problems. They can even be a source of delightful reflection, like that one carrot that seems to stick out of the row, seemingly trying to escape its kin.

The final quadrant is the bread and butter of software engineers. The phenomenon’s state is nearly aligned with our intention, but the strength of our intention makes even a tiny misalignment a problem. This is the bug quadrant. Fixing bugs is a methodical process of addressing relatively small but important problems within our code. After all, if the bug is large enough, it is no longer a bug, but a problem from the quadrant above – a disaster.
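To make the 2×2 a bit more tangible, here is a hedged sketch in TypeScript. The numeric scales and the 0.5 threshold are illustrative assumptions of mine, not part of the framework; only the four quadrant names come from the discussion above.

```ts
// A minimal sketch of the 2x2: classify a problem by alignment and strength
// of intention. The 0..1 scales and the 0.5 threshold are assumptions made
// for illustration, not part of the original framing.
type Quadrant = "disaster" | "mess" | "quirk" | "bug";

function classifyProblem(alignment: number, strength: number): Quadrant {
  const aligned = alignment >= 0.5; // how close the phenomenon is to our intention
  const strong = strength >= 0.5;   // how existential the intention feels
  if (!aligned && strong) return "disaster"; // poor alignment, strong intention
  if (!aligned && !strong) return "mess";    // poor alignment, weak intention
  if (aligned && !strong) return "quirk";    // good alignment, weak intention
  return "bug";                              // good alignment, strong intention
}

console.log(classifyProblem(0.1, 0.9)); // "disaster" -- bunnies overrunning the winter garden
console.log(classifyProblem(0.9, 0.9)); // "bug" -- small but important misalignment
```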

Mental models

I use the term “mental models” a lot, and so I figured – hey, maybe it’s time to do some semantic disambiguation and write down everything I learned so far about them?

When I say “mental model,” I don’t just mean a clean abstraction of “how a car works” or “our strategy” – even though these are indeed examples of mental models. Instead, I expand the definition, imagining something squishy and organic and rather hard to separate from our own selves. I tend to believe that our entire human experience exists as a massive interconnected network of mental models. As I mentioned before, my guess is that our brains are predictive devices. Without our awareness, they create and maintain that massive network of models. This network is then used to generate predictions about the environment around us. Some of these models indeed describe how cars work, but others also help me find my way in a dark room, solve a math problem, or prompt the name of the emotion I am feeling in a given moment. Mental models are everything.

Our memories are manifestations of mental models. The difference between remembering self and experiencing self is in the process of incorporating our experiences into our mental models. What we remember is not our experiences. Instead, we recall the reference points of the environment in that vast network of models – and then we relive the moment within that network. Our memories are playing back a story with the setting and the cast of characters defined by our mental model.

This playback experience is not always like that black-and-white flashback moment in a movie. Sometimes it shows up as an annoying earworm, or sweat on our palms in anticipation of a stressful moment, or just a sense of intuition. Mental models are diverse. They aren’t always visual or clothed in rational thought, or even conscious. They usually include sensory experiences, but most definitely, they contain feelings. Probably more accurately, feelings are how our mental models communicate. A “gut feeling” is a mental model at work. Feelings tell us whether the prediction produced by a mental model is positive (feels good) or negative (feels bad), which is arguably the most important information encapsulated in the model. Sometimes these feelings are so nuanced and light that we don’t even recognize them as feelings – “I like this idea!” or “Hmm, this is weird, I am not sure I buy this” – and sometimes the feelings are touching-the-hot-plate visceral. Rational thinking is us learning how to spelunk the network of mental models to understand why we are feeling what we’re feeling.

One easy way to think of this network of models is as a massive, parallel computer that is always running in the background, whether we’re asleep or awake. There are always predictions being made and evaluated. Unlike computers, our models aren’t structurally fixed. As we grow up, our models evolve, not just by getting better, but also in the means by which they are created and organized. We can see this plainly by examining our memories. I may remember a painful experience from the past as a “terrible thing that happened to me” at first, and then, after living for a while, that “terrible thing” somehow transforms into “a profound learning moment.” How did that happen? The mental model didn’t sit still. The bits and pieces that comprised the context of the past experience have grown along with me, and shifted how I see my past experience.

We can also see that if my memory hasn’t changed over time, it’s probably worth examining. Large connected networks are notoriously prone to clustering. The seemingly kooky idea of the “whole self” is probably rooted in this notion that mental models are in need of gardening and deliberate examination. When I react to something in a seemingly childish way, it is not a stretch to consider: maybe the model I was relying on in that moment indeed remained unexamined since childhood? And if so, there’s probably a cluster within my network of mental models that still operates on the environment drawn by a three-year-old’s crayon. This examination is a never-ending process. Our models are always inconsistent, sometimes a little, and sometimes a lot.

When I see a leader ungraciously lose their cool in a public setting, the thought that comes to mind is not whether their behavior is “right” or “wrong,” but rather that I’ve just been witness to a usually hidden, internal struggle of inconsistent mental models.

Our models never get simpler. I may discover a framing that opens up new room in a previously constrained space, allowing me to find new perspectives. Others around us are at first simple placeholders in our models, eventually growing into complex models themselves, models that recurse, including the complexity of how these others think of us and even perhaps how they might think we think of them (nested models!) Over time, the network of models grows ever-more complex and interconnected. At the same time, our models seamlessly change their dimensionality. Fallback fluidly influences the nuance of model complexity, and thus the predictions that come up. Fallback is a focusing function. If my body believes I am in crisis, it will rapidly flatten the model, turning a nuanced situation into a simple “just punch this guy in the face!” directive — often without me realizing what happened.

I am guessing that every organism has a kind of mental model network within it. Even the simplest single-cell organisms contract when poked, which indicates that there’s a — very primitive, but still — predictive model of the environment somewhere inside. It is somewhat of a miracle to see that humans have learned to share mental models with such efficiency. For us, sharing mental models is no longer limited to a few behaviors. We can speak, write, sing, and dance stories. Stories are our way to connect with each other and share our models, extending already-complex networks way beyond the boundary of an individual mind. When we say “a story went viral,” we’re describing the awe-inspiring speed at which a mental model can be shared. Astoundingly, we have also learned to crystallize shareable mental models through this phenomenon we call technology. Because that’s what all of our numerous aids and tools and fancy gadgets are: the embodiments of our mental models.

This is what I mean when I say “mental models.” It may seem a bit useless to take such a broad view. After all, if I am just talking about leadership, engineering, or decision-making, it’s very tempting to stick to some narrower definition. Yet at the same time, it is usually the squishy bits of the model where the trickiest parts of making decisions, leading, or engineering reside. Ignoring them just feels like… well, an incomplete mental model.

The platform two-step

When we last saw Uncle Steve on his ladder, I left him a set of instructions about a less-risky way to adjust the ladder. Interestingly, these instructions roughly translate into a fairly useful technique for platform developers. I call it the platform two-step after a dance move, because platform development is a lot like dancing: both a skill and an art, and only fun when you’re into it.

This technique is something I find employed quite often by teams that make developer surfaces. Like the dance move, it is fairly intuitive and goes like this: if you desire to change an existing API, you first introduce a new API so that the two ship in parallel (one foot steps out), then remove the old API (the other foot joins the first one). Simplifying it even more: to change, add one, then remove one.
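Here is a hedged sketch of what the first step can look like in code (hypothetical API names in TypeScript, not any particular platform’s surface): the old entry point keeps shipping, marked deprecated and delegating to its replacement, while the second step, removing it, comes much later.

```ts
// Step one of the platform two-step: ship the new API alongside the old one.
// (Hypothetical API names -- a sketch, not any particular platform's surface.)

interface Widget {
  id: string;
  label: string;
}

// Stand-in for the real work; the details don't matter for the two-step itself.
async function loadWidgetFromNetwork(id: string, timeoutMs: number): Promise<Widget> {
  return { id, label: `widget ${id} (loaded within ${timeoutMs}ms)` };
}

/** New API: the improved take, shipped in parallel with the old one. */
export function fetchWidget(id: string, options?: { timeoutMs?: number }): Promise<Widget> {
  return loadWidgetFromNetwork(id, options?.timeoutMs ?? 5000);
}

/**
 * Old API: keeps shipping so existing callers don't break.
 * @deprecated Use fetchWidget() instead; scheduled for removal in a future release.
 */
export function getWidget(id: string): Promise<Widget> {
  console.warn("getWidget() is deprecated; migrate to fetchWidget().");
  return fetchWidget(id); // delegate to the new API
}

// Step two -- actually deleting getWidget() -- happens only after usage
// telemetry shows it is safe, which can take years.
```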

This feels fairly straightforward, and appears to work nicely around the hazards of the moving-the-ladder scenario we encountered with Steve. At the core of the technique is the realization that once shipped, APIs are basically immutable. Should we recognize that our first take on the API misses the mark, the only choice we have is to ship an improved API first, and then work to remove the first iteration.

In my experience, having patience between the two steps is where most platform teams struggle. While it may take time to ship a feature, the time to remove it is usually longer – and sometimes much longer. This was a hard-learned lesson for me. In early 2011, a few of us championed a set of specs we called Web Components, shipping them in 2014. We didn’t get it right and it took the Chrome Web Platform team until early 2020 to finally remove that iteration of the API. Some removals take even longer. The Web SQL Database is finally being removed this year, having enjoyed the “remaining foot to join” status for over a decade.

While teams bear this patience, a rather unhealthy dynamic tends to develop. Because the cost of removing APIs is so high, we can’t afford to ship APIs that miss the mark. Especially when already playing a catch-up game, every new feature of the layer that doesn’t inch the vector of intention toward user value is considered an opportunity cost. A typical symptom here is the platform developer’s edition of analysis paralysis, where new APIs are scrutinized extensively while under increasing pressure to “ship something, darn it!” APIs end up being over-designed and late because of the fear of missing the mark, which translates into more fodder for deprecation, which in turn saps more resources from API design.

I have observed teams stuck in that vicious cycle and I have been in them myself. The seemingly glacial pace can be quite frustrating and make it appear as if the whole organization is about to seize up like a giant rusted machine. The toil leaves little room for long-term thinking, with everyone just barely putting one foot in front of the other. It’s less of a dance, and more of a cruel march.

My intuition is that the way out of this situation lies in learning how to deprecate effectively. To complete the two-step, I need to learn how to migrate developers from one API to another. Put differently, in the platform two-step, the most important practice hides between the steps. Deprecation is hard work, and may seem mostly useless, because it takes resources away for very little visible gain in the layer’s never-ending pursuit of adjusting its vector of intention. Folks who only measure a team’s success in features shipped will view deprecations as a distraction, or at best a “necessary evil.” If an organization’s culture was established during the rapid growth of a new-frontier platform, advocating for deprecation can be a tough sell.

When I want to know if a team knows how to evolve platforms, I look for things like the presence of deprecation processes, accompanying telemetry, and deprecation accomplishments celebrated on the same level (or above!) as shipping new features. A special bonus: a platform whose design itself considers deprecation, and perhaps even offers some novel insight that goes beyond this classic two-step technique. Perhaps a platform moonwalk of some sort?

Solved, solvable, unsolvable

I have been noodling on a decision-making framework, and I am hoping to start writing things down in a sequence, Jank-in-teams style. You’ve probably seen glimpses of this thinking process in my posts over the last year or so, but now I am hoping to put it all together into one story across several short essays. I don’t have a name for it yet.

The first step in this adventure is quite ambitious. I would like to offer a replacement for the Cynefin framework. Dear Cynefin, you’ve been one of the highest-value lenses I’ve learned. I’ve gleaned so many insights from you, and from describing you to my friends and colleagues. I am not leaving you behind. I am building on top of your wisdom.

This newly proposed framework is no longer a two-by-two. Instead, it starts out as a layer cake of problem classes. Let us begin the story with their definitions.

At the top is the class of solved problems. Solved problems are very similar to those residing in Obvious space in Cynefin: the problems that we no longer consider problems per se, since there’s a reliable, well-established solution to them. Interestingly, the solution does not have to be deeply understood to be a solved problem. Hammering things became a solved problem way before the physics that make a hammer useful were discerned.

Then, there is a class of solvable problems. Cynefin’s Complicated space is a reasonable match for this class of problems. As the name implies, solvable problems don’t yet have solutions, but we have a pretty good idea of what they will look like when solved. From puzzles to software releases, solvable problems are all around us, and as a civilization, we’ve amassed a wealth of approaches to solving them.

The final class of problems loosely corresponds to Complex space in Cynefin. These are the unsolvable problems. Unsolvable problems are just that: they have no evident solution. At the core of all unsolvable problems is a curious adaptive paradox: if the problem keeps adapting to your attempts at solving it, the solution will remain just out of reach. I wonder if this is why games like chess usually have a limited number of pieces and a clear victory condition. If the opponents are matched well enough, there must be some limit to make this potentially infinite game finite. Another way of thinking about unsolvable problems is that they are trying to solve you just as much as you’re trying to solve them.

You may notice that there is no corresponding match for Cynefin’s Chaotic space in this list. When describing Chaotic space, I’ve long recognized the presence of a clear emotional marker (disaster! emergency!) that seemed a bit out of place compared to how I usually described the other spaces. So, in this framework, I decided to make it orthogonal to the class of the problem. But let’s save this bit for later.

The interesting thing about all three classes is that they are a spectrum that I loosely grouped into three bands. Obviously, I tend to think in threes, so it’s nice and comfy for me to see the spectrum in such a way. But more importantly, each class appears to have a different set of methods and practices associated with it. You may already know this from our studies of Cynefin. Just think of how the effective approaches in Complex space differ from those in Complicated, and how both are different from those in Obvious. 

Still, it is also pretty clear that the transition between these classes is fuzzy. As my child self was learning to tie shoes, the problem slowly traversed across the spectrum. First, the tricky bendy laces that kept trying to escape my grasp (oh noes, unsolvable!?) became more and more familiar, while tying the crisp Bunny Ears knot, despite being clearly and patiently explained, was a challenge (wait, solvable!). Then, this challenge faded, and tying shoes became an unbreakable habit (yay, solved). This journey across the problem class layers is a significant part of the framework, and something I want to talk about next. 

Moving the ladder

Riffing on the cost of opinion piece, I realized that there’s a neat framing around opinion mutability and the underlying systems dynamic that’s worth mentioning.

One way to think about the settledness of an opinion is as finding a reasonable balancing point between the value and cost of the layer, where value compounds at roughly the same rate as cost. Opinions that find that point tend to be more settled, and opinions that are still looking for it are more transient.

Think of it as a spectrum. At one extreme are layers whose opinions have completely settled; at the other, layers where transient opinion churns like whitewater. The IP protocol and the TCP/UDP twins that sit on top of it are a pretty good example of settled opinions. Even though I am sure there are people somewhere trying to invent something more awesome than these, I am going to rate their displacement as “extremely unlikely.”

On the other hand, the Web continues to experience the effervescence of JavaScript frameworks that are born and discarded seemingly every month (week?), subject to fascinating natural-selection-like dynamics. It would not be at all controversial for me to suggest that the opinion at this layer hasn’t settled yet.

Why do some opinions settle? And why do some continue to shift? Stewart Brand has this wonderful concept of pace layers that is pretty instructive. The idea behind pace layers is that all complex systems tend to organize themselves in terms of layers, and each outer layer evolves at a faster pace than the one beneath it. Stewart even attempts to introduce a taxonomy of layers, which I am not going to use here. However, this notion that inner layers shift at a slower pace than the outer ones is very useful in this conversation. I am not a biologist or an ethnographer, so I can’t speak for forests or civilizations. However, based on my experience in developer ecosystems, their layers are almost always organized in a pace-layer-like fashion. It’s almost as if the spectrum I was describing earlier is actually a description of the developer layer stack: things that are settled sit at the bottom (think TCP/IP), things that are frothing with change are at the top (think JavaScript frameworks), while the layers between them span the gamut.

And I have a guess as to why this happens. As a thought experiment, let’s imagine that our layer is like a ladder – I know, it’s analogy time. Just like the layer’s vector of intention, our ladder is currently leaning next to the upstairs bedroom window, where our uncle Steve just finished cleaning it. Good work, Uncle Steve. Now, Steve wants to clean the bathroom window, which is a bit more to the left. The requirements changed, and now our vector of intention needs to adjust to point in a different direction. What do we do? Naively, we might say – let’s move the ladder! However, if we try to do that, we might hear some choice words from our uncle, who suddenly finds himself hanging on for dear life at the top of the ladder. Falling down is not fun. Instead, it is more common that Steve, the family daredevil that he is, chooses to lean out to reach the bathroom window. He might yet fall, but dammit, it will be on his terms. What our uncle just did there was layering. He’s okay, by the way, though Grams did see his stunt and will chew him out later. Uncle Steve created an extra layer on top of the ladder, and formed an opinion: an angle between the ladder’s vector of intention and his own. Even though the “rational” thing to do would have been to a) climb down the ladder, b) move it, c) climb it back up, he freakin’ chose to risk his life to save a bit of energy and time.

This move-the-ladder dynamic happens all the time in software layers. As a rule of thumb, upper layers prefer the lower layers to stay put. Waiting for the lower layer to adjust feels like giving up agency, so they tend not to wait. Instead, they expect the lower layer to remain roughly where it is now, perhaps refining some bits here and there, but not making any wild swings toward the bathroom window. Recursing this effect through the layers, a pace layer structure develops. Every lower layer has more uncle-steves yelping at it, demanding: “whoa, hold it there, you <redacted>”. Every change at a lower layer becomes a matter of painful negotiation that takes time and energy – and so the layers below tend to move at a much slower pace. It is fun to be the outermost layer, but as soon as anyone takes you as a dependency, the move-the-ladder dynamic starts to manifest itself. Every successful developer surface experiences it, and suffers through it.

Getting back to our notion of settled and transient opinions, I hope that Uncles Stewart and Steve provided enough illustration of the idea that opinions of the outer layers tend to be more transient, and the deeper they sit in the stack of layers, the more settled they become. That doesn’t make them right. Settled opinions can be patently, obviously wrong. However, given the full height of the ladders and the foulmouthed uncles that tower above them, changing those opinions involves a bucket of miracles.

Racing toward or running away

In this moment of flux that I’ve heard called the Great Resignation, it has almost become normal to regularly receive emails from my colleagues and friends about changes in their work. People leave one company to join another, some start their own, and some decide to retire. When I do get to chat with them, the question that I usually ask is whether they are racing toward or running away. Given the decision to make this change, is it more about leaving the current environment, or about entering a new one?

Neither of these is wrong or right, but the distinction might be useful to understand, especially in conditions as stressful as a job change. Having done a few of these in my career, I’ve learned to recognize that each brings a different mindset and a set of implicit expectations – and surfacing these expectations early usually does some good.

When I ask, folks often have an immediate answer. They can tell right away if the change they’re making is racing toward a new opportunity or running away from a setup they no longer find tenable. If you are blessed enough to contemplate such a change, and aren’t sure how you would answer this question, here’s a silly exercise that might be of use.

Ready? Ask yourself why you are currently working where you are. Answer as honestly as possible, trying to state it in terms of some larger reason behind your work. Once you have the first answer, see if it resonates deeply with you, excites you, gives you that sense of being aligned with your internal sense of purpose. If it doesn’t, keep asking the question. Why is this larger purpose important? This line of questioning may terminate early with an “Aha! This is exactly what I want to be doing with my life”, or take you toward some lofty ideals like “uplifting humanity”, or it may attempt to trap you in weird causal circles.

Now, do the same for the new opportunity. Does it follow a similar path? Is it crisp and brief, or even more convoluted than the current one?

Here’s my intuition. If the second string of “whys” is shorter than the first one, you are likely racing toward the new opportunity. If it’s the other way around, you’re likely running away from your current work situation. And if they’re both pretty long, then it might be worth looking at other opportunities and seeking ones that ring a bit more true with what you believe you’re meant to do with this life.

Awareness of interoception

Recently, I have been fascinated by a wonderful and mysterious part of being human – our interoceptive system. It’s this thing that we all have, but to which I had never paid any special attention. The interoceptive system is how we experience what happens inside of our body.

If we sit very quietly and try to draw our attention inward, we can start noticing that we can perceive all kinds of things going on within us. If we believe Antonio Damasio, the complete set of these experiences — or what we call feelings — plays a critical role in how we experience the world outside, how we show up, and indeed who we are. Even though it takes skill for us to consciously notice our feelings as distinct experiences in various parts of our body, our mind is well familiar with these signals, constantly and seamlessly relying on them. Things that prick our fingers feel bad, as do things that are too hot or too cold.

What I found particularly insightful is that our memories contain these experiences as well. Remembering an event when something bad happened actually feels bad – the interoceptive track of our memories replays how we felt during that moment. This leads to a bunch of weird effects.

For example, we can be afraid of feeling fear. Let’s chew on that one together. Suppose I walked under a tree… face-first into a spider web. Yuck. I am not a fan of spiders, so my interoceptive system would immediately inform me that this is a scary experience. Next time I go near that tree, something odd will happen. I will have this inkling that maybe I don’t want to go under that tree. What’s going on? Turns out, upon seeing the tree, my memory of the encounter with the spider will helpfully pop up, and replay the dose of fear I experienced. I will probably explain it as “intuition” or “good judgment” to walk around the tree, but more honestly, I will be reacting to the experience of an interoceptive memory. I will be afraid of feeling that experience again. 

Even more bizarrely, the whole thing is path-dependent: the new memory of choosing to walk around the tree will include the interoceptive experience of the newly experienced fear of feeling that first fear, and so on. This stuff can get rather gnarly and turn unproductive really fast. Maybe I shouldn’t walk under any trees at all. Or staircases. Or covered porches. Spiders could be anywhere.

Of the many moments I am not proud of, there was that one time when I needed to give a colleague of mine some really uncomfortable feedback. We were sitting right across from each other, and I just needed to lean over and say: “hey, can we talk?” And I couldn’t. I just sat there, looking at my colleague’s back, paralyzed. I was overcome by the spiral of fear of feeling fear of feeling fear, folding over and over onto itself.

Another weird effect is a similar kind of vicious cycle of our minds collaborating with our body to rationalize negative feelings. If you ever woke up from a bad dream you couldn’t even remember and then had trouble going back to sleep, this will be familiar to you. The thing is, our minds are exceptionally good at association. Whenever our interoceptive system informs us that something of negative valence (that is, something that feels bad) is happening, the mind eagerly jumps into the fray, helpfully finding all the similar interoceptive experiences from our past. In doing so, those experiences are replayed, exacerbating our interoceptive state, which feeds back into our minds looking up more and more terrible entries in the great database of “crappy stuff that happened to us.”

If this resonates with you and you’re curious about how to put an end to this drama, I have both good news and bad news. I’ll start with the bad news. This stuff happens to us pretty much all the time and will continue to happen, no matter how rationally we aspire to behave. Feelings are us. The somewhat good news is that we can learn to be more aware of our interoceptive system and apply that awareness to reduce the intensity of the vicious cycles. I can’t stop my interoceptive system from blaring klaxons, but I can learn to react to them more productively. The whole awareness thing takes effort and practice, but seems to work – at least, in my experience.

Choosing kale

Chatting with my friends about choosing developer frameworks, we accidentally arrived at this kale metaphor. It sounded witty and fun, so I’ll try to unpack it as best as I can. To begin, I will apply the value triangle lens from a while back, going over the edges of the triangle. I only need them as examples of the improbable extremes. What’s up with all the triangles in my posts lately?

A framework that favors the user/ecosystem edge will present itself as the promise of a high-minded idea, then quickly reveal itself to be a toy upon close examination – kind of like one of those fake fruits arranged on a dinner table to spruce up the interior. As a child, yours truly was once lucky enough to taste one of those fruits and learn a valuable lesson about appearances. Yum, stale wax.

Building on the tasting metaphor, the combination of business and ecosystem value usually produces frameworks whose taste is best described as… cardboard, I guess? There are definitely important ingredients like fiber in there, but the dietary value and enjoyment are nigh nil. Large companies tend to produce these frameworks for internal use, and almost without fail, the quality of their developer experience tends to wind downward with time. These frameworks aren’t picked. They grow in place.

When a framework sits on the edge of user and business value, it usually tastes like candy. It’s downright addictive to use and makes everyone look good. Sadly, as we know from the value triangle discussion, the consequences of this sugar high are usually borne by the ecosystem – which eventually gets back to users. The long feedback loop of ecosystem effects creates a double-hook: if I only plan to stay on this team for a couple of years, there aren’t any downsides for me. I can just pick the hottest framework I like. Let the successors sweat the incurred debt.

It is my guess that when trying to find a framework that will work well for a team in the long term, the prudent choice will taste something like kale. Like the nutritious leafy vegetable, it won’t seem like an easy pick compared to other choices.

Such a framework will tend to look a bit boring compared to the other contenders, less opinionated. Instead, it will likely carefully manage its cost of opinion in relation to the underlying platform — and as a result, keep the papering-over of the platform’s rough edges to a minimum. Expect a couple of wart-looking things here and there. Make sure they are indeed the outcome of a well-budgeted opinion. Keep in mind the cardboard extreme – a good-for-you framework doesn’t have to taste bad.

The values of a kale framework will likely point toward concerns around the larger ecosystem, rather than focusing directly on the quality of the developer experience. This is usually a good kale marker: instead of promising how great the taste will be, there will be a focus on long-term health. Of course, please do the due diligence of taking the framework for a spin and making sure it’s not just a decorative ornament. Give it a few bites to ensure it’s not made of wax.

The kale-like choice may also be somewhat out of alignment with the team’s current developer practices. Misalignments don’t feel great. However, if you are looking to improve how your organization ships products, a framework is a powerful way to influence the development norms within the organization. In such cases, the misalignment is actually a tilt that steers the organization toward desired outcomes. For example, if my team is currently stuck in the anti-pattern of erecting silos of widget hierarchies for each new app, choosing a framework that encourages shared components might seem eccentric and inefficient, but it may eventually lead to breaking out of that stuckness.

I hope these musings will help you generate some insights in your next search for the right framework. And lest I succumb to the normative voice of a recommender, I hope you use these insights to find your own definition of what “kale” means in your environment. May you find the strength to choose it.