Cutting or highlighting?

In a conversation with one of my wise colleagues, we arrived at this neat framing around making decisions. “Decision” is a weird word. My friend Neel tells me that its Latin root literally means “to cut away,” and the recognition that a decision is always about narrowing the list of available options can be both liberating and anxiety-inducing. The interesting bit is that sometimes, decisions aren’t meant to cut away.

As often happens in larger organizations, we tend to live in the swirl of short-term and long-term objectives. And it is definitely a swirl: the art of leadership is picking out the right mix. Blend in too much short-term, and we’ll find the team stuck in the corner of a local maximum, overconstrained by its previous choices. Blend in too much long-term, and the team fails to make the progress that’s necessary to keep that motor running.

To adjust the mix, leaders decide. A common mechanism here is prioritization: picking a shorter list of things that the team needs to focus on. One approach to prioritization is to apply the cutting mindset, as in cutting the list in half: keep what’s above the line and discard the rest. For example, let’s suppose I want to build an app that is available on both Android and iOS phones, but I want to prioritize iOS users. Applying this cutting mindset, I just forget about Android for a while, break out my Xcode and start typing some Swift. Only after the iOS version is shipping do I start looking at the rest of the list. Given how many unexpected turns a typical software project takes, will I ever get to do that? Maybe. Will this approach result in a painful migration or two — or worse yet, the “release polka” where my app always looks out-of-date on one platform compared to the other? Probably.

The cutting mindset is straightforward and clarifying. Yet, in situations where the rest of the list is still something we want to do later, it often leads to inferior choices. In these situations, we need something different. Instead of cutting the list, we want to highlight the items on which we want to focus — while still keeping the rest of the list in mind. With this highlighting mindset, the choices we make take on a different spin. Instead of asking “what’s the next step to deliver <prioritized item>?” we ask “how can we take a step toward delivering <prioritized item> that also takes us closer to completing the whole list?” The thing is, the items on our lists rarely are orthogonal or live in clean, separate boxes. More often than not, considering them as a whole reveals opportunities for advancing toward the completion of multiple items simultaneously. In my app example, while still focused on the iOS release first, I might consider picking a UI framework that also runs on Android, or at least build my middleware in a way that is portable.
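To make that more concrete, here is a minimal sketch of the portable-middleware idea, under assumptions of my own: the business logic lives in a plain JavaScript module with no platform-specific imports, so it could be hosted by the iOS app today (for example, via a cross-platform framework such as React Native) and by the Android app later. The module name, function names, and endpoint are all hypothetical.

```javascript
// checkout.js — hypothetical portable middleware: pure logic, no UI, no platform APIs.
// Either client (iOS first, Android later) can host this module unchanged.
export function cartTotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

export async function submitOrder(items, fetchImpl = fetch) {
  // The network call is injected, so each platform can supply its own transport.
  const response = await fetchImpl('https://api.example.com/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items, total: cartTotal(items) }),
  });
  return response.json();
}
```

The point isn’t this particular stack; it’s that the step I take toward the prioritized item also leaves the door open for the rest of the list.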

When making prioritization decisions, it’s worth being explicit about the mindset with which you’re approaching them, and discussing with the team why you chose one over the other. Otherwise, despite your desire to highlight, an eager PM might swiftly cut your long-term objectives out of the mix. Or conversely, the team will continue to swirl aimlessly in the ideals you have already forgotten about, from back when you thought you had cut them away.

Making cross-cutting investments

Over the last few weeks, I’ve been thinking about approaches to influencing projects and teams across a large organization. I framed the exercise like this: suppose I have some organizational currency (headcount!) to invest. How do I nudge project trajectories in the most effective way?

Then, I talked to a bunch of peeps, looking for forces present in this problem space. There are probably many more, but these two piqued my interest. The first one has to do with the term of the investment: do I want to invest in several different things over time, or do I mostly want to keep investing in the same thing? The second one has to do with how much control I want to have over the structure of the investment: how much steering do I want to do with my investment? Mapping these two forces into thinking space, a few recognizable clusters emerge.

A good example of low-control, permanent investment is donation. I find a team, recognize that it is doing important work and decide to help them by adding to their capacity to hire new folks. Based on my experience, this is more or less a permanent investment. Withdrawing donated headcount tends to be painful for all parties involved. Nevertheless, if the team’s goals are largely aligned with mine over the long term, and I have no qualms with their strategy, it’s a pretty good fit.

If I want a more temporary engagement, I need a different approach. One is to temporarily augment a team with a group of folks to accelerate a particular aspect of the work. It’s exciting to imagine that such a team will drop in and race forth with uncanny precision. However, in orgs that have strong engineering practices and structures, augmentation is first and foremost a matter of learning to follow those practices and fitting into existing structures. “Who will review your CLs?” is the question to contemplate when considering the augmentation setup. Augmentations work well in homogeneous organizations, where the incoming folks already know the engineering practices and can review CLs themselves. Otherwise, this investment tends to offer less control than anticipated.

To gain a bit more control without permanence, I will likely try to incubate a team: seed it with good peeps, set up a resilient structure to help it stay on course, get it past the early growing pains, and let it go. Variants of this approach are found in research organizations and idea incubators, and I’ve seen it work. In the couple of times that I participated in the process, the biggest challenge was finding the right fit for the graduating team and then shepherding the team through an often painful reintegration. At least to me, incubation felt more like an art than a repeatable process, but that might just be my lack of experience.

Finally, if I am seeking to invest in the long term while retaining high control, I am probably productizing, or reframing my desire to help in terms of a developer-facing product: a tool, a library/framework, an SDK, etc. This product must be good enough for the teams to want to rely on — and good enough to get them the results I want them to get. Note that this end result is a second-order effect (first, they want to use it; second, they produce the desired outcomes), which is what makes this approach so challenging. On the other hand, precisely because of the indirection, this approach has something that no other approach offers: the ability to influence multiple teams. Productizing is typically more demanding than the other approaches. It takes more effort and capacity to build an effective team that reliably ships a successful developer product and has the resilience to keep an eye on the outcomes I need. That last one is important. It takes just a little bit of stress and firefighting to fall back into the “let’s make developers happy” mode and forget the whole point of the exercise.

Heuristics will be discerned and codified

Here’s another developer experience pattern that I’ve noticed. When designing APIs, we are often not sure how they will be used (and abused), and want to leave room for maneuvering, to retain a degree of agency after the API is in widespread use. One tempting tool we reach for is heuristics: removing the explicit levers to switch on and off or knobs to turn from the developer surface, and instead relying on our understanding of the situational context to make decisions ourselves. Unfortunately, when used with developer surfaces, heuristics tend to backfire. Developers who want those levers and knobs inevitably find ways to make them without us. And in doing so, they remove the very agency we were seeking in the first place.

Because heuristics are so tempting, this pattern is very common. Here’s the most recent example I’ve stumbled upon. Suppose you are a Web developer who wants to create an immersive experience for your users, and for that, you want to make sure that their devices never go to sleep while they are in this experience. There’s an API that enables that, called the Wake Lock API. However, let’s imagine that I am a browser vendor who doesn’t want to implement this API because I might be worried that developers will abuse it. At the same time, I know that some experiences do legitimately call for the device screen to stay awake. So I introduce a heuristic: stay awake if the Web site contains a playing video. Great! Problem solved. Except… you want this capability in a different scenario. So what do you do? You discern the heuristic, of course! Through careful testing and debugging, you realize that if you put a tiny useless looping video in the document, the device will never go to sleep. And of course, now that you’ve discerned the heuristic, you will share it with the world by codifying it: you’ll write a tiny hosted API library that turns your hard-earned insight into a product. With the Web ecosystem being as large as it is, the library usage spreads and now everyone uses it. Woe to me, the browser vendor. My heuristic is caught in the amber of the Web. Should I try to change it, I’ll never hear the end of it from angry developers whose immersive experiences suddenly start napping.
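Here is a minimal sketch of what such a codified heuristic might look like. The library name, function names, and the tiny video asset are all hypothetical, though real libraries (NoSleep.js is a well-known example) take a very similar approach:

```javascript
// keep-awake.js — a hypothetical library that codifies the "playing video keeps
// the screen on" heuristic. It assumes the page ships a tiny, silent, looping
// video file (a stand-in asset name below).
export async function keepAwake() {
  const video = document.createElement('video');
  video.src = '/tiny-silent-loop.webm'; // hypothetical near-empty video
  video.muted = true;
  video.loop = true;
  video.setAttribute('playsinline', ''); // keep playback inline on mobile browsers
  video.style.cssText = 'position:absolute;width:1px;height:1px;opacity:0';
  document.body.appendChild(video);
  // Most browsers require a user gesture before playback starts, so call this
  // from a click or touch handler.
  await video.play();
  return () => {
    video.pause();
    video.remove();
  };
}
```

By contrast, a browser that does ship the API lets the developer ask for the same thing directly, with navigator.wakeLock.request("screen"): an explicit lever instead of a discerned one.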

It’s not that heuristics are a terrible tool we should never use. It’s that when we decide to rely on them in lieu of developer surface, we need to anticipate that they will be discerned and codified — sometimes poorly. This means that if we rely on heuristics to buy some extra flexibility in our future decisions, we’re likely to get the opposite outcome — especially in large developer ecosystems.

Hosting and hosted API design perspectives

When discussing API design strategies, I keep running into this distinction. It seems like a developer experience pattern that’s worth writing down.

Consider these two perspectives from which API designers might approach the environment. The first perspective presumes that the API implementation is hosting the developer’s code, and the second that the API implementation is being hosted by the developer’s code.

From the first perspective, the API designer sees their work as making a runtime/platform of some sort. The developer’s code needs to somehow enter a properly prepared environment, execute within that environment, consuming the designed APIs, and then exit the environment. A familiar example of designing from this perspective is the Web browser. When the user types the URL, a new environment is created, then the developer’s code enters the environment through the process of loading, and so on. Every app (or extension) platform tends to be designed from this perspective. Here, the developer’s code is something that is surrounded by the warm (and sometimes not very warm) embrace of the APIs that represent the hosting environment.
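As a small illustration of this perspective (my choice of example; any app or extension platform would do), consider a service worker: the browser creates the environment, loads the developer’s code into it, and calls that code only through events the platform defines.

```javascript
// service-worker.js — the browser (the host) creates this environment, loads our
// code into it, and invokes it through events it defines. We never call "main".
self.addEventListener('install', (event) => {
  // The host decides when installation happens; we only react to it.
  event.waitUntil(caches.open('v1').then((cache) => cache.add('/index.html')));
});

self.addEventListener('fetch', (event) => {
  // Again, the host calls us; we consume its APIs (caches, fetch) and hand back a response.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```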

When I design APIs from the second perspective, the developer’s code is something that hosts my code. I am still offering an API that is consumed by someone else, but I don’t set the rules or have opinions on how the surrounding environment should work. I just offer the APIs that might be useful. Typically, this perspective results in designing libraries and frameworks. For example, I might write a set of helper functions that provide better handling of date math in Javascript. This tiny library can run in any Javascript environment, be it server or client. It can be hosted by any app or site that needs date math. This “run wherever, whatever” is a common attribute of this API design perspective.
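And here is a sketch of the hosted perspective, continuing the date math example (the helper names are mine, purely for illustration):

```javascript
// date-math.js — a hypothetical hosted library: no assumptions about its
// environment, just helpers that run wherever JavaScript runs.
export function addDays(date, days) {
  const result = new Date(date.getTime());
  result.setDate(result.getDate() + days);
  return result;
}

export function daysBetween(from, to) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.round((to.getTime() - from.getTime()) / msPerDay);
}
```

Nothing in it assumes a document, a server, or any particular host; whatever code picks it up sets the rules.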

There is a growth/control tension that seems to map onto these two perspectives. The hosting perspective exhibits the attitude of control, while the hosted perspective favors the force of growth. As with any tension, complexity arises along the spectrum between the two.

A Javascript framework (a hosted API) that has strong opinions about its environment (wanting to be a hosting API) will have challenges maintaining this environment, since it is ultimately incapable of creating it. Back in the day when I still worked on the Web Platform, I had many discussions with framework authors who wanted us Web Platform folks to give them the option to create clean environments. This desire to shift from hosted to hosting was not something I recognized back then, and I now wish this article had existed to help me reason through the struggle.

Similarly, a hosting API that wants to grow will be pressed to make the environment more flexible and accommodating. Going back to the example above, we Web Platform folks were experiencing that pressure, the force that was pulling us away from hosting and toward a hosted API design perspective. After that shift, the code that renders Web pages — the fundamental building block of the Web Platform environment — would become just one of the libraries to pick and choose from.

It is also important to note that, using Hamilton Helmer’s classification, the existence of a hosting environment is a form of cornered resource. It only becomes possible when the API designer has the luxury of a significant number of willing hosted participants. In the absence of eager hordes of developers knocking on your door, taking a hosting API design perspective is a high-miracle-count affair. When thinking about this, I am reminded of several ambitious yet ultimately unsuccessful efforts to “create developer ecosystems.” There are ways to get there, but starting out with the hosting API design perspective is rarely one of them.

The fractal dragon

I’ve already shared this piece elsewhere, but might as well post it here. This story is almost like a piece of fantasy fiction that’s waiting to be written — and a metaphor to which I keep coming back to describe flux.

Imagine a warrior who sets out to slay the dragon. The warrior has the sharpest sword, the best armor, and is in top physical shape. The dragon is looming large, and the warrior bravely rushes forth. What the warrior doesn’t know is that this is a fractal dragon. You can’t defeat a fractal dragon with a sword. So our courageous, yet unwitting warrior lands mighty blows on the dragon, moving in perfect form, one with the sword. You might look at this feat of precision and go: wow, this warrior is so amazing at crushing the dragon into a million bits! Look at them go! Except… each bit is a tiny dragon-fractal. A few hundred more valiant slashes and the warrior will be facing a new kind of opponent: the hive of the dragon sand. The warrior’s blade will whoosh harmlessly through the sand, and all we can hope is that the warrior has dragon-sandblast-proof defenses (hint: nope).

This weird effect of flux is something that we engineering souls need to be keenly aware of. When we find ourselves in that confident, exhilarating problem-solving mindset, it is on us to pause and reflect: are we perchance facing the fractal dragon? Will each “solution” create an army of different kinds of problems, each more and more immune to the tools we applied to the original problem? And if/when we recognize the fractal dragon, do we have access to tools that aren’t the mighty sword we’re so fond of?

Causal chains

My experience is that most folks around me (myself included) enjoy employing the power of causality to understand various phenomena. There’s something incredibly satisfying about establishing a sound causal chain. Once the last piece of the puzzle clicks into place, there’s nothing like it. Back when I still worked directly in code, some of my fondest memories were of tracking down the causes of bugs. I remember once, we shipped a version of Chrome and suddenly, people started having the freakiest of crashes. Like, I spent a few days just staring at traces trying to comprehend how that might even be possible. However, as more information (and frustrated users) piled up, the long causal chain slowly coalesced. This happens, then this, then that, and — bam! — you get a sad tab. I still remember the high of writing the patch that fixed the crash. The grittiest of bugs have the longest causal chains, which always made them so much fun to figure out.

At the same time, there are causal chains that we perceive as incredibly short. Reach for a cup – get a drink. Press a key to type a letter in a doc. They might not actually be short (by golly, I know enough about HTML Editing and how Google Docs work to know otherwise) — but to us, they are simple action-reaction chainlinks. We see them as atomic and compose the causal chains of our life stories out of them.

We engineers love collapsing long causal chains into these simple chainlinks: turning a daunting process into a single action. My parents reminded me recently of how much harder it used to be to send emails before the Internet. I had forgotten the hours I spent traversing FIDO maps, crafting the right UUCP addresses, and teaching my Mom how to communicate with her colleagues — in another city! Electronically! Nowadays, the Wizardry of Email-sending has faded away into the background, replaced with agonizing over the right emoji or turn of phrase. And yes, adding (and encoding) emojis also used to be a whole thing. A poetic way to describe engineering might be as the craft of seeking out long causal chains and collapsing them into simple chainlinks, crystallizing them into everyday products.

Based on my understanding of the human brain, this is not too dissimilar from how it works. I am not a neuroscientist myself. My guides here are books by Lisa Feldman Barrett and Jeff Hawkins, as well as Daniel Kahneman’s seminal “Thinking, Fast and Slow”. It does look like the two processes, the discovery of causal chains (Dr. Barrett calls this process “novelty search”) and their collapse into chainlinks (“categorization/compression” or “reference framing”), are something our brains are constantly engaged in. And once collapsed and turned into simple chainlinks, our brains are incredibly efficient at reaching for them to — you guessed it — seek out novel causal chains, continuing the infinite recursion of making sense of the world.

This seems like a perfect system. Except for one tiny problem: our discovered causal chains often contain mistakes. An “if this, then that” might omit an important variable or two. Remember those freaky crashes? Those were manifestations of engineers’ mistakes in the process of collapsing the massive causal chains that comprise a modern browser into the simple “go to URL.” In software, engineers spend a lot of time finding and fixing these bugs — and so do our brains. Still, both our software and our brains are teeming with chainlinks that hide mistakes (yes, I’ve said it — we’re full of bugs!) Worse yet, the recursive nature of our sense-making tends to amplify these mistakes while still concealing their origin. Whereas software just stops working, we tend to experience the amplified, distorted mistakes as suffering: anxiety, depression, burnout, anger, stress, etc. It takes intentional spelunking to discern these mistakes and not get further overwhelmed in the process. Like most astonishing things, our capacity for discovering and collapsing causal chains is both a gift and a curse. Or so the causal chain of this story says.

Ecosystems from product and user perspective

In a couple of conversations this week, the word “ecosystem” came up, and I realized that there were two different ways in which we employed that word.

The first one I heard used “ecosystem” to describe a collection of products with which users come into contact. Let’s call it the product ecosystem perspective. This perspective puts software and/or hardware at the center of the ecosystem universe. Users enter and exit the ecosystem, and changing the ecosystem means making updates to products, discontinuing them, and shipping new products. It’s a fairly clean view of an ecosystem.

The other way I heard the word “ecosystem” being used was to describe the users that interact with the product, or the user ecosystem perspective. Here, the user is at the center of the ecosystem universe. It is the products that move. Users pick them up or drop them, according to interests, desires, comfort, or needs. Users are humans. They talk with each other, giving their own advice and following others’, giving rise to the waxing and waning of product popularity. This view of an ecosystem is messy, annoyingly unpredictable, and beautifully real.

It feels intuitive to me that both of these perspectives are worth keeping in mind. The empowering feel of the product ecosystem perspective is comforting for us technologically inclined folk. It’s easy to measure and prioritize. Diving into the complexity of the user ecosystem perspective provides deeper insights into what’s really important.

Flux budget and predictability footprint

I’ve been thinking about this idea of the flux budget as a measure of capacity to navigate the complexity of the environment. With a high flux budget, I can thrive in massively volatile, uncertain, complex, and ambiguous (yep, VUCA) spaces. With a low flux budget, the slightest hint of unpredictability triggers stress and suffering. If we imagine that the flux budget is indeed a thing, then we can look at organizations — and ourselves — and make guesses about how the respective flux budgets are managed.

Reflecting on my own habits, I am recognizing that to manage my flux budget, I have to deliberately work at it. To peer into the abyss of the unpredictable, it appears that I need to be anchored to a sizable predictable environment. I ruthlessly routinize my day. From inbox zero to arranging shirts, to my exercise schedule, and even the allotment of guilty pleasures (like watching a TV show or the evening tea with cookies), it’s all pretty well-organized and neatly settled. Observing me in my natural routine habitat without context might conjure up depictions of thoughtless robots. Yet this is what allows me to have the presence to think deeply, to reflect, and to patiently examine ideas without becoming attached to them.

This reaching for the comfort of routine has obvious consequences. How many beautiful, turning-point moments have I missed while sticking to my routine? How many times has the routine itself led me away from insights that would have otherwise been on my path? Or worse yet, imposed an unnecessary burden on others? Let’s call this phenomenon the predictability footprint: the whole of the consequences of us creating a predictable environment to which to anchor in the face of complexity.

I am pretty excited to be learning more about the relationship between flux budget and predictability footprint. The whole notion of the footprint (which I borrowed from carbon footprint) speaks to the second-order effects of us seeking solid ground in the flux of today’s world — and how that in turn might create more flux. A while back, I wrote about leading while sleepwalking, which seems like a decent example of a vicious cycle where a leader’s predictability footprint increases the overall state of flux, placing more demand on an organization’s flux budget.

These framings also help me ask new interesting questions. What is my predictability footprint? How might it affect my flux budget? What are the steps I can take to reduce my predictability footprint?

Intention and shared mental model space

Hamilton Helmer pointed out this amazing connection between intention and shared mental model space that I hadn’t seen before. If we are looking to gain more coherence within an organization, simply expanding the shared mental model space does not seem sufficient. Yes, expanding this space creates more opportunities for coherence. But what role does the space play in realizing these opportunities?

A metaphor that helped me: imagine the shared mental model space as a landscape. There are tall mountains and deep chasms, as well as areas that make for a nice, pleasant hike. Those who walk within this landscape will naturally form paths through those friendly areas. When a shared mental model space is tiny, everyone is basically seeing a different landscape. Everyone is walking their own hiking trails, and none of them match. Superimposed into one picture, it looks like Brownian motion. When the shared mental model space is large, the landscape is roughly the same, and so is the trail, growing into a full-blown road that everyone travels.

On this road, where is everybody going? Where is the road leading them? Shared mental models aren’t just a way for us to communicate effectively. They also shape the outcomes of organizations. The slope of the road is slanted toward something. The common metaphors, terms, turns of phrase, causal chains, and shorthands — they are the forms that mold our organization’s norms and culture.

If my team’s shared mental model space is dominated by war metaphors and ironclad logic of ruthless expansion, the team will see every challenge — external or internal — as a cutthroat battle. If my organization’s key metaphors are built around evaluating the impact of individual contributions, we might have trouble cohering toward a common goal.

Put differently, every team and organization has an intention. This intention is encoded in its shared mental model space. The slant of that road gently but implacably pulls everyone toward similar conclusions and actions. This encoded intention may or may not be aligned with the intention of the organization’s leaders. When it is, everything feels right and breezy. Things just happen. When it is not, there is a constant headwind felt by everyone. Everything is slow and frustrating. Despite our temptation to persevere, I wonder if we would be better off becoming aware of our shared mental model space, discerning the intention encoded in it, and patiently gardening the space to slant toward the intention we have in mind.

The story of a threat

Continuing my exploration of narratives that catalyze coherence, I would be remiss not to talk about the story of a threat.

The story of a threat is easily the most innately felt story. Compared to the story of an opportunity, it seems more visceral, primitive, and instinctive. It is also a prediction of compounding returns, but this time, the returns are negative. The story of a threat also conveys a vivid mental model of a compounding loop, but the gradient of the curve points toward doom at an alarming rate. Living in 2021, I don’t need to go far for an example here: the all-too-familiar waves of COVID-19 death rates are etched in our collective consciousness. Just like with the story of an opportunity, there’s something valuable that we have, and the story predicts that we’re about to lose it all.

Structurally, the story of a threat usually begins with a depiction of the vital present (the glorious “now”), emphasizing the significance of how everything is just so right now. It then proceeds to point out a yet-hidden catastrophe that is about to befall us. The reveal of the catastrophe must be startling and deeply disconcerting: the story of a threat does not seem to work as effectively with “blah-tastrophes.” Being able to “scare the pants off” the listener is the aim of the story.

A curious property of the story of a threat is that it is a half-story. It only paints the picture of the terrible future in which we’ll definitely be engulfed. Unlike with the story of an opportunity, there is less agency. Something bad is happening to us, and we gotta jump or perish. In that sense, the story of a threat is reactive — contrasted with the proactive thrust of the story of an opportunity. Being reactive, it propels the listener toward some action, leaving out the specifics of the action.

This half-storiness is something that is frequently taken advantage of in politics. Once the listener is good and ready, sufficiently distraught by the prospect of the impending disaster, any crisp proposal for action would do. We must do something, right? Why not that?

The story of a threat is a brute force to be reckoned with, and is extremely challenging to contain. Such stories can briefly catalyze coherence. But unless quickly and deliberately converted to the story of an opportunity, they tend to backfire. Especially in organizations where employees can just leave for another team, the story of a threat is rarely a source of enduring coherence. More often than not, it’s something to be wary of for organizational leaders. If they themselves are subject to the story of a threat, chances are they are undermining the coherence of their organization.