Making cross-cutting investments

Over the last few weeks, I’ve been thinking about approaches to influencing projects and teams across a large organization. I framed the exercise like this: suppose I have some organizational currency (headcount!) to invest. How do I nudge project trajectories in the most effective way?

Then, I talked to a bunch of peeps, looking for forces present in this problem space. There are probably many more, but two piqued my interest. The first has to do with the term of the investment: do I want to invest in several different things over time, or do I mostly want to keep investing in the same thing? The second has to do with how much control I want to have over the structure of the investment: how much steering do I want to do with my investment? Mapping these two forces into thinking space, a few recognizable clusters emerge.

A good example of a low-control, permanent investment is donation. I find a team, recognize that it is doing important work, and decide to help them by adding to their capacity to hire new folks. Based on my experience, this is a more or less permanent investment. Withdrawing donated headcount tends to be painful for all parties involved. Nevertheless, if the team’s goals are largely aligned with mine over the long term, and I have no qualms about their strategy, it’s a pretty good fit.

If I want a more temporary engagement, I need a different approach. One is to temporarily augment a team with a group of folks to accelerate a particular aspect of the work. It’s exciting to imagine that such a team will drop in and race forth with uncanny precision. However, in orgs that have strong engineering practices and structures, augmentation is first and foremost a matter of learning to follow those practices and fitting into existing structures. “Who will review your CLs?” is the question to contemplate when considering the augmentation setup. Augmentation works well in homogeneous organizations, where the augmenting members already know the engineering practices and are the people who can review CLs. Otherwise, this investment tends to offer less control than anticipated.

To gain a bit more control without permanence, I will likely try to incubate a team: seed it with good peeps, set up a resilient structure to help it stay on course, get it past the early growing pains, and let it go. Variants of this approach are found in research organizations and idea incubators, and I’ve seen it work. In the couple of times I participated in the process, the biggest challenge was finding the right fit for the graduating team and then shepherding the team through an often painful reintegration. At least to me, incubation felt more like an art than a repeatable process, but that might just be my lack of experience.

Finally, if I am seeking to invest in the long term while retaining high control, I am probably productizing, or reframing my desire to help in terms of a developer-facing product: a tool, a library/framework, an SDK, etc. This product must be good enough for the teams to want to rely on it — and to get the results that I want them to get. Note that this end result is a second-order effect (first, they want to use it; second, they produce the desired outcomes), which is what makes this approach so challenging. On the other hand, precisely because of the indirection, this approach has something that no other approach offers: the ability to influence multiple teams. Productizing is typically more demanding than the other approaches. It takes more effort and capacity to build an effective team that reliably ships a successful developer product and has the resilience to keep an eye on the outcomes I need. That last part is important. It takes just a little bit of stress and firefighting to fall back into the “let’s make developers happy” mode and forget the whole point of the exercise.

Hosting and hosted API design perspectives

When discussing API design strategies, I keep running into this distinction. It seems like a developer experience pattern that’s worth writing down.

Consider the two perspectives from which API designers might approach the environment. The first perspective presumes that the API implementation is hosting the developer’s code; the second, that the API implementation is being hosted by the developer’s code.

From the first perspective, the API designer sees their work as making a runtime/platform of some sort. The developer’s code needs to somehow enter a properly prepared environment, execute within that environment while consuming the designed APIs, and then exit the environment. A familiar example of designing from this perspective is the Web browser. When the user types a URL, a new environment is created, then the developer’s code enters the environment through the process of loading, and so on. Every app (or extension) platform tends to be designed from this perspective. Here, the developer’s code is surrounded by the warm (and sometimes not very warm) embrace of the APIs that represent the hosting environment.
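
To make the contrast concrete, here’s a minimal sketch (my own, purely illustrative) of code written from inside a hosting environment, the Web browser. The host decides when the code enters and exits; the code consumes whatever APIs the environment provides.

```js
// Code that lives inside a hosting environment: the Web browser.
// The host decides when this code is loaded, run, and torn down.
window.addEventListener("DOMContentLoaded", () => {
  // `document` is an API provided by the host; the code can only
  // operate within the environment that surrounds it.
  const greeting = document.createElement("p");
  greeting.textContent = "Hello from inside the host!";
  document.body.appendChild(greeting);
});

// Even the exit happens on the host's terms.
window.addEventListener("pagehide", () => {
  // Clean up before the host tears the environment down.
});
```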

When I design APIs from the second perspective, the developer’s code is something that hosts my code. I am still offering an API that is consumed by someone else, but I don’t set the rules or have opinions on how the surrounding environment should work. I just offer the APIs that might be useful. Typically, this perspective results in designing libraries and frameworks. For example, I might write a set of helper functions that provide better handling of date math in JavaScript. This tiny library can run in any JavaScript environment, be it server or client. It can be hosted by any app or site that needs date math. This “run wherever, whatever” quality is a common attribute of this API design perspective.
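
Here’s a minimal sketch of what such a library might look like (the helper functions are hypothetical, invented for illustration). Note that it assumes nothing about its host beyond the language itself:

```js
// datemath.js: a tiny, environment-agnostic date math library.
// No DOM, no server APIs; just ECMAScript, so any host can adopt it.

// Returns a new Date offset from `date` by `days` (negative to go back).
export function addDays(date, days) {
  const result = new Date(date.getTime());
  result.setDate(result.getDate() + days);
  return result;
}

// Returns the number of whole days between two dates.
export function daysBetween(a, b) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.round((b.getTime() - a.getTime()) / msPerDay);
}
```

Whether the host is a browser app, a server, or a test harness, the library neither knows nor cares; that indifference is the essence of the hosted perspective.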

There is a growth/control tension that seems to map onto these two perspectives. The hosting perspective exhibits the attitude of control, while the hosted perspective favors the force of growth. As with any tension, complexity arises along the spectrum between the two.

A JavaScript framework (a hosted API) that has strong opinions about its environment (wanting to be a hosting API) will have trouble maintaining that environment, since it is ultimately incapable of creating it. Back in the day when I still worked in the Web Platform, I had many discussions with framework authors who wanted us Web Platform folks to give them the option to create clean environments. This desire to shift from hosted to hosting was not something I recognized back then, and I now wish this article had existed to help me reason through the struggle.

Similarly, a hosting API that wants to grow will be pressed to make the environment more flexible and accommodating. Going back to the example above, we Web Platform folks were experiencing that pressure, the force that was pulling us away from hosting and toward a hosted API design perspective. After that shift, the code that renders Web pages — the fundamental building block of the Web Platform environment — would become just one of the libraries to pick and choose from.

It is also important to note that, using Hamilton Helmer’s classification, the existence of a hosting environment is a form of cornered resource. It’s something that only becomes possible when the API designer has the luxury of a significant quantity of willing hosted participants. In the absence of eager hordes of developers knocking on your door, taking a hosting API design perspective is a high miracle count affair. When thinking about this, I am reminded of several ambitious yet ultimately unsuccessful efforts to “create developer ecosystems.” There are ways to get there, but starting out with the hosting API design perspective is rarely one of them.

The fractal dragon

I’ve already shared this piece elsewhere, but might as well post it here. This story is almost like a piece of fantasy fiction that’s waiting to be written — and a metaphor to which I keep coming back to describe flux.

Imagine a warrior who sets out to slay the dragon. The warrior has the sharpest sword, the best armor, and is in top physical shape. The dragon is looming large, and the warrior bravely rushes forth. What the warrior doesn’t know is that this is a fractal dragon. You can’t defeat a fractal dragon with a sword. So our courageous, yet unwitting warrior lands mighty blows on the dragon, moving in perfect form, one with the sword. You might look at this feat of precision and go: wow, this warrior is so amazing at crushing the dragon into a million bits! Look at them go! Except… each bit is a tiny dragon-fractal. A few hundred more valiant slashes, and the warrior will be facing a new kind of opponent: the hive of the dragon sand. The warrior’s blade will whoosh harmlessly through the sand, and all we can hope is that the warrior has dragon-sandblast-proof defenses (hint: nope).

This weird effect of flux is something that we engineering souls need to be keenly aware of. When we find ourselves in that confident, exhilarating problem-solving mindset, it is on us to pause and reflect: are we perchance facing the fractal dragon? Will each “solution” create an army of different kinds of problems, each more and more immune to the tools we applied to the original problem? And if/when we recognize the fractal dragon, do we have access to tools that aren’t the mighty sword we’re so fond of?

Causal chains

My experience is that most folks around me (myself included) enjoy employing the power of causality to understand various phenomena. There’s something incredibly satisfying about establishing a sound causal chain. Once the last piece of the puzzle clicks in place, there’s nothing like it. Back when I still worked directly in code, some of my fondest memories were of tracking down the causes of bugs. I remember once we shipped a version of Chrome, and suddenly people started having the freakiest of crashes. Like, I spent a few days just staring at traces trying to comprehend how that might even be possible. However, as more information (and frustrated users) piled up, the long causal chain slowly coalesced. This happens, then this, then that, and — bam! — you get a sad tab. I still remember the high of writing the patch that fixed the crash. The grittiest of bugs have the longest causal chains, which always made them so much fun to figure out.

At the same time, there are causal chains that we perceive as incredibly short. Reach for a cup – get a drink. Press a key to type a letter in a doc. They might not actually be short (by golly, I know enough about HTML Editing and how Google Docs work to know otherwise) — but to us, they are simple action-reaction chainlinks. We see them as atomic and compose the causal chains of our life stories out of them.

We engineers love collapsing long causal chains into these simple chainlinks: turning a daunting process into a single action. My parents reminded me recently of how much harder it used to be to send emails before the Internet. I had forgotten the hours I spent traversing FIDO maps, crafting the right UUCP addresses, and teaching my Mom how to communicate with her colleagues — in another city! Electronically! Nowadays, the Wizardry of Email-sending has faded away into the background, replaced with agonizing over the right emoji or turn of phrase. And yes, adding (and encoding) emojis also used to be a whole thing. A poetic way to describe engineering could be as the craft of seeking out and collapsing long causal chains into simple chainlinks, crystallizing them into everyday products.
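
Here’s a minimal sketch of that collapse in code; every name and step is hypothetical, stubbed out for illustration:

```js
// What used to be a long, manually traversed causal chain...
const resolveRoute = async (to) => `relay.example.com/${to}`; // once: FIDO maps
const encodeMessage = (subject, body) =>
  `Subject: ${subject}\r\n\r\n${body}`; // once: wrestling with encodings
const transmit = async (route, message) =>
  console.log(`-> ${route}\n${message}`); // once: hoping it arrives

// ...collapsed into a single action-reaction chainlink.
async function sendEmail(to, subject, body) {
  const route = await resolveRoute(to);
  const message = encodeMessage(subject, body);
  await transmit(route, message);
}

sendEmail("mom@example.com", "Hi!", "It works!");
```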

Based on my understanding of the human brain, this is not too dissimilar from how it works. I am not a neuroscientist myself; my guides here are books by Lisa Feldman Barrett and Jeff Hawkins, as well as Daniel Kahneman’s seminal “Thinking, Fast and Slow”. It does look like our brains are constantly engaged in these two processes: discovering causal chains (Dr. Barrett calls this process “novelty search”) and collapsing them into chainlinks (“categorization/compression” or “reference framing”). And once collapsed and turned into simple chainlinks, our brains are incredibly efficient at reaching for them to — you guessed it — seek out novel causal chains, continuing the infinite recursion of making sense of the world.

This seems like a perfect system, except for one tiny problem: our discovered causal chains often contain mistakes. An “if this, then that” might omit an important variable or two. Remember those freaky crashes? Those were manifestations of engineers’ mistakes in the process of collapsing the massive causal chains that comprise a modern browser into the simple “go to URL.” In software, engineers spend a lot of time finding and fixing these bugs — and so do our brains. Still, both our software and our brains are teeming with chainlinks that hide mistakes (yes, I’ve said it — we’re full of bugs!). Worse yet, the recursive nature of our sense-making tends to amplify these mistakes while still concealing their origin. Where software just stops working, we tend to experience the amplified, distorted mistakes as suffering: anxiety, depression, burnout, anger, stress, etc. It takes intentional spelunking to discern these mistakes and not get further overwhelmed in the process. Like most astonishing things, our capacity for discovering and collapsing causal chains is both a gift and a curse. Or so the causal chain of this story says.

Ecosystems from product and user perspective

In a couple of conversations this week, the word “ecosystem” came up, and I realized that there were two different ways in which we employed that word.

The first one I heard was using “ecosystem” to describe a collection of products with which users come in contact. Let’s call it the product ecosystem perspective. This perspective puts software and/or hardware at the center of the ecosystem universe. Users enter and exit the ecosystem, and changing the ecosystem means making updates to products, discontinuing them, and shipping new products. It’s a fairly clean view of an ecosystem.

The other way I heard the word “ecosystem” being used was to describe the users that interact with the product, or the user ecosystem perspective. Here, the user is at the center of the ecosystem universe. It is the products that move. Users pick them up or drop them, according to interests, desires, comfort, or needs. Users are humans. They talk with each other, offering their own advice and following others’, giving rise to waxes and wanes in product popularity. This view of an ecosystem is messy, annoyingly unpredictable, and beautifully real.

It feels intuitive to me that both of these perspectives are worth keeping in mind. The empowering feel of the product ecosystem perspective is comforting for us technologically-inclined folk: it’s easy to measure and prioritize. Diving into the complexity of the user ecosystem perspective provides deeper insights into what’s really important.

Flux budget and predictability footprint

I’ve been thinking about this idea of the flux budget as a measure of capacity to navigate the complexity of the environment. With a high flux budget, I can thrive in massively volatile, uncertain, complex, and ambiguous (yep, VUCA) spaces. With a low flux budget, the slightest hint of unpredictability triggers stress and suffering. If we imagine that the flux budget is indeed a thing, then we can look at organizations — and ourselves — and make guesses about how the respective flux budgets are managed.

Reflecting on my own habits, I am recognizing that to manage my flux budget, I have to deliberately work at it. To peer into the abyss of the unpredictable, it appears that I need to be anchored to a sizable predictable environment. I ruthlessly routinize my day. From inbox zero to arranging shirts, to my exercise schedule, and even the allotment of guilty pleasures (like watching a TV show or the evening tea with cookies), it’s all pretty well-organized and neatly settled. Observing me in my natural routine habitat without context might conjure up images of thoughtless robots. Yet this is what allows me to have the presence to think deeply, to reflect, and to patiently examine ideas without becoming attached to them.

This reaching for the comfort of routine has obvious consequences. How many beautiful, turning-point moments have I missed while sticking to my routine? How many times has the routine itself led me away from insights that would have otherwise been on my path? Or worse yet, imposed an unnecessary burden on others? Let’s call this phenomenon the predictability footprint: the whole of the consequences of us creating a predictable environment to which to anchor in the face of complexity.

I am pretty excited to be learning more about the relationship between flux budget and predictability footprint. The whole notion of the footprint (which I borrowed from carbon footprint) speaks to the second-order effects of us seeking solid ground in the flux of today’s world — and how that in turn might create more flux. A while back, I wrote about leading while sleepwalking, which seems like a decent example of a vicious cycle where a leader’s predictability footprint increases the overall state of flux, placing more demand on an organization’s flux budget.

These framings also help me ask new interesting questions. What is my predictability footprint? How might it affect my flux budget? What are the steps I can take to reduce my predictability footprint?

Intention and shared mental model space

Hamilton Helmer pointed out an amazing connection between intention and shared mental model space that I hadn’t seen before. If we are looking to gain more coherence within an organization, simply expanding the shared mental model space does not seem sufficient. Yes, expanding this space creates more opportunities for coherence. But what role does the space play in realizing these opportunities?

A metaphor that helped me: imagine the shared mental model space as a landscape. There are tall mountains and deep chasms, as well as areas that make for a nice, pleasant hike. Those who are walking within this landscape will naturally form paths through those friendly areas. When a shared mental model space is tiny, everyone is basically seeing a different landscape. Everyone is walking their own hiking trails, and none of them match. Superimposed into one picture, it looks like Brownian motion. When the shared mental model space is large, the landscape is roughly the same, and so is the trail, growing into a full-blown road that everyone travels.

On this road, where is everybody going? Where is the road leading them? Shared mental models aren’t just a way for us to communicate effectively. They also shape the outcomes of organizations. The slope of the road is slanted toward something. The common metaphors, terms, turns of phrase, causal chains, and shorthands — they are the forms that mold our organization’s norms and culture.

If my team’s shared mental model space is dominated by war metaphors and ironclad logic of ruthless expansion, the team will see every challenge — external or internal — as a cutthroat battle. If my organization’s key metaphors are built around evaluating the impact of individual contributions, we might have trouble cohering toward a common goal.

Put differently, every team and organization has an intention. This intention is encoded in its shared mental model space. The slant of that road gently, but implacably, pulls everyone toward similar conclusions and actions. This encoded intention may or may not be aligned with the intention of the organization’s leaders. When it is, everything feels right and breezy. Things just happen. When it is not, there is a constant headwind felt by everyone. Everything is slow and frustrating. Despite our temptation to persevere, I wonder if we would be better off becoming aware of our shared mental model space, discerning the intention encoded in it, and patiently gardening the space to slant toward the intention we have in mind?

The story of a threat

Continuing my exploration of narratives that catalyze coherence, I would be remiss not to talk about the story of a threat.

The story of a threat is easily the most innately felt story. Compared to the story of an opportunity, it is more visceral, primitive, and instinctive. It, too, is a prediction of compounding returns, but this time, the returns are negative. The story of a threat also conveys a vivid mental model of a compounding loop, but the gradient of the curve points toward doom at an alarming rate. Living in 2021, I don’t need to go far for an example here: the all-too-familiar waves of COVID-19 death rates are etched in our collective consciousness. Just like with the story of an opportunity, there’s something valuable that we have, and the story predicts that we’re about to lose it all.
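
The shape of that compounding loop is easy to sketch; the numbers below are made up for illustration, and the point is the curve, not the values:

```js
// A hypothetical compounding loop with negative returns.
let value = 100; // the valuable thing we have right now
const r = -0.3;  // each cycle, we lose 30% of what remains
for (let t = 1; t <= 5; t++) {
  value *= 1 + r;
  console.log(`cycle ${t}: ${value.toFixed(1)}`);
}
// cycle 1: 70.0 ... cycle 5: 16.8; the curve bends toward doom,
// and each cycle makes the next loss feel more inevitable.
```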

Structurally, the story of a threat usually begins with a depiction of the vital present (the glorious “now”), emphasizing how significant and just-so everything is right now. It then proceeds to point out a yet-hidden catastrophe that is about to befall us. The reveal of the catastrophe must be startling and deeply disconcerting: the story of a threat does not seem to work as effectively with “blah-tastrophes.” Being able to “scare the pants off” the listener is the aim of the story.

A curious property of the story of a threat is that it is a half-story. It only paints the picture of the terrible future in which we’ll definitely be engulfed. Compared with the story of an opportunity, there is less agency: something bad is happening to us, and we gotta jump or perish. In that sense, the story of a threat is reactive — contrasted with the proactive thrust of the story of an opportunity. Being reactive, it propels the listener toward some action, while leaving out the specifics of that action.

This half-storiness is something that is frequently taken advantage of in politics. Once the listener is good and ready, sufficiently distraught by the prospect of the impending disaster, any crisp proposal for action would do. We must do something, right? Why not that?

The story of a threat is a brute force to be reckoned with, and is extremely challenging to contain. Such stories can briefly catalyze coherence. But unless quickly and deliberately converted to the story of an opportunity, they tend to backfire. Especially in organizations where employees can just leave for another team, the story of a threat is rarely a source of enduring coherence. More often than not, it’s something to be wary of for organizational leaders. If they themselves are subject to the story of a threat, chances are they are undermining the coherence of their organization.

Prototype the problem

While looking for practices that help expand the shared mental model space, I found myself thinking about prototyping. I’ve always been amazed by the bridging power of hacking together something that kind-of-sort-of works and can be played with by others. Crystallized imagination, even when it’s just glue and popsicle sticks, immediately advances the conversation.

However, we often accidentally limit this power by prototyping solutions to problems that we don’t fully understand. When trying to expand the shared mental model space, it is tempting to make our ideas as “real” as possible — and in the process, produce answers based on a snapshot of a state, not accounting for the movement of the parts. Given a drawing of a car next to a tree and asked to solve the “tree problem,” I might devise several ingenious solutions for protecting the paint of the car from tree sap. No amount of prototyping will help me recognize that the “tree problem” is actually about the car careening toward the tree.

My colleague Donald Martin has a resonant framing here: prototype the problem (see him talk about it at the PAIR Symposium). Prototyping the problem means popping the prototyping effort a level above the solution space, out to the problem space. The prototype of a problem will look like a model describing the forces that influence and comprise the phenomenon we recognize as the problem. In the car example above, the “tree problem” prototype might involve understanding the speed at which the car is moving, the strengths of the participating materials (tree, car, person, etc.), as well as the means to control the direction and speed of the car.
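
A problem prototype can be as small as a toy model of those forces. Here’s a minimal sketch, with all names and values invented for illustration:

```js
// A toy prototype of the "tree problem": not a solution, but a playable
// model of the forces at work. All values are hypothetical.
function treeProblem({ speed, distanceToTree, brakingDeceleration }) {
  // Stopping distance from basic kinematics: v^2 / (2a).
  const stoppingDistance = (speed * speed) / (2 * brakingDeceleration);
  return { stoppingDistance, crash: stoppingDistance > distanceToTree };
}

console.log(treeProblem({ speed: 30, distanceToTree: 40, brakingDeceleration: 8 }));
// { stoppingDistance: 56.25, crash: true }: braking alone won't save us.
// Playing with the model suggests where solutions might hide: steering,
// speed limits, or the material strengths that inspire seatbelts.
```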

Where it gets tricky is making problem prototypes just as tangible as solution prototypes. There are many techniques available: from loosely contemplating a theory of change, to causal loop diagrams, to full-blown system dynamics. All have the same drawback: they aren’t as intuitive to grasp or play with as an actual semi-working product mock-up. Every one of them requires us to first expand our shared mental model space to think in terms of prototyping problems. Recursion, don’t you love it.

Yet, turning our understanding of the problem into a playable prototype is a source of significant advantage. First, we can reason about the environment, expanding both the solution space and the problem space. For that “tree problem,” discovering the role of material strengths guides me toward inventing seatbelts and airbags, no longer confined to just yelling “veer left! brake harder!” But most importantly, it allows us to examine the problem space collectively, enriching it with bits that we would have never seen individually. My intuition is that an organization with a well-maintained problem prototype as part of its shared mental model space will not just be effective — it will also be a joy to work in.

Organizing and sensing connections

Tucked away in a couple of paragraphs of the brilliant paper by Cynthia Kurtz and David Snowden, there’s a highly generative insight. The authors make a distinction between the kinds of connections within an organization and then correlate the strength of these connections to the Cynefin quadrants. I accidentally backed into these correlations myself once. What particularly interested me was the introduction of connections into the Cynefin thinking space, so I am going to riff on that.

First, I’ll deviate from the paper and introduce my own taxonomy (of course). Looking at how information travels across a system, let’s imagine two kinds of connections: connections that relay organizing information and connections that relay sensing information. For example, a reporting chain is a graph (most often, a tree) of organizing connections: it is used to communicate priorities, set and adjust direction, etc. Muscles and bones in our bodies are also organizing connections: they hold us together, right? Organizing connections define the structure of the system. Nerve endings, whiskers, and watercooler chats are examples of sensing connections — they inform the system about the environment (which includes the system itself), and hopefully, about the changes in that environment.
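
To make the taxonomy a bit more concrete, here’s a minimal sketch of an organization as a graph with the two kinds of connections; the shape and numbers are my own invention, purely illustrative:

```js
// Illustrative sketch: an organization as a graph with two kinds of edges.
// "organizing" edges carry structure: priorities, direction, cohesion.
// "sensing" edges carry information about the environment.
const org = {
  nodes: ["lead", "teamA", "teamB", "watercooler"],
  edges: [
    { from: "lead", to: "teamA", kind: "organizing", strength: 0.9 },
    { from: "lead", to: "teamB", kind: "organizing", strength: 0.9 },
    { from: "teamA", to: "watercooler", kind: "sensing", strength: 0.4 },
    { from: "teamB", to: "watercooler", kind: "sensing", strength: 0.4 },
  ],
};

// Average strength per kind: a crude proxy for the organization's mix.
const mix = (kind) => {
  const edges = org.edges.filter((e) => e.kind === kind);
  return edges.reduce((sum, e) => sum + e.strength, 0) / edges.length;
};

console.log({ organizing: mix("organizing"), sensing: mix("sensing") });
// { organizing: 0.9, sensing: 0.4 }: strong structure, weak sensing.
```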

With this taxonomy in hand, we can now play in the Cynefin spaces. It is pretty clear that the Unpredictable World (my apologies, I also use different names for Cynefin bits than the paper does) favors weak organizing connections, and the Predictable World favors strong ones. Organization is what makes a world predictable. In the same vein, the Chaotic and Obvious spaces favor weak sensing connections, contrary to the neighboring Complex and Complicated spaces with their fondness for strong sensing connections.

Seems fairly straightforward and useful, right? Depending on the nature of the challenge I am facing, aiming for the right mix of organizing and sensing connections in my organizational structures can help me be more effective. Stamping out billions of identical widgets? Go for strong organizing connections, and reduce the sensing network. Solving hard engineering problems? Make sure that both the organizing and sensing connection networks are robust: one to hold the intention, the other to keep analyzing the problem.

Weirdly, the causality goes both ways. The connection mix doesn’t just make an organization more effective in different spaces. It also defines the kinds of problems that the organization can perceive.

A team with strong organizing connections and non-existent sensing connections will happily march down its predetermined path — every problem will look Obvious to it. Sure, the earth will burn around it and everything will go to hell in the end, but for 99.9% of the journey, its own experience will be blissfully righteous. The solution to war is obvious to a sword.

Similarly, if that engineering organization loses its steady leader, weakening the strength of its organizing connection network, every problem will suddenly start looking Complex. The magic of constructed reality is that it is what we perceive it to be.

This might be a useful marker to watch for. If you work on a team that merrily stamps widgets, and suddenly everything starts getting more Complicated, look for those tendrils of sensing connections sprouting. And if you’re working at a place where the thick fog of Complexity begins to billow, it might be the environment. But it could also be the loss of purpose that kept y’all together all this time.