I keep coming back to the post from last year, where I tried to write down my understanding of the distinction between the organizational states of consistency, cohesion, and coherence. It's been a generative framing for many conversations. The thing that kept bugging me was that I laid the states out as a progression of sorts, from disorganized to coherent. A colleague insightfully pointed out that one could have a coherent user experience even if it's not consistent. This spurred an exploration into the key ingredients of these states, which I present here for your perusal.
I already suggested before that the key attribute of coherence is the presence of intention. If we imagine an organization with strong intention about what it wants to accomplish, we can see how coherence will naturally emerge within it. Strong intention usually comes across as people roughly knowing where their organization is aiming and how their contributions fit into the picture. Weak intention will have a vibe of unmooredness, usually accompanied by insurmountable coordination headwinds. Both might have clearly stated missions, though in the former case, the missions will feel actionable and inspiring, and in the latter, more like cartoonish slogans.
However, it didn't occur to me until recently that cohesion also has a key attribute: structure. I am using the word "structure" in Peter Senge's sense, as a broad descriptor for all things that comprise a functioning organization: the reporting hierarchies, the roles, the processes that make it go, and the culture that binds it all together. It's this structure that makes the various bits and pieces that an organization produces cohesive. A team with strong structure will have the necessary means to coordinate cohesiveness of its outcomes, whereas a team with weak structure will typically suffer from a "thousand flowers bloom" phenomenon that ends in poor cohesiveness.
A similarly recent thought led to this notion that consistency's key attribute is capacity. To define it a bit more, it's not just the ability to do something, but also the skill and the practice that accompany it. It is fairly evident that a team's consistency of product outcomes is only possible when it has enough skill and practice to draw on. If an organization's capacity is low, its output will be at best random, occasionally striking gold, which is definitionally inconsistent. A high-capacity team will not have such issues. Usually, when I hear "engineering excellence," the word "consistency" pops right up next to it.
So I wonder if the three states – consistency, cohesion, and coherence – emerge from the mix of these key ingredients: capacity, structure, and intention. Though it seems like there's a tension between them. It's not like I can just will my product to be coherent if all I have is intention. Without capacity and structure, it's just a bunch of grand ideas. Similarly, having capacity is awesome, but if it's capacity alone, the outcomes will feel like random noise. And finally, if all I have is structure, the organization is an aimless zombie. More likely, having all three ingredients but favoring one over the others is what leads organizations toward their predestined states.
The presence of such a trilemma indicates that there might not be a favored state, a static resolution of the tension. Instead, a team will likely find itself leaning from one corner of the triangle to another, experiencing a want of the other two states when it gets too zealous about one particular ingredient. Over-focusing on capacity brings a deficit of coherence and cohesion. Being too into structure diminishes coherence and consistency, and finally, pushing too hard on intention saps consistency and cohesion. And if polarities are hard, can you imagine navigating a three-body-problem equivalent of a polarity?
By the way, I don’t know if there’s a word for this polarity with three extremes. Is that still a trilemma?
It appears that I’ve been writing this essay since 2012, never quite finding the right framing. One of my brilliant colleagues asked me this question back then: “How do you evaluate whether a Javascript framework is good for the Web? You seem to do it intuitively, but I don’t get it. What’s the logic?” And I was stumped.
Now, a decade later, here is my best-effort attempt to capture that intuition. I am a bit older now, so I offer this with a bit more humility. The story is also a bit more technical in nature and I apologize to my non-technical readers in advance.
I tried to formulate my answer as briefly as I could, and it came out almost like a poem, in four … shall we call them stanzas?:
Frameworks and libraries are like layers, and these layers accrete.
Every layer has a vector of intention, pointing toward some idealized value to users, determined by the author of the layer.
Opinion, or the difference between the vectors of intention of two adjacent layers, always comes at a cost.
Opinion costs compound and are, directly or indirectly, shouldered by users.
Below is the reasoning that went into this little ditty. The first line sets the stage with a simplistic, but hopefully useful framing. Let’s imagine all developer-facing software as something that accretes layers of abstractions over time. These layers of abstraction usually emerge as libraries or frameworks, written on top of some existing layer (I will use the word “platform” to describe it) – usually to provide additional value that wasn’t there before.
For example, the venerable jQuery is probably the MVP of the Javascript frameworks (and maybe even all developer frameworks of all time): during the times of browser wars and through the winter, it steadily held developers’ hands, providing a decent interoperability layer over the treacherous terrain of grotesquely diverse and buggy browser implementations. If you knew jQuery, you knew how to make things on the Web. jQuery emerged out of necessity and accreted on top of the DOM APIs. All these years later, I still find it on many (most?) sites across the Web. In fact, some (many?) newer frameworks themselves relied on jQuery, accreting a second layer of value on top of it. So, here’s our first stanza of the narrative: frameworks and libraries are like layers, and these layers accrete.
As a second step forward, let's recall the tale of two models: the "what is" and the "what should be." The former represents our current understanding of the environment, and the latter – our preference toward its future state, reflecting our intention. Every framework and library is its author's manifestation of this intention, taking the "what is" of the underlying platform (or layer) and transforming it to produce the "what should be": its own APIs. For example, jQuery took the dizzying variety of ways to find a DOM element across different browsers back then (the "what is") and produced the now-ubiquitous "$()" function (the "what should be"). Think of this transformation as a directional arrow (a vector!) connecting the two models of the environment. The tail of the arrow starts from "what is" and its head points toward "what should be."
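To make this vector a bit more tangible, here is a minimal sketch in TypeScript of the kind of transformation a layer performs, turning the platform's "what is" into its author's "what should be." The names (TinyQuery, the toy $ function) are hypothetical illustrations of the idea, not jQuery's actual implementation.

```typescript
// A toy illustration of a layer's "vector of intention": the platform's
// "what is" (document.querySelectorAll, returning a NodeList) is reshaped
// into the layer author's "what should be" (a terse, chainable wrapper).
// TinyQuery and this $ are made up for illustration, not jQuery internals.

class TinyQuery {
  constructor(private readonly elements: Element[]) {}

  // The "what should be": a fluent API that hides platform ergonomics.
  addClass(name: string): this {
    this.elements.forEach((el) => el.classList.add(name));
    return this;
  }

  get length(): number {
    return this.elements.length;
  }
}

// The arrow from "what is" to "what should be".
function $(selector: string): TinyQuery {
  const found = document.querySelectorAll(selector); // the underlying layer's intention
  return new TinyQuery(Array.from(found));           // this layer's intention
}

// Usage: $(".warning").addClass("highlighted");
```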
Unsurprisingly, the layer underneath also has an intention. Even bedrock platform APIs do this work of translating something at an even lower layer to something they believe to be more valuable to users. Even though a layer's developers aren't always conscious of this intention, every layer has one. Embedded in intention, there's some mental model of what "good" (or "valuable to users") looks like. Thus, our second stanza is: every layer has a vector of intention pointing toward some idealized value to users, determined by the author of the layer.
Now, we’re ready for the third hop. When the vector of a layer aligns with the vector of the layer below, we say that the library or framework that comprises this layer is un-opinionated. When it does not, we say that it is opinionated. The difference in the vectors is the degree of opinion. Think of opinion as a change in direction: the author of the lower layer was like, “here’s where it’s going!” and the author of the upper layer went: “Cool story, but actually, I want to try this other direction.”
A layer's opinion is not something that just sits on the surface, easy to study and examine. Instead, it's often subtle and hard to see, only becoming obvious over time. Usually, it shows up as a bunch of smells in the code of the framework or library. It tends to look like plumbing, like doing extra work to translate and adjust the vector of intention. To name a few off the top of my head, look for things like parsing, caching, and predictive logic, as well as wrapping — especially recursive wrapping — of underlying objects as possible hints. The opinion often comes across as treating the underlying platform as hostile, using as little of it as possible — which, unfortunately, is sometimes necessary to make something that actually works.
To make this more concrete, let's examine that "$()" function from earlier in the article. At first blush, it seems mostly un-opinionated, taking a CSS selector as the parameter and loosely doing the work of an existing platform API: document.querySelectorAll. The only opinion appears to be in the name of the function: one convenient character instead of … what? twenty-five? However, if you were here during the browser wars, you might recall that IE6 did not support that particular API. So the brave jQuery engineers wrote their own implementation of it! I still remember peeking at that code and being in awe of such a feat. Compared to IE6, the hallowed dollar-sign function was a bit more opinionated. It disagreed with the idea that getElementsByTagName and getElementById ought to be enough for everyone and ventured forth in a direction that was more aligned with the nascent Web standards. And in doing so, jQuery incurred costs – the extra CPU cycles, the extra bytes over the wire. That's the curious property of opinions. In frameworks and libraries, opinions have cost. Changing the intention's direction is not free. To articulate this as our fourth stanza: opinion, or the difference between the vectors of intention of two adjacent layers, always comes at a cost.
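To make the cost side visible, here is a hedged sketch of what such a compatibility shim might look like: feature-detect the platform API and fall back to a hand-rolled, necessarily partial selector routine when it is missing. The function name findAll is made up, and jQuery's real selector engine (Sizzle) is far more elaborate; the point is only that every fallback line is opinion cost shipped to users.

```typescript
// A sketch of the opinion cost: when the platform's "what is" (querySelectorAll)
// is missing, the layer pays for its "what should be" with its own parsing code,
// extra bytes, and extra CPU cycles. Illustration only, not jQuery/Sizzle.

function findAll(selector: string, root: Document = document): Element[] {
  // Happy path: the platform already points in our direction.
  if (typeof root.querySelectorAll === "function") {
    return Array.from(root.querySelectorAll(selector));
  }

  // Fallback path: re-implement a tiny slice of the selector language.
  // Every line below is opinion cost, shouldered by every user.
  if (selector.startsWith("#")) {
    const el = root.getElementById(selector.slice(1));
    return el ? [el] : [];
  }
  if (selector.startsWith(".")) {
    const className = selector.slice(1);
    return Array.from(root.getElementsByTagName("*")).filter((el) =>
      (el.getAttribute("class") ?? "").split(/\s+/).includes(className)
    );
  }
  return Array.from(root.getElementsByTagName(selector));
}
```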
Once incurred and embedded into the framework, this cost is difficult to give up, even when the platform underneath changes for the better. For example, even though IE6 has gone away, jQuery still carries the darn selector-parsing code, which, like any proper barnacle, has grown all kinds of neat optimizations.
The cost is proportional to the degree of opinion. For example, if I decided to build a Javascript framework that completely reimagined UI rendering as graphs or three-dimensional octagonal lattices rather than trees, I would quickly find myself having to reinvent the universe. The resulting artifact will weigh some megabytes and consume some kilowatts, with DOM trees still impishly leaking out of my pristine abstractions here and there, necessitating tooling and various other aids to ensure successful launches of user experiences built using my awesome framework.
What's even more alarming is that opinion costs have a tendency to compound. Should developers find my framework appealing, I will see more high-profile sites adopting it, causing more developers to use it, build on top of it, and so on. The outcome of this compounding loop is that users will find more of their computing resources drawn to chew on my opinions.
Commonly, this compounding effect tends to be indirect and delayed enough that, at first, the framework or library appears to provide a lot of benefit at nearly no cost. Only over time does the compounding cost of opinion swing the net value curve back into the ground — and we're left with massive debt that swallows us and dims the vitality of the ecosystem around us. Which brings us to the last stanza: opinion costs compound and are, directly or indirectly, shouldered by users.
This effect of compounding costs crosses all layers. If the platform designers came up with primitives that ultimately don't satisfy the needs of users, the framework and library developers who attempt to rectify the situation will always incur opinion costs. There is no cheap way to salvage the mistakes of platform designers. The design of platform primitives matters, because it establishes the opinion cost structure for the entire developer ecosystem.
Applying this lens, it seems that platform developers have the highest leverage for reducing the overall cost of opinions carried by users. This is why platforms are better off not sitting still. Every broadly used platform is spurred to learn how to evolve – even if at a glacial pace – toward reducing the overall cost of opinion introduced by the developers who layer on top of it. So if you're designing a new platform, you would be wise to invest in building the capacity to evolve into it. And if you're a steward of a mature platform, your best bet just might be teaching it how to change.
This just occurred to me today, so I am writing this down while it's fresh. When I talk about the shared mental model space (SMMS), I usually picture it as something like a bunch of circles, one for each individual within a group, and these circles are touching a larger circle that represents the mental models that are shared by all members of the group. It's not the most accurate diagram, but it will work for this thought experiment. As I was reflecting on the desired properties of an SMMS, I realized that there's a tension at play.
On one hand, I want my organization's SMMS to be large enough to allow us to understand each other, to be "on the same page" so to speak. At the same time, I am recognizing that an SMMS that perfectly encloses all of the individuals' mental models is both impossible and undesirable. It is impossible, because in trying to achieve perfect closure, we encounter the paradox of understanding: since everyone's internal mental model also includes the enclosed models themselves, we rapidly descend into a hall-of-mirrors situation. It is undesirable, because a team where all of the opinions are known and completely understood is only facing inward. There is no new information coming in. So there appears to be a sort of polarity in the size of the shared mental model space – and a tension it embodies.
An SMMS can both work for and against the organization. As it grows, the organization becomes more blind to externalities. A cult enforces the suffocating breadth of its SMMS among its members, since that's what makes it impervious to change. As the SMMS shrinks, the organization stops being an organization. If the diversity of perspectives is high, but there's no way to share them, we no longer have a team. It's just a bunch of people milling around.
The weird thing about polarities is that the sweet spot in the middle is elusive. Sitting right in the middle of the tension, it's more likely to be periodically passed by the team — "OMG, this was amazing! Wait, where did it go?" — rather than having the team settle down in it. Even more annoyingly, diminishing the SMMS decreases the chances of reestablishing it — and a large SMMS makes introducing new perspectives impossible. Both extremes are "sticky," which means that once an organization moves past some threshold, only a severe external perturbation can dislodge the state of its SMMS.
So it appears that it really matters how we decide to establish this space where our mental models are shared, and how we garden this space. The thing that becomes more and more evident to me is that if we do so in an unexamined way, we are unlikely to have a sustainable, resilient organization.
Have you ever experienced this really fun moment when a few concepts you already knew suddenly came together as something new and completely different when revisited? This just happened when I was looking at Kim Scott's Radical Candor framework. Putting it next to the ideas of the Adult Development Theory, I realized that it might be a rather useful tool to locate my fallback notch.
I already mentioned fallback a few times in my writing. It's this phenomenon when we, despite our best efforts, show up as developmentally earlier versions of ourselves. A concept that's been really useful in my own self-work is the notion of the fallback notch, a hint at which I first found in Lisa Laskow Lahey and Robert Kegan's Immunity To Change. The fallback notch is the habitual stance I take when I am experiencing the effect of fallback. The notchiness part of it is that it happens kind of automatically, like a hard-learned, yet almost-forgotten habit, a Schelling point for a disoriented mind. When fallback triggers, that's the hill where I tend to regroup. I've found that this notch is context-specific, but rather useful to name when reorienting. "What, what's happening? <notch locating happens> Ah, I am currently in a Diplomat mindset… Hmm… I wonder what led me here?" Let's see if we can use Radical Candor as a compass to help with reorienting.
The first – and most significant – leap of faith I invite you to take is the mapping of the quadrants to developmental stages. Using Bill Torbert's classification, and our intuition, we can kind of see that Manipulative Insincerity loosely maps to the Opportunist mindset and Ruinous Empathy to the Diplomat. Those two are fairly straightforward. The cunning trickery and unscrupulous antics of the Opportunist appear to be perfectly captured by the words "manipulative" and "insincerity." Similarly, the Diplomat's warmth and keen desire for getting along are well-described by "ruinous" and "empathy." The other two quadrants need a bit more cajoling. The Expert's ornery obstinacy of recognizing, yet being unwilling to accept, others' perspectives often manifests as Obnoxious Aggression. I've found this notch particularly present when, in a subject in which I view myself as an expert, someone comes in to ask questions that could potentially buckle the idea's entire foundation. "How dare they! They must be corrected! <rising irritation leading to self-righteous condescending rants>" Finally, the Radical Candor quadrant is the zenith of the framework, representing the relative flexibility of the Achiever to consider and absorb multiple perspectives, yet keep an eye on the prize of their own objectives.
With the quadrants mapped, we can now use the full depth of the Radical Candor framework as our fulcrum for self-developmental purposes. The idea of mapping our interactions and situations into quadrants, described by Kim in her book, can serve as a clustering tool, helping us spot the particular fallback notches we find ourselves in. Further, we can use the quadrants to find our way back from the notch. Knowing that I am in the Ruinous Empathy quadrant helps me see that I fell all the way back to the Diplomat mindset, and getting out of that notch might start with reminding myself that I do indeed know what I am doing (regaining the Expert ground) while still staying connected to the empathy and compassion of the upper quadrants.
Another thing that stood out for me: the Radical Candor framework appears to be Achiever-situated. It presumes that its practitioners are themselves at least at the Achiever developmental stage. It is useful for those who recognize that they temporarily fell back into behaviors they understand as detrimental to their path forward. This probably means that the framework quadrants will look weird to folks at the earlier stages. If I am stuck in the Expert trap, the Radical Candor quadrant will feel like a weird tautology. “Of course I care, that’s why I need to yell at them and shake some sense into them!” Things will look even more bleak if I am just now adopting the Diplomat mindset. The Radical Candor will look like a scary regression into the conflict-ridden Opportunist land. “Oh come on now, we just figured out how to get along. Why are we trying to mess things up again?” And of course, for the Opportunist, the whole story will seem like an elaborate ruse, a corporate prank to trick me into being more gullible and obedient.
A couple of us were chatting about coaching. It was such an amazing generative conversation that I kept walking around, thinking about it over the weekend. As a result, this somewhat late insight materialized, a remix of the Adult Development Theory (see my primer to get your bearings) and the expectations people might have around leadership coaching. This story hops along the stages of adult development (using Bill Torbert’s nomenclature here) and offers my guesses of how I might perceive coaching with the mindset of that stage.
With the Opportunist mindset, any sort of leadership coaching will likely appear as a hook to exploit or a threat that someone might exploit me. Any engagement will have this “let’s see if we can hack this to do my bidding” quality to it. For example, I might use it a bit to see if this would help me secure some advantage, such as attaching myself to a figure who I might perceive as powerful. Any genuine engagement for the purpose of coaching is unlikely. I engage to exploit.
With the next, Diplomat mindset, the deep attachment to the shame of failure will hamper any active engagement. If I am asked something, I am petrified to answer in the wrong way. However, I would crave passive learning. I will glom onto anything that looks like advice and wise words, and I would be very happy to react to these words with "likes" and "thumbs up," even if I don't fully understand them. With this mindset, coaching software is primarily a way to procure approval and ensure that I am part of the "in-crowd" of those who learn from these really smart people who are clearly authorities on leadership. I engage to belong.
Further down the rabbit hole, the Expert mindset presents the same fear of failure, but now it is bolstered by my expertise. This configuration is the least receptive to coaching. "Why would I want a coach? That's for noobs. I already learned everything there is to know." By the way, in my primer, I portrayed this developmental stage as transitional, as something that we experience on our way to the next stage. Since then, I've changed my mind. Expert is a very stable configuration. With fear of failure on one side and considerable wisdom on the other, it is often a lifelong trap: a perpetual, agonizing slow boil of misery and dissatisfaction with life. When inhabiting the Expert mindset, I am unlikely to hear feedback and will resist the nudges to even try coaching. I don't engage.
The Achiever mindset turns this attitude upside down. The craving for coaching comes roaring back. I want to tussle with you, and I know that every time I do, I will learn something that will take me closer to finally achieving the maximum level of effectiveness. I demand coaching advice. I set the schedule, come prepared with topics at hand, and arrive ready to dig deep. I might be hard to keep up with, and I might even fire my coach if they can't. I want to have the latest and greatest in coaching strategies. Don't give me any of that "how to win friends and influence people" stuff. I engage to get results.
Eventually, the hard-charging Achiever mindset gives way to something different. Somewhere, somehow, the realization emerges that the "maximum level" is not only unattainable, but also absurd as a notion. There's a further shift in the attitude toward coaching. I still see it as valuable, but I am realizing that coaching is a nearly moment-to-moment activity. Everyone has so much to teach me. Every interaction is a coaching moment. I still talk to my coaches, and look forward to our conversations, but they no longer have that edge of Achiever angst. We talk to uncover insights hiding in the nuance, to play the hacky sack of ideas, with deep respect for each other's experiences. I engage to generate.
Whew, that was fun to type out. I am realizing now how I loosely traced the same outlines Jennifer Garvey Berger drew in her seminal Changing on the Job. So if you’re interested in diving deeper into this particular ocean of ideas, that’s where I would direct you next.
A conversation with colleagues brought this insight to the surface. We were talking about cultivating generative spaces, where a diversity of perspectives is cherished and celebrated, the stakes of examining all of them are low, and there is enough room for everyone to jam on these perspectives.
Applying the diverge/converge lens, it might seem that such spaces would be classified as divergent. After all, this is where it’s “the more, the merrier,” and everyone is encouraged to pile on their own perspective or riff to create entirely new ones. From this vantage point, generative spaces are the sources of divergence. These are the spaces we reach for when we need to populate that rich pool of ideas.
It was a bit later in the conversation, when we moved on to the topic of convergence and increasing the shared mental model space across the organization, that a curious thought occurred to me. What if we shifted our attention to a different set of outcomes that a generative space produces? Namely, the outcomes that result from multiple people playing with a great variety of perspectives. Every new perspective that I try out results in me better understanding it, and thus acquiring the mental model behind it. Over the course of participating in a generative space, I become more aware of how my peers see the world, and so do they. Our shared mental model space widens. Even though it might look like we're producing diverging ideas, the process helps us better understand each other, and thus is convergent in nature.
Generative spaces are divergent in the short term, but convergent in the long term. My intuition is that a sustained generative space is likely the most effective way for an organization to become more coherent and productive, while at the same time retaining awareness of its surroundings. While they could seem like "a few folks just talking about random topics," generative spaces might just be the recipe to aid an organization that struggles to converge.
Of course, this makes generative spaces rather counterintuitive and often difficult to describe. “How can we afford just having a chill conversation with no agenda?! We have stuff to design/build/ship!” Yet, every organization has watercooler or hallway conversations, and idle chit chat between meetings. Next time you’re in one, pause to reflect and notice how often they have the subtle overtones of generative spaces. We yearn to jam and riff on ideas. We want to share ours and see others play with them. If only we had space to do that.
I am not sure how, but last week I stumbled onto my Shadow DOM diaries from 2014 and realized how few of my learnings I captured there. Designing a beast as complex as adding a whole new type of composition to the Web platform was chock-full of them. So I thought, maybe I could share some of them. Think of it as "What Dimitri learned, then forgot to write down, felt guilty about, and decided to share much, much later." If you're currently investing time into designing a composition system, I hope these will come in handy.
One particular learning that came to mind was the discovery of a distinct tension between graph and tree structures that I felt when exploring that problem space. If I were a law-formulating type of person, I would probably describe it as something like: “every graph wants to become a tree, but also secretly wants to remain a graph.”
There's something about human intuition that views composition-as-containment as more legible than the chaos of graphs. This desire for legibility imposes a resolute force on pretty much any composition system we design, eventually converting it into a tree-like structure — or causing it to become enveloped by another composition that is tree-like. From files and folders to markup to house layouts, it seems that we try to put everything into boxes that contain boxes that contain boxes. We just can't help it. Should we accidentally design graph-like structures, we immediately try to build additional structures that represent them as trees, kind of like the vast graph of the Web being reduced to a search page that — you guessed it! — contains links.
If a designer of a composition system ignores this force, they are likely to see one of two things happen: either their composition system doesn't make any sense to the intended audience, or it becomes the infrastructure of a tree. To the aspiring architects of web3, here's a bit of advice that comes from the heart. Though graphs have the appealing property of being decentralized, they are still subject to the centralizing force of boxes-within-boxes-within-boxes. There's always going to be that human desire to understand a graph of relationships in terms of containment, and thus, every graph-like structure will either eventually get simplified into a tree or grow an external "understander of the graph" that represents it as a hierarchy – with one thing most definitely being at its top. Designing a composition system that only offers a graph means designing an incomplete composition system – one that will be completed, whether by future you or someone else.
Yet… when all is said and done, and the primacy of the tree structure is unequivocally established within a composition system, the secret desire to keep (or re-establish) some graph-like properties remains. Sometimes it manifests as wanting to reduce computational redundancy (like with shared stylesheets). Sometimes it pops up as the need to convey semantics that are obvious visually, but impossible to infer without eyesight (like with labels). Regardless of the specific requirement, real-life compositions will feel quite limited within a tree and continue to want to sprout non-hierarchical appendages. Often, these appendages come across as gnarly hacks. For a well-trained hierarchist, they are easy to dismiss or sigh about with disdain. A more productive way to look at them is probably this: they are an expression of a deficiency in a composition system design that assumes that, since hierarchy is so strongly wanted by humans, it is the only way to represent things.
So, in addition to accepting the “want of the hierarchy” as the law, the composition system designer needs to recognize the “need for the graph” as the countervailing force in tension. My intuition is that a robust composition system design will be a layered affair: the graph composition abstraction as the infrastructure, with the tree composition as the human-friendly interface. This way, the tension between the two forces is embodied by the composition system itself, rather than leaving it to the consumers to figure out.
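Here is a rough sketch of what that layering could look like, with entirely hypothetical names: a graph of nodes and cross-cutting references (labels, shared styles) as the infrastructure, and a containment-based tree walk as the human-facing interface.

```typescript
// A sketch of "graph as infrastructure, tree as interface".
// All names here are hypothetical, for illustration only.

type NodeId = string;

interface CompositionNode {
  id: NodeId;
  children: NodeId[];  // the tree: the human-friendly containment story
  labelledBy?: NodeId; // a graph edge: semantics that cut across the hierarchy
  sharedStyle?: NodeId; // another graph edge: reuse without duplication
}

class Composition {
  private nodes = new Map<NodeId, CompositionNode>();

  add(node: CompositionNode): void {
    this.nodes.set(node.id, node);
  }

  // The tree interface: walk containment, depth-first.
  *walkTree(rootId: NodeId): Generator<CompositionNode> {
    const node = this.nodes.get(rootId);
    if (!node) return;
    yield node;
    for (const childId of node.children) {
      yield* this.walkTree(childId);
    }
  }

  // The graph infrastructure: non-hierarchical edges remain first-class.
  labelOf(id: NodeId): CompositionNode | undefined {
    const labelledBy = this.nodes.get(id)?.labelledBy;
    return labelledBy ? this.nodes.get(labelledBy) : undefined;
  }
}
```

In this arrangement, the tension between the two forces lives inside the composition system: consumers compose with boxes-within-boxes, while the graph quietly carries the relationships that refuse to fit the hierarchy.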
Of course, this means that most designers of Web frameworks wanting to follow this recipe are, as kids say, SOL: the Document Object Model is your bedrock, and it is unabashedly tree-first. My sincere apologies, y’all.
While I roughly understand how boosts and bumpers work, and I've seen/made plenty of those, tilts felt a bit more tricky. Coincidentally, a colleague recently asked me why anyone would open-source a library that is part of another, larger project (whether itself open- or closed-source), and it just felt like a perfect moment to smush these threads into one story.
At the crux of every tilt are two forces in or near a state of static equilibrium. When I find a teacup sitting on a table, the cup might seem serene. However, it is constantly experiencing a tension of two opposing forces: the force of gravity that is trying to yank the cup closer to the center of the Earth, and the force of the table that is preventing that, called in physics the normal force. I love the name, because I can just picture the table acting as the maintainer of normalcy, the defender against the crazy antics of gravity. If you think about it, the force of gravity acts as a boost ("Hey! Let's go nuts and flyyyy!") and the normal force as a bumper ("Not if I have anything to do with it!") Static equilibria tend to be like that. They are a result of some long-term boost pointed at an equally robust bumper. For example, a software engineering team that is building a piece of critical infrastructure for a larger project is experiencing a near-static equilibrium: the force of the team's mandate to ship the infrastructure (a boost) is mightily pushing against the force of the difficulty of the problem (a bumper).
Tilts take advantage of such standoffs by angling the surface where the interaction of the forces occurs. I once put a cup down on a piece of computer equipment and left the room, only to rush back, alarmed by the sound of glass breaking. What the heck? I looked closely … and sure enough, the surface was gently – nearly imperceptibly – curved, guiding the cup to slide off. Physics tells us that even a small angle between equally opposing forces results in a net force, and thus accumulating momentum, roughly orthogonal to these forces. Wait a minute… Am I adding "silly physics" to my silliness repertoire?! You betcha.
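For the physics-inclined, here is the back-of-the-envelope version of that claim, assuming a frictionless surface tilted by a small angle $\theta$: the normal force cancels only the perpendicular component of gravity, $mg\cos\theta$, leaving a net force along the surface of $F_{\parallel} = mg\sin\theta \approx mg\theta$. That force is roughly orthogonal to both gravity and the normal force, and for as long as the tilt persists it keeps feeding momentum into the cup: $p(t) \approx mg\theta\,t$, growing steadily with time even when $\theta$ is tiny.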
Here’s the crux: this additional momentum will remain present for the duration of the tension between the two forces. Because of that, tilts can be a durable source of nonlinear effects. It’s like a judo move of influence: let existing forces do our bidding. Tilts typically have this “might as well…” quality. Unlike the booster’s contagious “let’s go!” or the bumper’s authoritative “don’t you dare,” tilts usually sound like “we’re doing this thing already, might as well do that other thing.”
So here we have the composition of a tilt: the boost and the bumper in a nearly even draw, and a small angle representing an additional objective at the point where the two meet.
Let's return to that engineering team we met earlier. Suppose that in addition to their mandate, they have such an additional objective. They believe that the surrounding software ecosystem will benefit from having a robust, best-in-class library of the kind their project represents. So they do a tilt: they structure their code as a separate project, and run it in the open. Yes, there's a bit of overhead associated with that, and yes, some colleagues furrow their brows at why this extra work has to happen ("I don't get it, why are they not in our main repo? They are still part of our project, right?") But over time, a magical thing happens. The funding of the overarching mandate ensures that the library is solidly built and can shoulder a high-scale deployment. The community around it flourishes, excited about improvements and helping hunt down regressions. The project welcomes those who want to adopt the technology, making it easy for others to innovate on top of it. Instead of remaining an implementation detail stuck in the amber of a larger project, it becomes a means of industry-wide technological progress.
What I portrayed here is not a fictional tale. It’s the story of projects like WebKit, Skia, V8, and many others. Tilts are incredibly powerful that way. Especially when the forces in tension are large, even a tiny angle results in massive compounding effects over time, changing the entire landscape – just like the projects I mentioned changed the landscape of computing. If you are aiming to effect a lasting change in your organization, this might be the influencing approach to reach for.
Trying to describe my intuition about a project to a colleague, I found myself using this tongue-in-cheek inversion of a well-known catchphrase to describe a sequence that sometimes plays out on software engineering projects.
Here's how that story typically goes. The idea looks big, ambitious, and fits into some bigger aspirations of the team. Then, there is usually a great demo or a prototype that appears to put the desired outcome within reach. There is a lot of excitement, and the team boldly commits to pursue the project. Around halfway through, the full extent of the project's scope becomes evident in its horrifying scale. Like a vast creature from the deep, it leans forth and threatens to capsize the whole thing, taking the team with it. Stuck between that and the equally unappealing prospect of cutting their losses, the team has some choices to make. Some decide to persevere. Some opt to scale down the effort, the big idea shrinking into a resounding "meh." Whichever path is chosen, the shock of exploding scope never quite goes away, affecting the team's morale. In the hallways, there's a disgruntled "this isn't what I signed up for," a snarky "we're always three quarters away from shipping," or a pointed "hey, Dimitri, didn't you say you were shipping this two years ago?" Yep, I totally did. I was naive and — sigh! — too enamored with the idea. The object was much farther than it appeared. (And it will be three more years until it actually ships.) So yeah, I've been there.
Especially in environments where there's pressure to show results quickly, the distortion effect tends to get worse. Big ideas that clearly won't yield outcomes for a while will be either dismissed or presented as simplistic, stick-figure caricatures of themselves. Here, it is usually the intuition of those who've been there before, the voice of the seemingly jaded and frustratingly realistic, that can break the illusion. Yes, it is scary to consider that the project we thought was going to take a year is actually a three (or five!) year endeavor. In my experience, deluding ourselves ends up being much scarier. If you have that spidey sense that the proposed timelines might be too chipper, please consider doing a simple miracle count exercise to regain your grasp on reality.
And of course, I am so grateful to you, all of my ornery colleagues who have grounded my overly optimistic prognoses – and I expect you to continue to do so in the future. In return, I promise to do the same, even if it is uncomfortable.