The law of tightening aperture

What happens to the teams with wide apertures over time? Does the aperture stay the same? What about the teams with narrow strategy apertures? When I started examining these questions, I recognized a familiar pattern. This pattern seems so pervasive that I will go ahead and boldly proclaim it as a law: the law of tightening aperture. Here it is:

Given a changing environment, the strategy aperture of any organizational unit tends to only tighten over time.

Using super-plain language, the law states that every team’s openness to exploring new opportunities diminishes over time. Every organization that exists today is highly likely to be more strategically flexible than the same organization tomorrow.

💰 The requirement of value

Why does this happen? Strategy aperture appears to be subject to a gravity-like force that can be resisted and sometimes counterbalanced, but never expected to relent: the force of homeostasis. To make it a bit less abstract and more easily digestible in the context of this essay, I will rename it the force of the requirement of value.

The requirement of value can be described as a collective expectation that an organization will at some point deliver value. Given how much we put into it, we’d like to receive it back – and get at least a little bit more (notice how this rhymes with homeostasis).

There are organizations that don’t have these requirements – but usually, they don’t have strategies, either. In this way, the requirement of value both creates the need for strategy and imposes the law on this strategy. 

The law of tightening aperture applies to any kind of organization, including polities and ecosystems. As long as there’s the requirement of value, the same dynamic will emerge.

The requirement of value doesn’t always come from the need for tangible outcomes (profits, growth, etc.). Team identity can play a big role in the tightening of the aperture. A team that painstakingly discovers who it is and becomes comfortable with that identity is a team that can’t become something altogether different.

As an illustration, consider Tuckman’s phases of group development model. The “performing” phase is highly sought after and celebrated when reached, yet it is also the one with the narrowest aperture. There is a reason why there’s the “adjourning” or “transforming” stage tacked on at the end: once a team learns how to perform, it must necessarily cease to exist in its current form to do something different. 

Another source of the requirement of value might be the collective desire for predictability. This comes up very strongly in rapidly changing environments. When everything is up in the air, the need for stability can feel existential. This desire may lead to a paradox during times of change: just when the team needs to keep its aperture wide to better anticipate and see new opportunities, the collective fear of uncertainty will keep tightening the organization’s strategy aperture.

🏛️ Not necessarily a bad thing

Aperture tightening is not always a bad outcome. It might be exactly what we’re looking for. Teams with a narrow aperture are easily pointed at the problems whose shapes are within their aperture. Confidently solving that problem is their unique gift. The key here is to understand and be intentional about the change.

When we seek a tighter aperture, we usually talk about focus. For example, the phrase “more wood behind fewer arrows” heralded such a change for Google back in 2011. Transitioning from a wide aperture to a narrow one is often seen as a necessary step to improve a team’s ability to create value efficiently. This is all good. Aperture tightening works out great when the viable niche the organization ended up pointing at is durable in the long term. For instance, it is exceedingly likely that people will want to consume electricity or the Internet for the foreseeable future. No matter how tight its aperture, an organization can thrive providing either.

We just need to remember that – at least according to the law I present in this essay – there will be no easy way to reverse that shift. Becoming more open to contemplating new opportunities is much, much harder once the aperture tightens. If the shape of the industry changes and the niche shifts, we will suddenly discover that we are unable to adjust, battered by the winds of change and unable to even see them clearly. Like, hey, those cassette tapes were cool, but if I had built my business on them, I was in for a nasty surprise in the ’90s.

⏪ Can the tightening be reversed?

Can the aperture tightening be reversed? Not permanently. As I mentioned earlier, it’s best to view this force as gravity. To keep a soccer ball in the air, I need to exert energy. The default state of the ball is to lie still on the ground. Similarly, the default state of a strategy aperture is to be as narrow as possible. We can resist this force by investing our energy into it, and counteract its effects. But these investments will not have a permanent effect.

Anytime we get the impression that an organization has revitalized itself, it is doubtful that this happened due to a permanent broadening of the aperture. More likely, a “kick-in-the-pants” event loosened the aperture long enough to spot a promising new, similarly-shaped opportunity and allowed re-pointing of the organization toward it. Or perhaps we’re actually observing a wholly different organization, similar in name only – the outcome of a (likely painful) transformation.

As organizational leaders, when we decide where to invest our time, it might be useful to look at where we want to be in relation to our current strategy aperture. Relative to the aperture we have, do we want it tightened or broadened?

If we need a tighter aperture, we must be very careful about embarking on our team-focusing venture: is this a long-term viable niche? If yes, let’s plow ahead. If it is likely to shift (or is shifting already), we are making ourselves more vulnerable to disruption.

If we determine that we need to broaden our strategy aperture, it might be that the entirety of our job will be resisting the law of tightening aperture. This is why leading teams and organizations may feel so challenging and downright futile: we keep trying to walk against gravity. It’s a task that is feasible in the short term, but not in the long term. Eventually, that soccer ball has to come down.

The gardener

I often use the analogies of gardens and gardening in my writing. It’s only fitting that I talk about gardeners. This was inspired by something that my friend Alex Komoroske said once – though now he tells me that he can’t remember saying that. Oh well. 

When we talk about gardening, we picture a flourishing environment, where everything is neat and lovely and blossoming in serene calm. We tend to imagine gardens as the opposite of the hustle-and-bustle, dog-eat-dog kinds of environments.

However, to become these environments, gardens need gardeners. The key property of a garden is not the fertile soil, or a picturesque location, or even the choice of seedlings. The key property of a garden is the gardener — someone who is able and willing to exercise significant power to ensure that the garden is protected from the rest of the environment.

Gardeners skillfully wield all kinds of violent, cutting instruments to ensure that the garden is growing well. They get on their knees and ruthlessly pull out anything that is not supposed to be there. They engage in prolonged battles with pests, who keep finding new and clever ways to get into the garden. Gardeners fight for the garden.

Gardens exist because they have gardeners: individuals who are willing to put their sweat and tears into them.

When the gardener leaves, the garden dies. It doesn’t die quickly. For a little while, it may even look like the garden is going to be fine. Like all the work that the gardener has put into it has finally paid off and the garden can live on its own. But that is not to be. Eventually, the rabbits dig out the roots. The mites take over the leaves. And the garden withers. Over time, the surrounding environment swallows it, making it part of itself.

Sometimes, a garden gets lucky and gets another gardener. But the new gardeners see the inefficiencies of how the flowerbeds were drawn, and how the soil is too heavy on clay. How the water supply could be moved to a more central location. They have different ideas about the kinds of plants they want and where. With the same gusto as the previous gardener, they mold the garden to their liking. Gardens are shaped like their gardeners, and if they aren’t, they will be.

Here’s to gardeners. Those who are willing to expend their effort and their capital on building a patch of something that’s perhaps quirkier and weirder, but undeniably more intentional than the rest of the environment. I salute you.

Strategy aperture gifts and curses

Now that I’ve sketched out the concept of strategy aperture, let’s play with it. Let’s imagine two teams: one with a wide strategy aperture and the other with a narrow one. What are the gifts and curses of these teams?

A team with a wide strategy aperture will have the gift of sensing: it will be able to discern a massive variety of opportunities. The flexibility of the wide aperture gives it the capacity to try everything. Dabble in this, taste test that. Write a quick prototype here, throw together a demo there.

One thing that this team won’t be able to do is doggedly pursue one particular opportunity. When sensing is a gift, commitment is a curse. Teams with a wide strategy aperture usually stink at delivering on the opportunities. They are idea factories. Their curse is that someone else usually takes these ideas to market. PARC, Bell Labs and many other venerable institutions of technology innovation are all subject to that curse.

Let’s look at the other team. Its narrow aperture gives the team the gift of focus. This team knows how to take a vague idea and make it real. As long as this opportunity is within its capabilities, the team will find a way. Unlike the first team, this one won’t get distracted by a new shiny and accidentally forget about what matters. Like a tractor or any other power tool, if we point this team to a problem, we know that they will give it their all.

The curse of this team is that it’s pretty much blind to other opportunities. Once the target is locked, there might as well be no other opportunities – everything is poured into the one that’s chosen. As a member of engineering teams, I’ve seen this pattern repeat quite often. It’s like watching a train wreck in slow motion. When new disconfirming evidence emerges, the narrow-aperture team just keeps on chugging. Even when everyone knows the effort is going to fail, nobody dares to mess with the gears. People just keep shrugging and saying: “Yep, this one’s going to end poorly.”

This vignette gives us a nice distinction to build upon. If we look around various teams in our organization, can we spot the ones with the narrow aperture? Can we point at the ones with the wide aperture? Knowing their gifts and curses, can we predict what will happen next with the project they are working on?

Strategy Aperture

The concept of embodied strategy continues to captivate me. Recently, I found a more resonant way to talk about the narrowness and breadth of the cone of embodied strategy: the strategy aperture.

As I explored earlier, organizations tend to have a certain gait, a way of doing and thinking that they develop over time. No matter how much we try to convince them otherwise, they will always veer toward that certain way – hence the term “embodied” strategy, used in contrast to “stated” strategy.

The cone of embodied strategy is the degree to which the organization is subject to its embodied strategy. Put very simply, the cone indicates how much flexibility we have in pursuing various future opportunities. Narrow cones indicate very little flexibility – we are set in our ways, and that’s the way we are. Broad cones indicate a lot of flexibility – the world is our oyster.

Borrowing the term from optics, we can use the word “aperture” to indicate the breadth or narrowness of the cone of embodied strategy.

A narrower aperture means that, as an organization, we are well-suited to producing only a certain class of ideas. Think of a team that’s laser-focused on a particular problem or highly specialized to build a certain kind of product.

Broader apertures allow organizations to be nimble, more mercurial, able to anticipate and act on a wide variety of opportunities. Some – most – of these will not pan out, and some will hit the motherlode.

Neither a broad nor a narrow strategy aperture is good or bad in itself – however, it must be matched to the environment.

A narrow strategy aperture is very useful in more elephant-like environments, where it’s all about optimizing the fit into a well-known, stable niche. The narrower our aperture, the more attractive elephant-like environments will be to us.

The flexibility a broader aperture brings is very effective in dandelion environments: brand-new spaces, where constraints aren’t clearly defined. The broader the aperture, the more comfortable the team is in a dandelion environment.

How do we find out the strategy aperture of our organization? My intuition is that we need to look at how this organization is constrained. Put differently, what are the limits that confine the cone of its embodied strategy?

Applying a framing from the problem understanding framework, we can look for three limits: time, capacity, and attachment. Note: these limits are nearly always tangled with each other in mutually reinforcing ways.

The time limit is the easiest to spot and is the most intuitive. Does everything within our organization seem to happen slower than on the outside? The more emphatically we confirm this statement, the more likely our aperture is on the narrower side. Conversely, does it feel like our team moves faster than everyone can blink? Then we probably have a broader strategy aperture.

The limit of capacity is also fairly straightforward. What we don’t know creates a negative space where we can’t create new ideas. Skills, expertise, and the breadth of experience are key factors in pushing the organization’s capacity limits outward. How specialized are we as a team? Deeper specialization narrows the aperture – and so do team cultures that produce echo chambers and groupthink.

The third limit is attachment. Words like “risk”, “downside”, and “uncertainty” typically come up to describe this limit’s contributing factors. For example, the more customers we’re serving with our products, the more risk we will be taking to pursue new opportunities – this narrows our embodied strategy aperture. The more existential the idea of change feels within our organization, the less broad our aperture is.

By studying these limits, we should get a pretty good sense of our team’s strategy aperture. Now comes the key question: does it match our environment? If the answer is a confident yes, then we are set to accomplish amazing things. And more than likely, we’re not even asking ourselves these questions, busy as we are doing those things. However, if trying to answer this question sows doubt in our minds, we might be in a mismatched environment: our embodied strategy prevents us from being effective at achieving what we’re aiming for.

The Wallpapering Principle

This principle builds on the layering principle, and deals with a common decision point that most software developers reach many times in the course of their work.

The situation that leads to this point unfolds something like this. There is some code at the lower layer that isn’t giving us the results we need for implementing the functionality of our layer. There’s some wart that was left there by developers of that layer, and we have to do something to minimize the exposure of our customers to this wart.

What do we do? The most intuitive action to take here is wallpapering: adding some code at our layer to reduce the gnarliness of the wart. This happens so commonly and so pervasively that many writers of code don’t even recognize they are doing it. Web development has a proud tradition of wallpapering. Entire library communities (jQuery, React, etc.) have invested a ton of time into wallpapering over the warts of the Web platform.

Especially when we are not thinking in terms of layering, we might presume that we are simply writing good code. However, what is really happening here is a shift in layering responsibility – or perhaps “layer entanglement” is a catchier term. The code we are writing to fix the wart is out of place in our layer: it actually needs to live at the lower layer. And that means that by wallpapering, we are most definitely violating the layering principle. The code we write might be astoundingly good, but it’s kind of jammed sideways between the two layers.

As a result, the wallpapering code tends to be a drag on both layers. The layer below, now constrained by the specific way in which the wallpapering code consumes it, is grumpy about the loss of agency in addressing the original wart. By wrapping itself over the wart, our code has amber-ified it, preserving it forever.

At our layer, the code is an albatross. I already pointed at the CSS Selector-parsing code in jQuery as one example. Because it belongs to a lower, more general, and more slowly moving layer, every piece of wallpapering code saps the efficiency of the team that needs to maintain it.

Perhaps most importantly, the wallpapering code has the capacity to misinform the layers above of the nature of the machinery below. If the opinion of the wallpapering code deviates strongly from the lower layer’s intention, the consumers at higher layers will form inaccurate mental models of how the lower layer works. And that is where the compounding costs really get us in the long term. The story that my friend Alex Russell has been telling about the state of modern web performance is a dramatic and tragic example of that.

All in all, we are best off avoiding wallpapering at all costs. However, this is easier said than done. Most of the time, our bedrock layers (the lower layers we’re building on top of) are imperfect. They will have warts. And so here we are at the primary tension that the wallpapering principle helps us resolve: the tension between the intention to avoid wallpapering and the need to deliver reasonable products to our customers.

To resolve this tension, we must first acknowledge that both of these forces have merit, and in extreme, both result in unhappy outcomes. To navigate the tension, we must lean toward minimizing wallpapering, while seeking to reduce the cost of opinion of our wallpapers when we must employ them.

The key technique here is polyfilling (and its close cousin, prollyfilling): when we choose to wallpaper, stay as close to the spirit of the lower layer as possible. For example, if our cloud API is occasionally emitting spurious characters, we might be better off filing the “please trim those characters” bug against this API, and then trimming these characters as closely as possible to the code that receives them from the network. Then, when the bug is fixed, we just remove the trimming code.
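
As a sketch of that trimming example (the endpoint and the exact spurious characters are hypothetical), the workaround can live in a single function right at the network boundary, so the rest of our layer sees the lower layer as it should behave:

```javascript
// Hypothetical raw call to the cloud service (the bedrock layer).
async function fetchRawReport(url) {
  const response = await fetch(url);
  return await response.text();
}

// TODO: remove once the "spurious trailing characters" bug is fixed upstream.
// Strips stray control characters the (hypothetical) API sometimes appends.
function trimSpuriousCharacters(payload) {
  return payload.replace(/[\u0000-\u0008\u000e-\u001f]+$/g, "");
}

// The only place the workaround is applied: immediately after the network.
// When the upstream bug is fixed, deleting trimSpuriousCharacters is trivial.
async function fetchReport(url) {
  return trimSpuriousCharacters(await fetchRawReport(url));
}
```

Because the fix sits right where bytes leave the network, nothing above this function ever learns that the wart existed – which is exactly what makes it easy to remove later.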

A good polyfill is like a temporary tenant in an otherwise crowded family home: ready to move out as soon as the conditions permit. Wallpapering is usually a bad idea. But if we feel we must wallpaper, think of the code we’re about to write as a polyfill – code that really wants to live at the lower layer, but can’t yet.

The Layering Principle

Preference toward layering is probably one of the more fundamental principles of software development. It pops up pretty quickly as soon as we start writing code. I wrote about layering extensively in the past, but basically, when we start connecting bits of code together, a tension arises between the bits. Some need to change faster and some need to stay put. This tension quickly forces our code to be arranged in layers, whether we want it or not. I sometimes joke that layering is either something we choose to do or something that happens to our code anyway.

Thinking of layering ahead of time is costly and usually involves discipline that is not always possible, especially when timelines are tight or the shape of the software we’re writing is not yet known. Often, our initial layering designs are wrong, and a whole different layering eventually emerges. These surprises might not be pleasant, but they are to be expected. Layers accrete. We are just here to garden them into the shape that’s most suitable for our needs.

Thus, as we engage in software development, we have to contend with two conflicting forces: one of expedience and convenience that beckons us away from layering, and one of intentionality that pulls us toward it. To resolve the conflict between these forces, here’s the layering principle: lean toward intentional layering, but give the layers room to develop.

A good rule of thumb here is to define layers early on as loosely as possible, and watch for where the layer boundaries are potentially crossed. When this crossing seems to happen, take the opportunity to clarify the layering. Watch for new layers to emerge and don’t add them without a clear need. 

Here’s a concrete example of loose layer definition. Suppose we’re building a client library for a cloud service. We might define three layers, listed here in reverse order (from bottom to top):

  1. Raw REST. At the bottom, there’s the raw REST-ful API: literally, HTTP calls to the cloud service. This is the bedrock for us – we consume it, but don’t build it ourselves. Don’t forget to have a bedrock layer. There’s always something that we build on.
  2. Core. In the middle, there’s the idiomatic layer that translates raw calls into constructs that are common for the target environment of the library. For example, if our target environment is Node, we might have something that uses the http module or the new-fangled fetch to make the REST calls and return JSON.
  3. Features. Things that make the cloud service easier to use go in the top layer. This is where we can add fun syntactic sugar that lets us write the code in three lines instead of twenty, or address a particular use case in a particularly elegant way.
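
A toy sketch of these three layers, assuming a hypothetical cloud service at api.example.com (all names, endpoints, and fields here are made up for illustration):

```javascript
// Layer 1: Raw REST -- the bedrock. We consume it, we don't build it.
// Here it is simply `fetch` against the service's HTTP endpoints.

// Layer 2: Core -- idiomatic translation of raw calls for our environment.
// Turns HTTP responses into plain JSON and errors into exceptions.
async function coreGet(path) {
  const response = await fetch(`https://api.example.com${path}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return await response.json();
}

// Layer 3: Features -- syntactic sugar for a common use case.
// Three lines for the caller instead of twenty.
async function getRecentItems(limit = 10) {
  const items = await coreGet(`/items?sort=recent&limit=${limit}`);
  return items.map((item) => ({ id: item.id, title: item.title }));
}
```

Note the layering hygiene the sketch preserves: a developer who skips getRecentItems and calls coreGet – or even raw fetch – directly loses convenience, but not capability.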

This might seem counterintuitive, but start writing code without explicitly putting these layers in place. Don’t force them. Think of the process as growing a seedling. Just keep giving them a glance as more code is added. Does this particular function seem like it could be in the Core layer? How would it group with others like it? Especially at the very early stages, think of layers as aspirational, and feel free to adjust the aspiration. Be patient: they will start showing up and becoming real.

Once the layers start coming together, it helps to develop a layering hygiene: imagine that a developer chooses to engage with a layer directly, instead of using the full stack. If they are making raw REST calls, are they missing anything? Can they still get the same results? If they decide to write their own specialization layer, are they missing any of the core functionality?

Finally, as we develop features, watch for what’s happening to the code. Is there a new clump of code that seems to be forming? Maybe there’s a layer of specialization that starts emerging, or perhaps the core layer is splitting into idiomatic service calls and scaling/configuration layers?

The trick to the layering principle is in recognizing that there’s no simple answer: layering is a bit of a paradox that requires flexible thinking and continuous keen observation, rather than precise solutions.

Chances to get it right

This has been rolling around in my head for a while, and a conversation with fellow FLUX-ers spurred me to grapple with these ideas some more. Here’s somewhat rambling advice that emerged.

In the world of technology, there is this concept of shipping: the moment when we finally dot all the i’s and cross all the t’s, and release the thing we’ve been working on to our intended audience. Shipping is a happy and stressful event for technologists, akin to parents sending their child off to college: we hope it will be okay, and can think of a thousand reasons why it might not be.

At this threshold of shipping, all the assumptions we’ve made face reality. Some of them will be right, and some – many! – will be wrong. We likely didn’t get everything right. Depending on how well we’ve guessed, we’ll see a range of outcomes. At one end of the spectrum lies the scenario where we’ve gotten everything wrong: our product flops entirely. At the other extreme is perfect success. Both extremes lead to easy next steps. It’s the middle that is muddy: if our thing only somewhat succeeds and somewhat fails, what are we to do next?

We need more chances to get it right. To navigate this muddiness, we find ourselves needing to engage in a delicate dance with our customers, trying to understand them. We make new guesses, see the customer react, adjust our thinking, and try again. Every guess is a chance to get it right.

This dance can be tentative and exhausting. The customer might be annoyed with us persistently getting it somewhat wrong. We might want to finish it already, wishing to move on to other things. But it can also be exhilarating, surprising us with new opportunities that we couldn’t dream of at the beginning. It is these opportunities that make such a dance worth it.

If anything, we need to learn to anticipate this dance and look forward to it. We are better off maximizing the number of chances to get our thing right. To get there, we need two mindset shifts. 

🚃 Treat shipping as a process

First, we must view shipping as a process, not a single event. Just like with our kid going to college, we’re not even close to being done. If anything, shipping is the beginning of the next phase of our journey: the one where what we’ve built makes contact with the customer and upends our initial guesses. Similar to good parenting, the job changes, but it doesn’t go away.

This may seem obvious in the world of modern software development. Of course shipping is a process! However, our yearning for a sense of completion and predictability keeps getting in the way. Look at any shipping roadmap of a product. It might seem like a perfect depiction of a process rather than an event: a lineup of milestones, planned out a few quarters ahead. But… do these milestones look like a neat progression of features? If so, this is just a bunch of shipping events strung together.

When shipping is a process, milestones rarely line up neatly into a clear sequence of features on a roadmap. Instead, milestones are treated as trains that arrive and leave on schedule, and the contents of each train are determined at its departure. Shipping as a process accounts for uncertainty. Some releases might have many features in them, and some might be mostly bug fixes. It is okay to hold a feature back a release, or to remove it altogether.
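
One way to picture the train model is as a tiny filter over feature readiness – a sketch, with made-up feature names and dates:

```javascript
// "Trains on schedule": the departure date is fixed; the contents are
// whatever happens to be ready when the train leaves. Features that miss
// this train simply wait for the next one -- no drama, no slipped dates.
function assembleTrain(departure, features) {
  const shipped = features.filter((f) => f.readyBy && f.readyBy <= departure);
  const heldBack = features.filter((f) => !f.readyBy || f.readyBy > departure);
  return { departure, shipped, heldBack };
}
```

The point of the sketch is what is *not* a parameter: nothing about the departure date depends on any individual feature.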

I learned this lesson in the early days of Chrome. I don’t know how the team functions now, but back then, the whole “train on schedule” process was amazingly effective. The level of stress among engineers was low, and we actually had time to dig into why the things we shipped worked or didn’t work – and crafted better software as a result.

No, we couldn’t show beautiful multi-year roadmaps of features – but I tend to think that was a good thing. Most of those roadmaps are fiction anyway, designed to alleviate collective angst over pervasive uncertainty. Life doesn’t lend itself to nice clear lines, and the less time we spend trying to line up our futures, the more time we have to work toward the future we want.

🔄 Close the gap between hypothesis and test

Second, we need to actively work to reduce the gap between making a hypothesis and testing it. A useful framing here is the OODA loop: make sure that the speed of our testing of hypotheses matches the speed of the environment. If we plan to ship a product in a year, yet the space into which this product will ship changes every month, we are likely to be disappointed in the outcomes of our hypotheses.

Naturally, different technologies give us varying degrees of wiggle room here. Hardware traditionally gravitates toward longer gaps between the initial hypothesis and its test. Software tends to offer more flexibility. Some markets are less forgiving of chance-taking than others. Make choices carefully here: sometimes it’s worth moving things to software to speed up the OODA loop, or playing in an adjacent, more chance-rich field to test things out.

Whatever choices we make, we are better off when we make contact with our customers as quickly as possible. Only they can inform us about the future direction of our technology and its potential. Only in cooperation with them can we build a product that actually works.

Instead of fretting about getting it right the first time, opt for small, incremental releases whose main purpose is learning. Think of it as a series of tiny, controlled explosions in a combustion engine, rather than one Big Bang. Ship small things that build up to the big thing, not the other way around.

Learn to set user expectations low. Instead of large splashes and announcements, release quietly and improve quickly. View shipping as a marathon, where our customers are surprised by consistent improvements (“you know, I never thought of this, but <product> has grown to be really great”), not by a flash of dramatic discovery. Apply the principle of least astonishment liberally.

This stance will feel counterintuitive and wrong. We technologists love the image of a single man on stage in a black turtleneck, blowing people’s minds. This is not what we need to optimize for. In fact, if we do, we will likely never get there. Instead, focus on maximizing the number of chances to get it right. Ship early and often, taking the time to observe and orient between each chance.

Beyond pros and cons

Among engineers and product managers, a pros and cons document is a fairly common practice. The practice calls for outlining a problem statement that defines the scope of solutions, then listing potential solution options, with two lists for each option: a list of pros and a list of cons. It’s a fairly straightforward, easy-to-follow pattern.

After observing it in the wild, one thing that I am noticing is that this pattern tends to yield somewhat unsatisfying outcomes. Here are some typical failure cases.

In one failure case, as we add the pros and cons, we somehow lose sight of what’s important: the lists just grow with various things we observe about the options, making comparison of options an increasingly arbitrary process. I have seen several of these exercises stall because the pros and cons list became unwieldy and daunting.

Another failure case comes from the flattening that any list forces on decision-making. Among all the options, there are usually one or two that stand out as preferred solutions to anyone familiar with the matter, and the other options are there to make a crowd. It’s kind of like a police lineup. Another variation is when the listed options include solutions that aren’t feasible, however desirable. A police lineup with unicorns, I guess.

Sometimes, options look either too similar or too dissimilar to each other, which deflates the decision making process, and an endless debate emerges. Folks who prefer their option keep beefing up its pros, and diminishing the cons. Instead of being a helpful tool, the list becomes the arena of organizational dysfunction.

Let’s give pros and cons a makeover. I’ve been playing with a rev on the practice and it seems to work more effectively. Here’s what I am doing differently.

First, start with the principles. What are the attributes of the solution that are important to us? What are the desired properties that we want the solution to have? Try to limit the number of principles: three to five is a Goldilocks range. This might take a little thinking and wrangling of often-conflicting wishes. If you’re new to the concept of principle-crafting, check out this handy guide I wrote a while back.

The process of discerning the principles is useful in itself, since it leads to better understanding of the problem and possible solutions. This might not feel like problem-solving, but it’s actually the bulk of it. Knowing what’s important in a solution is the key to finding it.

Using the principles we’ve just devised, imagine the ideal solution. If the principles are sound, it will just pop out. We should be able to articulate what it is we’re looking for. Write it out, draw a mock, etc. Principles and the ideal solution don’t have to be approached sequentially: sometimes I have an intuition for the ideal solution first, and then have to articulate why with principles.

If we’re lucky, this will be the end of the journey, because the ideal solution is something we can already do. If so, let’s do it!

If not, we move on to studying headwinds. What makes it hard for us to implement this ideal solution? List each reason as a separate headwind, and note which principle it undermines. This step requires a high level of candor and may be uncomfortable, especially in team cultures that overvalue being agreeable. For example, it might be that our team does not have the right skills, or that the infrastructure we currently use is not a good fit. It is okay to have weaknesses. Knowing our weaknesses helps us make better decisions.

Now, let’s look at the alternatives. This is the familiar step where we generate a list of options. Since we’ve already thought about the headwinds, we can lean on them to come up with better options. Which solutions will be resilient to the headwinds? Which ones will manage to avoid them by compromising on the ideal?


Instead of pros and cons, evaluate how far the solution is from our imagined ideal. Think of it as taking on a “principle debt”: how much are we violating our own principles to solve this problem? The farther the solution is from the ideal, the more work we will have to do to get there in the future. At this step, the key is not to get mired in pros and cons. Instead, play the outcomes of the solution forward and understand its effects. How far from the ideal will they take us?

Pick the option that accumulates the least amount of debt. This will no longer be a daunting task. At this point, we’ve done all the thinking, and will have much better clarity on the matter. 
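
The selection step can be sketched as a tiny scoring exercise. The principles and options below are made up, and in practice the debts come out of the headwinds discussion rather than a formula:

```javascript
// "Principle debt" selection: each option lists which of our principles it
// violates; the total count of violations is its debt. Pick the least.
function leastDebtOption(options) {
  return options.reduce((best, option) =>
    option.violations.length < best.violations.length ? option : best
  );
}
```

The value here is not the arithmetic – it’s that by the time we can even write down each option’s violations, the thinking is already done.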

Finally, document the ways in which the solution we’ve chosen would not live up to our principles. Again, blunt candor is preferred here. Each of the items in this list is a bug to be fixed, our “principle debt” payment schedule. Set an intention to fix them over time.

Don’t skip this last step. Most solutions are compromises. If we found ourselves at this step, we picked something that deviates from the ideal solution. Knowing the shortcomings of the solution is our roadmap, a path to iterate toward what we actually want.

If you’re up for it, give this method a try. Let me know how it works for you. I would love to learn about ways to improve it.