A user-situated trustworthiness model

Picking up where I left off in the previous essay, I want to reflect on the causal arrows that turned up in the exploration. It seems that maybe we have a seedling of a simple framework for evaluating user-situated trustworthiness. I’d like to now zoom in a bit on software products, since this is the area where I spent most of my time. In this area, the “things that are mine” are usually the data that I, as a user, associate with myself. Looking at the properties of the boundary-tracing process, I can infer that there are two challenges that any user evaluating a product for trustworthiness will face.

First, there’s the challenge of evaluating the extent of what they consider theirs. The implicit question a user asks is: “What’s all the data that I need to think about in relation to this product?” When looking at the extent, two concerns pop out for me: quantity and substance of the data. Quantity of data seems to correlate with extent. When interacting with a software product, the more of my data (higher quantity) I share with this product, the greater the extent of the boundary-tracing. Substance is similarly correlated. The more important the data is to me, the more invested I will be in the boundary-tracing. Conversely, if I don’t consider this data to be important, I will be engaged in boundary-tracing to a lower extent.

My first year in the US was one of the most culturally transformative years of my life. I might as well have arrived on an alien planet. It took me a few painful mistakes and the great wisdom of caring friends to learn the strength of the spirit of individualism in American culture. Coming from a culture where very few things were truly “owned” by an individual (and thus would be considered insubstantial in our little framework), the discovery of property rights and proprietorship was jarring and profound. Think of “substance” as the strength of a user’s connection to their data. Comparing myself back then and now, it is fascinating how little of “what is mine” that I consider valuable today would be viewed as such by that Soviet kid.

At least within this framework, it is now easy to see that the extent of boundary-tracing is inversely correlated with trustworthiness. The more important the data and the more of it there is, the more difficult it will be for the user to trace the boundary around it.

The second, orthogonal challenge of the boundary-tracing process that a user will face is that of clarity. How much confidence do I have as a user that the boundary I traced is accurate? The two big obstacles — or put differently, the inversely correlated components — are connectedness and fluidity. The first one stems from the idea that tracing the boundary is more difficult in a densely connected graph. If the software product I use is potentially connected to another product or a place where the data could be moved to, do I have to treat that other place as part of my boundary-tracing?

Fluidity makes things even worse. Being able to move data quickly adds ambiguity to where to trace boundaries. In my last post, I talked about floppy disks. If you ever used one, you probably remember the unmistakable grinding noise of the floppy drive writing your data down, the light blinking and all that. Once the noise stopped and the light stopped blinking, you knew that the data had made it over to the disk. Compare that to the frictionless fluidity of today’s Internet, with its seemingly instant data transfer speeds. The more the data is like water, the less confident a user is about their ability to trace the boundary around it.

So, when a user is looking at a software product, I am suggesting that they are implicitly evaluating these four components. Is the data I will share with this product substantial? How much of it will I share? How connected is this product to others? How quickly can my data be moved elsewhere? Of course, depending on whether they are a young adult from the Soviet Union or an aging Silicon Valley software engineer, the results of their evaluation will differ. However, my intuition is that they will roughly follow the same process.
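Here’s a toy sketch of that evaluation in code. Everything here (the 0-to-1 ratings, the even weighting, the formula itself) is an assumption of mine, silly math rather than a real measure:

```python
def trustworthiness(quantity, substance, connectedness, fluidity):
    """Toy trustworthiness score. Each component is a 0-to-1 rating,
    and all four are inversely correlated with trustworthiness,
    so we average them and invert. Purely illustrative."""
    extent = (quantity + substance) / 2            # challenge 1: extent
    clarity_cost = (connectedness + fluidity) / 2  # challenge 2: clarity
    return 1 - (extent + clarity_cost) / 2

# A local-only tool holding a lot of my important data scores higher
# than the same data in a well-connected, highly fluid cloud product.
local_tool = trustworthiness(quantity=0.6, substance=0.8,
                             connectedness=0.1, fluidity=0.1)
cloud_tool = trustworthiness(quantity=0.6, substance=0.8,
                             connectedness=0.9, fluidity=0.9)
```

The numbers themselves don’t matter; the point is that the four questions above map onto the four inputs, and any answer that grows pushes the score down.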

Tracing the boundary

Early last year, I invested a bunch of time into exploring the idea of trustworthiness as it pertains to engineering products and people who use them. I found this to be an incredibly complex and nuanced topic, and I have learned a bunch of lenses and developed a few framings. I want to share one of these with you.

Let me preface the story with a bit of semantic disambiguation. When looking at trustworthiness, I am defining it from the perspective of a person interacting with a product. Let’s call them a “user” for simplicity, though I do have a few quibbles with that particular word (a digression for another time). So put simply, the degree of trustworthiness is the level to which a user considers the product worthy of their trust. 

This definition unflinchingly hands the discernment of trustworthiness to my users. And, given that users are wonderfully diverse in their perspectives, it seems like the usefulness of applying such a framing is vanishingly small. After all, it is much easier to define trustworthiness in a principles-based approach, usually as a set of bright lines that I, the creator of the product, commit to never crossing. Once I have these principles outlined, I can organize processes and methodologies to ensure that they are satisfied.

The experience — however limited — from my exploration of the topic of trust is that even though a principles-based approach does offer a comfortable nest for us engineers and product managers to settle into, it is exceedingly difficult to get right. The trouble appears to stem from the fact that the bright lines are usually inferred from some idealized mental model of a user and thus rarely resemble the trust-related concerns of the actual user. This feels like a general observation: principles are most effective when they are clear, yet clarity typically arises through removing nuance. As a result, there’s a nagging sense of dissonance between my impassioned commitments to principles and the “meh” response of the users of my product.

I’ve been grappling with squaring this circle ever since, fairly unsuccessfully. One direction that seemed promising was the framing of boundaries and “tracing the boundary” thought experiment. Here is how it goes.

Imagine that every user is constantly and more or less unconsciously trying to trace the boundary around what’s theirs. Is it mine and if so, is it within or outside the boundary? If I have my phone in my front pocket, I feel like I can trace the boundary around this phone with confidence. At the same time, when I realize that I left this phone in the park, I no longer have such confidence. The phone now resides outside of the boundary, and given that this is my phone, I’d feel pretty anxious about it.

Similarly, when I have a piece of data stored on a — let’s go retro! — floppy disk, I know exactly where this data is stored. The disk becomes a physical embodiment of the data. I can now put it in my front pocket, or forget it at the park. In both cases, I will have clarity on whether my data is inside or outside my boundary.

As we start moving toward data that moves more or less frictionlessly across vast distances and in enormous quantities, the notion of a boundary becomes blurred. The task of tracing the boundary becomes more daunting and seemingly impossible. I have to contend with the idea that I can’t confidently draw boundaries around things I consider mine. And in many cases, I can’t account for all the things out there that, had I known they existed, I would consider mine.

My sense is that the trustworthiness of a product is somehow correlated with the degree to which the user believes they can trace the boundary around what’s theirs. If, as a user, I have high confidence that I know how everything of mine is kept by the product — be that inside or outside my traced boundary — I can develop a trusting relationship with this product. If, on the other hand, I have low confidence in my understanding of how things of mine are handled, I am unlikely to find this product trustworthy.

Now, my confidence may be misplaced. My mental model of how the product keeps what’s mine could be naively optimistic – or conversely, overly paranoid. My intuition is that it is trustworthiness — the predilection to having a relationship with the product — that enables me, the user, to develop a more accurate mental model over time. And in this thought experiment, I do it by tracing and retracing the boundaries of what’s me and mine.

Behavior over time graphing tool

As I was writing last week’s bit on behavior over time graphs, I realized that I kind of want to have a simple tool that enables me to quickly draw and share a behavior over time graph. It seemed like a small-enough project. So I went ahead and wrote it. It is a picture of simplicity. Go to a page, adjust some points, type in a title and you’ve made your own visage of some behavior over time.

If you are using a computer with a mouse or touchpad, the adjustable points on the graph will reveal themselves as your cursor moves over the coordinate space. You can then drag any one of those points up or down. Click on the title at the top of the graph to edit and change it to your liking. If you are using a touch-based device (such as a phone or a tablet), just drag the curve with your finger.

Once you have your curve the way you want it and the title capturing its essence, click the “Share” button. This will change the URL to capture the parameters of the curve and its title. Now you can share your creation with your friends and colleagues or even complete strangers by sharing a URL. In most browsers, clicking the button will also copy the URL to the clipboard for convenience. Like this.
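If you’re curious how that kind of sharing works, here’s a rough sketch in Python (the tool itself is client-side JavaScript). The parameter names and the comma-separated point format are my guesses for illustration, not the tool’s actual URL scheme:

```python
from urllib.parse import parse_qs, urlencode, urlparse

BASE = "https://dglazkov.github.io/botg/"

def share_url(title, points):
    """Capture the graph title and point heights as URL query parameters.
    The names 't' and 'p' are hypothetical, not the real scheme."""
    return BASE + "?" + urlencode({"t": title, "p": ",".join(str(p) for p in points)})

def from_url(url):
    """Recover the title and points from a shared URL."""
    query = parse_qs(urlparse(url).query)
    return query["t"][0], [float(p) for p in query["p"][0].split(",")]

url = share_url("My behavior over time", [10, 40, 25, 60])
title, points = from_url(url)
```

The nice property of this approach is that the URL is the whole document: nothing is stored server-side, and anyone with the link can reconstruct the curve.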

I am currently hosting the tool at https://dglazkov.github.io/botg/. Give it a whirl, and see if it works for you. If you encounter problems or have ideas for new features, file an issue here. Enjoy!

It was an interesting adventure. From the start, I wanted to go with something very simple and fast, so I opted against using dependencies or modern frontend stacks. I thought, hey – maybe I should try to code straight to the platform? After all, I was an Uber TL for the Chrome Web Platform team for a few years. Do I still have the chops?

Here’s what I found out. Custom elements are now everywhere. Shadow DOM just works. SVG works as intended, even though WebKit still has its repaint glitches (try to drag the curve and observe the “ripples” appearing on the dashed line in the center). Modules are amazing. Classes are amazing. The future has finally arrived. It is also all very well documented on MDN. It was kind of crazy to read about CSS variables as if they were just an ordinary part of the developer’s tool chest. How cool is that? I remember when we were pitching these ideas and people were looking at us like we were crazy. And – of course, OMG – VS Code. What an amazing development suite this project grew up to be.

All in all, a pleasant experience. However, here’s one thing I noticed. Even though I’d forgotten some names and keywords, I realized that the path I walked was informed by the intuition developed by working with the other side of the platform. Perhaps for those who haven’t spent years in the C++ entrails of WebKit and Blink, coding directly to the platform might not be as easy?

Behavior over time graphs and ways to influence

I was geeking out over behavior-over-time graphs (BOTG) this week and found this neat connection to bumpers, boosts, and tilts. My colleague Donald Martin first introduced me to BOTG and they fit right in, given my fondness for silly graphs.

The idea behind BOTG is simple: take some measure of value and draw a graph of its imagined behavior over time. Does the line go up? Does it go down? Does it tepidly mingle at about the same level? To make it even more useful, we can use BOTG for predictions: draw the “today” vertical line on the graph, splitting the space into “past” and “future.” Now, we can use it to convey our sense of how things were going before, and predict what happens next.

Now, let’s add another twist to this story: the goals. Usually, if we care about a value, we have some notion of a goal in relation to this value. Let’s draw this goal as a horizontal line at the appropriate level. If our goal is reaching a certain number of daily active users, we capture this notion by drawing our BOTG squiggle in a way that crosses the goal line in the “future” part of the graph.

It turns out that by considering how we’ve drawn this intersection of goal and BOTG lines, we can determine the type of influence that might be effective to make our prediction come true.

If our BOTG curve needs to touch and stick to — or asymptotically approach — the goal line, we are probably talking about a bumper. There is some force that needs to keep that curve at a certain level, and that’s what bumpers are for. For instance, if I want to keep the size of the binary of my app at a certain level, I will likely need to employ team processes that enforce some policy about only landing changes that keep us under that magic number.

If we picture the curve as temporarily crossing the goal line, we are probably looking at a boost. This is literally the “we just need to get it across the line” case. A good example here is the intense march toward a release that many software engineering teams experience. There are some criteria that are determined as requirements for shipping, and, spurred by the boost, the team works their hearts out trying to meet them. A common effect of a boost is the slide back across the line after the team ships, relaxing and resting ahead of another round of shipping.

Last but not least, the curve that permanently crosses the goal line and never looks back is likely a marker of a tilt. Here, the goal line is just a checkpoint. Did we reach N daily active users? Great. Now let’s go for N x 2. When such ambition is part of the prediction, we are likely looking for some constant source of compounding energy. A good question to ask here is — where will it come from?
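To play with this a little, here’s a toy classifier that looks at a predicted curve and a goal level and guesses which kind of influence the prediction implies. The tolerance and the rules are arbitrary sketch-level assumptions of mine, not part of the framing itself:

```python
def classify_influence(curve, goal, tolerance=0.05):
    """Guess the influence type from a predicted curve and a goal level.
    curve: list of predicted values over time; goal: the goal line."""
    above = [value >= goal for value in curve]
    if not any(above):
        return "none"    # the prediction never reaches the goal line
    first = above.index(True)
    if not above[-1]:
        return "boost"   # crossed the line, then slid back below it
    if all(abs(value - goal) <= tolerance * goal for value in curve[first:]):
        return "bumper"  # reached the line and stuck to it
    return "tilt"        # crossed the line and never looked back

sticking = classify_influence([10, 60, 98, 100, 101, 100], goal=100)
crossing_back = classify_influence([10, 60, 110, 120, 90, 80], goal=100)
compounding = classify_influence([10, 60, 110, 150, 200, 260], goal=100)
```

Notice that the distinction comes entirely from what happens after the first crossing: sticking, sliding back, or continuing to climb.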

One of the common mistakes that I’ve seen leads make is confusing the outcomes of boosts with those of tilts. Both offer gains. Boosts feel faster and provide that satisfying thrill of accomplishment, but they are at best temporary. Tilts are slower, but their advances are more lasting. So when leaders employ a boost and expect the curve to just stay over the goal line, they are in for an unpleasant surprise. Early in my tenure at the Chrome team, I organized a task force to reduce the number of failing tests (shout out to my fellow LTTF-ers!), a small scrappy band of engineers dedicated to fixing failing tests. At one time, I reported that we brought the number of failures down from a couple of thousand to just 300! Trust me, that was an amazing feat. I am still in awe of us being able to get there. Unfortunately, my strategy — organizing a task force — was that of a boost. The spoils of the hard-won victory lasted a week after the task force disbanded. For a sobering reference, that list of failing tests is currently clocking in at 7623 lines.

See if the BOTG with goals can help you puzzle out what might be the strategy for that difficult next endeavor you’re facing. Use them to clearly capture your predictions – and perhaps glimpse which method of influence might be needed to make them a reality.

Normative, Informative and Generative Voices

I’ve been thinking about how to convey the style of writing that I’ve learned while writing here, and this lens materialized. And yes, once again, the distinctions come in threes. As Nicklas Berild Lundblad suggested, I might be suffering from triangulism… and I like it.

The normative voice spurs you to action. Normative voice aims to exert control, which may or may not be something you desire. For example, signs in public places tend to be written in a normative voice. Objectives, team principles, political slogans, and codes of conduct – all typically have that same quality. Normative voice usually conveys some intention, whether veiled or not.

The informative voice is not here to tell you what to do. Informative voice goes “here’s a thing I see” or “here is what I am doing.” Informative voice does not mean to impose an intention – it just wants to share what it’s seeing. As such, the informative voice tends to come across as aloof and unemotional, like that of a detached observer.

Given that our primary medium of communication is of human origin and thus deeply rooted in feelings, it is incredibly challenging to separate normative and informative voices. I might believe that I am writing something in an informative voice, but employ epithets and turns of phrase that betray my attachment. Similarly, I could be voicing something that looks informative, but actually intends to control you – aka the ancient art of manipulating others. Let’s admit it, my teen self saying: “Mom, all kids have cool <item of clothing> and I don’t” was not an informative voice, no matter how neutrally presented. Another good sign of a “conveyed as informative, yet actually normative” voice is the presence of absolute statements in the language, or subtly – or not! – taking sides as we describe things.

Conversely, I might be trying to write normative prose, yet offer no firm call to action or even a sense of intention. My experience is that this kind of “conveyed as normative, yet actually informative” voice is commonly an outcome of suffering through a committee-driven wordsmithing process. “I swear, this mission statement was meant to say something. We just can’t remember what.” – oh lordy, forgive me for I have produced my share of these.

Within this lens, the constant struggle between the two – and the struggle to untangle the two – might seem rather unsatisfying and hopeless. Conveniently, I have a third voice that I’ve yet to introduce.

The generative voice accepts the struggle as a given and builds on that. Generative voice embraces the resonance-generating potential of the normative voice and the wealth of insights cherished by the informative voice. Yet at the same time, it aims to hold the intention lightly while still savoring the richness of feelings conveyed by the language. Generative voice is the voice that spurs you to improvise, to jam with ideas, to add your own part to the music of thinking.

This is the language that I aim for when writing these little essays. For example, I use the words “might” and “tends to” to indicate that these aren’t exact truths, and I don’t intend to hold them firmly. I try to explore every side of the framing with empathy, inhabiting each of the corners for a little while. But most significantly, I hold out hope that after reading these, you feel invited to play with the ideas I conveyed, to riff on them, departing from the original content into places that resonate more with you. When speaking in the generative voice, I primarily care about catalyzing new insights for my future self – and you. And in doing so, I am hopeful that I am helping us both find new ways to look at the challenges we’re facing.

The value triangle

My colleagues and I were chatting about the idea of a “good change,” and a lens popped into my head, along with the name: the value triangle. I swear it was an accident.

When a team is trying to discern whether a change they are imparting on their product (and thus the world) is “good,” it’s possible that their conversation is walking the edges of a triangle. This triangle is formed by three values: value to business, value to the user, and value to the ecosystem. 

When something is valuable to business, this usually means that the team benefits from this change. When something is valuable to the user, it is usually the user who perceives the change as desirable in one way or another. The third corner is there to anchor the concept of a larger system: does the change benefit the whole surrounding environment that includes the business and all current and potential users? A quick aside: usually, when we talk about this last corner, we say things like “thinking about long-term effects.” This is usually true – ecosystems tend to move at a slower clip than individual users. However, it helps to understand that the “long” here is more of a side effect of the scope of the effects, rather than a natural property of the change.

Anyhow, now that we have visualized this triangle, I am going to sheepishly suggest that a “good” change is an endeavor that somehow creates value in all three corners. To better illustrate, it might be useful to imagine what happens when we fail to meet this criterion.

Let’s start with situations when we only hit one of the three. If our change only produces value for our business, we’re probably dealing with something rather uncouth and generally frowned upon. Conversely, if we only produce value for our users, we’re probably soon to be out of business. And if we are only concerned about the ecosystem effects, it’s highly likely we’re not actually doing anything useful.

Moving on to hitting two out of three, delivering a combination of user and business value will feel quite satisfying at first and will feel right at home with a lot of things we humans have done since the Industrial Age. Unfortunately, without considering the effects of our change on the surrounding ecosystem, the all-too-common outcome is an environmental catastrophe – literal or figurative. Moving clockwise in our triangle, focusing on only producing value for users and the ecosystem yields beautiful ideas that die young of starvation. The third combination surprised me. I’ve been looking for something that fits the bill, and with a start, realized that I’ve lived it. The intricately insane web of Soviet bureaucracy, designed with the purpose of birthing a better future for humanity, captured tremendous amounts of value while explicitly favoring the “good of the many” over that of an individual. For a less dramatic example, think of a droll enterprise tool you used recently, and the seeming desire of the tool to ignore or diminish you.

It does seem like hitting all three will be challenging. But hey, if we’re signing up to do “good,” we gotta know it won’t be a walk in the park. At least, you now have this simple lens to use as a guide.

Lenses, framings, and models

At a meeting this week, I realized that I use the terms “lens,” “framing,” and “model” in ways that hold deep meaning to me, but I am not certain that this meaning is clear to others. So here’s my attempt to capture the distinctions between them.

The way I see these, the lens is the most elemental of the three. A lens is a resonant, easy-to-remember (or familiar) depiction of some phenomenon that offers a particular way of looking at various situations. Kind of like TV commercial jingles, effective lenses are catchy and brief. They usually have a moniker that makes them easy to recall, like “Goodhart’s Law” or “Tuckman’s stages of group development” or “Cynefin.” Just like their optical namesakes, lenses offer a particular focus, which means that they also necessarily distort. As such, lenses are subject to misuse if used too ardently. With lenses, the more the merrier, and the more lightly held, the better. Nearly everything can be turned into a lens. A prompt “How can this be a lens?” tends to yield highly generative conversations. For fun, think of a fairy tale or a news story and see how it might be used as a lens to highlight some dynamic within your team. Usually, while names and settings change, the movements remain surprisingly consistent.

Framings are a bit more specialized. They are an application of one or more lenses to a specific problem space. For example, when I am devising a strategy for a new team, I might employ Tuckman’s stages to describe the challenges the team will face in the first year of its existence. Then, I would invoke Cynefin to outline the kind of problems the team will need to be equipped to solve, rounding up with Goodhart’s Law to reflect on how the team will measure its success. When applied effectively, framings turn a vague, messy problem space into a solvable problem. To get me there, framings depend on the richness of the collection of lenses that are available to me. If these are the only three lenses I know, I will quickly find myself out of my depth in my framing efforts: everything I come up with will limp with a particular Tuckman-Cynefin-Goodhart gait.

Finally, models are networks of causal relationships that form within my framings. The problem, revealed by my framing exercise, might yet be intractable. While I can start forming hypotheses, I still have little sense of how many miracles each will take. This is where models help. Models allow me to reason about the amount of effort each of my hypotheses will require. Because each of the hypotheses is a causal chain of events, models help uncover links of these chains that are superlinear.

Getting back to our team planning example, the first four of Tuckman’s stages form a neat causal sequence and might lead us to conclude that the process we’re dealing with is linear and thus easily scheduled. However, if we study the network of causal relationships closer, we might be able to see that it isn’t. The team’s storming phase can tip the team’s environment into the complex Cynefin space and thus extend the duration of the storming phase. Or, arrival at the norming stage might make the team susceptible to over-relying on its metrics to steer, triggering Goodhart’s law, eventually leading to a slide into the chaotic Cynefin space, setting the team all the way back to forming.

The nonlinearity does not need to be surprising. Once we see it in our models, the conversation elevates from just looking at possible solutions to evaluating their effectiveness. Framings give us a way to see solvable problems. Models provide us with insight on how to realistically solve them.

Bumpers, Boosts, and Tilts

A discussion late last year about different ways to influence organizations led to this framing. The pinball machine metaphor was purely accidental – I don’t actually know that much about them, aside from that one time when we went to a pinball machine museum (it was glorious fun). The basic setup is this: we roughly bucket different ways to influence as bumpers, boosts, and tilts.

Bumpers are hard boundaries, bright lines that are not to be crossed. Most well-functioning teams have them. From security reviews to go/no-go meetings to any sort of policy-enforcing process, these are mostly designed to keep the game within bounds. They tend to feel like stop energy – and for a good reason. They usually encode hard-earned lessons of pain and suffering: our past selves experienced them so you don’t have to. Bumpers are usually easy to see, and even if hidden, they make themselves known with vigor whenever we hit them. By their nature, bumpers tend to be reactive. Though they will help you avoid undesired outcomes, they aren’t of much use in moving toward something – that is the function of boosts.

Boosts propel. They are directional in nature, accelerating organizations toward desired outcomes. Money commonly figures into the composition of a boost, though just as often, boosts can vibe with a sense of purpose. An ambitious, resonant mission can motivate a team, as can an exciting new opportunity or a fresh start. Boosts require an investment of energy, and sustaining a boost can be challenging. The sparkle of big visions wears off, new opportunities grow old, and bonuses get spent. Because of that, boosts are highly visible when new energy is invested into them, and eventually fade as this energy dissipates. For example, many organizational change initiatives have this quality.

Finally, tilts change how the playing field is leveled. They are often subtle in their effects, relying on some other constant force to do the work. Objects on a slightly slanted floor will tend to slide toward one side of the room, gently but inexorably driven by gravity. In teams, tilts are nearly invisible by themselves. We can only see their outcomes. Some tilts are temporary and jarring, like the inevitable turn to dramatic events in the news during a meeting. Some tilts are seemingly permanent, like the depressing slide toward short-term wins in lieu of long-term strategy… or coordination headwinds (Woo hoo! Hat trick!! Three mentions of Alex’s excellent deck in three consecutive stories!) Despite their elusive nature, tilts are the only kind of influence capable of a true 10x change. A well-placed gentle tilt can change paradigms. My favorite example of such a tilt is the early insistence of the folks who designed the protocols and formats that undergird the Internet that they be exchanged as RFCs, resulting in the openness of most of the important foundational bits of the complex, beautiful mess that we all love and use. However, most often, tilts are unintentional. They just don’t look interesting or useful to mess with.

Any mature organization will have a dazzling cocktail of all three of these. If you are curious about this framing, consider: how many boosts in your team are aimed at the bumpers? How many boosts and bumpers keep piling on because nobody had looked at the structure of tilts? How many efforts to 10x something fail because they were designed as boosts? Or worse yet, bumpers?

Silly math

In Jank in Teams, I employed a method of sharing mental models that I call “silly math.” Especially in surroundings that include peeps who love (or at least don’t hate) math, these can serve as a simple and effective way to communicate insights.

For me, silly math started with silly graphs. If you ever worked with me, you would have found me at least once trying to draw one to get a point across. Here I am at BlinkOn 6 (2016! – wow, that’s a million years ago) in Munich talking about the Chrome Web Platform team’s predictability efforts and using a silly graph as illustration. There are actually a couple of them in this talk, all drawn with love and good humor by yours truly. As an aside, the one in Munich was my favorite BlinkOn… Or wait, maybe right after the one in Tokyo. Who am I kidding, I loved them all.

Silly graphs are great, because they help convey a sometimes tricky relationship between variables with two axes and a squiggle. Just make sure to not get stuck on precise units or actual values. The point here is to capture the dynamic. Most commonly, time is the horizontal axis, but it doesn’t need to be. Sometimes, we can even glean additional ideas from a silly graph by considering things like the area under the curve, or first and second derivatives. Silly graphs can help steer conversations and help uncover assumptions. For example, if I draw a curve that has a bump in the middle to describe some relationship between two parameters – is that a normal distribution that I am implying? And if the curve bends, where do I believe the nonlinearity comes from?
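Here is what that gleaning might look like in code. The squiggle and the helpers are toys of my own making; with evenly spaced points, discrete slopes and a trapezoidal area are enough to spot the dynamic:

```python
def slopes(points):
    """Discrete first derivative: the slope between consecutive points.
    A growing slope hints at a compounding loop behind the curve."""
    return [b - a for a, b in zip(points, points[1:])]

def area_under(points):
    """Trapezoidal area under the squiggle, assuming evenly spaced x."""
    return sum((a + b) / 2 for a, b in zip(points, points[1:]))

squiggle = [0, 1, 3, 6, 10]   # a curve that bends upward
rising = slopes(squiggle)     # each slope is larger than the last
total = area_under(squiggle)
```

Again, the units are meaningless; what carries the insight is the sign and the trend of the slopes.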

Silly math is a bit more recent, but it’s something I enjoy just as much. Turns out, an equation can sometimes convey an otherwise tricky dynamic. Addition and subtraction are the simplest: our prototypical “sum of the parts.” Multiplication and division introduce nonlinear relationships and make things more interesting. The one that I find especially fascinating is division by zero. If I describe growth as effort divided by friction, what happens when friction evaporates? Another one that comes in handy is multiplication of probabilities. It is perfectly logical and still kind of spooky to see a product of very high probabilities produce a lower value. Alex Komoroske used this very effectively to illustrate his point in the slime mold deck (Yes! Two mentions of Alex’s deck in two consecutive pieces! Level up!) And of course, how can we forget exponential equations to draw attention to compounding loops?! Basic trigonometry is another good vehicle to share mental models. If we can sketch out a triangle, we can use the sine, cosine, or tangent to describe things that undulate or perhaps rise out of sight asymptotically. In the series, I did this a couple of times when talking about prediction errors and the expectation gradient.
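A couple of these, written down as code rather than formulas. These are illustrative toys of mine, not real models of growth or reliability:

```python
def growth(effort, friction):
    """Growth as effort divided by friction: as friction approaches zero,
    growth blows up toward infinity (and at zero, the model breaks down)."""
    return effort / friction

def chain(probabilities):
    """Multiplying probabilities: even a chain of very likely steps
    becomes surprisingly unlikely as a whole."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

steady = growth(effort=10, friction=2)
frictionless = growth(effort=10, friction=0.01)  # tiny friction, huge growth
ten_sure_steps = chain([0.95] * 10)              # ten 95%-likely steps in a row
```

The spooky part is that `ten_sure_steps` lands well below any of its inputs: ten steps that each feel nearly certain compound into roughly a coin flip.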

Whatever math function you choose, make sure that your audience is familiar with it. Don’t get too hung up on details. It is okay if the math is unkempt and even wrong. The whole point is to rely on the existing shared mental model space of math as a bridge: conveying in a simple formula something that might otherwise take a bunch of words.

How to make a breakthrough

The title is a bit tongue-in-cheek, because I am not actually providing a recipe. It is more of an inkling, a dinner-napkin doodle. But there’s something interesting here, still half-submerged, so I am writing it down. Perhaps future me – or you! – will help carry it the next step forward.

Ever since my parents bought me an MK 54, I knew that programming was my calling. I dove into the world of computers headfirst. It was only years later that I had my formal introduction to the science of it all. One of the bigger moments was the discovery of big O notation. I still remember how the figurative sky opened up and the angels started singing: so that’s how I talk about that thing I kept bumping into all this time! The clarity of the framing was profound. Fast programs run in sublinear time. Slow programs run in superlinear time. If I design an algorithm that turns an exponential-time function into a constant-time one, I have found a solution to a massive performance problem – even if I didn’t realize it existed in the first place. I’ve made a breakthrough. Suddenly, my code runs dramatically faster, consuming less power. Throughout my software engineering career, I’ve been learning to spot the places in code where superlinearity rules and to exorcize it. And curiously, most of them hide a loop that compounds computational cost in one way or another.
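The textbook Fibonacci function is a nice miniature of this kind of breakthrough (my illustration here, not a war story from my own career): the naive version hides a compounding loop, and memoization is the change that takes the loop out of the equation.

```python
from functools import lru_cache

# Naive Fibonacci: each call spawns two more calls -- a compounding
# loop that makes the running time grow exponentially with n.
def fib_slow(n):
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# Caching results removes the compounding loop: each subproblem is
# computed exactly once, so the exponential blowup collapses to
# linear time.
@lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(90))  # instant; the naive version would effectively never finish
```

Nothing about the problem itself changed; what changed is that we spotted the compounding loop and designed it away.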

I wonder if this framing can be useful outside of computer science. Considered very broadly, big O notation highlights the idea that behind every phenomenon we view as a “problem” is a superlinear growth of undesired effects. If we understand the nature of that phenomenon, we can spot the compounding loop that leads to the superlinearity. A “breakthrough” then is a change that somehow takes the compounding loop out of the equation.

For example, let’s reflect briefly on Alex Komoroske’s excellent articulation of coordination headwinds. In that deck, he provides a crystal-clear view of the superlinear growth of coordination effort that happens in any organization that aims to remain fluid and adaptable in the face of a challenging environment. He also sketches out the factors of the compounding loop underneath – and the undesired effects it generates. Applied to this context, a breakthrough might be the introduction of a novel way to organize, in which an increase in uncertainty, team size, or the culture of self-empowerment results in meager, sublinear increases in coordination effort. Barring such an invention, we’re stuck with rate-limiting: managing nonlinearity by constraining the parameters that fuel the compounding loop of coordination headwinds.

Though we can remain sad about not yet having invented a cure for coordination headwinds, we can also sense a distinct progression. With Alex’s help, we moved from simply experiencing a problem to seeing the compounding loop that’s causing it. We now know where to look for a breakthrough – and how to best manage until we find it. Just like software engineers do in code, we can move from “omg, why is this so slow” to “here’s the spot where the nonlinear growth manifests.”

It is my guess that breakthroughs are mostly about finding that self-consistent, resonant framing that captures the nature of a phenomenon in terms of a compounding loop. Once we are able to point at it and describe it, we can begin doing something about it. So whether you’re struggling with an engineering challenge or an organizational one, try to see if you can express its nature in terms of big O notation. If it keeps coming up linear or sublinear, you probably don’t have the framing right. Linear phenomena tend to be boring and predictable. But once you zero in on a framing that lights up that superlinear growth, it might be worth spending some time sketching out the underlying compounding loop, causality and factors and all. When you have them, you might be close to making a breakthrough.