Normative, Informative, and Generative Voices

I’ve been thinking about how to convey the style of writing that I’ve learned while writing here, and this lens materialized. And yes, once again, the distinctions come in threes. As Nicklas Berild Lundblad suggested, I might be suffering from triangulism… and I like it.

The normative voice spurs you to action. Normative voice aims to exert control, which may or may not be something you desire. For example, signs in public places tend to be written in a normative voice. Objectives, team principles, political slogans, and codes of conduct – all typically have that same quality. Normative voice usually conveys some intention, whether veiled or not.

The informative voice is not here to tell you what to do. Informative voice goes “here’s a thing I see” or “here is what I am doing.” Informative voice does not mean to impose an intention – it just wants to share what it’s seeing. As such, the informative voice tends to come across as aloof and unemotional, like that of a detached observer.

Given that our primary medium of communication is of human origin and thus deeply rooted in feelings, it is incredibly challenging to separate normative and informative voices. I might believe that I am writing something in an informative voice, yet employ epithets and turns of phrase that betray my attachment. Similarly, I could be voicing something that looks informative, but actually intends to control you – aka the ancient art of manipulating others. Let’s admit it, my teen self saying “Mom, all kids have cool <item of clothing> and I don’t” was not speaking in an informative voice, no matter how neutrally the words were presented. Another good sign of a “conveyed as informative, yet actually normative” voice is the presence of absolute statements in the language, or subtly – or not so subtly! – taking sides as we describe things.

Conversely, I might be trying to write normative prose, yet offer no firm call to action or even a sense of intention. My experience is that this kind of “conveyed as normative, yet actually informative” voice is commonly an outcome of suffering through a committee-driven wordsmithing process. “I swear, this mission statement was meant to say something. We just can’t remember what.” – oh lordy, forgive me for I have produced my share of these.

Within this lens, the constant struggle between the two – and the struggle to untangle the two – might seem rather unsatisfying and hopeless. Conveniently, I have a third voice that I’ve yet to introduce.

The generative voice accepts the struggle as a given and builds on that. Generative voice embraces the resonance-generating potential of the normative voice and the wealth of insights cherished by the informative voice. Yet at the same time, it aims to hold the intention lightly while still savoring the richness of feelings conveyed by the language. Generative voice is the voice that spurs you to improvise, to jam with ideas, to add your own part to the music of thinking.

This is the language that I aim for when writing these little essays. For example, I use the words “might” and “tends to” to indicate that these aren’t exact truths, and I don’t intend to hold them firmly. I try to explore every side of the framing with empathy, inhabiting each of the corners for a little while. But most significantly, I hold out hope that after reading these, you feel invited to play with the ideas I conveyed, to riff on them, departing from the original content into places that resonate more with you. When speaking in the generative voice, I primarily care about catalyzing new insights for my future self – and you. And in doing so, I am hopeful that I am helping us both find new ways to look at the challenges we’re facing.

The value triangle

My colleagues and I were chatting about the idea of a “good change,” and a lens popped into my head, along with the name: the value triangle. I swear it was an accident.

When a team is trying to discern whether a change they are making to their product (and thus the world) is “good,” it’s possible that their conversation is walking the edges of a triangle. This triangle is formed by three values: value to the business, value to the user, and value to the ecosystem.

When something is valuable to the business, this usually means that the team benefits from the change. When something is valuable to the user, it is usually the user who perceives the change as desirable in one way or another. The third corner is there to anchor the concept of a larger system: does the change benefit the whole surrounding environment that includes the business and all current and potential users? A quick aside: usually, when we talk about this last corner, we say things like “thinking about long-term effects.” This is usually true – ecosystems tend to move at a slower clip than individual users. However, it helps to understand that the “long” here is more a side effect of the scope of the effects than a natural property of the change.

Anyhow, now that we have visualized this triangle, I am going to sheepishly suggest that a “good” change is an endeavor that somehow creates value in all three corners. To better illustrate, it might be useful to imagine what happens when we fail to meet this criterion.

Let’s start with situations when we only hit one of the three. If our change only produces value for our business, we’re probably dealing with something rather uncouth and generally frowned upon. Conversely, if we only produce value for our users, we’re probably soon to be out of business. And if we are only concerned about the ecosystem effects, it’s highly likely we’re not actually doing anything useful.

Moving on to hitting two out of three, delivering a combination of user and business value will feel quite satisfying at first and will fit right in with a lot of things we humans have done since the Industrial Age. Unfortunately, without considering the effects of our change on the surrounding ecosystem, the all-too-common outcome is an environmental catastrophe – literal or figurative. Moving clockwise in our triangle, focusing only on producing value for users and the ecosystem yields beautiful ideas that die young of starvation. The third combination surprised me. I’d been looking for something that fits the bill, and with a start, realized that I’ve lived it. The intricately insane web of Soviet bureaucracy, designed with the purpose of birthing a better future for humanity, captured tremendous amounts of value while explicitly favoring the “good of the many” over that of an individual. For a less dramatic example, think of a droll enterprise tool you used recently, and the seeming desire of the tool to ignore or diminish you.

It does seem like hitting all three will be challenging. But hey, if we’re signing up to do “good,” we gotta know it won’t be a walk in the park. At least, you now have this simple lens to use as a guide.

Lenses, framings, and models

At a meeting this week, I realized that I use the terms “lens,” “framing,” and “model” in ways that hold deep meaning to me, but I am not certain that this meaning is clear to others. So here’s my attempt to capture the distinctions between them.

The way I see these, the lens is the most elemental of the three. A lens is a resonant, easy-to-remember (or familiar) depiction of some phenomenon that offers a particular way of looking at various situations. Kind of like TV commercial jingles, effective lenses are catchy and brief. They usually have a moniker that makes them easy to recall, like “Goodhart’s Law” or “Tuckman’s stages of group development” or “Cynefin.” Just like their optical namesakes, lenses offer a particular focus, which means that they also necessarily distort. As such, lenses are subject to misuse if applied too ardently. With lenses, the more the merrier, and the more lightly held, the better. Nearly everything can be turned into a lens. The prompt “How can this be a lens?” tends to yield highly generative conversations. For fun, think of a fairy tale or a news story and see how it might be used as a lens to highlight some dynamic within your team. Usually, while names and settings change, the movements remain surprisingly consistent.

Framings are a bit more specialized. They are an application of one or more lenses to a specific problem space. For example, when I am devising a strategy for a new team, I might employ Tuckman’s stages to describe the challenges the team will face in the first year of its existence. Then, I would invoke Cynefin to outline the kinds of problems the team will need to be equipped to solve, rounding out with Goodhart’s Law to reflect on how the team will measure its success. When applied effectively, framings turn a vague, messy problem space into a solvable problem. To take me there, framings depend on the richness of the collection of lenses available to me. If these are the only three lenses I know, I will quickly find myself out of my depth in my framing efforts: everything I come up with will limp with a particular Tuckman-Cynefin-Goodhart gait.

Finally, models are networks of causal relationships that form within my framings. The problem, revealed by my framing exercise, might yet be intractable. While I can start forming hypotheses, I still have little sense of how many miracles each will take. This is where models help. Models allow me to reason about the amount of effort each of my hypotheses will require. Because each hypothesis is a causal chain of events, models help uncover the links in these chains that are superlinear.

Getting back to our team planning example, the first four of Tuckman’s stages form a neat causal sequence and might lead us to conclude that the process we’re dealing with is linear and thus easily scheduled. However, if we study the network of causal relationships more closely, we might see that it isn’t. The team’s storming phase can tip the team’s environment into complex Cynefin space and thus extend the duration of the storming phase. Or, the arrival at the norming stage might make the team susceptible to over-relying on its metrics to steer, triggering Goodhart’s law and eventually leading to a slide into chaotic Cynefin space, setting the stages all the way back to forming.
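If it helps to see that “network of causal relationships” literally, here is a toy sketch in Python. The node names and the little loop-finding helper are my own invention for this illustration – a real model would be far richer – but even this tiny graph surfaces the two compounding loops described above.

```python
# A toy model of the team-planning example as a directed graph of causal links.
# Node names and edges are my own shorthand for the story above, not any
# standard notation.
causal_links = {
    "forming": ["storming"],
    "storming": ["norming", "complex Cynefin space"],
    "complex Cynefin space": ["storming"],             # loop: prolongs storming
    "norming": ["performing", "metric over-reliance"],
    "metric over-reliance": ["Goodhart's law"],
    "Goodhart's law": ["chaotic Cynefin space"],
    "chaotic Cynefin space": ["forming"],              # loop: resets the stages
}

def find_loops(graph):
    """Return the unique cycles (candidate compounding loops) in the graph."""
    seen, loops = set(), []

    def walk(node, path):
        if node in path:
            cycle = path[path.index(node):]
            key = frozenset(cycle)
            if key not in seen:
                seen.add(key)
                loops.append(cycle + [node])
            return
        for nxt in graph.get(node, []):
            walk(nxt, path + [node])

    for start in graph:
        walk(start, [])
    return loops

for loop in find_loops(causal_links):
    print(" -> ".join(loop))
```

Running it prints the two cycles: storming feeding complex Cynefin space and back, and the long loop from norming through Goodhart’s law to chaos and back to forming.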

The nonlinearity does not need to be surprising. Once we see it in our models, the conversation elevates from just looking at possible solutions to evaluating their effectiveness. Framings give us a way to see solvable problems. Models provide us with insight into how to realistically solve them.

Bumpers, Boosts, and Tilts

A discussion late last year about different ways to influence organizations led to this framing. The pinball machine metaphor was purely accidental – I don’t actually know that much about them, aside from that one time when we went to a pinball machine museum (it was glorious fun). The basic setup is this: we roughly bucket different ways to influence as bumpers, boosts, and tilts.

Bumpers are hard boundaries, bright lines that are not to be crossed. Most well-functioning teams have them. From security reviews to go/no-go meetings to any sort of policy-enforcing process, these are mostly designed to keep the game within bounds. They tend to feel like stop energy – and for a good reason. They usually encode hard-earned lessons of pain and suffering: our past selves experienced them so that we don’t have to. Bumpers are usually easy to see, and even if hidden, they make themselves known with vigor whenever we hit them. By their nature, bumpers tend to be reactive. Though they will help you avoid undesired outcomes, they aren’t of much use in moving toward something – that is the function of boosts.

Boosts propel. They are directional in nature, accelerating organizations toward desired outcomes. Money commonly figures into the composition of a boost, though just as often, boosts can vibe with a sense of purpose. An ambitious, resonant mission can motivate a team, as can an exciting new opportunity or a fresh start. Boosts require an investment of energy, and sustaining a boost can be challenging. The sparkle of big visions wears off, new opportunities grow old, and bonuses get spent. Because of that, boosts are highly visible when new energy is invested in them, and they eventually fade as this energy dissipates. For example, many organizational change initiatives have this quality.

Finally, tilts change how the playing field is leveled. They are often subtle in their effects, relying on some other constant force to do the work. Objects on a slightly slanted floor will tend to slide toward one side of the room, gently but inexorably driven by gravity. In teams, tilts are nearly invisible by themselves. We can only see their outcomes. Some tilts are temporary and jarring, like the inevitable turn to dramatic events in the news during a meeting. Some tilts are seemingly permanent, like the depressing slide toward short-term wins in lieu of long-term strategy… or coordination headwinds (Woo hoo! Hat trick!! Three mentions of Alex’s excellent deck in three consecutive stories!). Despite their elusive nature, tilts are the only kind of influence capable of a true 10x change. A well-placed gentle tilt can change paradigms. My favorite example of such a tilt is the early insistence by the folks who designed the protocols and formats that undergird the Internet that these be exchanged as RFCs, resulting in the openness of most of the important foundational bits of the complex, beautiful mess that we all love and use. However, most often, tilts are unintentional. They just don’t look interesting or useful to mess with.

Any mature organization will have a dazzling cocktail of all three of these. If you are curious about this framing, consider: how many boosts in your team are aimed at the bumpers? How many boosts and bumpers keep piling on because nobody has looked at the structure of tilts? How many efforts to 10x something fail because they were designed as boosts? Or worse yet, bumpers?

Silly math

In Jank in Teams, I employed a method of sharing mental models that I call “silly math.” Especially in surroundings that include peeps who love (or at least don’t hate) math, these can serve as a simple and effective way to communicate insights.

For me, silly math started with silly graphs. If you ever worked with me, you would have found me at least once trying to draw one to get a point across. Here I am at BlinkOn 6 (2016! – wow, that’s a million years ago) in Munich talking about the Chrome Web Platform team’s predictability efforts and using a silly graph as an illustration. There are actually a couple of them in this talk, all drawn with love and good humor by yours truly. As an aside, the one in Munich was my favorite BlinkOn… Or wait, maybe right after the one in Tokyo. Who am I kidding, I loved them all.

Silly graphs are great because they help convey a sometimes tricky relationship between variables with two axes and a squiggle. Just make sure not to get stuck on precise units or actual values. The point here is to capture the dynamic. Most commonly, time is the horizontal axis, but it doesn’t need to be. Sometimes, we can even glean additional ideas from a silly graph by considering things like the area under the curve, or the first and second derivatives. Silly graphs can help steer conversations and uncover assumptions. For example, if I draw a curve that has a bump in the middle to describe some relationship between two parameters – am I implying a normal distribution? And if the curve bends, where do I believe the nonlinearity comes from?
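To make that concrete, here is roughly what one of these looks like when literalized in code. The variables (friction over time) and the S-shaped squiggle are made up for this sketch; the whole point is that the shape matters and the units don’t.

```python
# A made-up "silly graph": no data, no units, just the shape of a dynamic.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)
friction = 1 / (1 + np.exp(-(t - 5)))    # an S-shaped squiggle

fig, ax = plt.subplots()
ax.plot(t, friction)
ax.set_xlabel("time")
ax.set_ylabel("friction")
ax.set_xticks([])    # deliberately drop the precise values:
ax.set_yticks([])    # the point is the dynamic, not the numbers
plt.show()
```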

Silly math is a bit more recent, but it’s something I enjoy just as much. Turns out, an equation can sometimes convey an otherwise tricky dynamic. Addition and subtraction are the simplest: our prototypical “sum of the parts.” Multiplication and division introduce nonlinear relationships and make things more interesting. The one that I find especially fascinating is division by zero. If I describe growth as effort divided by friction, what happens when friction evaporates? Another one that comes in handy is multiplication of probabilities. It is perfectly logical and still kind of spooky to see a product of very high probabilities produce a lower value. Alex Komoroske used this very effectively to illustrate his point in the slime mold deck (Yes! Two mentions of Alex’s deck in two consecutive pieces! Level up!). And of course, how could we forget exponential equations for drawing attention to compounding loops?! Basic trigonometry is another good vehicle for sharing mental models. If we can sketch out a triangle, we can use the sine, cosine, or tangent to describe things that undulate or perhaps rise out of sight asymptotically. In the series, I did this a couple of times when talking about prediction errors and the expectation gradient.
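Here are a couple of those silly equations literalized in a few lines of Python. The numbers are made up; what matters is the shape of the relationship.

```python
# Silly math, literalized. The values are invented; only the dynamic matters.

# Growth as effort divided by friction: watch what happens as friction -> 0.
effort = 10.0
for friction in (10.0, 1.0, 0.1, 0.001):
    print(f"friction={friction:>6}: growth ~ {effort / friction:,.0f}")

# Multiplying high probabilities: every step is very likely,
# yet the whole chain is noticeably less so.
steps = [0.95] * 10          # ten steps, each 95% likely to go well
chain = 1.0
for p in steps:
    chain *= p
print(f"chance the whole chain goes well: {chain:.2f}")   # ~0.60
```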

Whatever math function you choose, make sure that your audience is familiar with it. Don’t get too hung up on details. It is okay if the math is unkempt and even wrong. The whole point is to rely on the existing shared mental-model space of math as a bridge, conveying in a simple formula something that might otherwise take a bunch of words.

How to make a breakthrough

The title is a bit tongue-in-cheek, because I am not actually providing a recipe. It is more of an inkling, a dinner-napkin doodle. But there’s something interesting here, still half-submerged, so I am writing it down. Perhaps future me – or you! – will help take it the next step forward.

Ever since my parents bought me an MK 54, I knew that programming was my calling. I dove into the world of computers headfirst. It was only years later that I had my formal introduction to the science of it all. One of the bigger moments was the discovery of big O notation. I still remember how the figurative sky opened up and the angels started singing: so that’s how I talk about that thing that I kept bumping into all this time! The clarity of the framing was profound. Roughly speaking, fast programs run in sublinear time; slow programs run in superlinear time. If I designed an algorithm that turns an exponential-time function into constant time, I found a solution to a massive performance problem – even if I didn’t realize it existed in the first place. I’ve made a breakthrough. Suddenly, my code runs dramatically faster, consuming less power. Throughout my software engineering career, I’ve been learning to spot the places in code where superlinearity rules and to exorcise it. And curiously, most of them hide a loop that compounds the computational work in one way or another.
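Here is that experience in miniature – a textbook illustration rather than a story from my own codebases. The naive recursive Fibonacci hides a call tree that compounds exponentially; a simple cache takes the compounding out.

```python
from functools import lru_cache

# The compounding loop, in miniature: each call spawns two more calls,
# so the total work grows exponentially with n.
def fib_slow(n):
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

# The "breakthrough": caching removes the compounding, leaving linear work.
@lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(90))    # returns instantly; fib_slow(90) would run for ages
```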

I wonder if this framing can be useful outside of computer science. Considered very broadly, big O notation highlights the idea that behind every phenomenon we view as a “problem” lies a superlinear growth of undesired effects. If we understand the nature of that phenomenon, we can spot the compounding loop that leads to the superlinearity. A “breakthrough” then is a change that somehow takes the compounding loop out of the equation.

For example, let’s reflect briefly on Alex Komoroske’s excellent articulation of coordination headwinds. In that deck, he provides a crystal-clear view of the superlinear growth of coordination effort that happens in any organization that aims to remain fluid and adaptable in the face of a challenging environment. He also sketches out the factors of the compounding loop underneath – and the undesired effects it generates. Applied to this context, a breakthrough might be the introduction of a novel way to organize, in which an increase in uncertainty, team size, or the culture of self-empowerment results in only meager, sublinear increases in coordination effort. Barring such an invention, we’re stuck with rate-limiting: managing nonlinearity by constraining the parameters that fuel the compounding loop of coordination headwinds.
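One classic stand-in for that superlinear growth – my shorthand here, not necessarily the exact model in Alex’s deck – is the count of pairwise communication paths in a team:

```python
# Pairwise communication paths grow as n * (n - 1) / 2, which is superlinear
# in team size n. A crude stand-in for coordination effort, used for
# illustration only.
for n in (2, 5, 10, 20, 40):
    paths = n * (n - 1) // 2
    print(f"team of {n:>2}: {paths:>3} pairwise paths")
```

Double the team and the paths roughly quadruple – the kind of growth that rate-limiting merely postpones and a true breakthrough would flatten.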

Though we can remain sad about not yet having invented a cure for coordination headwinds, we can also sense a distinct progression. With Alex’s help, we moved from simply experiencing a problem to seeing the compounding loop that’s causing it. We now know where to look for a breakthrough – and how to best manage until we find it. Just like software engineers do in code, we can move from “omg, why is this so slow” to “here’s the spot where the nonlinear growth manifests.”

It is my guess that breakthroughs are mostly about finding that self-consistent, resonant framing that captures the nature of a phenomenon in terms of a compounding loop. Once we are able to point at it and describe it, we can begin doing something about it. So whether you’re struggling with an engineering challenge or an organizational one, try to see if you can express its nature in terms of big O notation. If it keeps coming up linear or sublinear, you probably don’t have the framing right. Linear phenomena tend to be boring and predictable. But once you zero in on a framing that lights up that superlinear growth, it might be worth spending some time sketching out the underlying compounding loop, causality and factors and all. When you have them, you might be close to making a breakthrough.