Dandelion or Elephant?

The discovery of the dandelion/elephant framing was exciting and my fellow FLUX colleagues and I engaged in a rather fun “hacky sack of ideas” game, tossing the framing back and forth and looking at it from this side and that. One pattern that emerged was the “dandelion/elephant” test: is this company/team/product/concept a dandelion or an elephant? The test kept producing unsatisfactory results, making us wonder: are we holding this wrong? As usual, some new insights emerged. I will try to capture them here.

First things first: it is very easy to get disoriented about what it is that we’re testing. In our excitement, we’d forgotten that the biological equivalents of our subjects are strategies. The r-selected strategy and the K-selected strategy are approaches to the problem space that various species take. Similarly, “dandelion” or “elephant” aren’t attributes or states of an organization or product. They are strategies that an entity chooses to overcome a challenge it faces. In other words, it’s not something that an entity is or has, but rather how it acts.

Since it deals with strategies, the dandelion/elephant lens is highly contextual. A whole company or an organization or even a product is not beholden to just one strategy. There can be multiple, complementary sets of strategies for the same product. 

If I am building a REPL environment, I am clearly exercising the dandelion strategy in relation to my customers. I want the ideas my customers have to be easily copyable, discoverable, fast to first results, etc. See the Interest, Legibility, Velocity, and Access conditions I outlined earlier.

However, when considering how to organize the development of this REPL environment itself (all the infrastructure and tooling that goes into creating a dandelion field for others), I am likely to take an elephant strategy. I would want capabilities that enable me to build upon my idea, not continue to reinvent it from scratch every few months. I will seek higher reliability, more features, rigorous processes, and increasingly more powerful capabilities – the outcome of the Stability, Breadth, Rigor, and Power conditions.

Just like with any strategy, these are subject to becoming embodied. This is why I keep harping on about conditions. Our choosing to employ a given strategy is not a simple decision. It is a matter of the environment in which this decision is made. It is our environment that enables us to choose a strategy – or prevents us from doing so.

Here’s one way to think of it. Our strategy is an aggregate of the moves we individually make. If most of us are making dandelion moves (rapidly mutating ideas we discover, generating new ones without holding on to the old ones), we are in the dandelion environment. If instead, we seem to be making mostly elephant moves (collectively reinforcing one big idea, making it richer, more nuanced, more thorough, etc.), we are in the elephant environment. 

In either case, no matter how hard our leaders may call on us to change a strategy from the one we’ve currently embraced, we will only be able to produce gnarly beasts: dandelions with elephant trunks, or elephants made of pappus.

The contextual quality of the lens allows us to use it to spot inconsistencies between our intentions and our conditions. Once spotted, these inconsistencies can offer a lot of insight into what nudges to make to the cone of embodied strategy.

This framing of strategy challenges feels more hopeful to me. Instead of looking for someone to blame, look for the conditions that are present and whether or not these conditions are mismatched with the intention. If there is a distinct mismatch, look for ways to change conditions to align better with the desired outcomes.

Embrace the suck

The titular phrase is well-known in the military, though this might be a different take on the adage. This one came out of a morning conversation with fellow FLUX-ers, where we briefly chatted about life experiences that we didn’t look forward to, didn’t like when we were in the midst of them, yet have grown to cherish over the years. To draw a line, we’re talking about experiences that didn’t involve actual threats to life or violence.

Picture a simple framework. There are three attributes that can have positive or negative value: anticipation, experience, and satisfaction. The “anticipation” attribute reflects how much we are looking forward to or dreading a situation we’re about to experience. “Experience” describes what we feel throughout the situation. “Satisfaction” is our long-term attitude toward the experience.

Lining up possible values, we have a simple three-row four-column table, starting with all three attributes being negative (“hated coming into it, hated being in it, and keep hating it ever since”) and eventually flipping them, one-by-one, to positive (“loved the idea of it, love every minute of it, still smiling when thinking about it”).

If we try to draw a graph of experiential learning on top of that table, it is fairly evident that the amount of experiential learning is the highest in the middle, and lowest at the edges – kinda like a bell curve. Those experiences that made us uncomfortable at first, but turned into a fond memory later are the ones where we learned something. Perhaps we didn’t realize how much we’d love broccoli. Or how much we’d enjoy reliably shipping the same product instead of trying to build something new every few months. Everyone will have their story of a transformative experience like that. On the fringes, neither experience is particularly educational: the left-most predictably sucks and the right-most reliably rocks.

However, if we try to draw a curve of learning potential, we’ll see something more like a power curve. Despite us learning a lot in the middle of the graph, it’s all of the obvious kind: we were thrust into a novel situation and were able to orient ourselves using some tweaks to our existing mental models. The highest potential for learning will hide in the least pleasant corner: it is here where we weren’t able to relate to the environment in a productive way. 

It is in these situations that we have the most to learn, the most room to update our models of the environment. The suckiness is the signal. It tells us that there are gems of wisdom and insight to be discovered. This will feel counterintuitive – I had a bad experience, and that’s the one I stand to learn the most from? Shouldn’t I just shove it down into the back corner of my memory and never think about it again? And usually, it feels so right to do just that.

To countervail, we can develop a habit of looking at our past totally sucky experiences with a kind of inward-focused curiosity: what was it within me that reacted so negatively to it? What was being protected, and why? Is there perhaps something to learn about this part of me that is being protected, something that would help me see this past experience in a different light?

Shadows and Curses

In this Halloween-themed episode, I wanted to share a story that might be useful to API engineers, both aspiring and experienced. This is the story of four curses of API development and the shadows that conjure them. So grab that pumpkin spice latte and get ready to hear the 🎃 SPOOOOOKY! 👻 tale.

🕳️ Shadows

First – the shadows. Every upside has a downside. We don’t like to think of them on the upswing. When the shadows visit us, we are unhappily surprised to learn of their existence. This happens so often that you’d think we would have learned by now.

Take the “✨Interest” condition from my earlier story about growing dandelions. Interest is great, right? Unfortunately, with interest and excitement about an API’s potential value comes the shadow of … well, people actually trying to realize this potential value – and extract as much of it as possible. 

With the initial spirit of exploration comes the thrust of exploitation: people trying to use and – unfortunately, all too commonly – abuse the API to make it do their bidding. If anything, the sudden rise of interest in an API is a warning sign for its vendor: time to think about confronting the shadow of grift that will inevitably emerge.

It is often an uncomfortable job to be the one pointing out the shadow when the team’s collective eyes are on the shiny light of success. Yet, knowing of the existence and anticipating the emergence of the shadow can really save the organization’s hide by helping it orient toward the challenge, rather than be blindsided by it.

However imperfect and goofy it may be, I hope that this narrative will help you do just that. I organized it around eight shadows – one for each condition for growing dandelions and caring for elephants. Think of these eight as the tripwires, the emergent downsides of having been successful at attaining each condition. But to come together, the narrative needs one more twist: the curses.

🧙 Curses

Curses are menacingly sticky. They are imposed on us. No matter how much we try, curses hold us. We can point at them, battle them, and even occasionally proclaim victory over them. But sooner or later, we recognize with a sinking feeling that the celebration was premature. Our curses find yet another way to rear their ugly heads. All we can do is cherish the gift that usually comes with the curse.

The particular kind of curse I want to highlight here emerges from the seemingly innocent concept of idea pace layers. I touched on it briefly in my first article about dandelions and elephants. Ideas thrive as light, free-floating dandelions. Some survive the descent through the idea pace layers. These ideas grow and create value around them – that is the gift of this descent. As they grow, the conditions for supporting and nurturing them transform to accommodate their growth – to treat them more elephant-like. Somewhere along that transformation, the conditions cross the threshold where preserving the accumulated value means more than contemplating change.

Therein hides the curse. Though they still have their strengths and amazing survival abilities within their particular niche, idea-elephants are unable to challenge their shadows. At the bottom of the idea gravity well, we can only make our idea-elephant more precisely formulated and incrementally improve it within its niche – the local maximum.

To find a different local maximum, we need another cycle of exploration: a gazillion idea-dandelions spreading all over the space, perishing en masse while uncovering precious few novel insights. But to get there, we need conditions that would enable such a development. And such a change of conditions is a threatening proposition when we’re caring for an idea-elephant: starting all over means potentially losing the value we hold. Thus cursed, we flail and struggle to change, but as a rule – fail to do so. The new idea-dandelions can’t find fertile ground in elephant-caring conditions, which makes finding our grip on the elephant shadow even harder.

Even if we’re somehow able to transform ourselves again and recreate favorable conditions for dandelions – it’s not like we’ve gotten away from the curse. As the end credits start rolling, the viewers see our faces being struck by the recognition that we’re starting the cycle all over again. 

In the API developer’s world, the progression of this curse can be described as a cadence of steps. It begins with a success, when the conditions we’ve created for dandelion APIs actually start bearing fruit. There are lots of consumers of the APIs, and they are starting to build eye-popping things. Somewhere around here, the dandelion shadow is discovered, and we valiantly face the challenges it presents. Whether we know it or not, this process transforms our requirements, nudging us toward idea-elephant conditions. In the moment, it always makes sense – now that there are successful businesses running on our APIs, this feels like a logical next step. As we do so, the elephant shadow manifests and forces us to recognize that we need to get back to conditions that are more dandelion-like – and, despite our efforts, the curse prevents us from doing so.

Pairing up the conditions (one from the dandelion-growing list with one from the elephant-keeping one), we end up with four such progressions – the four curses. I’ll call them, respectively, the curse of irrelevance, the curse of immensity, the curse of immobility, and the curse of inscrutability.

🏚️ The Curse of Irrelevance

The two polar conditions in this curse are “ ✨ Interest” for dandelions and “ ⚓️ Stability” for elephants. I already described the moment of discovering the first shadow. I’ve lived that moment a bunch of times throughout my career, and it’s almost always followed by the call to bring things under control. This exertion of control is transformational: it brings the change of conditions toward Stability. 

Once that change is complete, we enter the third beat of the curse: encountering the shadow of Stability. It turns out, once we’ve gotten things under control, these things get boring and stale. That same explosive growth, attenuated by the faucet of predictability, slows down to a trickle. 

Facing this second shadow, we try to bring back the mojo – and more than likely, can’t. Idea-elephants don’t travel upward in the pace layers. No matter how much we try, new ideas are quickly shot down: too risky, too crazy, too irresponsible. The hard-earned stability resists being disturbed, cursing us with irrelevance. 

♾️ The Curse of Immensity

The second curse can be seen as the interplay between “🔮 Legibility” and “⛰ Breadth”. The gift of legibility is in the simplicity with which the API can be used. It’s just begging us to play with it. 

However, once our customers start messing with the API, something interesting happens: they start seeing the edges of our canvas, bumping into the limits: “Oh, I wish this API supported this <feature>!” As the dandelion idea of an API takes root in the collective minds of its consumers, there’s a steady stream of requests for improvements. Obliging to fulfill these requests is the second beat of the curse – the transformation to Breadth. 

On cue, the shadow of Breadth presents itself: the bloated, incoherent, everything-bagel API surface. Adding new features to the API is a puzzle with many moving pieces. Removing APIs is a massive pain in the butt. Everything around us is gigantic – the scale of our usage, the number of feature requests that keep showing up. And of course – the rising chorus of complaints that the API surface is just too darned large.

Steeling ourselves to confront the second shadow, we discover that it’s much harder to tame than the first one. A common API designer’s trope that I’ve seen (and tried to use myself) is the “well-lit paths” pattern. It seems logical that if we just highlighted some APIs and not others and organized them into well-designed pathways for developers, then some of our incoherence issues would go away. I’ve yet to see a great application of this pattern. Instead, what typically happens is something of a high-modernist paving of lonely highways and bridges to nowhere that adds to the confusion and girth rather than alleviating it. Mocking us, the curse of immensity knows that organizing large API surfaces only makes them larger.

I’ve already written a bit about API deprecation. Deprecation of APIs tends to be a losing battle. It takes a lot more time and effort to remove features than to add them, which means that over time, the tyranny of the curse of immensity only strengthens.

🧊 The Curse of Immobility

Between the conditions of “🚀 Velocity” and “📚 Rigor”, we find the third curse. A setting that allows us to string together a quick prototype is rarely the same setting that we use for launch. As soon as our API customers start seeing some uptick in their usage, the first shadow will immediately remind us of that lesson.

As a matter of transformation, we overcome this shadow by introducing processes and infrastructure that are critical for shipping products at scale. If we are to retain our customers and set them up for long-term success, we must transition to the stance of Rigor.

Pretty soon, the shadow of Rigor makes itself known. All these amazing best practices, checks and balances, launch gates and test infrastructure reduce velocity, sometimes quite dramatically. Gone are the days when one could quickly put together a bug fix. Everything seems to take eons to get done.

This one is especially hard for engineers. Everyone seemingly notices this, yet there does not appear to be a way out. Another rallying cry to make things go faster gets mired in yet another committee or working group. Once the API conditions transform into elephant-caring ones, the curse of immobility prevents us from getting back to lightweight experimentation.

🗝️ The Curse of Inscrutability

The final curse is formed by the pairing of “🔎 Access” and “⚡️ Power”. The key tension here is in the level of opinion within the API. Access needs APIs to be highly opinionated, while Power needs the opposite.

The first shadow becomes visible when our users start using the APIs in earnest, beyond initial prototypes. All that opinion that made it possible for them to build those prototypes quickly starts getting in the way. “That’s so cool! How do I turn it off?” was one of my favorite bits of developer feedback to some of my early Web Components API ideas. As developers’ ideas start holding value, the focus shifts to getting closer to the metal.

One of the common drivers of this transformation to the condition of Power is the attempt to squeeze a bit more performance or capability out of the product built on top of the API. This story typically involves the API vendor exposing deeper and deeper hooks inside, and thus relinquishing some (or all) of the opinion held by these APIs. A while back, I already mentioned the Canvas API in WebKit, which collapsed the whole of the HTML/CSS opinion straight to Apple’s CGContext API – as close to the underlying platform as one could get back then.
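To make this dynamic a bit more concrete, here’s a tiny sketch – hypothetical names and shapes, not any particular vendor’s API – of an opinionated helper that, under pressure from power users, grows an escape hatch and relinquishes its opinion piece by piece:

```typescript
// Hypothetical sketch: an opinionated API sprouting escape hatches.

// v1: highly opinionated – one call, every decision made for the caller.
function renderCard(title: string): string {
  return `<div class="card"><h2>${title}</h2></div>`;
}

// v2: after enough "how do I turn it off?" requests, the vendor exposes
// lower-level hooks, handing opinion back to the caller piece by piece.
interface RenderOptions {
  tag?: string;                          // override the wrapper element
  classes?: string[];                    // override the default styling hooks
  transform?: (html: string) => string;  // arbitrary post-processing: opinion fully relinquished
}

function renderCardV2(title: string, options: RenderOptions = {}): string {
  const tag = options.tag ?? "div";
  const classes = (options.classes ?? ["card"]).join(" ");
  const html = `<${tag} class="${classes}"><h2>${title}</h2></${tag}>`;
  return options.transform ? options.transform(html) : html;
}

console.log(renderCard("Hello"));                                    // easy, opinionated
console.log(renderCardV2("Hello", { tag: "section", classes: [] })); // powerful, fewer guardrails
```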

As predictably as a Greek tragedy’s plot, the second shadow makes its entrance. With power comes the need for skill to wield this power, which in turn leads to a rapid decline in the number of folks who can actually use it effectively. In such scenarios, there are only a few (grumpy) wizards who actually know how to use the APIs, and whoever hires them accrues all the value.

And of course, it is very, very hard to argue convincingly that this value needs to be lost and the power given up to return to the Access condition to confront the second shadow. The curse of inscrutability has taken its hold.

🧛 Haunted API design

The four curses accost us simultaneously and often interplay with each other, usually to a reinforcing effect. The curse of Immensity invites Inscrutability. The curse of Immobility often comes on the heels of those two. The curse of Irrelevance stokes the fears of obsolescence and exacerbates the effect of the other curses. It’s all a hauntingly accursed mess. There is seemingly no escape from it. At least based on my experience, every team that sets out to ship an API comes under the spell of these curses.

What’s an API designer to do? Clearly, scream and wail in horror – what kind of Halloween tale would it be otherwise? Oh well. Perhaps some future episode will point the path out of the spine-chilling quagmire. Maybe in time for Christmas? 

Caring for elephants

Now that we have a guiding compass for growing dandelions, what of the elephants? What are the conditions that might be effective for our APIs to nurture ideas that are like elephants? 

As a quick reminder, elephants, like all species that rely on the K-selected strategy in biology, are characterized by four traits that are roughly the opposite of those of dandelions (the r-selected strategy).

First, a constant population size is important in an environment at or near its carrying capacity, so the K-selected strategy encourages a low reproduction rate.

Second, to survive the ebbs and flows of resources within its particular ecological niche, an organism needs mass. K-selected bodies are usually larger.

Third, rather than letting mutation alone take care of finding the fit within their niche, K-selected species have an additional band of adaptation – knowledge. They learn and change their behavior over their lifetime. This encourages a longer life span and a longer maturation process. Children take a while to become adults, and grown-ups invest time passing their learning on to their offspring.

The fourth and final characteristic is a set of particular strengths. These come in handy when K-selected organisms compete for limited resources in a crowded niche. Be it flexibility, agility, or just plain brawn – as with elephants, each is carefully selected for over a long arc of evolution.

Before we go any further, a reasonable question: why would anyone want to build APIs for idea-elephants? The motivation is usually somewhere around their size. Idea-elephants tend to hold and retain value. If we want to build APIs to help us generate reliable revenue for a long period of time, we are probably looking for an elephant-caring strategy.

A good example of an idea-elephant is an ecosystem: people and technology mingling together in mutually beneficial ways. Thriving ecosystems have lasting power and behind each, there’s a learned way of doing things, the idea that defines the nature of the ecosystem. Unlike with dandelions, the idea is no longer freely mutable. The Internet is one of those gigantic elephants, with the Internet Protocol as the API that makes it possible.

At a smaller scale, anytime we want to preserve some value that we believe is contained in the use of a particular technology, we’ll likely want to create favorable conditions for elephants in our API design.

Just like in the previous exercise with dandelions, I’ll map the biological attributes to equivalent conditions.

⚓️ Stability

Unlike dandelions, with their obsession with excitement, elephants tend to want their dependencies to be boring. Being predictable and reliable is a highly sought-after quality. Earned trust is of the essence for the APIs that aspire to cater to elephants. Trustworthy API design is somewhat of an art form. In some ways, the seemingly easiest move is not to change anything. Every change might result in a potential breakage and loss of held value. Holding that value is more important than pursuing new ideas, and as such, the number of new ideas (the “reproduction rate” from biology) will be small.
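As a tiny illustration – a hypothetical sketch, not a recipe – here is what an elephant-friendly, non-breaking change might look like: new capability arrives as an optional addition with defaults that preserve the old behavior, so existing callers never notice:

```typescript
// Hypothetical sketch of a non-breaking (elephant-friendly) API evolution.

interface FetchReportOptions {
  format?: "text" | "json"; // new, optional capability; the default preserves old behavior
}

// The original signature took only an id. Adding an optional options bag
// with safe defaults means every existing call site keeps working unchanged.
async function fetchReport(id: string, options: FetchReportOptions = {}): Promise<string> {
  const format = options.format ?? "text";
  return format === "json" ? `{"id":"${id}"}` : `report ${id}`;
}

// Old callers are untouched; new callers opt into the new behavior.
fetchReport("q3").then(console.log);                     // behaves exactly as before
fetchReport("q3", { format: "json" }).then(console.log); // new, opt-in capability
```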

One of the projects I helped start at the Chrome Web Platform team was the predictability effort, to identify and address key gotchas and inconsistencies that frustrate Web developers. The strategic thrust of this effort was to help make the Web Platform APIs more hospitable to the elephant of the Web developer ecosystem.

Every organization that accumulates value in its own infrastructure and code usually ends up investing in making both as stable and reliable as possible. An entire profession of Site Reliability Engineers (SREs) emerged to represent the special skill that’s required in making that happen. 

⛰ Breadth

Elephant-tending APIs cover a lot of ground. They are large (the biological “large size” equivalent). There are lots of use cases that accumulate over time and each unaddressed use case is a missed opportunity, a value loss. A very common thing that happens with dandelion APIs that become popular is that they grow in size and complexity. When that happens, we are observing a relatively rare event: the API is moving down through the pace layers, becoming more and more elephant-like.

When Javascript was first introduced to the Web, the API to access the document tree (aka Document Object Model) was exceedingly simple. Just a few objects, a simple way to access things, and that’s about it – poorly documented, a “figure it out” dandelion spirit. 

Today’s DOM API is quite large, and it doesn’t even include most of the Web Platform APIs – to keep the DOM spec light, a notion of partial interfaces was introduced. The HTML spec captures the bulk of the vast surface. Give your scrolling muscles a go to get a sense of the breadth.
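If you haven’t bumped into partial interfaces before, TypeScript’s declaration merging offers a rough analogy (the names below are made up; the Web specs themselves use WebIDL): separate declarations contribute to one ever-growing surface.

```typescript
// Rough TypeScript analogy for spec-style "partial interfaces":
// separate declarations of the same interface merge into a single surface.

// Declared in one "spec"...
interface TreeNode {
  readonly childCount: number;
  appendChild(child: TreeNode): TreeNode;
}

// ...and extended from another, without touching the original declaration.
interface TreeNode {
  querySelector(selector: string): TreeNode | null;
}

// Consumers see one merged, ever-broader interface.
declare const root: TreeNode;
root.appendChild(root);
root.querySelector("p");
```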

This is a fairly common occurrence when caring for idea-elephants with our APIs: their needs are rarely captured in a few simple calls.

📚 Rigor

Just like with dandelions, some conditions reflect the setting, rather than the APIs themselves. This is the case for rigor. As ideas expand to become elephants, they trade their agility for rigor as a strategy to extend lifetime (“longer lifespan” in biology). Haunted by the danger of potential lost value, decisions are made more carefully and thoughtfully. 

One does not simply push to production in elephant-land. There are feature launch calendars, approval gates, and deliberate release processes that developers must move through. New ideas emerge very slowly – and for good reason. Ideas must be tested to fit well with the massive body of existing ideas. Reducing uncertainty triumphs over explosive innovation.

For example, shipping a new Web platform feature in Blink (the Web rendering engine in Chromium) is a six-step process that involves building out an initial set of use cases, potentially asking for a mentor to help with specification writing, making a proposal in a standards organization, and socializing the problem with other browser vendors and Web developers. And by the way, all these items are just part of step one.

The upside of such a slow process is that the ideas that do make it through are at full maturity. Like elephant calves, they have to be slowly nurtured and “taught” all of the intricacies of the wisdom that affords the massive scale and value of an elephant.

⚡️ Power

The last, but definitely not least condition is power, which neatly matches the biological equivalent of a set of strengths. Idea-elephants gravitate toward — and often demand — more powerful and less opinionated APIs. Put simply, elephants want to be closer to the metal. Unlike dandelions, elephants have the capacity to hold their own opinions. In fact, some of these opinions might be load-bearing: the value that an elephant so carefully wants to preserve is based on them. Presenting them with other opinions might appear foolish or downright hostile.

For me, one of the common struggles when designing APIs for the Web was trying to resolve the constant tension between Web framework developers wanting to see powerful, low-level APIs and the declarative spirit of the Web. If you read this blog, you probably remember my stories about layering: all of them come from this weird paradox that the Web platform houses both elephants and dandelions in the same massive farmhouse.

When we are designing for idea-elephants, we are much better off letting the elephants hold their opinions and concentrating on low-level, opinion-less abstractions that delegate most of the power to the elephants we are caring for.
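Here is a minimal sketch of what that layering might look like – hypothetical names, not a real library: an opinion-less primitive that elephants can target directly, with a thin opinionated convenience layer offered separately on top of it.

```typescript
// Hypothetical sketch of layering: a low-level primitive plus an opinionated layer on top.

// Low-level, opinion-less primitive: the caller supplies every decision.
interface DrawCommand {
  shape: "rect" | "circle";
  x: number;
  y: number;
  size: number;
  color: string;
}

function drawRaw(commands: DrawCommand[]): void {
  // Imagine this handing commands straight to the GPU or an OS surface.
  for (const cmd of commands) {
    console.log(`draw ${cmd.shape} at (${cmd.x}, ${cmd.y}) in ${cmd.color}`);
  }
}

// Thin, opinionated convenience layer built entirely on the primitive.
// Dandelion-style users get sensible defaults; elephant-style users can
// skip it and call drawRaw() directly, keeping their own opinions intact.
function drawBadge(label: string): void {
  console.log(`badge: ${label}`);
  drawRaw([
    { shape: "circle", x: 50, y: 50, size: 40, color: "tomato" },
    { shape: "rect", x: 30, y: 90, size: 40, color: "slategray" },
  ]);
}

drawBadge("hello");                                                // high-level, opinionated
drawRaw([{ shape: "rect", x: 0, y: 0, size: 8, color: "black" }]); // close to the metal
```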

Caregiver’s Guide

What can we learn from this exercise? Here is a set of questions we can ask ourselves when looking to care for idea-elephants:

  • Are the APIs we offer predictable, reliable, stable? Can we guarantee providing them for a long period of time and continuously reducing any inconsistencies or bugs that might creep in?
  • Are our APIs comprehensive, covering most of the use cases that developers are asking for? Do we commit to improving this coverage over time?
  • Does the setting into which we release the API have the necessary infrastructure for ensuring that the API consumers make good choices, from robust integration and testing to deployment processes, as well as telemetry and safe experimentation, locally and in the wild?
  • Do our APIs take developers as close to the underlying technologies as possible, offering little of their own opinion in the process?

An interesting question to ponder: so far, I’ve only been talking about the conditions necessary for taking care of an elephant. I didn’t mention anything about the conditions for creating one, like I did with dandelions.

My intuition here is that elephants are rarely created from whole cloth. Every elephant-idea begins as a rare dandelion-idea that managed to grow and accrue value over a long period of time, traversing the idea pace layers. So, we rarely choose to be in the position of elephant caregiver: it’s something that happens to us as a result of our idea’s success.

Growing dandelions

A couple of weeks ago, I talked about r/K-selection and mentioned two kinds of ideas: the mutate-through-replication dandelions and capacity-preserving elephants.

I remain curious about the conditions in which dandelions thrive. Here are some initial thoughts on the subject. To narrow the broad designation of “ideas” a bit, I am going to focus on a special case:  innovation on top of APIs. That is, new ideas that emerge while writing code that consumes some set of APIs.

Let’s suppose we’re just starting the process of designing a new API. Very early on, we decided that we want this API to spur dandelions. We did a lot of thinking and realized that our fledgling enterprise would benefit greatly from employing the r-selected strategy.

Why would we want to do that? Primarily, the r-selected strategy works best in environments that aren’t (yet) predictable or stable, where the rate of change is high. For example, we might be entering some new problem space and we want to lean on the “wisdom of the crowd” to explore it. Or perhaps we’re a newcomer and we would like to convert the budding enthusiasm in the problem space into as many dependencies on our services as possible (if you’re looking for a case study on both, check out the Stable Diffusion playbook).

What are the conditions that we need to grow dandelions? How might we design APIs that encourage dandelion-like innovation?

Using the r/K-selection in biology as our guide, we see four key conditions: high reproduction rate, small size, short generation time, and wide dispersion radius. Yes, just like dandelions.

Translating these conditions from plants to ideas, I came up with: interest, legibility, velocity, and access. Let’s go through them one by one. You know me. I love my “let’s go through them one by one” bit.

✨ Interest

First, this API actually needs to promise to unlock exciting new opportunities. This leads me to the first condition: interest, which roughly matches the “high reproduction rate” in biology. Exciting ideas are contagious. They spur lots of new ideas, churning them out at a high rate. They don’t even have to have concrete value behind them – just a promise of something big and potentially groundbreaking. This sometimes leads to the formation of a hype bubble around them, like with Web3 and NFTs. Such bubbles, while not healthy in the long term, are a strong sign of the interest condition being met.

Interest is not an intrinsic property of the API design, but rather a property of the technology behind it. Researchers had been teasing the developer community with the tantalizing potential of AI-generated media for years, building up the interest in the underlying technology. It was OpenAI and then Stable Diffusion that capitalized on this interest and shipped the first publicly accessible APIs that enabled developers to actually play with the technology. The resulting wave of innovation was nothing short of astounding – and it keeps going. There ought to be a clock of “days since Stable Diffusion was released” somewhere, because the quantity of interesting new ideas born out of that event feels unbelievable in the context of how little time has passed. Again, a great example of the interest condition being met.

To give you a counter-example, consider OpenSocial,  a cool idea from way back in 2007 that started with much fanfare at Google and ended up dying quietly in the W3C effort graveyard. Even though yours truly did end up playing with it, very few others did. Was it ahead of its time? Was it too obtuse? Was it the XML thing? We will never know. But if your API adoption patterns are looking like those of OpenSocial, check the pulse of the community interest.

🔮 Legibility

Another significant condition is legibility. To allow an idea to spark imagination and produce new ideas, it must be easily understood – even if partially. I correlated this one with the biological counterpart of “small size”. Metaphorically, think of it this way: an idea that is light and small like a dandelion is much easier to grasp than the weighty idea-elephant. A litmus test: can a useful program built with our API fit into a tweet? The deca-LOC framing is useful here, though it’s not just the number of lines of code. 
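To make that litmus test concrete, here’s what a tweet-sized program might look like against a hypothetical image-generation endpoint (the URL and response shape are invented for this sketch; the point is the size, not the service):

```typescript
// A tweet-sized sketch: the entire useful idea fits in a few lines.
// The endpoint and its response shape are hypothetical.
const res = await fetch("https://api.example.com/generate?prompt=a+dandelion+made+of+stars");
const { url } = await res.json();
console.log(`Your image is ready: ${url}`);
```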

One of the key tenets of the WebKit open source project at one time was the idea of self-explanatory code: is your change making the code more or less easy to read? There were some who even suggested that adding comments is somewhat of a code smell: if you have to explain what it does, then perhaps you could write it more eloquently instead?

While I don’t take this extreme point of view, I appreciate the sentiment. Choosing the idioms and concepts that make the code concise while capturing the key thrust of the idea behind the API is hard work, full of difficult trade-offs. Sometimes it takes several iterations to arrive at a mental model of the API that clicks with most people. When designing for dandelions, opinion is front and center, and the easiest-to-grasp concepts win over the more obtuse ones – even if the latter are more powerful and flexible.

Speaking of WebKit and similar large codebases: a key ingredient of legibility is the ease with which an idea can be separated from other ideas. How discrete is it? How easy is it to spot this idea and lift it out? WebKit has a ton of great ideas in its code. I know – I lived in that repository a few years back, and I bet there are even more flashes of brilliance now. However, to spot them as separate ideas, we have to spend a bunch of time understanding how all the neighboring ideas fit together.

This is one of the challenges of implementing r-selected strategies in large code repositories. No matter how we try, our dandelions end up being somewhat elephant-like.

🚀 Velocity

The third condition has to do with how quickly the ideas born out of interacting with our APIs can turn into other, new ideas. This is not necessarily a property of the API itself, but rather of the setting into which it is born. Loosely, it corresponds to “short generation time” in biology. Velocity is commonly called a “tight developer feedback loop” in developer experience jargon, and yes, it’s that, and a bit more. I see it as somewhat two-fold: time to result and effort to copy.

Time to result is the time between making a change to the code and seeing the results of that change. Back when I first started using computers, I remember working with a particularly old piece of equipment that provided the output of my program only as a paper printout – and the printer was across the hall from the monitor and keyboard. Time to result included jogging out of the lab and into the computer room, where massive printers loudly hammered out our many failures and rare successes. Paper jam? Well, you might have to run that job again. Even just capturing the simplest idea in working code was a multi-hour (and sometimes, multi-day) process. That mini-computer was from the pre-dandelion era. The shorter the time to result, the better the velocity of an idea.

Effort to copy is an adjacent concept. How much effort does it take to copy an idea? Is it a complicated process? Or is it just one click? A somewhat unexpected, yet obvious-upon-inspection factor here is organizational boundaries. 

Back when I worked on Google Gears as an external-to-Google contributor, I was puzzled by a weird phenomenon: to land, my patches would need to be sent over email as diff files. The open-source directory did not contain any commits from individuals: instead, every commit was made by a bot. A couple of days after the patch was submitted, the bot would dutifully add my commit to the repository. What the heck was happening?! As one of the engineers explained, the actual source of truth was on the other side of the wall that separated the inside of Google from the outside. To land the code, a Google engineer had to patch it in, have it reviewed, and then let the bot take it outside. What I was working with was a mirror, not the real thing. Sure, the automated bot made things easier. But across a wall like this, the effort to copy is still high – and not just for the code going out. If there is some really cool new innovation on the outside of the wall, an organization has little choice but to rebuild it – often from scratch – on the inside.

On the other end of the spectrum, GitHub’s “fork” button is a great example of intentionally lowering the effort to copy. Want to play with an idea? Click and start making it yours. As another illustration, both effort to copy and time to result are combined delightfully in the various read-eval-print loop (REPL) tools that have sprouted all over the place in the past decade. Though my first love was JS Bin (hi Remy!), one of my favorites today is Replit, which seems to be designed by someone who deeply understands the concept of dandelion gardening.

🔎 Access

The final condition is access. It seems that, to stimulate r-selection, we need a large pool of minds that our APIs can come in contact with. To generate many ideas, we need many minds – or, speaking biologically, a “wide dispersion radius”.

What does it take to start using our API? What are the barriers that a person must overcome? For example, if we decide to provide our API in some programming language that nobody has ever heard of, we are raising the barrier: to access our APIs, people first have to learn this language.

As a perhaps somewhat controversial example, in the very early days of Flutter, we had a contentious debate about whether the engine would rely on Javascript or Dart. Though Dart won in the end, I wonder how much more widespread the use of Flutter would have been had we stayed with Javascript. In other words, is Flutter successful because of or despite Dart?

Dandelion-growing needs space. If we’re planning for a small group within our organization to be the potential users of the API, we are unlikely to get any benefits from the r-selected strategy. This is often counterintuitive in organizations that pride themselves on engineering excellence. It feels like we should be able to just get a couple of really smart folks to play with the API, and they will figure out some interesting possibilities… right? Well, maybe.

But if we are aiming to harness the r-selected strategy, we need to stop looking for experts who might give us great insights. Instead, we need to open the API up as broadly as possible and let the wave of hobbyists and enthusiasts wash over it. When growing dandelions, think quantity over quality. Skill and expertise are a barrier.

Additionally, to maximize the number of ideas connecting with each other, I need to make them easy to find and browse. Can a seed of an idea be easily discovered? Can I trace its heritage and find earlier seeds on which the idea was based? Can I see who else is playing with it currently? And no less importantly, who can find my idea?

A gardener’s guide

Putting these all together, we can build a simple compass. The four conditions form an arrow that points us toward dandelion-like growth:

  • Is the underlying technology that the API exposes interesting? Do we anticipate developer buzz around it?
  • Is the API mental model easy to grasp? Can developers’ ideas be expressed in simple, elegant code? Is it free of dependencies that developers might be unfamiliar with?
  • Does the setting into which we release the API offer REPL-like iteration speed and one-click copying of ideas?
  • How large is the pool of people who could conceivably use the API? Is the cost of entry minimal? Is the list of prerequisite learnings short? Is it easy to find similar or different ideas and understand how they came together?

There are a handful of folks that I know who seem to intuitively understand these conditions, and their approaches to API development reflect that. For the rest of us – myself included – here’s hoping that this compass will serve us well in our dandelion-growing pursuits.

A crutch

Thinking some more about organizational pathologies, I realized that there is another way to spot and potentially face the challenges presented by them: a crutch.

A crutch is a kind of organizational myth that served the organization well in the past and, as a result of its own success, became so overused and over-relied upon that it’s now doing more harm than good.

Crutches come in various forms and shapes. They could be processes, like the “Success Score Cards” and “strategic commitments” in the “Royalfield” story from Rumelt’s book. They could be strong cultural beliefs that, as they age, become effigy husks of their former selves, yet still capture enough imagination of the team to stick around. They could also be individuals. If a team can not make any significant decisions or forward progress without their leader in the room, this leader might be their crutch.

Crutches are rarely seen as crutches. They are typically viewed as foundational, immovable parts of the organization. How could we possibly do something other than Success Score Cards? That’s preposterous! Even when challenged with mounting evidence of the pathological processes burning through the organization’s body, crutches are often viewed as the cure – sometimes leading to weird instances of iatrogenesis, where the use of the crutch becomes the source of the problem (“Let’s do Success Score Cards harder!”).

If we suspect that our organization has developed a pathology, we could start looking at the bits of norms, culture, org charts, and processes that we hold in the highest regard and/or haven’t examined in a while. To know whether we’ve found a crutch candidate, listen to how people react to our gentle poking at it. If the response is along the lines of “What do you mean?!” or “Sure, it has flaws, but what else is there? I don’t know of anything better!” – we might have found a crutch. If closely examining a potential crutch suddenly feels like a career-limiting move, we are likely getting very close to the source. When spelunking for crutches, we are better off wearing a helmet and protective gear.

It is not hard to infer from this description that pathologies have a strong staying power precisely because spotting and pointing at a crutch is so deeply uncomfortable for the organization. Almost by definition, crutches are part of an organization’s embodied strategy. While spotting one is a significant breakthrough in itself, it is rarely sufficient to cure the pathology. Just pointing at it and loudly yelling “Look! I found it! Here it is!” is more likely to get us shunned than celebrated. Even if we are the leaders of this organization, our decisive attempts at surgery are likely to backfire.

Instead, my guess is that our approach might be the same as in any change of embodied strategy: nudging. Pathologies inflict suffering, and with suffering comes the innate desire for change. And that might just be the potential energy that our nudges need to succeed.

Organizational pathology

I was reading the latest book by Richard Rumelt, The Crux, and something clicked that I just had to write down. 

One of the reasons I enjoy Rumelt’s books, aside from that fiery, witty style, is that they are always packed with a wide variety of examples. This book is no exception, and, after a little while, a pattern emerged in my mind across all of the examples of failed applications of strategic thinking – and action. It’s a special case of “solving the wrong problem”, to which I am giving the somewhat clinical term “pathology”. Here’s what I mean by it.

We all make mistakes. Making mistakes is an essential part of being human, and part of every organization. If we don’t make mistakes, we don’t learn, as I illustrated in the learning loop bits of the problem understanding framework. However, making mistakes is in itself a neutral activity: it can be used to learn and create a better mental model of our environment, or it can lead us astray, reinforcing a mental model that doesn’t generate accurate predictions.

It is the latter case that piqued my interest, because in my experience, this is the one that most organizations – and people – struggle with. A pathology is a kind of unproductive mistake-making process that emerges under the following conditions:

  • There’s an obstacle that we believe we will encounter in the future. 
  • The expectation of this encounter makes our current situation discomforting.
  • There are two discernible classes of actions available to us: a) one that alleviates the discomfort of the current situation, and b) one that reduces the size or the likelihood of encountering the obstacle.
  • The obstacle-reducing actions do not alleviate the discomfort.
  • The discomfort-alleviating actions grow the size of the obstacle or create an entirely new one.
  • Finally – and this is key – under pressure to act, created by our discomfort, we consistently choose actions that alleviate that discomfort.

Whoa, that’s a lot of bullets. Let’s unpack them. 

I called this process a pathology because it often feels like a sort of thinking disease: it’s a thing that takes hold of us. Even when we are fully aware of what is happening, we still struggle to make it stop. Usually, an external intervention is necessary to make a change.

As I mentioned before, pathologies are part of the larger corpus of “solving the wrong problem” situations. It is not fun when we end up in a place where our diagnosis points us at something that we later realize is just an effect, not the cause of the problem. 

Borrowing a story from my past as a software engineer: I might assume that a rise in the latency metric in my code is due to some new regression. I would then spend a bunch of time – like a week! – unsuccessfully trying to hunt down this regression, only to notice that the metric suddenly moves back within the acceptable threshold without any action on my part. It’s a heisenbug! After much more searching, I realize that my colleagues on the infrastructure team have been moving their code around and accidentally shifted how the metric is computed. Oops. Well, at least they put it back! Sure, I did get to the “about to pull my hair out” state in the fruitless debugging session. But ultimately, once the culprit was found, I learned about yet another place where problems can arise, and moved on. “Solving the wrong problem” of this kind tends to be frustrating, but highly educational.

Where pathologies differ is that they add an extra twist: solving the “wrong” problem both exacerbates the actual problem over time and provides a false sense of doing the opposite. These two factors often interlock. 

The exacerbation bit can come in many different forms, but here are the most common LEGO bricks they’re made of:

  • Backsliding, when the action taken as part of the “solution” literally takes us in the opposite direction.
  • Entrenching, when the action reinforces some existing process or practice that needs to change to solve the actual problem.
  • Delaying, when the action is that of putting off dealing with the actual problem.

The false sense of moving forward often builds on these. For example, an entrenched habit can be rather comfortable. Sometimes the mere fact of doing something is, too – as the Politician’s syllogism goes: “we must do something – this is something – therefore, we must do this”. The key attribute to look for here is pain relief: something that reduces the discomfort of being presented with the problem.

The “Royalfield” case study, one of many in Rumelt’s book, particularly stood out for me as an example of a pathology. The author describes a multi-day strategic planning exercise that he attended. At this exercise, he observes folks following a well-entrenched process that involves “Success Score Cards” and “strategic commitments”. 

As the author interviews the executives, he starts realizing that none of the actual strategic challenges are being discussed. Instead, the rote motions of the process are driving the event. When he tries to raise the issue with the CEO, he’s chastised for distracting the participants from their Success Score Cards. The author notes that since that event, the company’s fortunes continued to dwindle in an unfortunate, but predictable way.

Here, we have all the elements of a pathology. There’s definitely an obstacle that the executives perceive and the discomfort they experience – aka “the problem”. Otherwise, they wouldn’t be holding this exercise in the first place. There are two kinds of actions they can take: start examining actual strategic challenges of the company, or follow the existing planning process. The process is already here and everyone already knows it, so the executives choose to follow it. So far, this looks like a “solving the wrong problem” scenario.

The pathology locks in when we add two final ingredients: first, the “Success Score Cards” and “strategic commitments” act as a powerful analgesic, giving executives the permission to stop worrying about the obstacle for a little while. “We did this planning thing, didn’t we?”

Second, we can clearly see how entrenching and delaying are in full force. The CEO’s nearly allergic reaction to even the idea of considering something different indicates that change will be exceptionally difficult for this company – and each such planning session delays the moment when the team must come to terms with the reality of the situation.

I imagine how even now, the executive team is still trying harder to make better, more effective Success Score Cards, and going to great lengths to ensure that the strategic commitments slide deck is breathtakingly dramatic – stuck in the pathological case of solving the wrong problem.

Now that I have this pattern outlined, when (or if) you read the book, I am guessing that it will be nearly impossible not to spot it, over and over again. And it is also my sincere hope that when you look around your team and organization, you will have a fresh way to notice this pattern around you. I even have this simple template for you:

  • What is the obstacle that is in front of your organization that perhaps evokes some discomfort?
  • What are some of the actions that your organization commonly takes to relieve this discomfort?
  • Are they the same actions that will also help overcome the obstacle?
  • If not, which ones seem to increase the size or the likelihood of encountering the obstacle (backslide, entrench, delay) or maybe create a new one?

If there are items in the last list, it’s worth giving them a careful look. They might be one of those bad habits that subvert your team’s process of learning from its mistakes.

The Perspective Ladder

I’ve been trying to come up with a better way to illustrate the concept of mental model flattening, and realized that one way to do this is by describing it as I experience it.

Imagine a scale that reflects our relationship to a perspective: a self-coherent worldview. This worldview is possible because of the network of mental models we possess, helping us orient ourselves, see choices and possibilities, and act on them. 

This scale is arranged as a ladder of sorts, with mental model complexity at its highest at the top, becoming progressively simpler with every rung down. As the model gets simpler, our capacity to relate to a perspective diminishes. We climb up and down this ladder multiple times a day, sometimes moment to moment. Even though all rungs of the ladder may be accessible to us, we aren’t sitting still. We run up and down, sometimes intentionally, but most often not.

Just like real-life ladders, ours is subject to gravity. Climbing to the upper rungs seems to require effort. Conversely, all it takes is a little stress, fatigue, or a particular triggering experience, and down the rungs we go. Picture the process of mental model flattening as this sliding down the ladder.

Understanding which rung we are on at any given moment is, in itself, a form of acquiring a perspective. It serves as an extra boost upward: pausing to understand where we are might serve as a useful trick to start climbing.

So, the ladder. I’ll start at the bottom and we’ll clamber up together. I’ll describe each rung through my own experiences – I hope they resonate and help you reconstruct, in your own mind, the picture that I am seeing.

🌪 Detached from a perspective

The ladder’s lowest rung is what I call “just stayin’ alive”. Here, I am thoroughly disoriented and lost, mostly just reacting as best as I can. My impulses are in charge. Whether positive or negative, my experiences at this rung are intense. If you’ve ever felt that panicky feeling of just trying to find your bearings – or perhaps that insatiable craving – then you know what I am talking about. At this rung, when the notion of a perspective is suggested to me, I will typically react with bewilderment, struggling to remember what it might mean. It may feel like a lifeline when a perspective – any perspective! – is offered. The particular strength of this rung is in the rush of adrenaline it provides, the extra kick for powering through a particularly tough situation. At the same time, these hormone baths are taxing for our bodies and aren’t great for our health in the long term. And well, there’s this whole “lost” thing.

When detached from a perspective, it’s not that I don’t have any of my mental models. It’s that they all appear to have this squeaky, slippery rubber feel to them: grasping at them just makes them pop out of my hands and float farther away. The only ones that feel accessible are primitive and atavistically simple: “they bad, I good”, etc. This is the extreme of mental model flattening.

🩹 Sticking to a perspective

Just above is the “sticking to a perspective” rung. I am firmly attached to a particular way of looking at what’s around me. I am no longer as disoriented, but I ain’t seeing much outside of the very specific window of the perspective. I can feel pretty comfortable in this state, aside from the occasional nagging feeling that something is missing. When someone suggests that they have a different perspective, I may look at them like they are messing with me – or trying to deceive me. While stuck to a perspective, irritation and anger are common reactions to evidence that doesn’t fit into that perspective: I am part of the perspective, and a threat to it is a threat to me. The gift of this rung is followership: I will happily roll up my sleeves and chip in to help with a problem when asked – as long as it fits into my perspective window.

While sticking to a perspective, the mental models tend to appear as crisp and simple causal chains, unburdened by any fuzzy notions. If this, then that. Even when I am well familiar with loops in a network of causal relationships, any notion of them will be neatly elided from my thinking. Mental models of others are either “exactly like me” or some overly primitive caricatures.

At least for me, this rung serves as sort of a defensive crouch. When I am distracted or tired, this is where you are most likely to find me. I’ve been looking for a sample of my writing at this rung of the ladder, and the Agony of a Thousand Puppies comes pretty close:

Hiding Javascript behind another language’s layer of abstraction is like killing puppies.

Yep, it’s the grumpy-pants Dimitri.

🔨 Holding a perspective

Another rung up is “holding a perspective”. My attachment to the perspective becomes more or less intentional. I hold the perspective, as opposed to it holding me. It is the only perspective I know, but I know it well. This perspective is particularly helpful when I am asked for loyalty or need to commit to a certain course of action.

When I hold a perspective, I can wield it like a sword and find clever ways to destroy perspectives of others. Even when I am clearly shown blind spots of the perspective I hold, I remain unfazed, retreating into the realm of magical thinking and conspiracies if necessary. I may not even be upset with people who have different perspectives. I just would have this calm sense of boundless confidence that they’re wrong.

This rung is populated by the mental models that we’re experts in. The sheer depth of experience does the heavy lifting. These models may be fairly intricate, but tend to have this mechanical quality. When holding a perspective, we view all things around us in terms of precisely, painstakingly crafted mental models. Unlike at the previous rung, our mental models of others expand quite a bit in their sophistication. They have gears and springs that move them – and we feel like we know exactly how they tick.

It pains me to admit how familiar this rung is to me. Some of my biggest blunders were rooted in this mechanized world – yet it remains somewhat of a default notch for me. If you want to hear how I sound at this rung, here’s a post from 2007, written in the “holding a perspective” voice, where I propose an alternative to Apple’s iPhone SDK.

🌳 Having a perspective

The next rung up is “having a perspective.” It is characterized by this sense that I do indeed have a particular perspective, and I know why I have this perspective. I can look around and see ample evidence that the way I am seeing is self-consistent and coherent. I can discern my own work of constructing this perspective. When someone talks to me about it, I can describe my perspective (and the reasoning behind it) in detail. I can take the perspectives of others, understand them, compare them, show flaws and benefits in them, and even adjust mine to make it more robust. 

When at this rung, I am not trying to answer the question of “why is this perspective true/false?” but rather “why is it so?” The growing depth of understanding of my perspective helps me maintain the clarity of direction, which frames the main strength of this rung: the capacity to lead others. 

It is my experience that leadership and commitment live at different rungs of the ladder. Especially in fluid environments, these are two different activities. One is patiently staying a decided course of action, and the other is navigating a changing terrain. The latter requires flexibility and rapid changing of one’s mind, which can look like a lack of conviction from the “holding a perspective” rung.

A sense of organic flexibility permeates our models at this rung, like weeds. The models open up, no longer precise and mechanistic, allowing for ambiguity and tolerating clusters of the unknowable. When we have a perspective, we recognize that it’s just a snapshot of some much more complex thing that we aren’t yet seeing, and we are able to be okay with our model’s inherent incompleteness. The source of groundedness at this rung emerges not from the confidence of “knowing the world”, but rather from a sense of awe at the complexity of the world and the deep desire to glimpse the whole picture.

A post from 2014 about Web platform deprecation might be a sample of how I sound at this rung. There’s a patient and thorough analysis of the situation, admission of its gravity and high ambiguity, and an optimistic call to action. It is the voice of most of my writing at work. 

🌀 Visiting perspectives

The “visiting perspectives” rung is at the top of our ladder. In addition to seeing the perspectives of others, I can shift to inhabit them. I can step out of what I believe to be true and temporarily adopt someone else’s ontological realm. I do this without judgment or the temptation to evaluate it against my own perspective, accepting that this someone else also has a rich life experience and a lot to teach me. I am a traveler who’s visiting perspectives, and the less I hold on to mine, the more of the depth of other perspectives is revealed to me.

This ladder might actually be one of those spiral staircases. In a weird circular fashion, the last rung resembles the first one: at both, I appear to have no attachment to any given perspective, and may even seem lost. The big distinction is that at the first rung, I can see very little. At the last one, I have the full capacity to see the perspectives of others around me. This process of perspective examination is deeply enriching, and offers the gift of being able to see blind spots that nobody else can. When I am able to stay at this rung of the ladder, it almost feels like I can see around corners. When I say “systems thinking”, I usually mean not a particular discipline or methodology, but rather the experience of gaining access to the complexity of mental models at this rung of the ladder.

For me, writing rarely happens at this rung, because the moment is so fleeting. I could try and plead the case that most of my recent posts are written in that voice, but I suspect it’s just skill. I have learned to emulate the gentle, curious voice of this rung, even though I am not actually inhabiting it.

However, I do have a few posts that I remember writing immediately after touching that top rung. They have this nearly delirious delight of briefly seeing something much larger than what I can typically afford. My Decoherence post from 2020 is one of those. The first sentence reaches for a galactic horizon: “There is no past or future”.

Perspective ladder mini-case study

To give you a sense of how traveling up and down the ladder feels, I’ll describe my experience at a meeting I was in just last week – here it goes, with commentary.

We were discussing a subject that I knew a lot about, and I was pretty sure about that. I was also pretty sure that we were all on the same page. I started getting a little bit distracted, checking my email, and thinking about something else. Only a bit of attention remained on the conversation. 

At this point, it is unclear whether I am sticking to a perspective, holding it, or having one. All three feel the same. However, the “partial attention” bit points at sticking to a perspective: letting it hold me.

Then, one of the participants spoke up, revealing a different perspective that was incongruent with mine. My first internal reaction: “Wait, what? What’s happening here?” 

Aha! Indeed, it looks like I was at the “sticking to a perspective” rung, and my colleague’s confident voice triggered a dip into the “just stayin’ alive” rung. The disorientation is a telltale sign in such cases.

Then I immediately had this sense of irritation come over me. I was feeling disheartened and disappointed.

It sounds like I quickly escaped the lowest rung and climbed back up to “sticking to a perspective.” There were two alternatives here: one of me sticking to my colleague’s perspective, immediately adopting it; the other of me remembering that I have a lot of expertise in this area and sticking to that. It looks like I picked the latter.

One interesting marker for the “sticking to a perspective” rung is Should-ness. If I feel strongly that I should be doing something or acting in a certain way, and/or that others around me should be doing the same (or whatever else I expect of them) – that’s a sign that I am at this rung of the ladder.

After fuming for a little bit, but not saying anything, I noticed that I was feeling irritated. I tried to reorient and did a breathing exercise to release the tension. Once I settled down, I started noticing that my colleague was just wrong: they clearly didn’t understand the problem as well as I did.

This indicates that I am currently climbing upward, likely at the “holding a perspective” rung. There’s usually a sense of relief that accompanies reaching this rung, when I realize that all is well and it is the others who are lost, not me.

I started asking questions trying to better sense where their misunderstanding originated. To my surprise, I realized that my colleague had good insights that I could build on to improve my knowledge.

I am now at the “having a perspective” rung. There is a subtle shift when climbing up here: other people’s “wrongness” becomes a parts bin for ideas. Instead of being a protector of my perspective, I become a savvy gardener who eagerly improves it as novel insights are uncovered.

The meeting ended on a good note, and I felt pretty excited about learning new things, though I wasn’t quite sure what to do about the difference in perspectives that my colleague and I had encountered.

It was only the next morning that I realized how neatly our perspectives cover a larger problem space that I hadn’t seen before, and how their view is not only valid, but also overlays a blind spot that I suspected was there but couldn’t quite see (because that’s how our blind spots typically behave). The idea of how we could collaborate immediately became obvious and got me excited and fired up for the day.

This flash of insight is the result of briefly reaching the top rung of the ladder. I could swear it feels like it’s accompanied by an angelic choir: that’s how clarifying and revealing the insight is when it arrives. I wish I spent more time at this rung, but I only seem to reach it ephemerally and only under certain conditions. Things like having enough rest and sleep, for example, are a large part of the equation.

The excitement that followed is a drop to the “having a perspective” rung: learning and incorporating the new gems into my garden of mental models. It is followed by the commitment and determination (“I am fired up!”) that tell me I clicked into the “holding a perspective” rung.

What I learned so far

I hope this illustration has been helpful. I have been using a less kempt rendition of this framework for a while. Here are a few things I’ve learned along the way.

  • Fake it till you make it. When at the bottom rungs, it seems that the key is to remember that there’s a ladder at all. At these rungs, it’s mostly about trying to cling to the framework itself as a perspective (there’s literally a stickie with the words “Remember the Ladder” on it) and being patient with myself: have faith that the model flattening is a temporary effect, and that the disorientation and the limited sight being experienced will pass.
  • Different gifts at different rungs. Each rung is useful for unlocking its particular gifts, though these gifts tend to be mutually exclusive. There is no rung where I can both follow and lead. To do both, I have to consciously switch between the two, and that is an effortful process. If I want to see around corners, but am also looking to commit to one particular perspective, I need to anticipate a lot of climbing up and down. Similarly, if I am “just stayin’ alive” at the “detached from a perspective” rung, any attempts to lead or even follow will be highly unproductive.
  • No free pass. Going down can happen quickly (gravity makes things fall), but going up, the rungs of the ladder don’t appear to be bypassable. Even if very quickly, it looks like I have to travel through the rungs to reach the ones I need. Be it reading a book or watching an interview, the travel from “well, that’s just preposterous!” to “ah, yes – I see how they’re wrong” to “huh, these bits are insightful” is something I am learning to anticipate. I’ve gotten to the point where I can sometimes feel the rungs clicking as it happens – which helps me orient.
  • Bottom rungs are trappy. The “sticking to a perspective” rung is, well, sticky. I have to be fairly vigilant about not getting trapped in the not-learning cycle that lives just between it and the “holding a perspective” rung. Both rungs have a very satisfying feel of not having to change anything about myself. On too many occasions, I would recognize that I’d spent a chunk of my time just going up and down between these two rungs, trapped in the cycle.

The story above might sound familiar, because it is a loose, more flexible re-telling of adult development theory. I’ve been unsatisfied with how, in conversations about ADT, the stage-like progression aspect of the theory takes center stage, displacing another useful aspect of it – the fluidity of how we show up in our daily lives. It is tempting to imagine that once a person reaches a certain stage, they just kind of stay there. Since my experience is a bit different, viewing it as a one-way stair-step progression felt too static. However, when combined with the concept of model flattening, it seems to gain this dynamic flavor.

r/K selection and innovation

I’ve been thinking about the different conditions under which innovation emerges, and how the environment influences the kind of innovation that happens.

To set the stage, I am going to take a very brief detour into biology (I am not a biologist, so I will definitely make a mess of it) and use the r/K-selection lens to guide this story. If we observe various organisms, the theory goes, they all fall somewhere on the spectrum between r-selection and K-selection. R-selection is a strategy focused on reproduction: make copies of my kind as quickly as possible, and mutate from generation to generation. K-selection is a carrying-capacity strategy: stay alive as long as possible, learn, and teach my (usually very few) offspring the necessary tricks to thrive. Both are viable strategies, depending on the environment.

Typical examples used for r-selection are dandelions, and for K-selection – elephants. To bring it closer to home, we can have the coronavirus represent the r-selection side, and us humans the K-selection side. The pandemic/endemic battle we’re still locked in shows that humans are likely to win, but oh boy is COVID giving us a run for our money.

Turning to the realm of ideas, it seems that some of them are r-selected. In their most basic form, ideas are memes: easily transmitted, rapidly spreading earworms. Each transmission is an opportunity for mutation. As they inhabit our minds, ideas fall into the fertile ground of other ideas, and the next time we communicate them, we – nearly always – change them. They mutate into something different. As the r-selected strategy suggests, ideas evolve as they are copied.

Other kinds of ideas are K-selected. These are usually larger, self-coherent entanglements of smaller ideas. They live in our minds and they evolve over time, growing and maturing. Just like K-selected organisms, these ideas reproduce with pain and effort. We can’t just give them to others. We must muster our patience and engage with them in the process of teaching, smuggling them across the often unreliable medium of interpersonal communication, one tiny elemental bit at a time.

My fellow FLUX-er Erika pointed out that by putting these two strategies into a spectrum, a framing of pace layers pops out. There are simple ideas boiling afroth at the top layer in a pure r-selected strategy. As they connect with other ideas, their strategy becomes more and more K-selection-like, forming layers that move at a slower pace. Learning Newtonian physics? Welcome to the lower layers. Reading the latest memes on Twitter? That’s at the top.

This pace layering offers us a neat way to look at the kinds of innovation. Since innovation is powered by ideas, we can see it manifest in different ways at different pace layers. The higher, more r-selected layers are great for rapidly exploring new spaces, where there are lots of unknowns and possibilities. The lower, more K-selected layers are incredibly effective for optimizing and refining existing ideas. As ideas mature, they traverse the pace layers, descending from r to K.

Using this lens, an organization that desires to innovate can act more intentionally by picking an idea-selection strategy. If we’re looking at a new wide-open space, or searching for the next frontier, we would want to create conditions where ideas can spread rapidly, replicate, and mutate.

A recent example of this happening is the release of Stable Diffusion, an AI model for generating images from prompts. Simon Willison has an insightful write-up of the phenomenon, and we don’t have to squint to spot application of the r-selected strategy: the model and the surrounding tools are open source, and the process of setting up your own instance is fairly straightforward. 

Combined with rising interest in the future potential of generative media, this caused an immediate explosion of innovation: people messing with it for fun and/or quickly putting together possible business ideas. Given that no other contenders offered such an opportunity to the interested crowd, I would not at all be surprised if Stable Diffusion, no matter how well it fares in comparative quality today, becomes the de facto engine for generating pictures from text.

On the other hand, if we want to climb the gradient hill and improve upon an existing idea, we are better off picking the K-selected strategy: invest in apprenticeship practices to facilitate tacit knowledge transfer, and ensure that idea refinements are carefully curated.

I will once again lean on browser development to illustrate the K-selected strategy. The modern Web rendering engine – the thing that interprets text, images, and code as the visual, interactive composition on your screen that we call a “Web site” – is a prototypical outcome of descending the pace layers.

A wild-haired idea at first, it eventually grew into a massively complicated and often self-contradictory tangle of ideas that is captured in code. There are only three distinct instances of that code: WebKit, Blink, and Gecko. These instances try to interoperate, and there is an incredible amount of knowledge – both tacit and described in long, dry documents called specs – that needs to be grokked before proceeding with constructing another instance. If Stable Diffusion is a dandelion, Web rendering engines are elephants.

One does not simply create a new rendering engine. To move forward, a K-selected strategy is called for. When we look at the efforts surrounding creating a rendering engine, we see extensive documentation and education that help aspiring Web browser engineers climb the learning curve, as well as rigorous processes to ensure that every change made to the codebase carefully, yet purposefully, evolves the project.

Similarly, we can anticipate mismatches between the kind of innovation we’d like to see and the strategy we have at our disposal. We can’t expect to explore a new space with data-driven analyses and an elite team of experts – just like we can’t expect to see consistent results aimed toward some goal if we’re set up for rapid, unbridled experimentation.

e-LOC

I’d like to introduce a metric for developer APIs that I call “e-LOC”. I am not great with names, so it’ll just have to wait until someone gives it a better one. Meanwhile, let’s explore it.

The definition of e-LOC is as follows: the e-LOC of a technology is the order of magnitude of the number of lines of code one needs to write to get to a working product prototype using that technology. Applying the usual metric prefixes, we get ourselves a nice scale: 0-LOC, deca-LOC, hecto-LOC, kilo-LOC, mega-LOC, and so on.

Various APIs reside on different parts of this scale. A finished, functioning product is 0-LOC. On the other side of the scale is a developer surface that requires a gazillion lines of code to produce something that we can reasonably offer to our friends and colleagues to try out. Want to build a web-based product prototype? It’s probably going to be hecto-LOC. A brand new Web rendering engine? You’re likely looking at mega-LOC.
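To make the bucketing concrete, here’s a minimal sketch in TypeScript of how one might compute the e-LOC of a technology from the number of lines a prototype takes. The function name and the exact thresholds are my own assumptions – the metric only cares about orders of magnitude.

```typescript
// A minimal sketch, assuming the buckets map onto metric prefixes roughly the
// way they're used above: deca ≈ tens of lines, hecto ≈ hundreds,
// kilo ≈ thousands, mega ≈ millions. Thresholds are illustrative, not canonical.
type ELoc = "0-LOC" | "deca-LOC" | "hecto-LOC" | "kilo-LOC" | "mega-LOC";

function eLoc(linesToPrototype: number): ELoc {
  if (linesToPrototype <= 0) return "0-LOC"; // a finished product: nothing to write
  if (linesToPrototype < 100) return "deca-LOC"; // a handful of lines
  if (linesToPrototype < 1_000) return "hecto-LOC"; // a web-based product prototype
  if (linesToPrototype < 1_000_000) return "kilo-LOC";
  return "mega-LOC"; // rendering-engine territory
}

console.log(eLoc(60)); // "deca-LOC" – e.g. a small bot prototype
console.log(eLoc(400)); // "hecto-LOC"
console.log(eLoc(2_000_000)); // "mega-LOC"
```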

The order of magnitude serves as a decent measure of developer barrier to entry. For a deca-LOC, it’s just a handful of lines of code, so the barrier is very low. For Web rendering engines, only a skilled, determined, and well-funded team of developers can overcome it.

Because of this barrier-to-entry property, e-LOC can serve as a leading indicator of the amount of innovation that will happen around a given technology. Put differently, e-LOC can be used to gauge the innovation potential of a technology.

New OSes and rendering engines are one-shot, lonely affairs. They undoubtedly innovate, but the fruits of this innovation are singular and rare. On the other side of the scale, if it takes 60 lines to make a working Discord bot that generates pictures using Stable Diffusion, nearly anyone can do it. Expect a lot of people to be messing around with these APIs and trying to build something interesting with them.

Another useful property of e-LOC is that it also indicates how much opinion is contained within the API or technology. More opinionated APIs will tend to have lower e-LOC, while the less opinionated ones will trend higher in numbers. Of course, lower e-LOC doesn’t make the API good. That’s not what this metric measures. Some of the most troublesome, gnarly APIs are deca-LOCs.

The opposing directions of opinion and developer barrier to entry present an interesting tension in innovation potential. A 0-LOC technology is fully opinionated and requires zero lines of code to use. Because of that, it also has zero innovation potential. The innovation potential increases in proportion to the amount of opinion that could still be added, yet the barrier to entry tapers it off as the e-LOC scale increases.

Building on my earlier developer funnel framing, we can draw this as a curve that illustrates the relationship between a technology’s innovation potential and its e-LOC. The curve starts at near-zero at the beginning of the scale, rapidly expands to its maximum value at around deca- and hecto-LOC, then drops off in a power-law fashion as the magnitude of lines of code keeps increasing. The near-zero at the start is significant: in my experience, even finished products end up growing APIs, whether they want to or not.
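For intuition only, here is one toy way to sketch such a curve. The functional forms and constants below are my own assumptions, chosen just so the product is near zero at 0-LOC, peaks around the deca/hecto range, and decays as a power law after that; nothing here is derived from data.

```typescript
// A toy model of innovation potential vs. e-LOC: "opinion headroom" rises
// quickly once there is anything left for developers to decide, while
// "developer reach" falls off as a power law in the lines required.
// The constants (30, square root) are arbitrary; they merely place the peak
// in the deca/hecto-LOC range described above.
function innovationPotential(linesToPrototype: number): number {
  if (linesToPrototype <= 0) return 0; // a finished product: nothing left to add
  const opinionHeadroom = linesToPrototype / (30 + linesToPrototype); // saturates toward 1
  const developerReach = 1 / Math.sqrt(linesToPrototype); // power-law barrier to entry
  return opinionHeadroom * developerReach;
}

// Sample one point per order of magnitude on the e-LOC scale.
for (const loc of [0, 10, 100, 1_000, 10_000, 100_000, 1_000_000]) {
  console.log(`${loc} LOC -> ${innovationPotential(loc).toFixed(3)}`);
}
// 0 -> 0.000, 10 -> 0.079, 100 -> 0.077, 1000 -> 0.031, ..., 1000000 -> 0.001
```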

This metric gives us a nice way to orient our thinking around the purpose of the technology we might be building. How much innovation do we want it to spur? If we want to minimize it, we ship only finished products. If we want to maximize it, we must relentlessly drive our APIs into the deca-LOC range.