Degrees of Open Source

I realized that I never captured what I learned about the nature of open source projects anywhere, so here’s one attempt to rectify this omission.

The way I understand the concept of “open source” is that it’s a way of running a software project. Hidden inside this definition are a gazillion options, and I will try to lay them out as a spectrum. Understanding these options as a spectrum has helped me quite a bit in reasoning about software project strategy, and I hope it will be helpful for you, too.

As bookends for our spectrum, let’s position two radical alternatives: fully closed-source and fully open source projects. As an example that might resonate with Web developers, the late Internet Explorer sits very close to the fully closed-source extreme, and Firefox sits at the other end of the spectrum.

To place other projects on the line between these bookends, we need a couple of notches. I will define these notches loosely and give them numbers. Then, for any project whose degree of open source you’re evaluating, you can use the whole part of the number as a rough estimate of where the project sits on the spectrum, and the decimal part to adjust more finely.

We will actually use two numbers, both located on the same scale. One number will reflect visibility and the other participation. Project visibility describes how much the general public can see into the project, and participation describes how much anyone can engage with it. So, to describe Internet Explorer, we can firmly give it two zeroes: zero for visibility and zero for participation, or v0p0. I will use this convention from here on: a string of “vNpM”, where N indicates the degree of visibility, and M indicates the degree of participation for a project.

How high can these numbers go? I propose that we use a what/how/why framing. It seems to fit well. Think of the spectrum that defines the degrees of open source as separated by three veils of invisibility.  

🏠 The what

The first veil is the “what” veil. Outside of it, all you get is the final artifact – the product that resulted from compiling/running/hosting the source code. Anything that’s outside of this veil gets a grade of zero (0) on our scale for both visibility and participation.

Once this veil is lifted, we can start seeing into the “what” of the project. For example, we could start seeing the source code itself, even if it’s just some file with source code in it. We can examine what was written, and try to understand what the code does. Anything that’s between this veil and the next gets a one (1) in visibility. 

To evaluate participation, we ask the question: “how can a random developer who comes by this code provide feedback on it?” For example, does the project have the means to allow commenting on code, or perhaps making suggestions or filing bugs? A good example of a v1p0 project is Apple’s macOS open source distributions on GitHub: you can see the code, but there’s literally nothing else you can do to participate. Should this project gain some means of communication (a mailing list, a Discord server, a bug tracker, etc.) that allows the general public to contribute their insights, it would move to v1p1.

🏗️ The how

The next veil is the “how” veil, outside of which we can see what the source code is, but can only guess about how it’s been written.

After we lift this veil, we can start seeing how the code is being written. We can start getting a sense of the units of work (bugs, commits, etc.) and how they come together as the source code. Inside this veil, we start seeing things like actual commits by actual contributors, rather than automated bot code dumps. Importantly, we start seeing how these commits are connected to particular bugs and feature requests, and the actual engineering/product discussions about them are revealed.

Participating at this level means that there is a path for anyone to introduce their own units of work into the project: to contribute to these discussions, to start and guide them, to create pull requests and have a path to merge them, etc. Depending on the kind of setup the project chooses, there are many different ways to do so, but the key property of this degree of open source participation is that anyone can walk up to the project and contribute meaningfully to how the project’s source code comes together.

Projects that have visibility and/or participation beyond the veil of “how” will have a grade of two (2). For example, a v2p1 project will have its bug tracker and commits open to the public, but no way to actually become a committer to the project, outside of filing bugs or providing ad hoc feedback. Lots of open source projects that come from large companies are run this way. With a v2p1 project, I as a random developer can check out the code, follow the steps to get it running, and even fork it. What I can’t do is join the circle of project developers and say things like: “Oh, I really love what you’re doing with <foo>! Here’s a PR to refactor <foo> for clarity”.

To get to v2p2, a project must offer a clear path for outside contributors. Typically, there’s some section on the project site that describes how one can start contributing, outlining the development environment and the basic processes that make up the “how”. Apple’s WebKit is a great example of such a project, with a clear outline of how to get started with the source code and how to begin to contribute. There’s even a diagram outlining the process flow, as well as the famously elegant code style guidelines. Effectively, it’s a batteries-included participation toolkit: as much as possible, the tacit knowledge of the team is documented clearly and concisely. If one has the necessary programming skills, they can jump right in.

📐 The why

The final veil is the “why” veil. Beyond that veil, we find the project’s motivations and strategy. Why is this project even here? What are the leaders of the project thinking? Why are we choosing to work on this unit of work and not the other?

This is the realm of project governance. Giving the general public the ability to understand (visibility) and/or influence (participation) project direction is the ultimate degree of openness for a project, and we give it the degree of three (3) on our scale.

Many open source foundations take this approach. It’s not for the faint of heart, since the “why” can contain significant value. A project that adopts a v3p2 stance is effectively accepting that the value of the project is not within the confines of the organization that runs it, but rather in the community that gathers around the project.

Such projects will openly state their intentions, outline their strategy and approach, and eagerly look for feedback. The whole idea is to lean on the wisdom of the crowd to get it right.

An even more radical stance is v3p3, where there is a path for anyone to contribute to the governance itself – not just evaluating and helping shape directions and priorities, but also changing them. In projects like this, in addition to clear roadmaps and open strategy, the governance is fully transparent, and there isn’t a point at which the capacity to make decisions disappears into a tunnel.

✨ The Open Source Scale

Now we have our four-grade scale, with numbers going from zero (0) to three (3). When combining visibility and participation, I presented them in a bit of a pattern: visibility runs one step ahead, with participation catching up.

Are there other combinations of grades? Are there v3p0 or v0p2 projects out there? Maybe, although my experience tells me they are rare. Typically, a successful open source project has only one degree of difference between the grades, with participation trailing visibility. One can’t participate at a certain grade without having visibility at the same grade. Similarly, setups where participation trails by too much (like v2p0) are rarely sustainable: eventually the project owners arrive at the “what’s the point?” question and either close down visibility or further open up participation.

The key insight for y’all: the difference between the two grades nearly always results in additional overhead and friction. The larger the difference, the more likely it is that the project will be overwhelmed by this overhead, which will inevitably lead to a change in its grade.
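For fun, this rule of thumb can be captured in a few lines of Python. This is just my own sketch of the heuristic above – the function names and the exact sustainability check are mine, not part of any official scale:

```python
import re

def parse_grade(grade: str) -> tuple[int, int]:
    """Parse a "vNpM" grade string into (visibility, participation)."""
    m = re.fullmatch(r"v([0-3])p([0-3])", grade)
    if not m:
        raise ValueError(f"not a valid grade: {grade!r}")
    return int(m.group(1)), int(m.group(2))

def friction(grade: str) -> int:
    """The gap between visibility and participation: a rough proxy for overhead."""
    visibility, participation = parse_grade(grade)
    return visibility - participation

def is_sustainable(grade: str) -> bool:
    """Participation may trail visibility by at most one, and never lead it."""
    return 0 <= friction(grade) <= 1
```

With this sketch, `is_sustainable("v2p1")` holds, while `is_sustainable("v2p0")` does not – matching the observation that v2p0 setups eventually collapse one way or the other.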

So, when you’re planning a new open source project, please make sure to:

  • clearly understand the grade you’re starting with;
  • if this grade is unsustainable, understand the grade you’re aiming at and devise strategies to get there as quickly as possible;
  • put in place project operating principles that hold it within this grade.

Hope this helps and good luck on your adventures in open source! 

How to think and what to think

There’s this gap that I’ve been struggling with for a long time, and I often feel like I am no closer to finding a way to bridge it. It’s a gap between nuance and action. I’ve talked about many variants of problems that this gap causes, but at their core, they all share the same dynamic.

Very loosely, the dynamic is this: understanding nuance requires so much work on top of the important, urgent work that already needs to be done that the folks who need to understand this nuance rarely have the energy or time to do so.

Suppose you have invested a bunch of time into studying a problem space – like growing developer ecosystems, for example. You have acquired admirable depth of nuance in this problem space. You understand how APIs evolve. You understand the layering fundamentals of opinion. All that good stuff. Now, you would like to convey this depth to others. These others are typically your team, but could be just random people on the Internet.

You choose the option of carefully writing words and drawing diagrams to describe the nuance in the problem space as concisely and clearly as you can. Then you share the resulting artifact with your team. And… nothing happens. At best, people diligently flip through your artifact and go: “This is really nice. But why so abstract? How does this translate to action?”

This question has become somewhat triggering for me and my fellow ecosystem adventurers. Nuance often feels too abstract, even to the most receptive audience.

I wonder if it’s because nuance points at how to think, rather than what to think. This might be an important distinction.

We act on what we think, so “what to think” is close to the ground, and an artifact that outlines what to think will tend to feel more concrete. Do this. Don’t do that. Go here. Stay there.

All the while, the “what we think” is situated inside of how we think. Put differently, how we think defines all possibilities for action. When we change “what we think”, we just pick another of those possibilities. When we change “how we think”, we start seeing another set of new possibilities. Until then, our field of view is limited to the initial set.

Because nuance affects how we think, it will nearly always feel detached from action. Modifying “how we think” means introducing different mental models, a whole new space from which new actions could be drawn. And there’s a lot of effort that needs to go into accommodating these new mental models, integrating them into one’s understanding.

Only after that happens will new actions emerge – but there will be an inevitable delay between conveying the nuance and the actions appearing. I bet there was a puzzled look on the face of the proverbial man who was expecting the fish, and instead got a fishing rod thrust into his hands and an invitation to walk to the lake.

When everyone is busy to the point of delirium, stopping to learn how to fish feels both impossible and foolish. So what do we do?

I don’t know. But I will keep trying to find different ways of sharing nuance with others – and of course, keep sharing what I learned with you.

Edgewise Thinking

I stumbled into this framing when talking about product/platform thinking with my friends. To put this framing together, we have to employ a certain way of thinking about software products – viewing them as a graph.

Imagine a graph – a bunch of vertices and edges that connect them. In this particular graph, the bits of software we write are represented by vertices, and the formats, protocols, and APIs we use to glue these bits together are represented by edges. Both are important and necessary.

Represented like this, the two groups (the software and the glue) can be discussed separately. For example, when I talk about an API to get some data (like current time) from a particular server, I can see what appears to be one thing – the API to get current time – as two distinct entities. In our graph representation, there will be the vertex representing the code that is required to obtain current time and format it according to some requirements, and the edge, which represents the actual API contract that defines the requirements to be implemented.

C++ has this separation built into the language: to write useful code, a developer has to produce both the header file (the edge) and the source file (the vertex). Other languages have the construct of an interface that serves a similar purpose, though most of the time, this distinction between edges and vertices is more of a practice. One of the more familiar practices is test-driven development (TDD), where we are encouraged to start by describing the behavior of the code we want to write in tests, and only then to write the code itself.
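To make the edge/vertex split concrete, here is a minimal Python sketch using `typing.Protocol` as the edge. All names here (`Clock`, `SystemClock`, `FixedClock`, `report`) are invented for illustration, echoing the current-time API from the graph example above:

```python
from datetime import datetime, timezone
from typing import Protocol

# The "edge": a contract defining how vertices connect.
class Clock(Protocol):
    def current_time(self) -> str: ...

# One "vertex": a real implementation of the contract.
class SystemClock:
    def current_time(self) -> str:
        return datetime.now(timezone.utc).isoformat()

# Another vertex, handy in tests -- the edge stays exactly the same.
class FixedClock:
    def __init__(self, value: str) -> None:
        self.value = value

    def current_time(self) -> str:
        return self.value

def report(clock: Clock) -> str:
    # Code written against the edge works with any conforming vertex.
    return f"current time: {clock.current_time()}"
```

Swapping `SystemClock` for `FixedClock` changes a vertex without touching the edge – which is exactly the property TDD leans on.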

This distinction reveals two different thinking modes that appear complementary, yet are independent of each other: vertex-wise thinking and edgewise thinking.

 When I have my vertex-wise thinking hat on, I am usually contemplating the actual building blocks of the thing I intend to make. This one is very common and a bit of a habit for software folks: when asked “what are we building?”, the answer shows up as a list or a matrix of things, a to-do list of sorts. “To get <foo>, I will need <bar> and <baz>” and so on. This approach is super-familiar to any maker of things, professional engineer or not. While in this thinking mode, the vertices are at the center of attention and edges are an emergent artifact. We are focused on constructing the blocks, and how they are connected is something we figure out as we go. 

The situation inverts with edgewise thinking. When I have the edgewise hat on, I am primarily focused on the edges. I want to understand how the vertices connect – before figuring out what they’re made of. Here, the glue between the blocks is the subject of my attention, and the blocks themselves are an emergent outcome. When asked “what are we building?”, the answers that the edgewise thinkers give usually come in the form of control flow or other kinds of interaction diagrams, defining how the vertices will communicate with each other. Edgewise thinkers obsess about formats and protocols.

One of my mentors had a habit of designing through mocks: sketching out an entire project as a collection of dummy classes that do the absolute minimum of work needed to glue together. You could build this project and run it, and it would actually print out sentences like “contacting bluetooth stack” and “getting results” – none of it real: it was all to be implemented later.

Such edgewise thinking exercises allowed us to see—and mess with—the shape of the entire thing long before it existed. The mocks served as the edges. Once we figured them out, vertex work was just a matter of typing out code that conformed to the shapes the edges defined.

The inevitable question in an article of this sort arrives: which one is better? It depends on the situation. A colleague of mine captured the rule-of-thumb well: edgewise thinking tends to lead to divergent effects and vertex-wise thinking to convergent.

Vertex-wise thinking works exceptionally well – and is called for – when we know pretty well what we’re looking for as the end result, and the biggest obstacle we’re facing is proper sequencing of work. Program managers can do wonders turning lists and matrices of building blocks into burn-down charts and shipping schedules, and help the team click through the milestones.

Edgewise thinking can’t compete with that. In fact, folks who favor edgewise thinking tend to struggle with calendars. However, one of the superpowers of edgewise thinking is the ability to surprise ourselves. If we leave vertices as an emergent outcome, we create the potential for something we didn’t anticipate: different ways of using the edges to build a whole different thing.

For example, imagine that we are Tim Berners-Lee back in the 1980s and we want to organize the documents across departments at CERN. Applying vertex-wise thinking, we end up with a pretty cool database that any CERN member can use. Applying edgewise thinking, we end up inventing the Web.

Platform and ecosystem work is first and foremost edgewise thinking. We start by imagining how various bits will connect and what the value of these connections might be. We find ways to ensure that this value is preserved and increased by designing the edges: interfaces, formats, and protocols that define the connections. We intentionally leave the vertices blank and invite ecosystem participants to invent them by relying on the connection mechanisms we designed. 

For me, a really useful marker in a software design conversation is when the notion of “reusable building blocks” comes up. It’s a good sign of a thinking hat confusion: the participants are engaged in vertex-wise thinking when they need to think edgewise. A “reusable building block” is not really a block, but rather the glue that makes it reusable. When in this conversation, it might be good to pause and reorient, switching hats: discuss interfaces, formats, and protocols, rather than the actual code that will fill out the block. 

Wanderer

Here’s another little experiment I built over the break. Like pretty much everyone these days, I am fascinated by the potential of applying large language models (LLMs) to generative use cases. So I wanted to see what’s what. As my target, I picked a fairly straightforward use case: AI as the narrator.

While I share in the excitement around applying AIs as chatbots and pure generators of content, I also understand that predictive models, no matter how sophisticated, will have a hard time crossing the chasm of the uncanny valley – especially when pushed to their limits. As a fellow holder of a predictive mental model, I am rooting for them, but I also know how long this road is going to be. We’ll be in this hollow of facts subtly mixed with cheerful hallucination for a while.

Instead, I wanted to try limiting the scope of where LLM predictions can wander. Think of it as putting bounds around some text that I (the user) deem as a potential source of insights and asking the AI to only play within these bounds. Kind of like “hey, I know you’ve read a lot and can talk about nearly anything, but right now, I’d like for you to focus: use only these passages from War and Peace for answering my questions”.

In this setup, the job of an LLM would be to act as a sort of, well… a language model! This is a terrible analogy, but until I have a better one, think of it as the speech center of a human brain: it possesses incredible capabilities, but also, when detached from the rest of the brain, produces conditions where we humans just babble words. The passages from War and Peace (or any other text) act as a prefrontal cortex, grounding the speech center by attaching it to the specific context in which it offers a narration.

Lucky for me, this is a well-known pattern in current LLM applications. So, armed with a Jupyter cookbook, I set off on my adventure.

A key challenge that this cookbook overcomes is the limited context window that modern LLM applications have. For example, GPT-3’s context window holds just over 4,000 tokens (pieces of words), which means that I can’t simply stuff all of my writings into context. I have to be selective. I have to pick the right chunks out of the larger corpus to create the grounding corpus for the narrator. So, for example, if I have questions about Field Marshal Kutuzov eating chicken at Borodino, yet I give my narrator only the passages from Natasha’s first ball, I will get unsatisfying results.
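To make chunk selection concrete, here is a toy sketch. The cookbook’s approach ranks chunks by embedding similarity; I’m substituting a crude bag-of-words cosine similarity so the example stays self-contained. The names and the word-count “budget” are my own simplifications, not the cookbook’s actual code:

```python
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a toy stand-in for the
    embedding-based similarity a real retrieval setup would use."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm_a = math.sqrt(sum(c * c for c in wa.values()))
    norm_b = math.sqrt(sum(c * c for c in wb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_chunks(question: str, corpus: list[str], budget: int) -> list[str]:
    """Pick the most relevant chunks, stopping when the (very rough,
    word-counted) context-window budget runs out."""
    ranked = sorted(corpus, key=lambda chunk: similarity(question, chunk), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())
        if used + cost > budget:
            break
        picked.append(chunk)
        used += cost
    return picked
```

Asked about Kutuzov, a selector like this would grab the Borodino passages and leave Natasha’s ball on the shelf.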

I won’t go into the technical details of how that’s done in this post (see the cookbook for the general approach), but suffice it to say, I was very impressed with the outcomes. After I had the code running (which, by the way, was right at the deca-LOC boundary), my queries yielded reasonable results with minimal hallucinations. In the stance of a narrator, the LLM’s hallucinations look somewhat different: instead of presenting wrong facts, the narrator sometimes elides important details or overemphasizes irrelevant bits.

Now that I had a working “ask ‘What Dimitri Learned’ a question” code going, I wondered if the experience could be more proactive. What if I don’t have a question? What if I didn’t know what to ask? What if I didn’t want to type, and just wanted to click around? And so the Wanderer was born.

Here’s how it works. When you first visit wanderer.glazkov.com, it will pull a few random chunks of content from the whole corpus of my writings and ask the LLM to list some interesting key concepts from these chunks. Here, I am asking the narrator not to summarize or answer a specific question, but rather to discern and list the concepts mentioned in the context.

Then, I turn this list of concepts into a list of links. Intuitively, if the narrator picked out a concept from my writings, there must be some more text in my writings that elaborates on this concept. So if you click this link, you are asking the wanderer to collect all related chunks of text and then have the narrator describe the concept.

Once you’ve read the description provided by the Wanderer, you may want to explore related concepts. Again, I rely on the narrator to list some interesting key concepts related to the current one. And so it goes.
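One click of that loop could be sketched roughly like this. Here, `narrator` stands in for whatever LLM call is available; the shape of the function is my guess at the flow, not the Wanderer’s actual code:

```python
def wander_step(concept: str, corpus: list[str], narrator) -> tuple[str, list[str]]:
    """One click of the Wanderer: gather the chunks that mention a
    concept, have the narrator describe it, then ask for related
    concepts to turn into the next round of links.

    `narrator(prompt, context)` is a placeholder for an LLM call;
    plug in whatever model you have on hand."""
    context = [chunk for chunk in corpus if concept.lower() in chunk.lower()]
    description = narrator(f"Describe '{concept}' using only the context.", context)
    related = narrator(f"List key concepts related to '{concept}'.", context)
    return description, related
```

Because every step re-asks the narrator, nothing is cached or stored: each click conjures the next set of links from scratch.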

When browsing this site (or “wandering”, obvs), it may feel like there’s a structure, a taxonomy of sorts that is maintained somewhere in a database. One might even perceive it as an elaborate graph structure of interlinked concepts. None of that exists. The graph is entirely ephemeral, conjured out of thin air by the narrating LLM, disappearing as soon as we leave the page. In fact, even refreshing the page may give you a different take: a different description of the concept or a different list of related concepts. And yes, it may lead you into dead ends: a concept that the narrator discerned, yet fails to describe given the context.

The overall experience is surreal and – at least for me – highly generative. After wandering around for a while, it almost feels like I am interacting with a muse that is remixing my writing in odd, unpredictable, and wonderful ways. I find that I use the Wanderer for inspiration quite a bit. What concepts will it dig up next? What weird folds of logic will be unearthed with the next click? And every “hmm, that’s interesting” is a spark for new ideas, new connections that I haven’t considered before.

No ship ums

I’ve been thinking about the path that software dandelions take toward becoming elephants, and this really interesting framing developed in a conversation with my friends.

Software dandelions are tiny bits of software we write to prototype our ideas. They might be as small as a few lines of code or deca-LOCs, yet they capture the essence of some unique thought that we try to present to others. In the highly unlikely event that this dandelion survives the contact, I am usually encouraged to tweak it, grow it, and incorporate insights. Through this process, the dandelion software becomes more and more elephant-like.

If I imagine idea pace layers (the concept from slide 55 onward in glazkov.com/dandelions-and-elephants-deck), and draw a line from the outermost, full-dandelion meme layers to the innermost, absolute-elephant ones, this line becomes the course of the idea maturity progression. As my dinky software traverses this course, it encounters a bunch of obstacles.

As I was looking at this progression, I realized that the environment in which I develop this software may impose constraints on how far my software can travel along this course – before it needs to switch to another environment.

One such constraint is what I called the “no ship ums”, kind of like those tiny bugs (yes, I am continuing that r/K-selection naming schtick). In a development environment like that, I can develop my idea, make it work, and even share it with others, but I can’t actually ship it to potential users. A typical REPL tool is a good example of such an environment, with Jupyter/Colab and Observable notebooks being notable examples. While I could potentially build a working prototype using them, I would be hard-pressed to treat it as something I would ship to my potential customers as-is.

Why would one build such a boundary into an environment? There are several reasons. When shipping is out of the question, many of the critical requirements (like security and privacy, for example) are less pronounced. An environment can be more relaxed and better tailored for wild experimentation.

More interestingly, such a barrier can be a strategically useful selection tool for dandelions. When I build something in a notebook, I might be excited that it works – yay! To get to the next stage of actually seeing it in the hands of a customer, the barrier offers the opportunity to pause. As the next step, I will need to port this work into another environment. Will it be worth my time? Is my commitment to the idea strong enough to put in the extra work?

On the other hand, there are definitely folks who would look at the paragraph above and say: “are you crazy?!” Putting up a speed bump of this sort might seem like the worst possible move. If we are to help developers reach for new, unexplored places, why would we want to add extra selection pressure? My favorite example here is Repl.it’s “Always On” switch. Like where this experiment is going? Just flip this switch and make it real.

So… who is right? My sense is that the answer depends on the context of what we’re trying to do. Think of the “no ship ums” as a selection pressure ratcheting tool. Anticipate hordes of developers entering your environment and launching billions of tiny gnats? Consider applying it. Unsure about the interest and worry about growth? Maybe try to avoid no ship ums. 

Dandelions and Elephants, Illustrated

Over the holiday break, I made a thing for you to enjoy. It’s an illustrated deck, similar to the one I’d done for Adult Development Theory. It’s about Dandelions and Elephants:

https://glazkov.com/dandelions-and-elephants-deck

Unlike other framings and lenses that I had come up with along my path through 2022, this one was by far the most resonant and interesting. I still get new insights from studying it. This deck is an invitation for you to do the same. I tried to write it as concisely as possible, yet leave enough room for nuance – and of course, a lot of room for further exploration.

The deck is meant to be consumed at your own pace, and some slides are denser than others. I think of it as a kind of holiday treat tin: some items are rich truffles and some are one-bite wafers.  Give it a try.

The deck consists of four chapters. The first chapter introduces the dandelions and elephants framing, roughly following my exploration from earlier in 2022.

The second chapter zooms out our perspective, asking a larger question: what makes up environments that lead to dandelions or elephant idea formation?

The third chapter expands on the idea of tensions that I mentioned in my writing, and tries to make a more cogent argument about the forces that tend to batter us when we engage in persistent idea generation.

The final fourth chapter ties the framing to pace layers. This one probably has the most new ideas that I haven’t expressed elsewhere, or ideas that I’ve talked about but haven’t connected to the original framing.

Throughout, I tried to use concrete examples to ground the ideas. I recognize that the concepts might still feel somewhat abstract, especially outside of the software and organizational development situations that I’ve described. So here’s another invitation: if you’re curious about the concepts, but are struggling to apply them in your particular environment, please drop me a note. My email is at the end of the deck. I’d love to hear from you.

What the heck is strategy work?

I am realizing that I’ve been gabbing on and on about strategy and strategy work, and I have never actually defined what strategy work entails. The word “strategy” evokes a variety of images, and when I say “what is strategy work?”, a few stereotypes show up.

One interpretation is a visage of a wizard: me sitting in an ivory tower, all-seeing, devising the next big artifact that will forever alter the landscape of the industry. This picture implicitly includes a sort of power that no individual possesses, at least in practice.

Another interpretation is that of a magician, whereby I travel from team to team, setting them on the right path as a sort of strategy debugger. This depiction is quite popular in the industry, though I have doubts about the long-term durability of these engagements – a sort of strategic bandaid, which looks suspiciously like an oxymoron.

Yet one more metaphor that I’ve seen used to describe someone engaged in strategy work is a mastermind who is weaving the web of influence across the organization, quietly pulling strings to ensure that all the necessary bits are flowing to their proper destinations.

All of these are a bit cartoonish – and yet all have a grain of truth in them. Strategic work means having a perch to observe what’s happening across a broad perspective and providing a stream of insights. It also means engaging with various teams and helping them wrangle with their strategy challenges. And of course, strategic work is about creating conditions, so yeah, behind-the-scenes influence is most definitely involved.

So what the heck is strategy work? At its core, doing strategy work means helping an organization to be strategic. How does one even do that?

A clarifying insight for me was this: strategy is a team sport. One of the most common mistakes a strategist can make is to presume that they get to “make strategy”. They may produce a sleek artifact that looks like strategy. They may even get the leaders to enthusiastically co-sign it. However, unless this artifact describes what the organization already does, it isn’t the team’s strategy. As a team, we make decisions that influence our team’s future. Every decision we make adds up to the sum vector of where we end up going. We all do strategy work. The strategy we end up with is what emerges from our collective efforts: the embodied strategy.

Thus, the mission of a strategist is not to set or devise strategy: it is to understand how our strategy emerges and why, then constantly scrutinize and interrogate the process, identifying inconsistencies and nudging the organization to address them. In this way, strategy work is a Socratic process: gradually improving the thinking hygiene of the organization.

Now that we’ve diagnosed the problem and chosen the approach to strategy work, what is the set of coherent actions that a strategist undertakes to fulfill their mission? To reveal these, I will take our earlier tropes and convert them into healthier roles.

The wizard evolves into the role of sensing. In a VUCA world, staying deeply engaged with the environment is key – as well as sense-making like there’s no tomorrow. If we are to diagnose problems and understand the outcomes of our actions, we need to have clarity on what is going on. To be sensing means to stay aware of the variety of signals, curating them into a set of legible forces, patterns, and trends. Sensing needs to be both externally-facing and internally-facing: understanding what happens outside of the organization as well as inside. This is where that wizard’s perch comes in so handy. To remain unbiased, observing and sense-making need a bit of detachment from the daily slog.

We turn the mastermind toward frameworks. The key objective of this role is to ensure that there are rubrics, lenses, and framings in place that help establish and grow the team’s shared mental model space. Shared mental model space helps build shared vocabulary that acts as scaffolding for effective strategic work.

When a lead brings up the innovator’s dilemma or an invisible asymptote, and nobody else knows what that is, it’s a potential loss of insight: the lens just drops on the floor. It wasn’t part of the shared mental model space. Who has the time to explain and deeply understand the concept? Conversely, in a large shared mental model space, people can talk almost in shorthand and still achieve high strategic rigor.

Here, mental model hygiene is critical. Broken lenses (like “we should just work harder!” or “I would simply…”) can cripple or doom the team. 

A recently learned lesson for me is that frameworks aren’t processes: the former are the blueprints for the latter. When the operations folks devise and implement a process, they are much better off if there is a framework to help shape it. Otherwise, a process will be informed by the embodied strategy, with all of its existing inconsistencies embedded.

Finally, the healthy version of the magician trope is practice: the responsibility to keep the collective strategy muscle engaged. Instead of simply running from fire to fire, I want to proactively establish robust strategic thinking practice within my organizations. Such practice can take many forms. 

For example, in my team, we’re currently eyeing scenario planning and systems thinking as strong contenders. Whatever it is, it must be something that spurs team leaders to lift up their heads from the minutiae of execution and shift their minds to think longer and broader. 

With the practice in place, engaging with teams across the organization becomes a coaching function, rather than a reactive band-aid.

So really, what this translates to so far is a strategist playing three roles at once: sensing, frameworks, and practice. This is not an easy task.

The framework and sensing roles are nearly diametrically opposite in nature. The sensing role implies wholly engaging with the full complexity of the environment, letting it wash over, spotting interesting trends, gardening my collections of known forces and their traits. When in a sensing role, I might spot something very novel and groundbreaking, something that requires a dramatic rethink of everything … and that’s where it runs straight into the framework role’s wall.

Because the framework role is charged with creating conditions for a shared mental model space across the leadership team, it is naturally conservative. When wearing this hat, I want to ensure that there is a stable foundation of framings and lenses, neatly polished, accessible to all, easy to grasp, like tools in a toolbox. The silly sensing role keeps constantly messing with this toolbox, questioning whether the screwdriver is actually a butterfly … and what if this wasn’t a toolbox, but rather … oh, I don’t know, a sunset?

Keeping both roles in one head is maddening and requires a lot of practice. This was one of the big lessons for me – time management and calendar-slicing need to keep framework and sensing roles separate from each other. In some sense, it’s like having to apply both dandelion and elephant strategies – I am better off not mixing them. At the same time, I am wary of delegating these to separate individuals: the inherent tension will likely result in friction between them. Something to think about.

Speaking of time management… In addition to the need for a regular strategy practice within this team, the practice role is easily the biggest temporal vampire and randomizer of the bunch. The demand to jump in and help out with some strategic thinking ebbs and flows, and it’s simply difficult to know when the next interesting thing will happen. Just when I have my framework and sensing hats sorted out, the practice hat barges in and announces that my help is needed. Gotta stay nimble.

I hope this helps y’all see the shape of strategy work a bit better. Does it resonate? Did I mess it up? Missed something? Let me know.

Being Strategic

What does it mean to be strategic? It is a sort of practice, a thinking hygiene.

Simply put, being strategic means that the outcomes produced by our actions are not at odds with our intentions. Even though this sounds simple, it most definitely isn’t. Thankfully, Richard Rumelt has done most of the heavy lifting to unpack what strategy entails, so all I have to do is summarize.

  1. It all begins with intention. There’s something in our environment – the world out there – that we would like to change. Formulated in terms of motion, this intention emerges as a question of “Where do we intend to go?”
  2. To answer this question, we engage in the diagnosis of the problem, which produces a destination: where we decide to go. The next question that we ask ourselves is “How will we get there?” 
  3. Devising a guiding policy answers this question, allowing us to arrive at the approach we choose and move on to the next question: “How will we do it?”
  4. At this step, we come up with a coherent set of actions. Finally, something we can do! As we observe ourselves taking these actions, we are asking ourselves: “What are the outcomes?”
  5. It is here where we usually encounter our first clear signals on whether we’re being strategic or not. Do the outcomes match the intention? The “What did we miss?” question is key, allowing us to compare what we see with where we started from – and repeat the cycle.

At every step, there’s an opportunity for error that puts our intentions and outcomes at odds with each other. 

We are constantly tempted to confuse our understanding of the situation with reality (“it is what we see”) and more often than not, we forget that our diagnosis is more of a hypothesis. 

We are swayed by our embodied strategy to choose approaches that are familiar rather than the ones that are called for by our diagnosis, veering us off course.

Urged to act, we end up making up our set of actions on the fly rather than considering them deliberately.

We are distracted by the multitude of other things in front of us, failing to execute on what we’ve decided to do.

We forget to look back at the original intentions and check whether the outcomes are congruent, too exhausted to reflect on the evidence provided by these outcomes and improve our understanding of the environment.

All of these forces are “water”. We are in them, surrounded by them. We are them.

Being strategic means somehow finding a way to become aware of these forces for more than mere moments – and then find energy to countervail them. Being strategic means facing the headwind of what feels like “the most logical next step”. Strategic moves are usually the ones that aren’t easy. Confusingly, hard choices aren’t always strategic.

The only way to accomplish this is through regular practice. Just like brushing teeth or exercising, being strategic always feels like something we have to do in addition to everything else on our plate.

How would one know if they are being strategic? That one is simple. Here’s a test:

  • Do I have a general awareness of the cycle and the headwinds outlined above? It doesn’t need to be this particular cycle. Any robust strategic framework will do.
  • Do I purposefully navigate this cycle as I conduct business?
  • Beyond conducting everyday business, do I have a regular practice that helps me improve my capacity to be strategic?

Add one point for each “yes” answer. Scored 3 points? Congratulations! Otherwise, we still have work to do.
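The three-point test above can be sketched as a trivial score. A minimal sketch, assuming for illustration that each question collapses to a yes/no answer (the parameter names are my own shorthand, not part of any formal framework):

```python
# A minimal sketch of the three-question "being strategic" test above.
# Each parameter corresponds to one bullet; the names are illustrative only.
def being_strategic_score(aware_of_cycle: bool,
                          navigates_cycle_on_purpose: bool,
                          has_regular_practice: bool) -> int:
    """One point per 'yes'. A score of 3 means 'being strategic';
    anything less means there is still work to do."""
    return sum([aware_of_cycle, navigates_cycle_on_purpose, has_regular_practice])

# Example: aware of the cycle, but not yet navigating it deliberately
# and without a regular practice in place.
print(being_strategic_score(True, False, False))  # → 1
```

The point of the sketch is only that the test is additive: there is no partial credit within a question, and no single “yes” compensates for the others.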

Dandelion/Elephant Strategy Rubric

I discussed the nature of conditions that encourage dandelion or elephant strategies, but I only skimmed the topic of why one would choose these strategies. Here’s an attempt to dig a bit deeper into this particular “why”.

The rubric we want to use here stems from observing the environments in which organizations that chose either strategy thrive. The environment must be complementary to the strategy.

Elephants and other K-selected species choose their strategy because the environment makes this strategy viable: there is a well-defined, stable niche into which the overall shape of the species’ activities fits. This niche is usually crowded: there are other species within that niche, and the game of life is mostly about optimizing the shape to outcompete them. In this way, the environment of an elephant strategy is shape-focused.

This helps us define a rule of thumb for choosing to pursue the elephant strategy: if the shape of our technology or product is well-known and well-understood (and thus strongly expected) by our market, the elephant strategy is likely warranted.

The word “shape” here is meant in a broader sense: it’s not necessarily the physical dimensions of some object. Rather, it’s the whole set of expectations and constraints that define the niche in which we’re playing.

Using this definition, we can see that the basic shape of a mobile phone is becoming more and more entrenched in the mindshare of the world’s population. It’s the iconic iPhone shape that Steve Jobs revealed in 2007. There’s an expectation of a touch screen and a camera, of GPS, and various other sensors that are considered table stakes by the users. Should a company decide to ship a phone that does not fit into that shape, it will be playing at a different, less populated table.

Everyone who wants to live in this niche must be engaged in a strongly elephant-leaning strategy: year over year, continuous, incremental improvement of what can fit into the shape defined by the niche. We think of elephants as being slow, and that’s where the metaphor falls a bit short. Pursuing the elephant strategy is all about steadily accumulating mass and gaining velocity through momentum. Maybe thinking about how a capital ship reaches its tremendous speed would help here. It’s the game of consistency and compounding momentum, not leaps of faith or moonshots.

Prioritizing while pursuing the elephant strategy, we must ask ourselves these questions: does this work help us accumulate momentum over the long term? Does it contribute to optimizing for the shape in the niche we’re playing in? Does it help us better understand the nuanced nooks and crannies of the shape that might be critical for the relentless pursuit of shape-fitting?

Having impact in pursuit of elephant strategy means contributing to gaining momentum. If I did something impressive, but not aligned with this larger goal, I probably wasted my energy – or worse yet, slowed down the ship. As I described before, top-down cultures tend to be effective here: it is easier to understand what to align with.

As a thought experiment, compare two teams: in one team, planning happens as a top-down process, and in another – bottom-up. In the top-down team, the direction is clearly stated at the beginning of the planning, and all team members must shape their work to align with this direction. Conversely, in a bottom-up planning process, the members supply what they are planning to do and then the sum vector of this work determines where the team will go. Which team will have a greater consistency of momentum accumulation over time?
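The “sum vector” intuition in this thought experiment can be sketched numerically. This is only an illustration under made-up assumptions: each member’s planned work is modeled as a 2D vector with a direction and an effort magnitude, and the numbers below are invented:

```python
import math

def resultant(vectors):
    """Sum a list of (angle_in_radians, magnitude) vectors;
    return the magnitude of the resultant (the 'sum vector')."""
    x = sum(m * math.cos(a) for a, m in vectors)
    y = sum(m * math.sin(a) for a, m in vectors)
    return math.hypot(x, y)

# Top-down: everyone aligns their unit of effort with the stated direction (angle 0).
top_down = [(0.0, 1.0)] * 5

# Bottom-up: members pick their own directions; the sum vector is what emerges.
bottom_up = [(0.0, 1.0), (1.2, 1.0), (2.5, 1.0), (4.0, 1.0), (5.5, 1.0)]

print(resultant(top_down))   # 5.0: every unit of effort compounds in one direction
print(resultant(bottom_up))  # well under 1.0: effort spread across directions
```

Same total effort in both cases, but the aligned team accumulates nearly all of it as momentum in one direction, while the divergent team’s contributions largely cancel out – which is exactly the trade the dandelion strategy makes deliberately: it gives up consistency of momentum in exchange for coverage of the space.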

On the other hand, dandelions and the r-selected species they represent choose their strategy because the space in front of them is wide open. This typically happens when a technological breakthrough suddenly provides the means to build something new, but the actual product-market fit – the shape of the product or technology – is entirely unclear. There are no right or wrong answers – yet. There aren’t clearly defined constraints and the perceived limits seem to give when pushed on. Everyone else is doing roughly the same thing: stumbling around to find out what the heck this new space is all about.

To turn it into a rule of thumb for choosing the dandelion strategy: if the technology or product space is ill-defined and little-understood, pick the dandelion strategy.

I already used the AI-generated media space as the currently-unfolding example of this scenario, and it’s still very relevant. It is not knowable where this thing will go. Early attempts to discern constraints or define a shape will likely look foolish in the long run.

It is fairly easy to see how trying to build a new dreadnought or turn the existing ship’s momentum toward this new space are both activities fraught with peril. When we don’t know where dandelion seeds will land, we are better off letting go of our habit of predicting the outcomes.

Instead of investing into enterprise-strength, scalable software, we can be better off with duct tape and popsicle sticks when adopting the dandelion strategy. Throw something together, make it go, ship it, and see what happens. Do it again. And again. Watch carefully for surprising developments. If our tiny app suddenly gains a small following – that’s something. Avoid presuming that we know what this “something” is. Very often, in a new space, even when given an opportunity to tell us directly, our users might not be able to articulate it, either. Revealed preferences rule over the stated ones. The trick is to keep trying and learning and making sure that the learning propagates to some common pool of wisdom.

In such an environment, bottom-up cultures work amazingly well. Returning to the earlier thought experiment, can you see how the fortunes of these teams will be reversed? The top-down team will form a tight fist, punch hard in one direction … to never be heard from again. In contrast, the bottom-up team will scatter in a cloud of divergent direction vectors, thus maximizing their chances of stumbling onto a fertile niche.

Here, being impactful means uncovering something interesting and surprising as quickly as possible – and bringing it back to the team. Trying and failing is just as useful, because it uncovers where not to go, or where to go about things differently. The key difference from measuring impact under the elephant strategy is in contributing to the common pool of knowledge while exploring a direction that’s different from the rest of the team. This can feel rather unintuitive: members reinforcing each other’s approaches can be a source of blindness when applying the dandelion strategy. The way to structure incentives here is to emphasize individual agency while rewarding contributions to collective knowledge of the space.

How does a bottom-up team prioritize? In an uncertain environment, prioritization is emergent. There aren’t well-defined metrics and clear lines to cut. Instead, the team’s stumbling into novelty is the source of knowing what’s important, leading to recurring waves of swarming and scattering. This may feel rather mercurial and drive some engineers and program managers nuts. The trick here is to zero in not on the ever-moving objects of prioritization, but rather on whether the information about these objects is flowing as quickly and clearly as possible.

To summarize these two rules of thumb, I will bring them together into a rubric. Ask these two questions:

  1. Is the shape of the product/technology niche in which we are playing well-understood? 
  2. Are we playing in (or closely adjacent to) the space that just opened up because of a recent technological breakthrough?

The answers form a 2×2.

For the environments where the shape is well-known, with no new space opening up, we are looking at a strongly elephant-leaning strategy. Get that colossal ship going and keep it rolling. Don’t get discouraged when first outcomes are unsatisfying. Elephant calves need nursing and care.

If the answer to both questions is “yes”, we’re probably seeing increasing potential for a new product category in a previously-stable space. Something curious will happen soon, and we don’t want to miss it. Deploy the “fuzzy elephant” stance: structure most of the team to adopt the elephant-leaning strategy, with a modest-sized group making dandelion moves. Given the recent rate of technological advance, this is an effective posture for any large player: there will always be an opportunity for surprise.

The full-on dandelion approach is warranted in the presence of a technological breakthrough in a brand new area with few well-defined niches. Manage divergence and get that insight flow going – who will be the first player to spot a niche?

The final quadrant is a bit puzzling for me. If the shape is not known and there aren’t any breakthroughs, why are we playing here? It feels like there might not be enough evolutionary pressure to get the selection process going – which means that if we find ourselves in this space, we are better off looking for a way out.
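The four quadrants above can be condensed into a lookup. A minimal sketch, using the labels from this section (the function and parameter names are my own, purely illustrative):

```python
# A minimal sketch of the 2x2 rubric above.
# Inputs are the two yes/no questions; the output is the leaning described in the text.
def strategy_rubric(shape_well_understood: bool, recent_breakthrough: bool) -> str:
    if shape_well_understood and not recent_breakthrough:
        return "elephant"            # known shape, stable niche: accumulate momentum
    if shape_well_understood and recent_breakthrough:
        return "fuzzy elephant"      # mostly elephant, with a dandelion-leaning group
    if not shape_well_understood and recent_breakthrough:
        return "dandelion"           # wide-open space: scatter, learn, swarm
    return "look for a way out"      # no shape, no breakthrough: why play here?

print(strategy_rubric(True, True))  # → fuzzy elephant
```

Of course, real environments put you somewhere between the quadrants rather than squarely in one, so the booleans are better read as leanings than as facts.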

Generating ideas and strategy coherence

I’ve been talking about dandelions and elephants for a while now, and yes, it may seem like I’ve gone a bit nuts. Oh well. It’s just that it’s such a good framing and I keep finding uses for it nearly every day. When applied to ideas, r/K-selection strategies seem to be uncommonly generative.

It all begins with a question: what kind of new ideas do we want to produce? Do we want a collection of different, independent ideas or do we want each idea to improve upon some larger idea?

What I like about these questions is that they are objective-agnostic. They don’t ask “what do you want to achieve?” or “where do you want to go?” Instead, they require us to choose the means to generate ideas. And strategy is all about the means. In the field where I work, strategy is also about generating new ideas.

Here’s the thing. In software engineering (as likely in many technology fields), more often than not, we don’t know what the path to our objective will look like. Heck, most of the time we don’t even have a clear sense of what the objective will look like. This is assuredly not a “let’s plan all steps in advance” process. The fog of uncertainty is right there in front of us. 

If we are to navigate through it, we must be prepared to shift course, to adjust, to learn on the spot about the next step, make it, learn again, and so on. And to do this well, we need new ideas. Our strategy must count on us continuously producing these new ideas – and applying them. In this way, my ramblings about dandelions and elephants aren’t fun side metaphors. They are the essence of business.

Summoning my inner Rumelt and putting things perhaps overly bluntly, an organization can only be effective at setting a strategy and actually following through when it is intentional about creating conditions for generating ideas. While it’s not the only crucial ingredient, the organization that doesn’t have it will suffer from strategic incoherence.

A team may accept as a truism that bottom-up cultures are superior to top-down cultures. And yes, if we are setting out to explore a large space of unknown untapped potential, then we probably want to create conditions for a dandelion strategy. The bottom-up culture has them: individual incentives (Interest), small teams, short-term objectives (Legibility), independent decision-making (Velocity) and non-hierarchical structure and mobility (Access).

However, when we’re endeavoring to care for one big idea, we likely want the conditions to encourage the elephant strategy: more structured and predictable organization and incentives (Stability), care and accountability in decision-making (Breadth), comprehensive processes and long-term thinking (Rigor), and concentrated points of organizational control (Power). These are a depiction of the top-down culture.

If we set out to do something that calls for an elephant strategy, yet the culture we have is a bottom-up one, we will have strategically incoherent outcomes (I called them the “pappus elephants” in the previous post). Our bottom-up culture will suddenly snag us like a trap, with coordination headwinds becoming universally felt and recognized. Things that worked really well for us before, like emphasizing individual impact in our incentive structures, will become a source of pain: why are our teammates acting in such a self-interested way?! Well… maybe because that was a good thing when we needed a dandelion strategy?

Even when the need to pursue a multi-year objective becomes existential, the dandelion conditions will keep blowing us off course: multi-year ideas will be simply swept away by the churn of the quarterly objective-setting and obsessive focus on individual impact. In a dandelion culture, when given a chance to make a dandelion move, most folks will take it. When strategy is incoherent, one can be a superstar while directly contributing to the team’s demise. 

Perhaps even more bizarrely, by all accounts of witnesses, these efforts will look like elephants – until they disappear in a puff. It is in everyone’s interest to create a perception that they are indeed operating in an elephant factory, despite all the dandelion moves they are making. 

When caught in this condition inconsistency, the long-term projects within this organization will inevitably find themselves in a weird cycle: set out to do big things, fail to articulate them clearly, struggle to do something very ambitious, get distracted, then quietly discontinue the effort, unable to examine what happened due to the deep sense of shame that follows – only to try again soon thereafter. When underlying conditions allow only dandelion-like moves, trying to choose an elephant strategy is a tough proposition.

The variables and symptoms might vary, but the equation will remain the same. If they sound at all familiar, consider asking different questions to get to a more productive conversation about incentives, culture, structure, and practices. What are our current conditions for generating new ideas? Do they lean dandelion or elephant? How might they be inconsistent with our desired outcomes?