Chances to get it right

This has been rolling around in my head for a while, and a conversation with fellow FLUX-ers spurred me to grapple with these ideas some more. Here’s somewhat rambling advice that emerged.

In the world of technology, there is this concept of shipping: the moment when we finally dot all the i’s and cross all the t’s, and release the thing we’ve been working on to our intended audience. Shipping is a happy and stressful event for technologists, akin to parents sending their child off to college: we hope it will be okay, and we can think of a thousand reasons why it might not be.

At this threshold of shipping, all assumptions we’ve made face reality. Some of them will be right, and some–many!–will be wrong. We likely didn’t get everything right. Depending on how well we’ve guessed, we’ll see a range of outcomes. At one end of the spectrum is the scenario where we’ve gotten everything wrong. Our product flops entirely. At the other extreme is perfect success. Both extremes lead to easy next steps. It’s the middle that is muddy: if our thing only somewhat succeeds and somewhat fails, what are we to do next?

We need more chances to get it right. To navigate this muddiness, we find ourselves needing to engage in a delicate dance with our customers, trying to understand them. We make new guesses, see the customer react, adjust our thinking, and try again. Every guess is a chance to get it right.

This dance can be tentative and exhausting. The customer might be annoyed with us persistently getting it somewhat wrong. We might want to finish it already, wishing to move to other things. It can be exhilarating, surprising us with new opportunities that we couldn’t dream of at the beginning. It is these opportunities that make such a dance worth it.

If anything, we need to learn to anticipate this dance and look forward to it. We are better off maximizing the number of chances to get our thing right. To get there, we need two mindset shifts. 

🚃 Treat shipping as a process

First, we must view shipping as a process, not a single event. Just like with our kid going to college, we’re not even close to being done. If anything, shipping is the beginning of the next phase of our journey: the one where what we’ve built makes contact with the customer and upends our initial guesses. Similar to good parenting, the job changes, but it doesn’t go away.

This may seem obvious in the world of modern software development. Of course shipping is a process! However, our yearning for a sense of completion and predictability keeps getting in the way. Look at any shipping roadmap of a product. It might seem like a perfect depiction of the process, rather than an event: a lineup of milestones, planned out a few quarters ahead. But… Do these milestones look like a neat progression of features? If so, this is just a bunch of shipping events, strung together.

When shipping is a process, milestones rarely line up neatly into a clear sequence of features on a roadmap. Instead, milestones are treated as trains that arrive and leave on schedule, and the contents of each train are determined at its departure. Shipping as a process accounts for uncertainty. Some releases might have many features in them, and some might be mostly bug fixes. It is okay to hold a feature back for a release, or to remove it altogether.

I learned this lesson in the early days of Chrome. I don’t know how the team functions now, but back then, the whole “train on schedule” process was amazingly effective. The level of stress among engineers was low, and we actually had time to dig into why things we shipped worked or didn’t work – and we crafted better software as a result.

No, we couldn’t show beautiful multi-year roadmaps of features – but I tend to think that was a good thing. Most of those roadmaps are fiction anyway, designed to alleviate collective angst over pervasive uncertainty. Life doesn’t lend itself to nice clear lines, and the less time we spend trying to line up our futures, the more time we have to work toward the future we want.

🔄 Close the gap between hypothesis and test

Second, we need to actively work to reduce the gap between making a hypothesis and testing it. A useful framing here is the OODA loop: make sure that the speed of our testing of hypotheses matches the speed of the environment. If we plan to ship a product in a year, yet the space into which this product will ship changes every month, we are likely to be disappointed in the outcomes of our hypotheses.

Naturally, different technologies lend themselves to varying degrees of wiggle room here. Hardware traditionally gravitates toward longer gaps between the initial hypothesis and its test. Software tends to offer more flexibility. Some markets are less forgiving of chance-taking than others. Make choices carefully here: sometimes it’s worth moving things to software to speed up the OODA loop, or playing in an adjacent, more chance-rich field to test things out.

Whatever choices we make, we are better off when we make contact with our customers as quickly as possible. Only they can inform us about the future direction of our technology and its potential. Only in cooperation with them can we build a product that actually works.

Instead of fretting about getting it right the first time, opt for small, incremental releases whose main purpose is learning. Think of it as a series of tiny, controlled explosions in a combustion engine, rather than one Big Bang. Ship small things that build up to the big thing, not the other way around.

Learn to set user expectations low. Instead of large splashes and announcements, release quietly and improve quickly. View shipping as a marathon, where our customers are surprised by consistent improvements (“you know, I never thought of this, but <product> has grown to be really great”), not by a flash of dramatic discovery. Apply the principle of least astonishment liberally.

This stance will feel counterintuitive and wrong. We technologists love that image of a lone man on stage in a black turtleneck, blowing people’s minds. This is not what we need to optimize for. In fact, if we do optimize for it, we will likely never get there. Instead, focus on maximizing the number of chances to get it right. Ship early and often, taking the time to observe and orient between each chance.

Beyond pros and cons

Among engineers and product managers, the pros and cons document is a fairly common practice. The practice calls for outlining a problem statement that defines the scope of solutions, then listing potential solution options, with two lists for each option: a list of pros and a list of cons. It’s a fairly straightforward and easy-to-follow pattern.

Having observed it in the wild, I’ve noticed that this pattern tends to yield somewhat unsatisfying outcomes. Here are some typical failure cases.

In one failure case, as we add the pros and cons, we somehow lose sight of what’s important: the lists just grow with various things we observe about the options, making comparison of options an increasingly arbitrary process. I have seen several of these exercises stall because the pros and cons list became unwieldy and daunting.

Another failure case comes from the flattening that any list forces on decision-making. Among all the options, there are usually one or two that stand out as preferred solutions to anyone familiar with the matter, and the other options are there to make a crowd. It’s kind of like a police lineup. Another variation is where the listed options include solutions that aren’t feasible, however desirable they may be. A police lineup with unicorns, I guess.

Sometimes, options look either too similar or too dissimilar to each other, which deflates the decision-making process, and an endless debate emerges. Folks who prefer their option keep beefing up its pros and diminishing its cons. Instead of being a helpful tool, the list becomes an arena of organizational dysfunction.

Let’s give pros and cons a makeover. I’ve been playing with a rev on the practice and it seems to work more effectively. Here’s what I am doing differently.

First, start with the principles. What are the attributes of the solution that are important to us? What are the desired properties that we want the solution to have? Try to limit the number of principles: three to five is a goldilocks range. This might take a little thinking and wrangling of often-conflicting wishes. If you’re new to the concept of principle-crafting, check out this handy guide I wrote a while back.

The process of discerning the principles is useful in itself, since it leads to better understanding of the problem and possible solutions. This might not feel like problem-solving, but it’s actually the bulk of it. Knowing what’s important in a solution is the key to finding it.

Using principles we’ve just devised, imagine the ideal solution. If the principles are sound, it will just pop out. We should be able to articulate what it is we’re looking for. Write it out, draw a mock, etc. Principles and the ideal solution don’t have to be approached sequentially: sometimes I have an intuition for an ideal solution first, and then have to articulate why with principles.

If we’re lucky, this will be the end of the journey, because the ideal solution is something we can already do. If so, let’s do it!

If not, we move on to studying headwinds. What makes it hard for us to implement this ideal solution? List each reason as a separate headwind, and note which principle it undermines. Listing these requires a high level of candor and may be uncomfortable, especially in team cultures that overvalue being agreeable. For example, it might be that our team does not have the right skills, or that the infrastructure we currently use is not a good fit. It is okay to have weaknesses. Knowing our weaknesses helps us make better decisions.

Now, let’s look at the alternatives. This is the familiar step where we generate a list of options. Since we’ve already thought about the headwinds, we can lean on them to come up with better options. Which solutions will be resilient to the headwinds? Which ones will manage to avoid them by compromising on the ideal?

Instead of pros and cons, evaluate how far the solution is from our imagined ideal. Think of it as taking on a “principle debt”: how much are we violating our own principles to solve this problem? The farther the solution is from the ideal, the more work we will have to do to get there in the future. At this step, the key is not to get mired in pros and cons. Instead, play the outcomes of the solution forward and understand its effects. How far from the ideal will they take you?

Pick the option that accumulates the least amount of debt. This will no longer be a daunting task. At this point, we’ve done all the thinking, and will have much better clarity on the matter. 
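As a sketch of how the “principle debt” comparison might look when written down, here’s a toy example in Python. The principles, options, and scores are all invented for illustration; the point is only the mechanics of picking the option with the least debt.

```python
# Sketch of a "principle debt" evaluation. Principles, options, and
# scores are made up; plug in your own.

PRINCIPLES = ["simple to operate", "low latency", "easy to extend"]

# For each option, estimate how far it deviates from the ideal on each
# principle: 0 = fully upholds it, 3 = badly violates it.
options = {
    "rewrite the service": {"simple to operate": 1, "low latency": 0, "easy to extend": 0},
    "patch the old code":  {"simple to operate": 0, "low latency": 2, "easy to extend": 3},
    "buy a vendor tool":   {"simple to operate": 2, "low latency": 1, "easy to extend": 2},
}

def principle_debt(scores: dict[str, int]) -> int:
    """Total deviation from the ideal across all principles."""
    return sum(scores[p] for p in PRINCIPLES)

best = min(options, key=lambda name: principle_debt(options[name]))
for name, scores in options.items():
    print(f"{name}: debt={principle_debt(scores)}")
print("least debt:", best)
```

The scoring here is deliberately crude; the value of the exercise is in arguing about the scores, not in the arithmetic.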

Finally, document the ways in which the solution we’ve chosen would not live up to our principles. Again, blunt candor is preferred here. Each of the items in this list is a bug to be fixed, our “principle debt” payment schedule. Set an intention to fix them over time.

Don’t skip this last step. Most solutions are compromises. If we found ourselves at this step, we picked something that deviates from the ideal solution. Knowing the shortcomings of the solution is our roadmap, a path to iterate toward what we actually want.

If you’re up for it, give this method a try. Let me know how it works for you. I would love to learn about ways to improve it.

Growth and control biases

When we don’t recognize that growth and control are in tension with each other, we tend to develop a bias that prevents us from spotting problems with the decisions we make. To better make sense of how this bias shows up – and how to account for it – I will rely on the dandelions and elephants framing. As you may recall, the whole point of dandelions is growth, and elephants are all about holding on to value in a well-defined niche.

The more dandelion-like the environment, the more likely we are to exhibit bias toward control, having higher than warranted confidence in our ability to predict the shape of the product niche that is yet to be discovered. This bias is entrenched in the usual “brilliant engineer/scientist” stereotype: the plucky hero who can see the solution way before anyone else. My experience is that these brilliant minds are narratives: they are stories of the lucky survivors whose being in the right place at the right time is reinterpreted – and entrenched in myth – as some superhuman ability.

In a dandelion environment, everyone is just stumbling around to find out. The less firmly we hold on to our conceptions about the final form of the product, the more we invite serendipity. The more optionality we embed into our own stumbling around, the more likely we are to hit the vein of something interesting. The early pivots of products that led to Flickr and Slack are great examples of doing this well. Poor Stewart Butterfield. He keeps trying to build a video game, and these successful businesses pop out instead.

One typical pattern emerging from the control bias in a dandelion environment is placing early big bets. When we see a large organization spun up for a focus area whose shape is not fully known, we are likely seeing the “control as cost” scenario: the mere size of the organization and the need to justify its existence will prevent it from embracing serendipity. The book Showstopper! is a great read for an inside story of the suffering this anti-pattern causes. I call these kinds of products “elephantine dandelions” – they are simply too big to be agile among other dandelions in the wind of change, yet they continue to exist, buoyed by their size.

Conversely, the more elephant-like our environment, the more likely we are to bias toward growth, presuming that it is something we can simply rely on. There are a couple of patterns here that I am familiar with. One shows up as a belief that continued growth is the organization’s birthright – despite the fact that most of the effort is expended eking out the last bits of value under the curve in the well-defined niche. Thankfully, Clayton Christensen’s concept of disruptive innovation provides an antidote to this bias – though given that the pattern keeps repeating across all industries, I am not sure how effective this antidote is.

Another pattern of growth bias is what I call the “never-ending long tail.” Every so often, a new contender comes up to disrupt a well-established niche, with the tantalizing promise of breaking through the mold of the well-optimized niche. In such cases, it’s very important to watch for the long tail of consumer expectations: are they still there? When the newcomer shows promise and inspires a compelling new vision of what’s possible, are they playing in an entirely new space, or are they still on the hook to deliver on all existing, mostly implicit contracts with the customer?

A good example here is the emergence of new Web rendering engines. I have seen a couple of announcements crop up here and there, and I am totally rooting for them as a fellow engineer, but as a strategist, I have very little hope that they’ll survive under the crushing weight of all the quirks and barnacles that a modern Web rendering engine must support. After initial success and admiration of colleagues (“look! it’s so much faster than others!”), a never-ending long tail of conforming to the same shape within the same niche awaits them.

The tension between growth and control

I alluded to this tension before, but it was somewhat buried in a fairly obscure context of compounding loops. Here’s a somewhat unkempt riff on the concept.

When introducing a product into the hands of our customers, we will nearly always face the tension between growth and control. Here, growth reflects the number of people who choose to use our product, and control is the degree to which we get to decide what this product will be and how it might be used.

Basically, think of these as two extreme ends of a line that can never move close to each other. We can travel along the line with our product strategy, but never bend it to make control and growth meet. We can either lean toward growth and thus relinquish more control, or we can lean toward control and thus give up on more growth.

Interestingly, the quests for growth and control both stem from the same intention toward value. We seek growth when we are looking to acquire value, and we seek control when we seek to protect it. And almost always, we want to have both: we are intent on holding on to some value and are looking to acquire more. Truly blessed are those who don’t have attachments: they are blissfully unaware of this tension. If one has nothing to lose, they have no need to grow. For the rest of us, the growth/control tension is something we experience every day.

For example, if we’re shipping an API, we usually want its usage to go up (growth) and we want developers who use this API to only use it the way we want to (control). When we imagine a new service, we want it to be enjoyed by as many users as possible (growth), yet we also have very firm ideas on what we want to build (control).

Often, control isn’t something we have due to value we already hold, but rather due to a debt we carry. Writing software or making hardware takes time. In the gap between us starting the project and it becoming available to our customers, we crystallize our ideas into code and circuitry. Think of this crystallization as a form of control: we make decisions about what the product should be, and by making these decisions we exercise control over its shape. 

This crystallization is also unrealized value – we won’t start accruing this value until the product is released to the users. When we do, many of these decisions are somewhat irreversible: even if we want to relinquish control and adapt our product to stimulate its growth, we can’t do so easily. Especially with hardware, a single decision made somewhere in the depths of the multi-year gap between ideation and shipping can severely limit a product’s capacity to grow.

The outcome of growth has a similar trappy quality. If our product is blessed with exponential growth, after reaping its benefits for a while, we will eventually arrive at the weirdest situation. No matter how much we try, we can’t do anything but optimize for what is already expected by our massive user base. Any attempt at deviation triggers the allergic reaction of irate customers: “why are you moving my cheese and how soon can you put it back?” In this trap of successful growth, control is no longer ours, no matter how much we try to wrestle for it.

Degrees of Open Source

I realized that I never captured what I learned about the nature of open source projects anywhere, so here’s one attempt to rectify this omission.

The way I understand the concept of “open source” is that it’s a way of running a software project. Within this definition hide a gazillion options, and I will try to lay them out as a spectrum. Understanding these options as a spectrum has helped me quite a bit in reasoning about software project strategy, and I hope this will be helpful for you, too.

As bookends for our spectrum, let’s position two radical alternatives: fully closed-source and fully open-source projects. As an example that might resonate with Web developers, the late Internet Explorer fits very close to the fully closed-source extreme, and Firefox sits at the other end of the spectrum.

To place other projects on the line between these bookends, we need a couple of notches. I will define these notches loosely and give them numbers. Then, for any project you’re evaluating for its degree of open source, you can use the whole-number part as a rough estimate of where the project sits within the spectrum, and the decimal part to adjust more finely.

We will also have two numbers, both located on the same scale. One number will reflect visibility and the other participation. Project visibility describes how much the general public can see into the project, and participation describes how much anyone can engage with the project. So, to describe Internet Explorer, we can firmly give it two zeroes: zero for visibility and zero for participation, or v0p0. I will use this convention from here on: a string of “vNpM”, where N indicates the grade of visibility and M indicates the grade of participation for a project.

How high can these numbers go? I propose that we use a what/how/why framing; it seems to fit well. Think of the spectrum that defines the degrees of open source as separated by three veils of invisibility.

🏠 The what

The first veil is the “what” veil. Outside of it, all you get is the final artifact – the product that resulted from compiling/running/hosting the source code. Anything that’s outside of this veil gets a grade of zero (0) on our scale for both visibility and participation.

Once this veil is lifted, we can start seeing into the “what” of the project. For example, we could start seeing the source code itself, even if it’s just some file with source code in it. We can examine what was written, and try to understand what the code does. Anything that’s between this veil and the next gets a one (1) in visibility. 

To evaluate participation, we ask the question: “how can a random developer who comes by this code provide feedback on it?” For example, does the project have the means to allow commenting on code, making suggestions, or filing bugs? A good example of an OSS project at v1p0 is Apple’s macOS open source distributions on GitHub: you can see the code, but there’s literally nothing else you can do to participate. Should this project gain some means of communication (a mailing list, a Discord server, a bug tracker, etc.) that allows the general public to contribute their insights, it would move to v1p1.

🏗️ The how

The next veil is the “how” veil, outside of which we can see what the source code is, but can only guess about how it’s been written.

After we lift this veil, we can start seeing how the code is being written. We can start getting a sense of the units of work (bugs, commits, etc.) and how they come together as the source code. Inside this veil, we start seeing things like actual commits by actual contributors, rather than automated bot code dumps. Importantly, we start seeing how these commits are connected to particular bugs and feature requests, and the actual engineering/product discussions about them are revealed.

Participating at this level means that there is a path for anyone to introduce their own units of work into the project: to start contributing to these discussions, to start and guide them, to create pull requests and have a path to merge them, etc. Depending on the kind of setup the project chooses, there are many different ways to do so, but the key property of achieving this degree of open source participation is that anyone can walk up to the project and contribute meaningfully to how the project’s source code comes together.

Projects that have visibility and/or participation beyond the veil of “how” will have a grade of two (2). For example, a v2p1 project will have its bug tracker and commits open to the public, but no way to actually become a committer to the project, outside of filing bugs or providing ad hoc feedback. Lots of open source projects that come from large companies are run this way. With a v2p1 project, I as a random developer can check out the code, follow the steps to get it running, and even fork it. What I can’t do is join the circle of project developers and say things like: “Oh, I really love what you’re doing with <foo>! Here’s a PR to refactor <foo> for clarity”.

To get to v2p2, a project must offer a clear path to contribution. Typically, there’s a section on the project site that describes how one can start contributing, outlining the development environment and the basic processes that go into the “how”. Apple’s WebKit is a great example of such a project, with a clear outline of how to get started with the source code and how to begin to contribute. There’s even a diagram outlining the process flow, as well as the famously elegant code style guidelines. Effectively, it’s a batteries-included participation toolkit: as much as possible, the tacit knowledge of the team is documented clearly and concisely. If one has the necessary programming skills, they can jump right in.

📐 The why

The final veil is the “why” veil. Beyond that veil, we find the project’s motivations and strategy. Why is this project even here? What are the leaders of the project thinking? Why are we choosing to work on this unit of work and not the other?

This is the realm of project governance. Giving the general public the ability to understand (visibility) and/or influence (participation) project direction is the ultimate degree of openness for a project, and we give it the degree of three (3) on our scale.

Many open source foundations take this approach. It’s not for the faint of heart, since the “why” can contain significant value. A project that adopts a v3p2 stance is effectively accepting that the value of the project is not within the confines of the organization that runs it, but rather in the community that gathers around the project.

Such projects will openly state their intentions, outline their strategy and approach, and eagerly look for feedback. The whole idea is to lean onto the wisdom of the crowd to get it right.

An even more radical stance is v3p3, where there is a path for anyone to contribute to the governance itself – not just evaluating and helping shape directions and priorities, but also change them. In projects like this, in addition to clear roadmaps and open strategy, the governance is fully transparent, and there isn’t a point at which the capacity to make decisions goes into a tunnel.

✨ The Open Source Scale

Now we have our four-grade scale, with the numbers going from zero (0) to three (3). When visibility and participation are combined, they tend to follow a bit of a pattern: visibility runs one step ahead of participation, with participation catching up.

Are there other combinations of grades? Are there v3p0 or v0p2 projects out there? Maybe, although my experience tells me they are rare. Typically, a successful open source project has only one degree of difference between the grades, with participation trailing visibility. One can’t participate at a certain grade without having visibility at the same grade. Similarly, setups where participation trails by too much (like v2p0) are rarely sustainable: eventually the project owners arrive at the “what’s the point?” question and either close down visibility or open up participation further.

The key insight for y’all: the difference between the two grades nearly always results in additional overhead and friction. The larger the difference, the more likely it is that the project will be overwhelmed by this overhead, which will inevitably lead to a change in its grade.
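To make the convention concrete, here’s a small Python sketch that parses a vNpM grade and applies the rules of thumb above: participation can’t exceed visibility, and a gap of more than one grade tends to be unsustainable. The function names are mine, invented for illustration.

```python
import re

def parse_grade(grade: str) -> tuple[float, float]:
    """Parse a 'vNpM' grade string into (visibility, participation).

    Decimal parts are allowed, per the fine-adjustment convention.
    """
    m = re.fullmatch(r"v(\d+(?:\.\d+)?)p(\d+(?:\.\d+)?)", grade)
    if not m:
        raise ValueError(f"not a vNpM grade: {grade}")
    return float(m.group(1)), float(m.group(2))

def looks_sustainable(grade: str) -> bool:
    v, p = parse_grade(grade)
    # Participation can't exceed visibility, and a gap of more than
    # one grade tends to collapse under its own overhead.
    return p <= v and (v - p) <= 1

for g in ["v1p1", "v2p1", "v2p0", "v1p2"]:
    print(g, looks_sustainable(g))
```

This is only a mnemonic for the heuristic, of course; real projects drift between grades over time rather than sitting at a fixed point.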

So, when you’re planning a new open source project, please make sure to:

  • clearly understand the grade you’re starting with;
  • if this grade is unsustainable, understand the grade you’re aiming at and devise strategies to get there as quickly as possible;
  • put in place project operating principles that hold it within this grade.

Hope this helps and good luck on your adventures in open source! 

How to think and what to think

There’s this gap that I’ve been struggling with for a long time, and I often feel like I am no closer to finding a way to bridge it. It’s a gap between nuance and action. I’ve talked about many variants of problems that this gap causes, but at their core, they all share the same dynamic.

Very loosely, the dynamic is this: understanding nuance requires so much work on top of the important, urgent work that already needs to be done that the folks who need this nuance rarely have the energy or time to acquire it.

Suppose you have invested a bunch of time into studying a problem space – like growing developer ecosystems, for example. You have acquired admirable depth of nuance in this problem space. You understand how APIs evolve. You understand the fundamentals of layering opinions. All that good stuff. Now, you would like to convey this depth to others. These others are typically your team, but could be just random people on the Internet.

You choose the option of carefully writing words and drawing diagrams to describe the nuance in the problem space as concisely and clearly as you can. Then you share the resulting artifact with your team. And… nothing happens. At best, people diligently flip through your artifact and go: “This is really nice. But why so abstract? How does this translate to action?”

This question has become somewhat triggering for me and my fellow ecosystem adventurers. Nuance often feels too abstract, even to a most receptive audience. 

I wonder if it’s because nuance points at how to think, rather than what to think. This might be an important distinction.

We act on what we think, so “what to think” is close to the ground, and an artifact that outlines what to think will tend to feel more concrete. Do this. Don’t do that. Go here. Stay there.

All the while, the “what we think” is situated inside of how we think. Put differently, how we think defines all possibilities for action. When we change “what we think”, we just pick another of those possibilities. When we change “how we think”, we start seeing another set of new possibilities. Until then, our field of view is limited to the initial set.

Because nuance affects how we think, it will nearly always feel detached from action. Modifying “how we think” means introducing different mental models, a whole new space from which new actions could be drawn. And there’s a lot of effort that needs to go into accommodating these new mental models, integrating them into one’s understanding.

Only after that happens will new actions emerge – but there will be an inevitable delay between conveying the nuance and the actions appearing. I bet there was a puzzled look on the face of the proverbial man who was expecting a fish, and instead got a fishing rod thrust into his hands and an invitation to walk to the lake.

When everyone is busy to the point of delirium, stopping to learn how to fish feels both impossible and foolish. So what do we do?

I don’t know. But I will keep trying to find different ways of sharing nuance with others – and of course, keep sharing what I learned with you.

Edgewise Thinking

I stumbled into this framing when talking about product/platform thinking with my friends. To put this framing together, we have to employ a certain way of thinking about software products – viewing them as a graph.

Imagine a graph – a bunch of vertices and edges that connect them. In this particular graph, the bits of software we write are represented by vertices, and the formats, protocols, and APIs we use to glue these bits together are represented by edges. Both are important and necessary.

Represented like this, the two groups (the software and the glue) can be discussed separately. For example, when I talk about an API to get some data (like current time) from a particular server, I can see what appears to be one thing – the API to get current time – as two distinct entities. In our graph representation, there will be the vertex representing the code that is required to obtain current time and format it according to some requirements, and the edge, which represents the actual API contract that defines the requirements to be implemented.

C++ has this separation built into the language: to write useful code, a developer has to produce both the header file (the edge) and the source file (the vertex). Other languages have the construct of an interface that serves a similar purpose, though most of the time, this distinction between edges and vertices is more of a practice. One of the more familiar practices is test-driven development (TDD), where we are encouraged to first describe the behavior of the code we want to write in tests, and only then write the code itself.
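To illustrate the split in code, here’s a minimal Python sketch of the current-time API from the earlier example: an abstract interface plays the role of the edge, and a concrete class plays the role of the vertex. All the names here are invented for illustration.

```python
from abc import ABC, abstractmethod
from datetime import datetime, timezone

# The edge: the contract, written first. It says nothing about how
# the time is obtained, only what callers can rely on.
class TimeSource(ABC):
    @abstractmethod
    def current_time_iso(self) -> str:
        """Return the current UTC time as an ISO 8601 string."""

# The vertex: one concrete implementation of the contract.
class SystemClock(TimeSource):
    def current_time_iso(self) -> str:
        return datetime.now(timezone.utc).isoformat()

def report(source: TimeSource) -> str:
    # Code written against the edge works with any vertex:
    # a system clock, a network time server, a test fake, etc.
    return f"server time: {source.current_time_iso()}"

print(report(SystemClock()))
```

The payoff is that `report` never needs to change when we swap the vertex behind the edge.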

This distinction reveals two different thinking modes that appear to be complementary, yet independent of each other: vertex-wise thinking and edgewise thinking.

When I have my vertex-wise thinking hat on, I am usually contemplating the actual building blocks of the thing I intend to make. This mode is very common and a bit of a habit for software folks: when asked “what are we building?”, the answer shows up as a list or a matrix of things, a to-do list of sorts. “To get <foo>, I will need <bar> and <baz>” and so on. This approach is super-familiar to any maker of things, professional engineer or not. In this thinking mode, the vertices are at the center of attention and the edges are an emergent artifact. We are focused on constructing the blocks; how they are connected is something we figure out as we go.

The situation inverts with edgewise thinking. When I have the edgewise hat on, I am primarily focused on the edges. I want to understand how the vertices connect – before figuring out what they’re made of. Here, the glue between the blocks is the subject of my attention, and the blocks themselves are an emergent outcome. When asked “what are we building?”, the answers that the edgewise thinkers give usually come in the form of control flow or other kinds of interaction diagrams, defining how the vertices will communicate with each other. Edgewise thinkers obsess about formats and protocols.

One of my mentors had a habit of designing through mocks: sketching out an entire project as a collection of dummy classes that do the absolute minimum amount of work needed to glue together. You could build this project and run it, and it would actually print out sentences like “contacting bluetooth stack” and “getting results” – none of it real: the actual functionality was to be implemented later.

Such edgewise thinking exercises allowed us to see—and mess with—the shape of the entire thing long before it existed. The mocks served as the edges. Once we figured them out, vertex work was just a matter of typing out code that conformed to the shapes the edges defined.
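As a sketch of what such a mock skeleton might look like (in Python rather than my mentor's C++, with invented class names), consider:

```python
class BluetoothStack:
    """Mock vertex: does the bare minimum needed to participate in the flow."""

    def connect(self) -> str:
        return "contacting bluetooth stack"


class ResultsFetcher:
    """Mock vertex: stands in for the real data-fetching code."""

    def fetch(self) -> str:
        return "getting results"


def run_pipeline(stack: BluetoothStack, fetcher: ResultsFetcher) -> list[str]:
    # The real design artifact is here: the order and shape of these calls –
    # the edges – which the eventual implementations must conform to.
    return [stack.connect(), fetcher.fetch()]


for line in run_pipeline(BluetoothStack(), ResultsFetcher()):
    print(line)
```

The whole thing builds and runs, yet does nothing real: only the edges – the call sequence in `run_pipeline` – carry the design.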

The inevitable question in an article of this sort arrives: which one is better? It depends on the situation. A colleague of mine captured the rule-of-thumb well: edgewise thinking tends to lead to divergent effects and vertex-wise thinking to convergent.

Vertex-wise thinking works exceptionally well – and is called for – when we know pretty well what we’re looking for as the end result, and the biggest obstacle we’re facing is proper sequencing of work. Program managers can do wonders turning lists and matrices of building blocks into burn-down charts and shipping schedules, and help the team click through the milestones.

Edgewise thinking can’t compete with that. In fact, folks who favor edgewise thinking tend to struggle with calendars. However, one of the superpowers of edgewise thinking is the ability to surprise ourselves. If we leave vertices as an emergent outcome, we create the potential for something we didn’t anticipate: different ways of using the edges to build a whole different thing.

For example, imagine that we are Tim Berners-Lee back in the 1980s and we want to organize the documents across departments at CERN. Applying vertex-wise thinking, we end up with a pretty cool database that any CERN member can use. Applying edgewise thinking, we end up inventing the Web.

Platform and ecosystem work is first and foremost edgewise thinking. We start by imagining how various bits will connect and what the value of these connections might be. We find ways to ensure that this value is preserved and increased by designing the edges: interfaces, formats, and protocols that define the connections. We intentionally leave the vertices blank and invite ecosystem participants to invent them by relying on the connection mechanisms we designed. 

For me, a really useful marker in a software design conversation is when the notion of “reusable building blocks” comes up. It’s a good sign of thinking-hat confusion: the participants are engaged in vertex-wise thinking when they need to think edgewise. A “reusable building block” is not really a block, but rather the glue that makes it reusable. At this point, it might be good to pause, reorient, and switch hats: discuss interfaces, formats, and protocols, rather than the actual code that will fill out the block.

The Wanderer

Here’s another little experiment I built over the break. Like pretty much everyone these days, I am fascinated by the potential of applying large language models (LLMs) to generative use cases. So I wanted to see what’s what. As my target, I picked a fairly straightforward use case: AI as the narrator.

While I share in the excitement around applying AIs as chatbots and pure generators of content, I also understand that predictive models, no matter how sophisticated, will have a hard time crossing the chasm of the uncanny valley – especially when pushed to their limits. As a fellow holder of a predictive mental model, I am rooting for them, but I also know how long this road is going to be. We’ll be in this hollow of facts subtly mixed with cheerful hallucination for a while.

Instead, I wanted to try limiting the scope of where LLM predictions can wander. Think of it as putting bounds around some text that I (the user) deem as a potential source of insights and asking the AI to only play within these bounds. Kind of like “hey, I know you’ve read a lot and can talk about nearly anything, but right now, I’d like for you to focus: use only these passages from War and Peace for answering my questions”.

In this setup, the job of an LLM would be to act as a sort of, well… a language model! This is a terrible analogy, but until I have a better one, think of it as the speech center of a human brain: it possesses incredible capabilities, but when detached from the prefrontal cortex, it produces conditions where we humans just babble words. The passages from War and Peace (or any other text) act as a prefrontal cortex, grounding the speech center by attaching it to the specific context in which it offers narration.

Lucky for me, this is a well-known pattern in current LLM applications. So, armed with a Jupyter cookbook, I set off on my adventure.

A key challenge that this cookbook overcomes is the limited context window of modern LLMs. For example, GPT-3 can only hold just over 4,000 tokens (pieces of words), which means that I can’t simply stuff all of my writings into the context. I have to be selective: pick the right chunks out of the larger corpus to create the grounding corpus for the narrator. For example, if I have questions about Field Marshal Kutuzov eating chicken at Borodino, yet I give my narrator only the passages from Natasha’s first ball, I will get unsatisfying results.
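The cookbook does this with embeddings; purely to illustrate the shape of the chunk-selection step, here is a deliberately naive sketch that scores chunks by word overlap with the question and packs the winners into a token budget (both the scoring and the token counting are my simplifications, not the cookbook's actual approach):

```python
def _words(text: str) -> set[str]:
    """Lowercased words, with trailing punctuation stripped."""
    return {w.lower().strip(".,?!") for w in text.split()}


def score(chunk: str, question: str) -> int:
    """Naive relevance: count the words shared between chunk and question."""
    return len(_words(chunk) & _words(question))


def select_chunks(corpus: list[str], question: str, budget: int) -> list[str]:
    """Pick the highest-scoring chunks until the token budget runs out."""
    ranked = sorted(corpus, key=lambda c: score(c, question), reverse=True)
    picked: list[str] = []
    used = 0
    for chunk in ranked:
        tokens = len(chunk.split())  # crude stand-in for real tokenization
        if used + tokens > budget:
            break
        picked.append(chunk)
        used += tokens
    return picked
```

A question about Kutuzov at Borodino would then pull Borodino passages into the context, and leave Natasha's first ball behind.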

I won’t go into the technical details of how that’s done in this post (see the cookbook for the general approach), but suffice it to say, I was very impressed with the outcomes. After I had the code running (which, by the way, was right at the deca-LOC boundary), my queries yielded reasonable results with minimal hallucinations. In the stance of a narrator, an LLM’s hallucinations look somewhat different: instead of presenting wrong facts, the narrator sometimes elides important details or overemphasizes irrelevant bits.

Now that I had a working “ask ‘What Dimitri Learned’ a question” code going, I wondered if the experience could be more proactive. What if I don’t have a question? What if I didn’t know what to ask? What if I didn’t want to type, and just wanted to click around? And so the Wanderer was born.

Here’s how it works. When you first visit, it will pull a few random chunks of content from the whole corpus of my writings, and ask the LLM to list some interesting key concepts from these chunks. Here, I am asking the narrator not to summarize or answer a specific question, but rather discern and list out the concepts that were mentioned in the context.

Then, I turn this list of concepts into a list of links. Intuitively, if the narrator picked out a concept from my writings, there must be some more text in my writings that elaborates on this concept. So if you click this link, you are asking the wanderer to collect all related chunks of text and then have the narrator describe the concept.

Once you’ve read the description provided by the Wanderer, you may want to explore related concepts. Again, I rely on the narrator to list some interesting key concepts related to the current one. And so it goes.
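The whole loop fits in a short sketch. To keep it self-contained, a trivial stub stands in for the narrator (the real Wanderer makes an LLM call here); everything else – sample chunks, discern concepts, turn concepts into links – follows the steps above:

```python
import random


def narrator_concepts(chunks: list[str]) -> list[str]:
    # Stub standing in for the LLM call: pretend every capitalized word
    # is a "concept" the narrator discerned in the given context.
    found = {w.strip(".,") for c in chunks for w in c.split() if w[0].isupper()}
    return sorted(found)


def related_chunks(corpus: list[str], concept: str) -> list[str]:
    # Clicking a concept link gathers every chunk that mentions it.
    return [c for c in corpus if concept in c]


def wander(corpus: list[str], seed: int = 0) -> dict[str, list[str]]:
    # Step 1: pull a few random chunks; step 2: discern concepts;
    # step 3: each concept becomes a link to its related chunks.
    rng = random.Random(seed)
    sample = rng.sample(corpus, k=min(2, len(corpus)))
    return {c: related_chunks(corpus, c) for c in narrator_concepts(sample)}
```

Note that nothing is stored: each call to `wander` conjures a fresh set of links, which is exactly why refreshing the page can give a different graph.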

When browsing this site (or “wandering”, obvs), it may feel like there’s a structure, a taxonomy of sorts that is maintained somewhere in a database. One might even perceive it as an elaborate graph structure of interlinked concepts. None of that exists. The graph is entirely ephemeral, conjured out of thin air by the narrating LLM, disappearing as soon as we leave the page. In fact, even refreshing the page may give you a different take: a different description of the concept or a different list of related concepts. And yes, it may lead you into dead ends: a concept that the narrator discerned, yet fails to describe given the context it has.

The overall experience is surreal and – at least for me – highly generative. After wandering around for a while, it almost feels like I am interacting with a muse that is remixing my writing in odd, unpredictable, and wonderful ways. I find that I use the Wanderer for inspiration quite a bit. What concepts will it dig up next? What weird folds of logic will be unearthed with the next click? And every “hmm, that’s interesting” is a spark for new ideas, new connections that I haven’t considered before.

No ship ums

I’ve been thinking about the path that software dandelions take toward becoming elephants, and this really interesting framing developed in a conversation with my friends.

Software dandelions are tiny bits of software we write to prototype our ideas. They might be as small as a few lines of code or a few deca-LOCs, yet they capture the essence of some unique thought that we try to present to others. In the highly unlikely event that this dandelion survives this contact, I am usually encouraged to tweak it, grow it, and incorporate insights. Through this process, the dandelion software becomes more and more elephant-like.

If I imagine idea pace layers (the concept from slide 55 onward) and draw a line from the outermost, full-dandelion meme layers to the innermost, absolute-elephant ones, this line becomes the course of the idea maturity progression. As my dinky software traverses this course, it encounters a bunch of obstacles.

As I was looking at this progression, I realized that the environment in which I develop this software may impose constraints on how far my software can travel along this course – before it needs to switch to another environment.

One such constraint is what I called the “no ship ums”, kind of like those tiny bugs (yes, I am continuing that r/K-selection naming schtick). In a development environment like that, I can develop my idea, make it work, and even share it with others, but I can’t actually ship it to potential users. A typical REPL tool is a good example of such an environment, with Jupyter/Colab and Observable notebooks being notable examples. While I could potentially build a working prototype using them, I would be hard-pressed to treat it as something I would ship to my potential customers as-is.

Why would one build such a boundary into an environment? There are several reasons. When shipping is out of the question, many of the critical requirements (security and privacy, for example) are less pronounced. An environment can be more relaxed and better tailored for wild experimentation.

More interestingly, such a barrier can be a strategically useful selection tool for dandelions. When I build something in a notebook, I might be excited that it works – yay! To get to the next stage of actually seeing it in the hands of a customer, the barrier offers the opportunity to pause. As the next step, I will need to port this work into another environment. Will it be worth my time? Is my commitment to the idea strong enough to put in the extra work?

On the other hand, there are definitely folks who would look at the paragraph above and say: “are you crazy?!” Putting a speed bump of this sort might appear to be the worst possible move. If we are to help developers reach new, unexplored places, why would we want to add extra selection pressure? My favorite example here is the “Always On” switch. Like where this experiment is going? Just flip the switch and make it real.

So… who is right? My sense is that the answer depends on the context of what we’re trying to do. Think of the “no ship ums” as a selection pressure ratcheting tool. Anticipate hordes of developers entering your environment and launching billions of tiny gnats? Consider applying it. Unsure about the interest and worried about growth? Maybe try to avoid the no ship ums.

Dandelions and Elephants, Illustrated

Over the holiday break, I made a thing for you to enjoy. It’s an illustrated deck, similar to the one I’d done for Adult Development Theory. It’s about Dandelions and Elephants:

Of all the framings and lenses that I came up with along my path through 2022, this one was by far the most resonant and interesting. I still get new insights from studying it. This deck is an invitation for you to do the same. I tried to write it as concisely as possible, yet leave enough room for nuance – and of course, a lot of room for further exploration.

The deck is meant to be consumed at your own pace, and some slides are denser than others. I think of it as a kind of holiday treat tin: some items are rich truffles and some are one-bite wafers. Give it a try.

The deck consists of four chapters. The first chapter introduces the dandelions and elephants framing, roughly following my exploration from earlier in 2022.

The second chapter zooms out, asking a larger question: what makes up the environments that lead to dandelion or elephant idea formation?

The third chapter expands on the idea of tensions that I mentioned in my writing, and tries to make a more cogent argument about the forces that tend to batter us when we engage in persistent idea generation.

The fourth and final chapter ties the framing to pace layers. This one probably has the most new ideas: ones that I haven’t expressed elsewhere, or that I’ve talked about but haven’t connected to the original framing.

Throughout, I tried to use concrete examples to ground the ideas. I recognize that the concepts might still feel somewhat abstract, especially outside of the software and organizational development situations that I’ve described. So here’s another invitation: if you’re curious about the concepts, but are struggling to apply them in your particular environment, please drop me a note. My email is at the end of the deck. I’d love to hear from you.