I’ve been writing a bit of Web components code recently and used Shadow DOM. I am realizing that there’s a fairly useful pattern in incorporating Shadow DOM into Web apps that I will hereby name the “Shadow Gristle”.
First things first. If you don’t like Shadow DOM for one reason or another, this is not an attempt to convince you otherwise. If you have no idea what Shadow DOM is, this will be just a few paragraphs of gobbledygook. Sorry. However, if you do find yourself dabbling with the ye olde Shadow DOM even occasionally, you might find this pattern useful.
Very simply put, the idea is that we only put the necessary scaffolding code into the Shadow DOM, and leave most of our application code in the light DOM.
When we have the power of Shadow DOM at our fingertips, we have two choices for where to grow our subtree of DOM elements: inside of the shadow tree (in the Shadow DOM), or outside of it (in the regular DOM).
So if we want to add another component as a child of our Web component, how do we decide which of the two places it should go into?
My intuition is that placing a child component into a shadow tree is a code smell. It indicates that we might have lessened our ability to compose elements. There are probably perfectly good reasons to put a component into a shadow tree, but more often than not, it’s probably not the right place.
Child components love light. If they stay in the regular DOM, they remain composable. I can rearrange them or replace them without having to muck with the innards of my component.
Thus, the rule of thumb is: seek to place child components into the regular DOM. Reduce occurrences of them being added to the shadow tree.
So what goes into the Shadow DOM? Mostly gristle. It’s the stuff that connects components together. There may need to be some routing or event handling, and perhaps a few styles to set the foundation. Everything else goes in the regular DOM. For example, I try to avoid styling in the shadow tree. Thanks to CSS variables, I can use them as pointers and let the regular DOM tree supply the specifics.
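To make the gristle concrete, here is a minimal sketch, with hypothetical element and variable names (`app-shell`, `--app-gap`): the shadow tree holds nothing but a slot and one foundational style, and the CSS variable acts as the pointer that the light DOM fills in.

```javascript
// The gristle: everything this element puts into its shadow tree.
// `shadowRoot` and `doc` are parameters so the sketch can be exercised
// outside a browser.
function buildGristle(shadowRoot, doc) {
  const style = doc.createElement("style");
  // Only the foundation; `--app-gap` is supplied (or not) by the light DOM.
  style.textContent = ":host { display: flex; gap: var(--app-gap, 8px); }";
  const slot = doc.createElement("slot");
  shadowRoot.append(style, slot);
  return shadowRoot;
}

// Register the element only where Custom Elements exist (i.e., in a browser).
if (typeof customElements !== "undefined") {
  customElements.define(
    "app-shell",
    class extends HTMLElement {
      constructor() {
        super();
        buildGristle(this.attachShadow({ mode: "open" }), document);
      }
    }
  );
}
```

The child components then stay in the light: `<app-shell><user-card></user-card><user-list></user-list></app-shell>` remains composable, and the children can be rearranged or replaced without touching `app-shell`’s innards.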
I hope this little pattern helps you build better Web apps. And yes, the gobbledygook is over now. I promise I’ll write something less obtuse next time.
This principle builds on the layering principle, and deals with a common decision point that most software developers reach many times in the course of their work.
The situation that leads to this point unfolds something like this. There is some code at the lower layer that isn’t giving us the results we need for implementing the functionality of our layer. There’s some wart that was left there by developers of that layer, and we have to do something to minimize the exposure of our customers to this wart.
What do we do? The most intuitive action to take here is wallpapering: adding some code at our layer to reduce the gnarliness of the wart. This happens so commonly and so pervasively that many writers of code don’t even recognize they are doing it. Web development has a proud tradition of wallpapering. There are entire communities of libraries (jQuery, React, etc.) that invested a ton of time into wallpapering over the warts of the Web platform.
Especially when we are not thinking in terms of layering, we might just presume that we are simply writing good code. However, what is really happening here is a shift in layering responsibility – or perhaps a “layer entanglement” is a more catchy term. The code we are writing to fix the wart is out of place in our layer: it actually needs to live at the lower layer. And that means that by wallpapering, we are most definitely violating our layering principle. The code we write might be astoundingly good, but it’s kind of jammed sideways between the two layers.
As a result, the wallpapering code tends to be a drag on both layers. The layer below, now constrained by the specific way in which the wallpapering code consumes it, is grumpy about the loss of agency in addressing the original wart. By wrapping itself over the wart, our code has amber-ified it, preserving it forever.
At our layer, the code is an albatross. I already pointed at the CSS Selector-parsing code in jQuery as one example. Because it belongs to a lower, more general, and more slowly moving layer, every piece of wallpapering code saps the efficiency of the team that needs to maintain it.
Perhaps most importantly, the wallpapering code has the capacity to misinform the layers above of the nature of the machinery below. If the opinion of the wallpapering code deviates strongly from the lower layer’s intention, the consumers at higher layers will form inaccurate mental models of how the lower layer works. And that is where the compounding costs really get us in the long term. The story that my friend Alex Russell has been telling about the state of modern web performance is a dramatic and tragic example of that.
All in all, we are best off avoiding wallpapering at all costs. However, this is easier said than done. Most of the time, our bedrock layers (the lower layers we’re building on top of) are imperfect. They will have warts. And so here we are at the primary tension that the wallpapering principle helps us resolve: the tension between the intention to avoid wallpapering and the need to deliver reasonable products to our customers.
To resolve this tension, we must first acknowledge that both of these forces have merit, and that taken to the extreme, both result in unhappy outcomes. To navigate the tension, we must lean toward minimizing wallpapering, while seeking to reduce the cost of opinion of our wallpapers when we must employ them.
The key technique here is polyfilling (and its close cousin, prollyfilling): when we choose to wallpaper, do it as closely to the spirit of the lower layer as possible. For example, if our cloud API is occasionally emitting spurious characters, we might be better off filing the “please trim those characters” bug against this API, and then trimming these characters as closely as possible to the code that receives them from the network. Then, when the bug is fixed, we just remove the trimming code.
A good polyfill is like a temporary tenant in an otherwise crowded family home: ready to move out as soon as the conditions permit. Wallpapering is usually a bad idea. But if we feel we must wallpaper, think of the code we’re about to write as a polyfill – code that really wants to live at the lower layer, but can’t yet.
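The spurious-characters example above might look something like this sketch (the endpoint and the wart are hypothetical): the trim lives in the single function that touches the network, right next to where the wart enters, so removing it later is a one-line change.

```javascript
// Polyfill-style wallpaper: the cloud API sometimes appends trailing NUL
// characters to its responses. We trim them as close to the network as
// possible, in one place, rather than sprinkling fixes across callers.
async function fetchRecord(id, fetchImpl = fetch) {
  const response = await fetchImpl(`https://api.example.com/records/${id}`);
  const raw = await response.text();
  // Wallpaper for the upstream wart: strip trailing NULs.
  // Delete this line once the "please trim those characters" bug is fixed.
  return JSON.parse(raw.replace(/\u0000+$/, ""));
}
```

Note that `fetchImpl` is injected with a default; callers never learn the wart existed, and the wallpaper stays a temporary tenant.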
Preference toward layering is probably one of the more fundamental principles of software development. It pops up pretty quickly as soon as we start writing code. I wrote about layering extensively in the past, but basically, when we start connecting bits of code together, a tension arises between the bits. Some need to change faster and some need to stay put. This tension quickly forces our code to be arranged in layers, whether we want it or not. I sometimes joke that layering is either something we chose to do or something that happens to our code anyway.
Thinking of layering ahead of time is costly and usually involves discipline that is not always possible, especially when timelines are tight or the shape of the software we’re writing is not yet known. Often, our initial layering designs are wrong, and a whole different layering eventually emerges. These surprises might not be pleasant, but they are to be expected. Layers accrete. We are just here to garden them into the shape that’s most suitable for our needs.
Thus, as we engage in software development, we have to contend with two conflicting forces: one of expedience and convenience that beckons us away from layering, and one of intentionality that pulls us toward it. To resolve the conflict between these forces, here’s the layering principle: lean toward intentional layering, but give the layers room to develop.
A good rule of thumb here is to define layers early on as loosely as possible, and watch for where the layer boundaries are potentially crossed. When this crossing seems to happen, take the opportunity to clarify the layering. Watch for new layers to emerge and don’t add them without a clear need.
Here’s a concrete example of loose layer definition. Suppose we’re building a client library for a cloud service. We might define three layers, listed here in reverse order (from bottom to top):
Raw REST. At the bottom, there’s the raw REST-ful API that is literally HTTP calls to the cloud service. This is the bedrock for us – we consume it, but don’t build it ourselves. Don’t forget to have a bedrock layer. There’s always something that we build on.
Core. In the middle, there’s the idiomatic layer that translates raw calls into constructs that are common for the target environment of the library. For example, if our target environment is Node, we might have something that uses the http module or the new-fangled fetch to make the REST calls and return JSON.
Features. Things that make the cloud service easier to use go in the top layer. This is where we can add fun syntactic sugar that lets us write the code in three lines instead of twenty, or address a particular use case in a particularly elegant way.
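Here is one way these three layers might fall out in code, with hypothetical names and endpoints. The point is the direction of consumption: Features calls only Core, and Core calls only the raw REST bedrock (here, whatever `fetch` the environment provides).

```javascript
// Raw REST (bedrock): whatever `fetch` the environment supplies; we consume
// it, we don't build it.

// Core: idiomatic calls that turn raw HTTP into plain objects and errors.
async function getWidget(id, fetchImpl = fetch) {
  const res = await fetchImpl(`https://service.example.com/v1/widgets/${id}`);
  if (!res.ok) throw new Error(`widget ${id}: HTTP ${res.status}`);
  return res.json();
}

// Features: syntactic sugar built purely on top of Core -- no raw HTTP here.
async function getWidgetNames(ids, fetchImpl = fetch) {
  const widgets = await Promise.all(ids.map((id) => getWidget(id, fetchImpl)));
  return widgets.map((w) => w.name);
}
```

A quick hygiene check falls out of this shape: a developer who skips `getWidgetNames` and calls `getWidget` directly, or who makes the raw REST call themselves, still gets the same results.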
This might seem counterintuitive, but start writing code without explicitly putting these layers in place. Don’t force them. Think of the process as growing seedlings. Just keep giving them a glance as more code is added. Does this particular function seem like it could be in the Core layer? How would it group with others like it? Especially at the very early stages, think of layers as aspirational, and feel free to adjust the aspiration. Be patient: they will start showing up and becoming real.
Once the layers start coming together, it helps to develop a layering hygiene: imagine that a developer chooses to engage with a layer directly, instead of using the full stack. If they are making raw REST calls, are they missing anything? Can they still get the same results? If they decide to write their own specialization layer, are they missing any of the core functionality?
Finally, as we develop features, watch for what’s happening to the code. Is there a new clump of code that seems to be forming? Maybe there’s a layer of specialization that starts emerging, or perhaps the core layer is splitting into idiomatic service calls and scaling/configuration layers?
The trick to the layering principle is in recognizing that there’s no simple answer: layering is a bit of a paradox that requires flexible thinking and continuous keen observation, rather than precise solutions.
In this Halloween-themed episode, I wanted to share a story that might be useful to API engineers, both aspiring and experienced. This is the story of four curses of API development and the shadows that conjure them. So grab that pumpkin spice latte and get ready to hear the 🎃 SPOOOOOKY! 👻 tale.
First – the shadows. Every upside has a downside. We don’t like to think of them on the upswing. When the shadows visit us, we are unhappily surprised to learn of their existence. This happens so often, we’d think we would have learned by now.
Take the “✨Interest” condition from my earlier story about growing dandelions. Interest is great, right? Unfortunately, with interest and excitement about an API’s potential value comes the shadow of … well, people actually trying to realize this potential value – and extract as much of it as possible.
With the initial spirit of exploration comes the thrust of exploiting: trying to use and – unfortunately, all too commonly – abuse the API to make it do their bidding. If anything, the sudden rise of interest in an API is a warning sign for its vendor: time to think about confronting the shadow of grift that will inevitably emerge.
It is often an uncomfortable job to be the one pointing out the shadow when the team’s collective eyes are on the shiny light of success. Yet, knowing of the existence and anticipating the emergence of the shadow can really save the organization’s hide by helping it orient toward the challenge, rather than be blindsided by it.
However imperfect and goofy, I hope that this narrative will help you do just that. I organized it around eight shadows – one for each condition for growing dandelions and caring for elephants. Think of these eight as the tripwires, the emergent downsides of having been successful at attaining each condition. But to come together, the narrative needs one more twist: the curses.
Curses are menacingly sticky. They are imposed on us. No matter how much we try, curses hold us. We can point at them, battle them, and even occasionally proclaim victory over them. But sooner or later, we recognize with a sinking feeling that the celebration was premature. Our curses find yet another way to rear their ugly heads. All we can do is cherish the gift that usually comes with the curse.
The particular kind of curse I want to highlight here emerges from a seemingly innocent concept of idea pace layers. I touched on it briefly in my first article about dandelions and elephants. Ideas thrive as light, free-floating dandelions. Some survive the descent through the ideation pace layers. These ideas grow and create value around them – that is the gift of this descent. As they grow, the conditions of supporting and nurturing them transform to accommodate their growth – to treat them more elephant-like. Somewhere alongside that transformation, the conditions cross the threshold where preserving the accumulated value means more than contemplating change.
Therein hides the curse. Though they still have their strengths and amazing survival abilities within their particular niche, idea-elephants are unable to challenge their shadows. At the bottom of the idea gravity well, we can only make our idea-elephant more precisely formulated and incrementally improve it within its niche – the local maximum.
To find a different local maximum, we need another cycle of exploration: a gazillion idea-dandelions spreading all over the space, perishing en masse while uncovering precious few novel insights. But to get there, we need conditions that would enable such a development. And such a change of conditions is a threatening proposition when we’re caring for an idea-elephant: starting all over means potentially losing the value we hold. Thus cursed, we flail and struggle to change, but as a rule – fail to do so. The new idea-dandelions can’t find fertile ground in elephant-caring conditions, which makes finding our grip on the elephant shadow even harder.
Even if we’re somehow able to transform ourselves again and recreate favorable conditions for dandelions – it’s not like we’ve gotten away from the curse. As the end credits start rolling, the viewers see our faces being struck by the recognition that we’re starting the cycle all over again.
In the API developer’s world, the progression of this curse can be described as a cadence of steps. It begins with a success, when the conditions we’ve created for dandelion APIs actually start bearing fruit. There are lots of consumers of the APIs and they are starting to build eye-popping things. Somewhere around here, the dandelion shadow is discovered, and we valiantly face the challenges it presents. Whether we know it or not, this process transforms our requirements to create idea-elephant conditions. In the moment, it always makes sense — now that there are successful businesses running on our APIs, this feels like a logical next step. As we do so, the elephant shadow manifests, and forces us to recognize that we need to get back to conditions that are more dandelion-like – and, despite our efforts, the curse prevents us from doing so.
Pairing up the conditions (one from the dandelion-growing list with one from the elephant-keeping one), we end up with four such progressions, the four curses. I’ll call them, respectively, the curse of irrelevance, the curse of immensity, the curse of immobility, and the curse of inscrutability.
🏚️ The Curse of Irrelevance
The two polar conditions in this curse are “ ✨ Interest” for dandelions and “ ⚓️ Stability” for elephants. I already described the moment of discovering the first shadow. I’ve lived that moment a bunch of times throughout my career, and it’s almost always followed by the call to bring things under control. This exertion of control is transformational: it brings the change of conditions toward Stability.
Once that change is complete, we enter the third beat of the curse: encountering the shadow of Stability. It turns out, once we’ve gotten things under control, these things get boring and stale. That same explosive growth, attenuated by the faucet of predictability, slows down to a trickle.
Facing this second shadow, we try to bring back the mojo – and more than likely, can’t. Idea-elephants don’t travel upward in the pace layers. No matter how much we try, new ideas are quickly shot down: too risky, too crazy, too irresponsible. The hard-earned stability resists being disturbed, cursing us with irrelevance.
♾️ The Curse of Immensity
The second curse can be seen as the interplay between “🔮 Legibility” and “⛰ Breadth”. The gift of legibility is in the simplicity with which the API can be used. It’s just begging us to play with it.
However, once our customers start messing with the API, something interesting happens: they start seeing the edges of our canvas, bumping into the limits: “Oh, I wish this API supported this <feature>!” As the dandelion idea of an API takes root in the collective minds of its consumers, there’s a steady stream of requests for improvements. Obliging these requests is the second beat of the curse – the transformation to Breadth.
On cue, the shadow of Breadth presents itself: the bloated, incoherent, everything-bagel API surface. Adding new features to the API is a puzzle with many moving pieces. Removing APIs is a massive pain in the butt. Everything around us is gigantic – the scale of our usage, the number of feature requests that keep showing up. And of course – the rising chorus of complaints that the API surface is just too darned large.
Steeling ourselves to confront the second shadow, we discover that it’s much harder to tame than the first one. A common API designer’s trope that I’ve seen (and tried to use myself) is the “well-lit paths” pattern. It seems logical that if we just highlighted some APIs and not others and organized them into well-designed pathways for developers, then some of our incoherence issues would go away. I’ve yet to see a great application of this pattern. Instead, what typically happens is something of a high-modernist paving of lonely highways and bridges to nowhere that adds to the confusion and girth rather than alleviating it. Mocking us, the curse of immensity knows that organizing large API surfaces only makes them larger.
I’ve already written a bit about API deprecation. Deprecation of APIs tends to be a losing battle. It takes a lot more time and effort to remove features than to add them, which means that over time, the tyranny of the curse of immensity only strengthens.
🧊 The Curse of Immobility
Between the conditions of “🚀 Velocity” and “📚 Rigor”, we find the third curse. A setting that allows us to string together a quick prototype is rarely the same setting that we use for launch. As soon as our API customers start seeing some uptick in their usage, the first shadow will immediately remind us of that lesson.
As a matter of transformation, we overcome this shadow by introducing processes and infrastructure that are critical for shipping products at scale. If we are to retain our customers and set them up for long-term success, we must transition to the stance of Rigor.
Pretty soon, the shadow of Rigor makes itself known. All these amazing best practices, checks and balances, launch gates and test infrastructure reduce velocity, sometimes quite dramatically. Gone are the days when one could quickly put together a bug fix. Everything seems to take eons to get done.
This one is especially hard for engineers. Everyone seemingly notices this, yet there does not appear to be a way out. Another rallying cry to make things go faster gets mired in yet another committee or working group. Once the API conditions transform into caring for elephants, getting back to lightweight experimentation is prevented by the curse of immobility.
🗝️ The Curse of Inscrutability
The final curse is formed by the pairing of “🔎 Access” and “⚡️ Power”. The key tension here is in the level of opinion within the API. Access needs APIs to be highly opinionated, while Power needs the opposite.
The first shadow becomes visible when our users start using the APIs in earnest, beyond initial prototypes. All that opinion that made it possible for them to build those prototypes quickly starts getting in the way. “That’s so cool! How do I turn it off?” was one of my favorite bits of developer feedback to some of my early Web Components API ideas. As developers’ ideas start holding value, the focus shifts to getting closer to the metal.
One of the common drivers in this transformation to the condition of Power happens as a result of trying to squeeze a bit more performance or capabilities out of the product built on top of the API. This story typically involves the API vendor exposing deeper and deeper hooks inside, and thus relinquishing some (or all) of the opinion held by these APIs. A while back, I already mentioned the Canvas API in WebKit, which collapsed the whole of the HTML/CSS opinion straight to Apple’s CGContext API, which was as close to the underlying platform as one could get back then.
As predictably as a Greek tragedy’s plot, the second shadow makes its entrance. With power comes the need for skill to wield this power, which in turn leads to a rapid decline in the number of folks who can actually use it effectively. In such scenarios, there are only a few (grumpy) wizards who actually know how to use the APIs, and whoever hires them accrues all the value.
And of course, it is very, very hard to argue convincingly that this value needs to be lost and the power given up to return to the Access condition to confront the second shadow. The curse of inscrutability has taken its hold.
🧛 Haunted API design
The four curses accost us simultaneously and often interplay with each other, usually to a reinforcing effect. The curse of Immensity invites Inscrutability. The curse of Immobility often comes on the heels of those two. The curse of Irrelevance stokes the fears of obsolescence and exacerbates the effect of the other curses. It’s all a hauntingly accursed mess. There is seemingly no escape from it. At least based on my experience, every team that sets out to ship APIs comes under the spell of these curses.
What’s an API designer to do? Clearly, scream and wail in horror – what kind of Halloween tale would it be otherwise? Oh well. Perhaps some future episode will point the path out of the spine-chilling quagmire. Maybe in time for Christmas?
While riffing on customer-centric mindsets and the veering toward first-order effects pattern, my colleagues and I came up with this idea of stages of customer-centric maturity. These stages are modeled after the personal development stages, like the ones in adult development theory. The basic premise is that teams typically progress through these stages as they become more experienced in the trade of developer experience, with each new stage building on the insights of the previous one: the include and transcend type of thing, rather than a wholesale replacement of fundamental premises.
At the zeroth stage of customer-centric maturity, we have the teams who come together to build cool developer tools and other kinds of products. The thought about developers is limited to “if I like it, then others will like it, too” and perhaps some vague notions of “shifting the developer paradigm.” There’s a sense of a rather imperial perspective, with nearly zero investment into understanding customers. These teams tend to build amazing things that nobody needs, though occasionally, they strike gold, perpetuating the “if you build it, they will come” myth.
After accumulating enough scars looking for problems that fit their predetermined solutions, teams tend to graduate to the first stage of customer-centric maturity. The pendulum swings all the way in the other direction. “Make developers happy” is the mantra. Intense focus on developers as customers is at the center of prioritization/investment decisions (aka first-order effects). If the team at this stage is building a UI toolkit, most of the attention will be devoted to ergonomics, faster build cycles, or seamless refactoring. Talking about users who might or might not benefit from developers relying on this UI toolkit is usually met with “well, that’s important, but isn’t it up to developers to do the right thing?” As a result, teams at this stage tend to struggle, squeezing diminishing returns out of “faster horses”, unable to break out of the local maximum of developer wishes.
Somewhere around here, the awareness of connection between the first-order effects and the second-order effects may develop, potentially leading the team to the second stage of customer-centric maturity. Teams realize that having satisfied developers isn’t the end goal. Rather, it is the means to improve satisfaction of users: customers who will enjoy (or suffer from) the products made by developers — the second-order effects. Thanks to the constant pull toward the first-order effects (as outlined in the DX pattern), arrival at this stage may be elusive. However, if the team perseveres, it is rewarded with a much broader perspective that transforms local maxima from inescapable cages into minor bumps along the way.
One of my colleagues had a wise observation that within a large organization that ships developer-facing products, teams might be scattered across the whole spectrum of stages. In such a situation, teams will likely do a lot of talking past each other. For example, if a team operating at the second stage decides that the developer satisfaction metric can dip to accommodate a shift toward better user outcomes, they might encounter strong resistance from the team that’s still at the first stage. To them, such a move will bring back the pain of the scars they earned at the zeroth stage. Perhaps this simple framework could help them understand where their disconnect is coming from?
I mentioned this concept before, and I feel like it’s worth expanding on a little bit. If we are in the business of making a product that developers rely on to create user experiences, the developer surface of this product is the union of all means through which developers create these experiences.
Let’s unpack this, starting with a simple case. Suppose you and I decided to ship a library that has one function. Applying the definition, that library is the product and its developer surface is the function. Easy, right? As our product becomes popular, we start noticing something weird. Remember that one-line file where we track the version of the library, just for ourselves? Well, it turns out some developers started using its contents in their build. So when we thought — “oh hey, let’s just delete that file, we don’t need it anymore” — all hell broke loose. That file had become developer surface, too!
In mature developer-facing products, the developer surface becomes far more than just the API. Shipping samples along with the library? Yep, these are part of the developer surface, too. Got some clever heuristics deep in your code? Or maybe just bugs? Hyrum’s Law captures beautifully the spirit of this phenomenon:
With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
This sufficiently high number of API users can truly mess with what is or is not developer surface. While we imagine the contract with developers as a crisp document of high transparency and clarity, what we usually face is the messy outcome of developers just trying to get things to work. Messier still will be our attempts to convince developers to use the APIs the way we intended.
When we embark on a project that intends to ship a developer-facing product, it’s worth planning the work and structuring the team in a way that anticipates this messiness. We are not writing the developer contract. Developers write the contract with us, and frequently, their contributions carry more weight. Walking the line of carrying our original intention while staying aware of where developers want to take it is not something that comes intuitively or easily.
Have you ever driven a car that pulls to one side? It’s often subtle, but after a while, the counter-steering effort becomes impossible to ignore. This metaphor comes to my mind whenever I encounter a common developer experience pattern: the veering toward first-order effects.
To set the context a bit more, let’s arrange the effects of producing developer surfaces in two orders. The first-order effects relate to producing the developer surface. When we ship an API, we want it to be adopted, to be used broadly. Thus, when we measure first-order effects of our efforts, we look at the API adoption rate, developer satisfaction, etc.
The second-order effects relate to developers producing user experiences using our developer surface. At the end of the day, an organization that invests into shipping APIs does so — intentionally or not — to influence the overall state of user experience in some way. When we measure second-order effects, our metrics will likely track changes in the user experience. Does using our APIs result in products that are more secure, performant, accessible, etc. for the user?
Based on what I’ve seen working with developer experience teams throughout my career, there’s a pronounced pull toward first-order effects. They are easier to measure, have a shorter feedback loop, and are more familiar to folks accustomed to shipping consumer products. Even if a team sets out to influence the state of user experience at the beginning of their journey, the appeal of relative immediacy of first-order effects is so strong that the original intention often gets left behind.
A common symptom of forgetting to counter-steer toward second-order effects is the loss of strategic flexibility within a larger organization. When the first-order effects become ends unto themselves, teams tend to get entrenched in a local maximum of developer expectations, stuck in an optimizing loop. An organization that contains teams stuck in that particular way feels like it is unable to do anything about it: everyone is seemingly doing “the right thing,” and prioritization exercises quickly devolve into peanut buttering. When something like this is happening, it’s a good hint that the concept of second-order effects got rolled into a dusty corner of the team’s shared mental model space, or ejected altogether.
To counter-steer, organizations must exert conscious effort to keep second-order effects in the shared mental model space. Whether it’s constantly pointing at them during the all-hands, setting up the metrics structure to reflect user experience shifts, or even just reminding everyone about the unyielding force that — like that darned car — never quits pulling, it’s an investment that’s well worth the price.
Here’s another developer experience pattern that I’ve noticed. When designing APIs, we are often not sure how they will be used (and abused), and want to leave room for maneuvering, to retain a degree of agency after the API is in widespread use. One tempting tool we reach for is heuristics: removing explicit levers and knobs from the developer surface and instead relying on our own understanding of the situational context to make decisions for the developers. Unfortunately, when used with developer surfaces, heuristics tend to backfire. Developers who want those levers and knobs inevitably find ways to make them without us. And in doing so, they remove the very agency we were seeking in the first place.
Because heuristics are so tempting, this pattern is very common. Here’s the most recent example I’ve stumbled upon. Suppose you are a Web developer who wants to create an immersive experience for your users and, for that, you want to make sure their devices never go to sleep while they are in this experience. There’s an API that enables that, called the Wake Lock API. However, let’s imagine that I am a browser vendor who doesn’t want to implement this API because I might be worried that developers will abuse it. At the same time, I know that some experiences do legitimately call for the device screen to stay awake. So I introduce a heuristic: stay awake if the Web site contains a playing video. Great! Problem solved. Except… You want to use this API in a different scenario. So what do you do? You discern the heuristic, of course! Through careful testing and debugging, you realize that if you put a tiny useless looping video in the document, the device will never go to sleep. And of course, now that you’ve discerned the heuristic, you will share it with the world by codifying it: you’ll write a tiny library that turns your hard-earned insight into a product. With the Web ecosystem being as large as it is, the library usage spreads and now, everyone uses it. Woe to me, the browser vendor. My heuristic is caught in the amber of the Web. Should I try to change it, I’ll never hear the end of it from angry developers whose immersive experiences suddenly start napping.
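To make the contrast concrete, here is a sketch of the two paths: the explicit lever that the Screen Wake Lock API provides where it is implemented, and the discerned heuristic that libraries like NoSleep.js famously codified. The video file name is a hypothetical placeholder.

```javascript
// The explicit lever: the Screen Wake Lock API, where available.
// Requires a secure context; the lock is released when the page hides.
async function stayAwakeExplicitly() {
  const sentinel = await navigator.wakeLock.request('screen');
  return () => sentinel.release();
}

// The discerned heuristic: a tiny, muted, looping video whose only job
// is to trip the browser's "a video is playing" check. Libraries like
// NoSleep.js codified essentially this trick for browsers without the API.
function stayAwakeViaHeuristic() {
  const video = document.createElement('video');
  video.muted = true;       // muted playback is allowed to autoplay
  video.loop = true;        // keep "playing" forever
  video.playsInline = true; // avoid fullscreen takeover on mobile
  video.src = 'tiny-blank.mp4'; // placeholder: any minimal video file
  video.style.display = 'none';
  document.body.append(video);
  video.play();
  return () => video.remove();
}
```

Once the second function ships inside a popular library, the “playing video” check stops being an internal implementation detail and becomes, for all practical purposes, public API.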
It’s not that heuristics are a terrible tool we should never use. It’s that when we decide to rely on them in lieu of a developer surface, we need to anticipate that they will be discerned and codified — sometimes poorly. This means that if we wanted to rely on heuristics for some extra flexibility in our future decisions, we’re likely to get the opposite outcome — especially in large developer ecosystems.
When discussing API design strategies, I keep running into this distinction. It seems like a developer experience pattern that’s worth writing down.
Consider two perspectives from which API designers might approach their environment. The first perspective presumes that the API implementation is hosting the developer’s code, and the second that the API implementation is being hosted by the developer’s code.
From the first perspective, the API designer sees their work as making a runtime/platform of some sort. The developer’s code needs to somehow enter a properly prepared environment, execute within that environment, consuming the designed APIs, and then exit the environment. A familiar example of designing from this perspective is the Web browser. When the user types the URL, a new environment is created, then the developer’s code enters the environment through the process of loading, and so on. Every app (or extension) platform tends to be designed from this perspective. Here, the developer’s code is something that is surrounded by the warm (and sometimes not very warm) embrace of the APIs that represent the hosting environment.
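One way to make the distinction tangible is a tiny sketch, with entirely hypothetical names: in the hosted perspective, the developer’s code owns the entry point and calls into the API; in the hosting perspective, the platform owns the entry point and calls the developer’s code at prescribed moments.

```javascript
// Hosted perspective: the API implementation is a library that lives
// inside the developer's program. The developer's code drives the API.
function hostedStyle() {
  const renderer = { render: (s) => `<p>${s}</p>` }; // stand-in library
  return renderer.render('hello'); // developer code calls in
}

// Hosting perspective: the API implementation is a platform that owns
// the entry point. It prepares an environment, then calls the
// developer's code at well-defined moments, the way a browser invokes
// page script after loading a URL.
function hostingStyle(developerCode) {
  const environment = { url: 'https://example.com' }; // prepared environment
  developerCode(environment); // the platform drives developer code
}
```

The asymmetry matters: in the first sketch the developer can swap the library out; in the second, the platform decides when (and whether) the developer’s code runs at all.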
There is a growth/control tension that seems to map onto these two perspectives. The hosting perspective exhibits the attitude of control, while the hosted perspective favors the force of growth. As with any tension, complexity arises along the spectrum between the two.
A hosting API that wants to grow will be pressed to make the environment more flexible and accommodating. Going back to the browser example above, we Web Platform folks were experiencing that pressure: the force that was pulling us away from the hosting and toward the hosted API design perspective. After such a shift, the code that renders Web pages, the fundamental building block of the Web Platform environment, would become just one of the libraries to pick and choose from.
It is also important to note that, using Hamilton Helmer’s classification, the existence of a hosting environment is a form of cornered resource. It’s something that only becomes possible to have when the API designer has the luxury of a significant quantity of willing hosted participants. In the absence of eager hordes of developers knocking on your door, taking a hosting API design perspective is a high miracle count affair. When thinking about this, I am reminded of several ambitious yet ultimately unsuccessful efforts to “create developer ecosystems.” There are ways to get there, but starting out with the hosting API design perspective is rarely one of them.