AI Developer Experience Asymptotes

To make the asymptote and value niches framing a bit more concrete, let’s apply it to the most fun (at least for me) emergent new area of developer experience: the various developer tools and services that are cropping up around large language models (LLMs).

As the first step, let’s orient. The layer above us is AI application developers. These are folks who aren’t AI experts, but are instead experienced full-stack developers who know how to build apps. Because of all the tantalizing promise of something new and amazing, they are excited about applying the shiny new LLM goodness.

The layer below us is the LLM providers, who build, host, and serve the models. We are in the middle, the emerging connective tissue between the two layers. Alright – this looks very much like a nice layered setup!

Below is my map of the asymptotes. This is not a complete list by any means, and it’s probably wrong. I bet you’ll have your own take on this. But for the purpose of exercising the asymptotes framing, it’ll do.

🚤 Performance

I will start with the easiest one. It’s actually several asymptotes bundled into one. Primarily because they are so tied together, it’s often difficult to tell which one we’re actually talking about. If you have a better way to untangle this knot, please go for it.

Cost of computation, latency, availability – all feature prominently in conversations with AI application developers. Folks are trying to work around all of them. Some are training smaller models to save costs. Some are sticking with cheaper models despite their more limited capabilities. Some are building elaborate fallback chains to mitigate LLM service interruptions. All of these represent opportunities for AI developer tooling. Anyone who can offer better-than-baseline performance will find a sound value niche.
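As a sketch of what such a fallback chain might look like (the provider names here are hypothetical stand-ins for real client calls, not any particular vendor’s API):

```javascript
// A minimal fallback chain: try providers in order of preference,
// moving down the chain whenever one fails. Each provider is assumed
// to be an async function that returns a completion or throws.
const withFallbacks = (providers) => async (prompt) => {
  let lastError;
  for (const provider of providers) {
    try {
      // The first provider to respond successfully wins.
      return await provider(prompt);
    } catch (error) {
      // Remember the failure and fall through to the next provider.
      lastError = error;
    }
  }
  // Every provider failed; surface the last error to the caller.
  throw lastError;
};

// Hypothetical stand-ins for real provider clients.
const primaryModel = async () => {
  throw new Error("primary provider unavailable");
};
const backupModel = async (prompt) => `completion for: ${prompt}`;

const complete = withFallbacks([primaryModel, backupModel]);
```

With this shape, `complete(...)` quietly recovers from the primary provider’s outage by falling through to the backup.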

Is this a firm asymptote or a soft one? My guess is that it’s fairly soft. LLM performance will continue to be a huge problem until, one day, it isn’t. All the compute shortages will continue to be a pain for a while, and then, almost without us noticing, they will just disappear, as the lower layers of the stack catch up with demand, reorient, optimize – in other words, do that thing they do.

If my guess is right, then if I were to invest around the performance asymptote, I would structure it in a way that would keep it relevant after the asymptote gives. For example, I would probably not make it my main investment. Rather, I would offer performance boosts as a complement to some other thing I am doing.

🔓 Agency

I struggled with naming this asymptote, because it is a bit too close to the wildly overused moniker of “Agents” that is floating around in AI applications space. But it still seems like the most appropriate one.

Alex Komoroske has an amazing framing around tools and services, and it describes the tension perfectly here. There is a desire for LLMs to be tools, not services, but the cost of making and serving a high-quality model is currently too high.

The agency asymptote clearly interplays with the performance asymptote, but I want to keep it distinct, because the motivations, while complementary, are different. When I have agency over LLMs, I can trace the boundary around it – what is owned by me, and what is not. I can create guarantees about how it’s used. I can elect to improve it, or even create a new one from scratch.

This is why we have a recent explosion of open source models, as well as the corresponding push to run models on actual user devices – like phones. There appears to be a lot of AI developer opportunities around this asymptote, from helping people serve their models to providing tools to train them.

Is this value niche permanent or temporary? I am just guessing here, but I suspect that it’s more or less permanent. No matter how low the costs and latency, there will be classes of use cases where agency always wins. My intuition is that this niche will get increasingly smaller as the performance asymptote gets pushed upward, but it will always remain. Unless of course, serving models becomes so inexpensive that they could be hosted from a toaster. Then it’s anyone’s guess.

💾 Memory

LLMs are weird beasts. If we do some first-degree sinning and pretend that LLMs are humans, we would notice that they have long-term memory (the datasets on which they were trained) and short-term memory (the context window), but no way to bridge the two. They’re like that character from Memento: they know plenty of things, but can’t form new memories, and as soon as the context window is full, can’t remember anything else in the moment.

This is one of the most prominent capability asymptotes that’s given rise to the popularity of vector stores, tuning, and the relentless push to increase the size of the context window.

Everyone wants to figure out how to make an LLM have a real memory – or at least, the best possible approximation of it. If you’re building an AI application and haven’t encountered this problem, you’re probably not really building an AI application.

Based on how I see it, this is a massive value niche. Because of the current limitations of how the models are designed, something else has to compensate for their lack of this capability. I fully expect a lot of smart folks to continue to spend a lot of time trying to figure out the best memory prosthesis for LLMs.
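To make the vector-store flavor of this prosthesis concrete, here is a toy sketch. The `embed` function below is a fake bag-of-characters embedder standing in for a real embedding model; everything else is the actual shape of the idea: store text as vectors, recall the closest matches, stuff them into the context window.

```javascript
// A toy "memory prosthesis": store text as vectors, then recall the
// closest matches to inject into the prompt. The `embed` function is
// a fake stand-in for a real embedding model.
const embed = (text) => {
  const vector = new Array(16).fill(0);
  for (const word of text.toLowerCase().split(/\W+/)) {
    for (const ch of word) vector[ch.charCodeAt(0) % 16] += 1;
  }
  return vector;
};

// Cosine similarity between two vectors of equal length.
const cosine = (a, b) => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
};

const memory = [];
const remember = (text) => memory.push({ text, vector: embed(text) });
const recall = (query, k = 2) =>
  memory
    .map((entry) => ({ ...entry, score: cosine(embed(query), entry.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.text);
```

Concatenating `recall(userPrompt)` into the prompt is the crude shape of what vector stores do at scale.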

What can we know about the firmness of this asymptote? Increasing the size of the context window might work. I want to see whether we’ll run into another feature of the human mind that we take for granted: separation between awareness and focus. A narrow context window neatly doubles as focus – “this is the thing to pay attention to”. I can’t wait to see and experiment with the longer context windows – will LLMs start experiencing the loss of focus as their awareness expands with the context window?

Overall, I would position the slider of the memory asymptote closer to “firm”. Until the next big breakthrough with LLM design properly bridges the capability gap, we’ll likely continue to struggle with this problem as AI application developers. Expect proliferation of tools that all try to fill this value niche, and a strong contentious dynamic between them.

📐 Precision

The gift and the curse of an LLM is the element of surprise. We never quite know what we’re going to get as the prediction plays out. This gives AI applications a fascinating quality: we can build a jaw-dropping, buzz-generating prototype with very little effort. It’s phenomenally easy to get to the 80% or even 90% of the final product.

However, eking out even a single additional percentage point comes at an increasingly high cost. The darned thing either keeps barfing in rare cases, or it is susceptible to trickery (and inevitable subsequent mockery), making it clearly unacceptable for production. Trying to connect the squishy, funky epistemological tangle that is an LLM to the precise world of business requirements is a fraught proposition – and thus, a looming asymptote. 

If everyone wants to ship an AI application, but is facing the traversal of the “last mile” crevasse, there’s a large opportunity for a value niche around the precision asymptote.

There are already tools and services being built in this space, and I expect more to emerge as all those cool prototypes we’re all seeing on Twitter and Bluesky struggle to get to shipping. Especially with the rise of agents, as we try to give LLMs access to more and more powerful capabilities, this asymptote will likely get even more prominent.

How firm is this asymptote? I believe that it depends on how the LLM is applied. The more precise the outcomes we need from the LLM, the more challenging they will be to attain. For example, for some use cases, it might be okay – or even a feature! – for an LLM to hallucinate. Products built to serve these use cases will feel very little of this asymptote.

On the other hand, if the use case requires an LLM to act in an exact manner, with severe downsides for failing to do so, we will experience the precision asymptote in spades. We will desperately look for someone to offer tools or services that provide guardrails and telemetry to keep the unruly LLM in check, and seek security and safety solutions to reduce abuse and incursion incidents.
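A minimal guardrail might look like the sketch below: demand that the model’s output parses as JSON and matches a tiny expected shape, retrying a bounded number of times before giving up. `callModel` is a hypothetical async function that sends a prompt to an LLM and returns its text output; real guardrail products do far more, but this is the kernel of the idea.

```javascript
// A minimal guardrail: validate the model's raw output, retrying a
// bounded number of times, then reject. `callModel` is a hypothetical
// async (prompt) => text function.
const guarded = async (callModel, prompt, retries = 2) => {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const raw = await callModel(prompt);
    try {
      const parsed = JSON.parse(raw);
      // Stand-in for real schema validation.
      if (typeof parsed.answer === "string") return parsed;
    } catch {
      // Malformed output; fall through and retry.
    }
  }
  throw new Error("model output failed validation");
};
```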

I have very little confidence in a technological breakthrough that will significantly alleviate this asymptote.

🧠 Reasoning

One of the key flaws in confusing what LLMs do with what humans do comes from the underlying assumption that thinking is writing. Unfortunately, it’s the other way around. Human brains appear to be multiplexed cognition systems. What we presume to be a linear process is actually an emergent outcome within a large network of semi-autonomous units that comprise our mind. Approximating thinking and reasoning as spoken language is a grand simplification – as our forays into using LLMs as chatbots so helpfully point out.

As we try to get the LLMs to think more clearly and more thoroughly, the reasoning asymptote begins to show up. Pretty much everyone I know who’s playing with LLMs is no longer using just one prompt. There are chains of prompts and nascent networks of prompts being wired to create a slightly better approximation of the reasoning process. You’ve heard me talk about reasoning boxes, so clearly I am loving all this energy, and it feels like stepping toward reasoning.
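The simplest version of this chaining can be sketched in a few lines. `callModel` is again a hypothetical async function that sends one prompt to an LLM; each step turns the previous step’s output into the next prompt.

```javascript
// A minimal prompt chain: thread the output of each step into the
// next prompt template, approximating a multi-step reasoning process.
// `callModel` is a hypothetical async (prompt) => text function.
const runChain = async (callModel, steps, input) => {
  let output = input;
  for (const makePrompt of steps) {
    // Feed the previous step's output into the next prompt.
    output = await callModel(makePrompt(output));
  }
  return output;
};

// Example: a draft-then-critique chain.
const draftThenCritique = [
  (topic) => `Write a one-sentence summary of: ${topic}`,
  (draft) => `Point out the weakest claim in: ${draft}`,
];
```

A graph of prompts generalizes this loop: nodes are prompts, and edges route one node’s output into another node’s template.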

So far, all of this work happens on top of the LLMs, trying to frame the reasoning and introduce a semblance of causal theory. To me, this feels like a prime opportunity at the developer tooling layer.

This asymptote also seems fairly firm, primarily because of the nature of the LLM design. It would take something fundamentally different to produce a mind-like cognition system. I would guess that, unless such a breakthrough is made, we will see a steady demand and a well-formed value niche for tools that help arrange prompts into graphs of flows between them. I could be completely wrong, but if I’m not, I would also expect the products that aid in creating and hosting these graphs to become the emergent next layer in the LLM space, with many (most?) developers accessing LLMs through these products. Just like what happened with jQuery.

There are probably several different ways to look at this AI developer experience space, but I hope this map gives you: a) a sense of how to apply the asymptotes and value niches framing to your problem space and b) a quick lay of the land of where I think this particular space is heading.

Asymptotes and Value Niches

I have been thinking lately about a framing that would help clarify where to invest one’s energy while exploring a problem space. I realized that my previous writing about layering might come in handy.

This framing might not work for problem spaces that aren’t easily viewed in terms of interactions between layers. However, if the problem space can be viewed in such a way, we can then view our investment of energy as an attempt to create a new layer on top of an existing one.

Typically, new layers tend to emerge to fill in the insufficient capabilities of the previous layers. Just like the jQuery library emerged to compensate for the lack of consistency in querying and manipulating the document object model (DOM) across various browsers, new layers tend to crop up where there’s a distinct need for them.

This happens because of the fairly common dynamic playing out at the lower layer: no matter how much we try, we can’t get the desired results out of the current capabilities of that layer. Because of this growing asymmetry of effort-to-outcome in the dynamic, I call it “the asymptote” – we keep trying harder, but get results that are about the same.

Asymptotes can be soft or firm.

Firm asymptotes typically have something to do with the laws of physics. They’re mostly impossible to get around. Moore’s law appears to have run into this asymptote as the size of a transistor could no longer get any smaller. 

Soft asymptotes tend to be temporary and give after enough pressure is applied to them. They are felt as temporary barriers, limitations that are eventually overcome through research and development. 

One way to look at the same Moore’s law is that while the size of the transistor has a firm asymptote, all the advances in hardware and software keep pushing the soft asymptote of the overall computational capacity forward.

When we think about where to focus, asymptotes become a useful tool. Any asymmetry in effort-to-outcome is usually a place where a new layer of opinion will emerge. When there’s a need, there’s potential value to be realized by serving that need. There’s a potential value niche around every asymptote. The presence of an asymptote represents opportunities:  needs that our potential customers would love us to address.

Depending on whether the asymptotes are soft or firm, the opportunities will look different.

When the asymptote is firm, the layer that emerges on top becomes more or less permanent. These are great to build a solid product on, but are also usually subject to strong five-force dynamics. Many others will want to try to play there, so the threat of “race to the bottom” will be ever-present. However, if we’re prepared for the long slog and have the agility to make lateral moves, this could be a useful niche to play in.

The jQuery library is a great example here. It wasn’t the first or last contender to make life easier for Web developers. Among Web platform engineers, there was a running quip about a new Web framework or library being born every week. Yet, jQuery found its place and is still alive and kicking. 

When the asymptote is soft, the layer we build will need to be more mercurial, forced to adapt and change as the asymptote is pushed forward with new capabilities from the lower layer. These new capabilities of the layer below could make our layer obsolete, irrelevant – and sometimes the opposite. 

A good illustration of the latter is how the various attempts to compile C++ into Javascript were at best a nerdy oddity – until WebAssembly suddenly showed up as a Web platform primitive. Nerdy oddities quickly turned into essential tools of the emergent WASM stack.

Putting in sweat and tears around a soft asymptote usually brings more sweat and tears. But this investment might still be worth it if we have an intuition that we’ll hit the jackpot when the underlying layer changes again.

Having a keen intuition of how the asymptote will shift becomes important with soft asymptotes. When building around a soft asymptote, the trick is to look ahead to where it will shift, rather than grounding in its current state. We still might lose our investment if we guess the “where” wrong, but we’ll definitely lose it if we assume the asymptote won’t shift.

To bring this all together, here’s a recipe for mapping opportunities in a given problem space:

  • Orient yourself. Does the problem space look like layers? Try sketching out the layer that’s below you (“What are the tools and services that you’re planning to consume? Who are the vendors in your value chain?”), the layer where you want to build something, and the layer above where your future customers are.
  • Make a few guesses about the possible asymptotes. Talk to peers who are working in or around your chosen layer. Identify areas that appear to exhibit the diminishing returns dynamic. What part of the lower layer is in demand, but keeps bringing unsatisfying results? Map out those guesses into the landscape of asymptotes.
  • Evaluate firmness/softness of each asymptote. For firm asymptotes, estimate the amount of patience, grit, and commitment that will be needed for the long-term optimization of the niche. For soft asymptotes, see if you have any intuitions on when and how the next breakthrough will occur. Decide if this intuition is strong enough to warrant investment. Aim for the next position of the asymptote, not the current one.

At the very least, the output of this recipe can serve as fodder for a productive conversation about the potential problems we could collectively throw ourselves against.


Porcelains

My friend Dion asked me to write this down. It’s a neat little pattern that I just recently uncovered, and it’s been delighting me for the last couple of days. I named it “porcelains”, partially as an homage to spiritually similar git porcelains, partially because I just love the darned word. Porcelains! ✨ So sparkly.

The pattern goes like this. When we build our own cool thing on top of an existing developer surface, we nearly always do the wrapping thing: we take the layer that we’re building on top and wrap our code around it. In doing so, we immediately create another, higher layer. Now, the consumers of our thing are one layer up from the layer from which we started. This wrapping move is very intuitive and something that I used to do without thinking.

  // my API which wraps over the underlying layer.
  const callMyCoolService = async (payload) => {
    const myCoolServiceUrl = "";
    return await (
      // the underlying layer that I wrap: `fetch`
      await fetch(myCoolServiceUrl, {
        method: "POST",
        body: JSON.stringify(payload),
      })
    ).json();
  };

  // ...
  // at the consuming call site:
  const result = await callMyCoolService({ foo: "bar" });

However, as a result of creating this layer, I now become responsible for a bunch of things. First, I need to ensure that the layer doesn’t have too much opinion and doesn’t accrue its cost for developers. Second, I need to ensure that the layer doesn’t have gaps. Third, I need to carefully navigate the cheesecake or baklava tension and be cognizant of the layer thickness. All of a sudden, I am burdened with all of the concerns of the layer maintainer.

It’s alright if that’s what I am setting out to do. But if I just want to add some utility to an existing layer, this feels like way too much. How might we lower this burden?

This is where porcelains come in. The porcelain pattern refers to only adding code to supplement the lower layer functionality, rather than wrapping it in a new layer. It’s kind of like – instead of adding new plumbing, put a purpose-designed porcelain fixture next to it.

Consider the code snippet above. The fetch API is a pretty comprehensive and – let’s admit it – elegantly designed API. It comes with all kinds of bells and whistles, from signaling to streaming support. So why wrap it?

What if instead, we write our code like this:

  // my API which only supplies a well-formatted Request.
  const myCoolServiceRequest = (payload) =>
    new Request("", {
      method: "POST",
      body: JSON.stringify(payload),
    });

  // ...
  // at the consuming call site:
  const result = await (
    await fetch(myCoolServiceRequest({ foo: "bar" }))
  ).json();

Sure, the call site is a bit more verbose, but check this out: it is now very clear which underlying API is being used and how. There is no doubt that fetch is being used. And our linter will tell us if we’re using it improperly.

We have more flexibility in how the results of the API call are consumed. For example, if I don’t actually want to parse the text of the response (like, if I just want to turn around and send it along to another endpoint), I don’t have to parse it at all.

Instead of adding a new layer of plumbing, we just installed a porcelain that makes it more shiny for a particular use case.

Because they don’t call into the lower layer, porcelains are a lot more testable. The snippet above is very easy to interrogate for validity, without having to mock/fake the server endpoint. And we know that fetch will do its job well (we’re all in big trouble otherwise).

There’s also a really fun mix-and-match quality to porcelain. For instance, if I want to add support for streaming responses to my service, I don’t need to create a separate endpoint or have tortured optional arguments. I just roll out a different porcelain:

  // Same porcelain as above.
  const myCoolServiceRequest = (payload) =>
    new Request("", {
      method: "POST",
      body: JSON.stringify(payload),
    });

  // New streaming porcelain.
  class MyServiceStreamer extends TransformStream {
    // TODO: Implement this porcelain.
  }

  // ...
  // at the consuming call site:
  const result = (
    await fetch(myCoolServiceRequest({ foo: "bar", streaming: true }))
  ).body.pipeThrough(new MyServiceStreamer());

  for await (const chunk of result) {
    // ... do something with each streamed chunk.
  }

I am using all of the standard Fetch API plumbing – except with my shiny porcelains, they are now specialized to my needs.

The biggest con of the porcelain pattern is that the plumbing is now exposed: all the bits that we typically tuck so neatly under succinct and elegant API call signatures are kind of hanging out.

This might put some API designers off. I completely understand. I’ve been of the same persuasion for a while. It’s just that I’ve seen the users of my simple APIs spend a bunch of time prying those beautiful covers and tiles open just to get to do something I didn’t expect them to do. So maybe exposed plumbing is a feature, not a bug?

The developer funnel

If I ever chatted with you about developer experience in person, I’ve probably drawn the developer funnel for you on the whiteboard. For some reason, I always draw it as a funnel — but any power-law visualization can suffice. As an aside, I do miss the whiteboards. 

At the bottom of the funnel, there are a rare few who know how to speak to computers as directly as humans can. Their perspective on writing data to storage involves programming a controller. As we travel a bit higher, we’ll find a somewhat larger number of peeps who write hardcore code (usually C/C++, and more recently, Rust) that powers operating systems. Higher still, the power law kicks into high gear: there are significantly more people with each click. Developers who write system frameworks are vastly outnumbered by developers who consume them, and there are orders of magnitude more of those who write Javascript. They are still entirely eclipsed by the number of those who create content.

With each leap in numbers, something else happens — the amount of power passed upward diminishes. Each new layer of abstraction aims to reduce the underlying complexity of computing and in doing so, also collapses the number of choices available to developers who work with that layer. Try to follow the rabbit trail of setting a cookie value, a one-liner in Javascript — and a massive amount of work in the browser that goes into that. Reducing complexity makes the overall mental model of computing simpler and easier to learn, which helps to explain the growing number of developers.

This funnel can be a helpful framing for a conversation about desired qualities of the API. Do we want to have rapid growth? Probably want to be aiming somewhere higher in the funnel, designing convenient, harder to mess up APIs. Want to introduce powerful APIs and not worry about the sharper edges? Expect only a small number of consumers who will understand how to use them effectively.

The Relationship Polarity

I’ve been feeling a little stuck in my progress with the Four Needs Framework, and one thing I am trying now is reframing my assumptions about the very basics of the framework. Here’s one such exploration. It comes from the recent thinking I’ve done around boundaries and relationships, a sort of continuation of the Puzzle Me metaphor. In part, the insight here comes from ideas in César Hidalgo’s Why Information Grows and Lisa Feldman Barrett’s How Emotions Are Made.

My conscious experience is akin to navigating a dense, messy network of concepts. I construct reality by creating and organizing these concepts. One way in which I organize concepts is by defining relationships between them and me. The nature of the relationship can be viewed as a continuum of judgements I can make about whether a concept is me or not me. For concepts that are closer to “me” in this continuum, I see my relationship with them more like a connection.

For concepts that are closer to “not me,” I judge the relationship with them as more like a boundary.

Sometimes the choice is clearly “me”, like my Self. Sometimes it’s nearly perfectly “me”, like my nose. Sometimes the choice is clearly not me. For example,  last time I touched a hot stove I suddenly became informed that it definitely was not me. Sometimes, the position is somewhere in the middle. My bike is not me, since it’s not actually my body part, but it can feel like an extension of me on a Sunday ride.

Think of this continuum of judgements as a continuum between boundary and connection. The more I feel that the concept is “me,” the closer it is to the “connection” end of the continuum. The more I feel that the concept is “not me,” the farther it sits toward the “boundary” end of the continuum.

In this framing, the choices I make about my relationship with concepts are points on this continuum. As I interact with concepts, I define my relationship with them by picking the spot on this continuum based on my interaction experiences.

This is where the next turn in this story comes. It seems that some concepts will naturally settle down into one spot and stay there. “Hot stove” will stick far toward the “boundary” end of the continuum. “My bike” will be closer to “connection.”  On the other hand, some concepts will resist this simple categorization. Depending on the interaction experience, they might appear as one of several points on the continuum. They might appear as a range, or maybe even a fuzzy cloud that covers part or the entire continuum.

The concepts that settle down into steady spots become part of my environment: they represent things that I assume to be there. They are my reference points. Things like ground, gravity, water, and so on take very little effort to acknowledge and rely on, because our brains evolved to operate on these concepts exceptionally efficiently.

The concepts whose position on the continuum is less settled are more expensive for the human brain to interact with. Because our brains are predictive devices, they will struggle to make accurate predictions. By expending extra energy, our brains will attempt to “make sense” of these concepts. A successful outcome of this sense-making process is the emergence of new concepts. Using the hot stove example from above, the brain might split the seemingly-binary concept of a “stove that sometimes hurts and sometimes doesn’t” into “hot stove” and “cold stove.” This new conceptual model is more subtle and allows for better prediction. It is also interesting to note how concept-splitting retains transitive relationships (“hot stove” is still a “stove”) and seems to form a relational network for stable concepts.

There’s also a possibility of a stalemate in this seemingly mechanical game of concept-splitting: a relationship polarity. A relationship polarity occurs when the concept appears to resist being split into a connected network of more stable concepts. 

Sometimes I feel cranky, and sometimes I feel happy. Is it because I am hungry? Sometimes. Is it because of the weather? Sometimes. Relationship polarities are even more energy-consuming, because they produce this continuous churn within the relational network of concepts. My mind creates a model using one set of concepts, then the new experiences disconfirm it, the mind breaks it down, and creates another model, and so on. There’s something here around affect as well: this churn will likely feel uncomfortable as the interoceptive network issues warnings about energy depletion. In terms of emotions, this might manifest as concepts of “dread”, “stress”,  “anxiety”, etc.

What was most curious for me is how a relationship polarity arises naturally as a result of two parties interacting. The key insight here is in adaptation being a source of the seeming resistance. As both you and I attempt to construct conceptual models of each other, we adjust our future interactions according to our models. In doing so, we create more opportunities for disconfirming experiences to arise, spurring the concept churn. The two adaptive parties do not necessarily need to be distinct individuals. As I learn more about myself, I change how I model myself, and in doing so, change how I learn about myself.

Adult Development Primer

Over Thanksgiving break, I wrote another one of those flip-through decks. It’s been a labor of love, and I hope you find it useful in your journey. This one attempts to tell one coherent story about Adult Development Theory, based on the work of a few folks whose contributions to the field I found valuable.

Here’s a short link for it:

The Tree of Self

Diving into the Internal Family Systems (IFS) concepts and methodology led to these four insights at the intersection of the Four Needs Framework (4NF), IFS, and Adult Development Theory (ADT). 

Shearing Forces Lead to Multiplicity

At the core of the first insight is that the ever-present tension of the Four Needs and the resulting maelstrom of Existential Anxieties acts as a powerful shearing force. I have been wondering before about the effect of the tug and pull of the Anxieties, and how there seems to be this rapid switching of Anxieties from one polar opposite to another. The framing of the internal family of Self, surrounded by Protectors and Exiles, offered by Richard Schwartz provided a clarifying perspective: this process of rapid switching can be viewed as different, distinct parts within me taking the seat of consciousness one at a time. This idea hints at the notion that somehow, there’s an entire population of parts within me that emerged from my life experience.

This is where it clicked: these parts are the outcome of my mind’s attempts to do its best to resolve the tension. Unable to bring coherence, especially at the earlier stages of my development, my mind develops a split, creating two distinct parts. Each part embodies the respective Anxieties within the tension. Whenever the pull of Existential Anxieties proves impossible to resolve, the split repeats. Over time, the number of parts grows, populating the Internal Family System. Thus, the Fundamental Needs act as the part-splitting force, leading to the multiplicity of agents within the System of Self that I was vaguely sensing last year. Unlike my early guesses back then, Dr. Schwartz provided a clear path to explore these parts and show that their formation does not correspond one-to-one to each Existential Anxiety (née Fear), but rather forms a unique sub-personality.

Branches Form a Tree

Taking this part-splitting idea further, it is evident that the splits form branches. The sub-personalities live the same story (that is, share identical models and sense-making capacity) until the split, but continue separate developmental journeys thereafter. When I have a conversation with a part – especially an Exile! – it is often a younger version of me, stuck in some past traumatic event. Unlike the branch that developed into a trunk to grow further, this branch remained undeveloped.

This is where the second insight arrived: these parts, these sub-personalities, branch by branch, form a Tree of Self. Some branches stop growing. Some turn into trunks to sprout new branches. This tree is a whole that is both coherent and disjoint. It is coherent, because every two parts share the same beginning of their life story. It is disjoint, because at some point, the story unfolds differently for each of the two. 

In this way, the Tree of Self is not like a tribe or a family. A tribe comes together as separate people deciding to become whole. When a family is formed, an offspring does not share the story of their parents: the story is conveyed through words, but not lived experience. In the Tree of Self, all parts are rooted in the same life story.

This same-rootedness is why the IFS practice appears so effective: at the core, all my parts recognize themselves in each other. They are innately connected in the ultimate kinship, and want to be whole. Unlike a tribe or a family, there is no “before” where the parts existed separately. The story began with oneness, and the part-splitting is just the middle of their hero’s journey. The happy ending that every part yearns for is togetherness.

The Tree Evolves as it Grows

This tree-like arrangement led to the third insight. The Tree of Self grows across developmental stages. Continuing the tree metaphor, each transformation to the next stage is a material change. New growth becomes more capable of managing the shearing forces. At earlier stages, the strategy for managing these forces might be the part-splitting. The later stages bring more resilience, leading to fewer splits. I am picturing this as the tree growing upward through the layers of atmosphere, drawing on the idea of “vertical development.” Each layer represents a developmental stage, starting with the earlier stages closer to the ground and later stages stacking on top.

Some branches reach into the later stages, and some are stuck at the earlier ones. Since each branch represents a sub-personality, each may occupy the seat of consciousness. As a result, how I show up may appear as a scattering of selves across multiple stages of development. The ADT phenomenon of the fallback effect speaks to the idea that, in some situations, the lower-to-the-ground branches are more likely to claim the seat.

Self-work as the Journey toward Wholeness

Recognizing this multiplicity and diversity across stages has been very clarifying and produced the fourth insight. The aim of self-work might not be about learning how to reach the higher stages faster, willing my branches to reach higher and higher. This process of growth seems to happen regardless of whether I want it or not. Instead, self-work might be about nurturing the Tree of Self to wholeness. The Tree of Self is whole when each branch has been examined and given attention, support, and room to grow. IFS session transcripts often talk about how a healed Exile rapidly matures, as if catching up. My guess is that this moment, and the feeling of closure and quiet satisfaction that accompanies it, marks an increase in the wholeness of the Tree of Self. The previously-shunned branch soars to join the rest of its kin.

There’s something very peaceful about this framing. Self-work is revealed as gardening, an infinite game of nourishing all branches of the Tree of Self, and helping it become whole even as each branch continues to grow.

Transformational Learning

Building on the ideas in The Suffering of Expectations, I want to look more closely at the expectation gradients. These are predictions of future experiences, and they can have negative or positive values. Negative values indicate that I expect a future that is worse for me than it is now, and positive values are the opposite: I expect the future to be better than the present. The steeper the slope, the more dramatic the future outlook. Worse outcomes look catastrophic, and better outcomes promise pure bliss. A gentler slope leads to a slightly worse or slightly better future. The way I like to imagine it is gauging how quickly a murky lake gets deep as I wade into it.

Expectation gradients are shaped by my previous experiences. My mind subconsciously sifts all of my past experiences, finds–or synthesizes!–the best match, and this match then becomes the expectation gradient. So, if I had a really terrible experience and the situation appears to match the beginning of that experience, I will feel a steep negative expectation gradient — whoa, that lake bed is dropping away fast! Conversely, if the present appears to match the start of a mild or pleasant experience, I will feel a gentle or upward-sloping gradient.

Sifting through all past experiences can be expensive, and I am not blessed with a source of infinite energy, so there’s an optimization process at play that relies on prediction errors. Each prediction is compared with the actual outcome, and a prediction error is computed. Prediction errors are a signal to organize my past experiences. Lower prediction errors reinforce the value of the experience used to make the prediction. Higher errors weaken that value. This continuous process fine-tunes how my mind makes predictions. Higher-valued past experiences are looked at first, as they are more likely to repeat. The experiences with the lower value are gently pushed to the bottom. This process of ranking allows my mind to work more efficiently: skim the top hits, and ignore the rest. Energy saved! Another name for this optimization process is informational learning: every bit of new information is incorporated to improve my ability to make accurate predictions.
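For the computationally inclined, here is a minimal sketch of what this informational-learning loop could look like. All names and numbers are my own illustration, not anything canonical: experiences carry a value that grows when their prediction matches the outcome, and matching only skims the top-ranked experiences.

```python
# Illustrative sketch of informational learning: experiences are ranked
# by how well they have predicted outcomes, and matching skims only the
# top hits. Every name here is hypothetical, chosen to mirror the prose.

class Experience:
    def __init__(self, situation, slope):
        self.situation = situation  # cue the present is matched against
        self.slope = slope          # expectation gradient this experience predicts
        self.value = 1.0            # how trusted this experience is

def predict(experiences, situation, top_k=3):
    """Skim only the highest-valued experiences for a match."""
    ranked = sorted(experiences, key=lambda e: e.value, reverse=True)
    for e in ranked[:top_k]:
        if e.situation == situation:
            return e
    return None

def learn(experience, actual_slope, rate=0.5):
    """Reinforce or weaken an experience based on its prediction error."""
    error = abs(actual_slope - experience.slope)
    experience.value += rate * (1.0 - error)  # low error -> value grows

memory = [Experience("wading into the lake", slope=-0.1)]
match = predict(memory, "wading into the lake")
learn(match, actual_slope=-0.1)  # an accurate prediction reinforces the memory
```

The energy saving lives in `top_k`: instead of sifting the whole of memory, the mind consults only the few experiences that have earned the most trust.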

At this point, I want to introduce the concept of prediction confidence. I continue to have experiences, and they fuel the learning. Ideally, this process results in effective predictions: a clear winner of a prediction for every situation. A less comfortable situation happens when there does not seem to be a clear winner. Here, matching past experiences to the present produces not one, but multiple predictions that vary in their slope. Expanding the wading-into-lake analogy, it feels like even though I took the same exact path, the lake bed had a different shape at times. Most times, it had a nice and gentle slope, but every so often, the same exact bed somehow felt steeper. I swear, it’s like the lake bed had shifted! Now that’s a puzzler.

To reflect on the nature of prediction confidence, consider the framing of the complexity of the environment. If the environment is simple, then my experiential journey quickly produces a perfect map of this environment, and I am able to make exact, 100%-confidence predictions. If this then that — bam! In a simple environment, the list of my experiences wading into the lake has only one item, because it repeats every time with clockwork precision.

The more complex the environment, the more fuzzy the prediction matching. If the environment is highly complex, I may find myself in a situation where I have near-zero confidence predictions: every situation might as well be brand new, because I can’t seem to find a match that isn’t the whole set of my experiences. Walking into the lake is a total surprise. Each time, I find a seemingly differently-shaped lake bed. What the heck is going on?
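One way to picture prediction confidence in code — a hedged sketch, with the `1/(1 + spread)` form and all names being my own illustrative choices: if matching the present yields several candidate gradients, confidence can be read off the spread of their slopes. Tight agreement means high confidence; wide disagreement means the environment looks complex.

```python
from statistics import pstdev

def prediction_confidence(matched_slopes):
    """Confidence shrinks as the matched predictions disagree more.

    Unanimous matches give confidence 1.0; a wide spread of slopes
    drives it toward 0. The 1/(1 + spread) form is one illustrative
    choice among many, not anything canonical.
    """
    if not matched_slopes:
        return 0.0  # brand-new situation: no basis for a prediction
    spread = pstdev(matched_slopes)  # population standard deviation
    return 1.0 / (1.0 + spread)

# Simple environment: the lake bed repeats with clockwork precision.
simple = prediction_confidence([-0.2, -0.2, -0.2])

# Complex environment: the "same" lake bed feels different every time.
complex_ = prediction_confidence([-0.9, 0.1, 0.6, -0.3])
```

In the simple case the slopes agree exactly and confidence is 1.0; in the complex case the spread pulls confidence well down, which is the near-zero-confidence regime the paragraph above describes.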

In such an environment, energy-saving optimizations no longer seem to work and, if anything, hinder progress. It’s clear that something is amiss, but the existing machinery just keeps chugging away trying to build that stack rank — and failing. How can the stupid lake bed be so different… Every. Fricking. Time?!

This crisis of confidence is an indicator that it’s time for change, for another kind of learning. Unlike informational learning, which is all about improving my ability to make predictions within a situation, the process of transformational learning is about uncovering a different way to see the situation. The outcome of transformational learning is a profound reevaluation of how I perceive the environment. It is by definition mind-boggling. Transformational learning feels like discovering that all this time, when I was feeling the lake bed shift under my feet, I was actually only perceiving my own movement along one axis. I was assuming a two-dimensional space, unaware that there’s another dimension! Whoa. So the lake bed isn’t moving. Instead, I wasn’t accounting for my own movements across the shore. If I incorporate the “lake shore” axis, all of these past experiences suddenly snap into a static, three-dimensional map of a lake bed.

Transformational learning is a rearrangement of my past experiences into a new structure, a new way to organize them and produce a whole different set of predictions. Just as necessarily, it involves letting go of the old way and accepting the uncertainty that comes with that. A three-dimensional map of the lake bed represents the environment more usefully, but it is also more complex, allowing for more degrees of freedom and requiring more energy to operate. Another long journey of informational learning awaits to optimize my prediction-making machinery and turn this novel perspective into a familiar surrounding — until the next transformation time.

Whenever I get that sense of the shape-shifting lake bed, in these “what the heck just happened, this is wrong!” moments, I take comfort in the notion that transformational learning awaits. Though it might not offer immediate insight right then and there, this movement of the surface, a seemingly exogenous change, is a signal. It tells me that I am approaching yet another edge of my current understanding of the environment, and a new perspective beckons to be revealed.


There is no past or future. There are only stories of the past and stories of the future, and both kinds are mutable. We resolve the pervasive ontological uncertainty by matching the story we know (beginning, middle, end) and placing its middle in the “now.” This allows us to imagine that we understand why the past happened in a way that led to the present, and what the future brings. As long as the present follows the arc of this story, we are content. We’re in control of the narrative.

When inevitably, the infinite complexity of the world manifests itself in bucking our chosen story, we encounter decoherence: our expectations of the past that was “now” just a moment ago no longer match what we experience–a prediction error!–and the future stops looking as certain, causing us to pattern-match to another story we know that might fit. Depending on the cache of stories we draw from, these stories might be cataclysmic (aversion) or blissful (craving), guiding our expectation gradients. As we latch onto that new story, we repeat the cycle: the story’s beginning reshapes our past, the middle constructs the present, and the end predicts the future. This metamorphosis happens quite seamlessly and magically in our minds. The new story snaps into place in a way that neatly explains or just plain forgets the old one.

But in that moment of decoherence, in that struggle to regain control over the narrative, we experience suffering. Our sense-making revolts, unable to cope with the blood-curdling contact with uncertainty, grasping to regain that elusive handle on the narrative. A global pandemic, an unthinkable tragedy, or even just an unexpected act of someone you care about. Each holds that decoherence potential, the hidden token of suffering.

So we strive to reduce this suffering. We escape, trying to hide in environments where only familiar stories could play out–or so we believe. We try to dial down that prediction error, denying the markers of decoherence, continuing to hold on to the chosen story for as long as we can. We rebel and rise up, hoping to shape the world into our stories. We accommodate, looking to find our places in the stories of others. We hone our sense-making to produce the most accurate predictions. We hoard stories lest we be faced with the one we don’t know.

And yet, we continue to suffer. Somehow, all of these efforts backfire, bringing more suffering, a sense of a vicious cycle at play. As we white-knuckle our way through life, exhausted and beaten down, the paradox of decoherence is revealed to us. Decoherence is the glimpse into the nature of reality. The richer our models, our attempts to capture the complexity of reality, the more they will look like decoherence. The ultimate insight of understanding is that there is no insight and the understanding itself is a neat trick that our minds invented to cope with that fact. The paradox of decoherence is that the most accurate, crystal clear representation of reality is just as incomprehensible as the reality itself. The race toward clarity is the race toward decoherence.

… And I catch myself trying to hold on, trying to come up with a bigger story that includes decoherence being part of something bigger, something spiritual, something God-like. I fall into the mysticism of the Unknowable, hoping that this would do as an okay-ish substitute for true letting go. But deep in my heart, I know that it won’t. Decoherence just is, and the bigger picture is decoherence.

Somatic Signals: How I use the Four Needs Framework

This post marks a second anniversary of my self-work journey. Wow. 2020 has been kind of impossible, and I am grateful to have held on to my self-work routine and even to have made more progress.

When I first started doing self-work, I had this idea of identifying and documenting somatic signals that I experienced. Throughout the day, I would try to capture the concrete physical sensations of that moment. Instead of narrating my state as “I am stressed out” or “I am elated,” I would try to focus on what my body was experiencing: “A knot in my upper shoulders,” or “Very tense muscles around my mouth,” or “Warm pressure, like sunshine, at the top of my head.” My hope was that by cataloging these experiences, I could create a sort of topographical map of emotions and be able to orient myself: “aha, I am experiencing <somatic signal>, therefore, I must be feeling <emotion>.” I reasoned that once so oriented, I could step out of the context of the emotion onto the riverbank, and then somehow find my way to inner peace.

This exercise yielded surprising results. The somatic signal catalog did indeed produce a fairly stable set of signals that seemed to cluster into a few buckets. I also recognized that these signals are squishy: they tend to move around and are not always consistent or specific in their manifestation. Looking back, there appears to have been a progression in my understanding of what the heck the somatic signals were. My first notes on the somatic signals read more like role descriptions–“A Silicon Valley Guy”, “Forgot-my-homework Guy”, etc. Then, I started classifying them more as behaviors: “relitigating conversations,” “fret-scanning,” etc. My next big shift was toward trying to generalize these signals as markers of emotions: shame, anxiety, depression, anger… Eventually, I settled down on the descriptions that you’ve seen above. These seemed to be the least context-dependent and most useful as topographical markers on my map of Self.

In parallel, I’ve made some progress on developing the Four Needs Framework. The framework started as notes on similarities and differences of mental models in various books, and my attempts to apply these mental models during Archaeology of Self. At some point late last year, I realized that the framework and somatic signals are pointing at the same thing: they are both telling the story of the tension of Existential Anxieties. This terminology and the narrative didn’t solidify until July, but it’s been on my mind most of this year. As an aside: I find it fascinating how the understanding of a concept comes so much earlier than my capacity to tell a story about it. 

Anyhow, I started recognizing that there’s a distinct cluster of signals that I experience for each Existential Anxiety. My catalog, full of disparate records at first, collapsed to four groupings. To describe these groupings, I’ll also use the typical Western culture emotions that I associated with the experiences.

Not-Belonging feels like a sense of shame, of not being enough. It’s the thing that Brené Brown talks about in her books. The strongest physical markers for me are the onrush of blood to my cheeks or ears and a heavy weight in the stomach.

Not-Agency feels like irritability or a desire to be aloof. It comes with a weird feeling of suspended reality, like an out-of-body experience: tense nostrils, tension in the upper part of the neck that tends to lift the chin upward, and sometimes a sense of cold around my face with heat concentrating in my chest.

Not-Safety feels like angsty worry, with the somatic signals of tension in the lower part of the neck, around the shoulders, jaws, chin, and mouth, and an increased heart rate.

Not-Purpose feels like depression and existential dread, and the signals usually feel like a heavy pressure on my chest and a loss of muscle tone in my face and shoulders.

Viewed within the quadrants of the Four Needs Framework, the combinations of these markers create a rich canvas of emotions. For example, I recognized that the Not-Agency + Not-Safety quadrant of the coordinate space is occupied by anger, blame, defensiveness, and fuming. Whenever I say that I am “mad at someone,” the somatic signals help me identify that I am in that quadrant. I am now able to place myself into a quadrant fairly quickly based on the somatic signals I perceive. In the moment, I imagine pressing the pause button to introspect. “Whoa, I am suddenly feeling very tense. What’s happening? This feels like Not-Safety. Where is this coming from?” Asking these questions–gently and without judgement–usually yields insights about my state of expectations, or specific injured identities.

What helps the most, however, is the exploration itself. This process of inquiry helps me sit with an Anxiety, embrace it and be next to it, rather than be ridden by it. By focusing on a somatic signal, I can go “aha, I hear you, Not-Purpose. Yep, I see the giant negative delta prediction error, and yep, it sucks. I can feel the face muscles forming into the frown … there they go. I hear you, Not-Purpose, and I feel you.” Somehow, and I am not yet sure how, the mere act of observing somatic signals and accepting them opens up this tiny space of serenity, where the next action is not another reaction, but something altogether different.