Letting My Purpose Find Me

In my own clock, I spoke metaphorically about learning to listen to my own Self, learning to discern what I want from what I believe I am supposed to want. As this process continues, I am becoming more aware of the distinction between the two. In exploring this distinction, an interesting question arises: what is it that I actually want?

I am realizing that my desire to answer this question is animated by a force much like gravity: subtle, yet unyielding. Like gravity, it manifests whether I want it to or not. Like gravity, I can only pretend to ignore it. I’ve come to see it as the fundamental need for Fulfillment, the need to do something more than just survive and live out my life.

I’ve also begun recognizing that this need for Fulfillment, this force, is directed. There’s a definite sense of “more of this” and “less of that” when I make my own choices. And when there’s a direction, there’s a destination. There is a Purpose, my own sense of meaning that is clearly there, present within me.

I am neither used to looking for it, nor is it easy to see. With other forces pulling in their own directions, it’s easy for me to get distracted and disoriented. Did I act in a certain way because I was stumbling toward my Purpose, or was it because I was trying to protect myself in some way?

I’ve found that the direction is most easily seen when I sit down at the end of the day and spend a few minutes reflecting on what is happening within me, and how the day felt as a whole. This does not need to take a long time: I usually just set my fingertips to the keyboard and let them go. More often than not, the exhaustion and angst of the day give way to curiosity and wonder. That’s when I start seeing a bit more. I start tracing outlines of what is meaningful to me, what’s important, what’s purposeful.

It’s almost like my Purpose is always there, and I am just blocking it with the whirlwind of the mundane. In the moments when I can stop and let go, I am able to let it find me.

A Classroom for Humanity

Why is it that some people are unable to see a different perspective? Why is it that when I try to introduce them to that perspective, I get an angry, hateful response, and often trigger them to hold on even more firmly to their current perspective? What is happening here?

To gain more insight into these questions, I’ve been looking into constructive-developmental theories, and they open up some very intriguing possibilities.

They posit that we all have a meaning-making device, something that helps us take inputs from the external world and turn them into meaning, constructing a coherent reality in our minds. Through grounded theory methodology, they’ve found that our meaning-making devices are in a process of continuous evolution: rapidly and massively transformative at a young age, eventually slowing down, the transformations becoming rarer and farther between for adults. Unlike children, who upgrade their meaning-making device every few years, we grown-ups are typically stuck with the one we acquired as young adults.

They also identified distinct stages of these transformations, the plateaus where people settle. According to the research, the current center of gravity for adults in our world seems to be the Socialized Mind stage. (As an aside, different theories assign different names to the stages; I will use the ones from Kegan’s work.) When my meaning-making device is at the Socialized Mind plateau, my sense of rightness and wrongness is defined by those around me. I take my principles and values from my leaders, and good and bad are what everyone in my “tribe” says they are. I tend to strongly identify with my tribe, and thus experience the severe pain of injured identity when I perceive that this identity is threatened. When I encounter ideas or perspectives that aren’t aligned with those of my tribe, I unconsciously feel the fear of the abyss and react to them as a threat.

It is not a huge leap from here to see that the frictionless global interconnectedness of the modern world would leave a Socialized Mind reeling, feeling as if it’s under constant attack, facing the overwhelming, unbearable threat of its entire meaning coming undone, made incoherent. If I am constructing my reality using a Socialized Mind, I see the world coming apart, with The End fast approaching. I am not having a good time; I am prone to polarization, further entrenchment in the ideology of my tribe, and blame, pointing fingers at others. And in doing so, along with others in the same state, I am acting out the fulfillment of my own prophecy. Trapped in this vicious cycle, we appear to be doomed.

The constructive-developmentalists suggest another perspective. They reframe the bleak picture as a developmental environment, a classroom of sorts. This struggle of the Socialized Mind is actually a challenge, a tough assignment presented to humanity in that classroom. The goal of this assignment is transformational learning, the evolution of our meaning-making devices to the next plateau, the Self-authoring Mind. This next plateau offers me the capacity to hold my own perspective, seeing it as one of many, no longer viewing others’ differences as threats to my core identity. Thus, graduation looks like shifting humanity’s center of gravity toward the Self-authoring Mind.

This learning continues. A graduation is just a milestone in this classroom of humanity. The next plateau over, the Self-transforming Mind, is even better equipped to thrive in the modern world: with this meaning-making device, I learn to appreciate and cherish the multitude of perspectives, holding mine lightly.

Just like in any effective classroom, the challenge by itself is not sufficient to foster transformational learning. Support, the sense of safety and grounding, is a fundamental part of the process. My best teachers weren’t those who took it easy on me. They were those who, while presenting a seemingly impossible challenge, made it clear that they were with me, empathetic and steady, supporting me as I flailed and struggled. They weren’t paternalistic or over-protective, but they coached me, and guided me back to the path when I was going in circles. They created an environment where I learned to thrive.

This framing creates a fairly clear, and frankly more positive, way to view humanity’s current predicament. Shifting away from apocalyptic lamentations, I can now focus on exploring this question: what would it take to create an effective developmental learning environment for humanity?

Fear of the Abyss

I’ve been thinking about how I experience the fear of the unknown in the context of constructive-developmental theory: the fear of what’s beyond the edge of my current capacity to make meaning, the fear of the abyss.

This fear pops up when I am experiencing something that is incongruent with the reality I construct, something that I can’t reconcile and make coherent according to my current understanding of the world. I may experience it as something that is grabbing at me, trying to “get me”, something to be feared.

The subject-object shift in constructive-developmental theory refers to a developmental transformation in how I make meaning, where the invisible is made visible. I see this shift as an improvement in my capacity to make meaning, to see an ever-larger context. With each jump to a larger context, things that were driving me, invisibly influencing me, being part of me or of the unknown, become things that I can see and hold apart, things about which I can reason.

For example, let’s suppose that I am subject to some principle- and value-making device. That is, I accept my principles and values as “things that are”, rather than something I can author. When I see that someone else has different principles and values, I experience the fear of the abyss: how can it be that someone else has different “things that are”? In that moment, I may also be subject to the fear itself, and thus subject to cognitive distortions (I am guessing those are an outcome of being subject to fear), experiencing anxiety, shame, depression, anger, and so on.

Once my meaning-making capacity grows to see a principle- and value-authoring device in myself, I no longer see principles and values as “things that are”, but rather as things that people have. I can see how they can differ across people, and the fear of the abyss no longer overcomes me when I encounter someone with a different set of principles and values. With the subject-object shift, the fear of the abyss has moved on to the next frontier of my meaning-making.

Archaeology of Self

Every experience I have is a new lesson in life, a new addition to things that I’ve already learned. This process of learning happens whether I am aware of it or not. With every lesson, I also learn how to learn. That is, I continuously increase my capacity to learn.

Depending on the depth of my capacity to learn, the nature of the lesson shifts. When I was a child, my learning capacity was still nascent, and the lessons I learned were simple. Things were high-contrast: bad or good, sad or happy, dangerous or safe.

As I grew up, my capacity to learn deepened. I started seeing a more subtle and complex world, and realizing that this growing complexity was not a change in the world itself, but rather a change in how I make meaning of it. As my capacity to learn deepened further, new dimensions of complexity opened up, with new opportunities to learn.

Thus far, the learning happened in the context of my previous learning. What I learned in the past shaped how I would learn in the future. The stark nature of the early childhood lessons created sharp edges in the foundation of my continuous construction of meaning.

I am now recognizing that these sharp edges limit the extent of my capacity to learn. They are the unseen forces that collapse my choices and blind me to alternatives, especially in challenging situations. David Burns called them “cognitive distortions”. James Hollis gave them the more dramatic name of “woundings”. Brené Brown talks of armor. All of these are different takes on the same thing: the natural outcome of learning while my learning capacity is itself a work in progress.

Thus, the deepening of my capacity to learn now includes re-examining the lessons of the past. Through careful archaeology of Self, I am challenged to understand the nature of my assumptions and beliefs, and the context of meaning-making in which I learned those lessons, and to learn different lessons with my current understanding of the world.

Going Forward Is Leaving Past Behind

Greetings, hypothetical Web app developer. So I wrote this thing. It’s for your eyes only. Your non-Web developer buddy will find my ramblings mostly trivial and nonsensical. But you… you are in for a trip. I think. Ready?

Prologue

It’s hard to part with the past. Things we’ve done, stuff we’ve learned, the experiences we’ve had — they aren’t just mental artifacts. They shape us, define us. Yet, they are also the scars we carry, the cold-sweat nightmares that keep us awake. And thus continues our struggle between embracing and shedding what has been.

In this way, a platform is somewhat like us. If it only embraces its past, it can’t ever evolve. If it only looks into the future, it’s no longer a platform. Striking the balance is a delicate game.

Progress

In order to survive, a platform has to move forward. A tranquil platform is a dead platform.

In particular, the Web platform had been caught napping. It awoke startled, facing the mobile beast that’s eating the world and went: Oh shit.

Turns out, our platform has gotten a bit plump. All those bells and whistles are now just flab in the way of scampering limbs. It’s time to get lean or be lunch.

What’s worse, we aren’t even in the same league. While we’re still struggling to run without apoplexy, the other guy can fly and shoot lasers. We’re so screwed. Gotta get cranking on those lasers. And start losing weight.

Cost

Losing weight is hard work. As with anything that requires giving up habits, the way we steel ourselves and push through is by thinking of the cost of not changing.

For the platform, the obvious one is code size, which is really a proxy for the cost of complexity — the engineering and maintenance complexity, to be precise. Making a modern browser is an incredibly large task, and adding urgency further skews the triangle toward higher costs.

Then there’s this paradoxical-sounding thing:

The less often a feature is used, the more it costs.

This is the opportunity cost. The more complicated the system, the more confused the reasoning about the next steps becomes. At the limit, you can’t step forward at all — there’s always an artifact from your past in the way, be it the fun extra nested event loop, the thing you thought was cool back then, or the dead appendages you grew one day, just for the hell of it.

Here’s another way to put it (now with action figures!): You have a platform with features and users. Bob is a user of a feature. The cost of keeping this feature in the platform is evenly distributed among all users.

However, if Bob is the only user of the feature, something weird happens: all the users still pay the costs, but now they’re paying them to fund Bob’s habit.

As other users ask for new capabilities and polish, Bob’s feature slowly sinks to the bottom of priorities. A statistical wash, the code of the feature grows a beard and stops doing laundry. Bad smells and bugs creep in. With the rest of the code base moving on, the likelihood of a fire drill around this forgotten mess only goes up.

Time is finite and you spend non-zero time on every feature. There’s some feature work you’re not doing in order to keep this Bob-only feature alive.

At this point, all other users should be strongly motivated to make Bob stop using his feature. Bob’s dragging everyone down.
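
To put made-up numbers on it, here’s a back-of-the-napkin sketch. Every figure is invented for illustration; only the shape of the argument matters:

// back-of-the-napkin math with invented numbers
var maintenanceHoursPerWeek = 100;  // total budget for keeping features alive
var featureCount = 10;              // features in the platform
var userCount = 1000000;            // platform users

// roughly even spread: every feature eats the same slice of the budget
var hoursPerFeature = maintenanceHoursPerWeek / featureCount;  // 10

// a feature all users share: everyone gets value for those hours.
// a Bob-only feature: the same 10 hours/week produce value for 1 user,
// while the other 999999 fund it and forgo the new feature work
// those hours could have bought.
console.log(hoursPerFeature + ' hours/week spent on a feature used by 1 of ' + userCount);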

Budget

These are all pretty obvious thought experiments, I guess. Web platform engineers (aka browser builders) are a limited resource. For you, Web app developers, they are your home improvement budget.

Thus we arrive at the main point of this anecdote: how would you rather have this budget allocated?

The answer likely goes like this (pardon me for possibly putting words in your mouth):

I want you bastards to let me build things that are not terrible on mobile. You are moving too slowly and that’s highly annoying. My boss is telling me to build a native app, and I SWEAR I will, if you don’t start moving your goofy ass NOW. You know what? I am leaving now. Hear these footsteps?

I’m holding my breath, hoping you’re just faking those footsteps. And if you are (whew!), it seems fair to assume that you want most of your budget spent on making the browser leaner, meaner, and capable of flying while shooting lasers. Not on maintaining the old stuff. Which means that we need to get serious about deprecation. And being serious means we need data.

Data

In Blink land, we have a fairly comprehensive usage measuring system. Despite some limitations, it provides a reasonable approximation of how widely a feature is used.

Just knowing how widely a feature is used isn’t enough. There are definitely extra dimensions here. Despite their equally low usage, there’s a difference between a newly launched feature and a forgotten one. Similarly, something that’s rarely used could be so well-entrenched in some enterprise front-end somewhere that removing it will be met with tortured screams of anguish.

We also need to give you, web developers, clear indicators of our past mistakes. There are plenty of platform features that are frequently used, but that we platform peeps can’t look at without a frown. It seemed like a good idea at the time. Then reality happened. Communicating this frown is sometimes difficult, but necessary in order to look forward.

Action

Clearly, we have work to do. Arriving at a clear framework of deprecation principles will take trial, error, and likely yet more tears. But we know we need it. We’re working on it.

As for you, my dear web developer… I need your help.

Use feature metrics when planning your new work and refactoring. If you see a feature that’s falling behind in the statistics, think twice about using or keeping it. Talk to your friends and explain the danger of relying on old, rarely used things.
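
As a concrete illustration, here’s one way to treat a feature you suspect is on the deprecation track (showModalDialog is a real-world example of a rarely used one), with a fallback ready. This is a sketch, not a prescription:

// prefer a future-proof path; treat the low-usage feature as optional
function promptUser(url) {
  if (typeof window.showModalDialog === 'function') {
    // legacy path: works today, but the usage stats frown upon it
    return window.showModalDialog(url);
  }
  // fallback: a plain popup, to be wired up with postMessage
  // (asynchronous, so the caller must not expect a synchronous result)
  return window.open(url, 'prompt', 'width=400,height=300');
}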

Don’t be a bob. Don’t let your friends be bobs. Being a bob sucks. Not the actual person named “Bob”, of course. Being that Bob is totally awesome.

Wrap Your Head Around Gears Workers

Google I/O was a continuous, three-thousand-person mind meld, so I talked and listened to a lot of people last week. And more often than not, I discovered that Gears is still mostly perceived as some wand that you can somehow wave to make things go offline. Nobody quite knows how, but everyone is quite sure it’s magic.

It takes a bit of time to accept that Gears is not a bottled offlinifier for sites. It takes even more time to accept that it’s not even an end-user tool, or something that just shims between the browser and the server and somehow saves you from needing to rethink how you approach the Web to take it offline. The primitives offered by Gears are an enabling technology that gives you the capability to make completely new things happen on the Web, and it is your task, as the developer, to apply them to the problems specific to your Web application.

And it’s probably hardest to accept that there is no one-size-fits-all solution to the problem of taking your application offline. Not just because the solution may vary depending on what your Web application does, but also because the actual definition of the problem may change from site to site. And pretty much any way you slice it, the offline problem is, um, hard.

It’s not surprising, then, that all this thinking often leaves behind a pretty cool capability of Gears: the workers. Honestly, workers and worker pools are like the middle child of Gears: everybody kind of knows about them, but they’re prone to being left behind in an airport during a family vacation. Seems a bit unfair, doesn’t it?

I missed the chance to see Steven Saviano‘s presentation on Google Docs, but a hallway conversation revealed that we share similar thoughts about Gears workers: it’s all about how you view them. The workers are not only for crunching heavy math in a separate thread, though that certainly is a good idea. The workers are also about boundaries and crossing them. With cross-origin workers and the ability to make HTTP requests, it takes only a few mental steps to arrive at a much more useful pattern: the proxy worker.

Consider a simple scenario: your JavaScript application wants to consume content from another server (the vendor). The options are fairly limited at the moment — you either need a server-side proxy or have to use JSON(P). Neither solution is particularly neat, because the former puts undue burden on your server and the latter requires complete trust in another party.

Both approaches are frequently used today, their downsides mitigated by some combination of raw power and the vendor’s karma. The upcoming cross-site XMLHttpRequest and its evil twin XDR will address this problem at the root, but neither is yet available in a released product. Even then, you are still responsible for parsing the content. Somewhere along the way, you are very likely to write some semblance of a bridge that translates HTTP requests and responses into methods and callbacks digestible by your Web application.

This is where you, armed with the knowledge of the Gears API, should go: A-ha! Wouldn’t it be great if the vendor had a representative who spoke JavaScript? We might just have a special sandbox for this fella, where it could respond to our requests, query the vendor, and pass messages back in. Yes, I am talking about a cross-origin worker that acts as a proxy between your Web application and the vendor.

As Steven points out in his talk (look for the sessions on YouTube soon — I saw cameras), another way to think of this relationship is the RPC model: the application and the vendor worker exchange messages that include a procedure name, a body, and perhaps even version and authentication information, if necessary.
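
To make that concrete, a request message in this model could be as simple as a bit of serialized JSON. The field names here are invented for illustration:

// an illustrative request message; the vendor worker parses the text
// and dispatches on the command name (assumes a JSON serializer)
var requestText = JSON.stringify({
  command: 'public',      // procedure name
  body: { count: 10 },    // procedure arguments
  version: '1.0',         // optional protocol version
  auth: 'opaque-token'    // optional authentication info
});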

Let’s imagine how it’ll work. The application sets up a message listener, loads the vendor worker, and sends out the welcome message (pretty much along the lines of the WorkerPool API Example):

// application.js:
var workerPool = google.gears.factory.create('beta.workerpool');
var vendorWorkerId;
// true when vendor and client both acknowledged each other
var engaged;
// set up application listener
workerPool.onmessage = function(a, b, message) {
  if (message.sender != vendorWorkerId) {
    // not vendor, pass
    return;
  }
  if (!engaged) {
    if (message.text == 'READY') {
      engaged = true;
    }
    return;
  }
  processResponse(message);
}
vendorWorkerId = workerPool.createWorkerFromUrl(
                                'http://vendorsite.com/workers/vendor-api.js');
workerPool.sendMessage('WELCOME', vendorWorkerId);

As the vendor worker loads, it sets up its own listener, keeping an ear out for the WELCOME message, which is its way to hook up with the main worker:

// vendor-api.js:
var workerPool = google.gears.workerPool;
// allow being used across origin
workerPool.allowCrossOrigin();
var clientWorkerId;
// true when vendor and client acknowledged each other
var engaged;
// set up vendor listener
workerPool.onmessage = function(a, b, message) {
  if (!engaged) {
    if (message.text == 'WELCOME') {
      // handshake! now both parties know each other
      clientWorkerId = message.sender;
      engaged = true;
      workerPool.sendMessage('READY', clientWorkerId);
    }
    return;
  }
  // listen for requests
  processRequest(message);
}

As an aside, the vendor can also look at message.origin as an additional client-validation measure, from simple are-you-on-my-subdomain checks to full-blown OAuth-style authorization schemes.

Once both the application and the vendor worker acknowledge each other’s presence, the application can send request messages to the vendor worker and listen for responses. The vendor worker in turn listens for requests, communicates with the vendor server, and sends the responses back to the application. Instead of being rooted in HTTP, the API now becomes a worker message exchange protocol, with the respective processing functions, processRequest and processResponse, responsible for handling the interaction (caution, freehand pseudocoding here and elsewhere):

// vendor-api.js
function processRequest(message) {
  var o = toJson(message); // play safe here, ok?
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch(o.command) {
    case 'public': // fetch all public entries
      // make a request to server, which fires specified callback on completion
      askServer('/api/feed/public', function(xhr) {
        var responseMessage = createResponseMessage('public', xhr);
        // send response back to the application
        workerPool.sendMessage(responseMessage, clientWorkerId);
      });
      break;
    // TODO: add more commands
  }
}

// application.js
function processResponse(message) {
  var o = toJson(message);
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch(o.command) {
    case 'public': // public entries received
      renderEntries(o.entries);
      break;
    // TODO: add more commands
  }
}
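
The pseudocode above leans on a few helpers that Gears doesn’t provide. Here’s one plausible shape for them, assuming a JSON implementation (json2.js or similar) is available in the worker, and using the Gears HttpRequest module:

// plausible shapes for the helpers assumed above; hypothetical,
// and the message format is yours to define
function toJson(message) {
  try {
    // parse the text of the Gears message object
    return JSON.parse(message.text);
  } catch(e) {
    return null; // malformed message
  }
}

function askServer(url, callback) {
  var request = google.gears.factory.create('beta.httprequest');
  request.open('GET', url);
  request.onreadystatechange = function() {
    if (request.readyState == 4) {
      callback(request);
    }
  };
  request.send();
}

function createResponseMessage(command, xhr) {
  // assumes the server replies with a JSON array of entries
  return JSON.stringify({
    command: command,
    entries: JSON.parse(xhr.responseText)
  });
}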

You could also wrap this into a more formal abstraction, such as the Pipe object that I developed for one of my Gears adventures.

Now the vendor goes: whoa! I have Gears. I don’t have to rely on dumb HTTP requests/responses. I can save a lot of bandwidth and speed things up by storing the most current content in a local database, and only querying for changes. And the fact that this worker continues to reside on my server allows me to keep improving it and offering new features, as long as the message exchange protocol remains compatible.

And so you and the vendor live happily ever after. But this is not the only happy ending to this story. In fact, you don’t even have to go to another server to employ the proxy model. The advantage of keeping your own server’s communication and synchronization plumbing in a worker is pretty evident once you realize that it never blocks the UI and provides natural decoupling of what you’d consider the Model part of your application. You could have your application go offline and never realize it, because the proxy worker could handle both monitoring of the connection state and seamless switching between local storage and server data.

Well, this post is getting pretty long, and I am no Steve Yegge. Though there are still plenty of problems to solve (like gracefully degrading this proxy model to a non-Gears environment), I hope my ramblings gave you some new ideas on how to employ the worker goodness in your applications, and enough excitement to at least give Gears a try.

.NET on Gears: A Tutorial

The google-gears group gets a lot of questions from .NET developers. So I decided to help out. In this tutorial, we will build a simple ASP.NET timesheet entry application (because, you know, everybody loooves timesheet entry). Then, we’ll bolt on Google Gears to eliminate any excuses for not entering timesheets.

Say you’re sitting 15,000 feet above ground in cushy herd-class accommodations, with your laptop cracked about 60 degrees, keyboard pressed firmly against your chest, praying that the guy in front of you doesn’t recline. This is the point where Google Gears comes to save you from yet another round of Minesweeper. You fire up your browser, point it to your corporate Intranet’s timesheet entry page and — boom! — it comes right up. You enter the data, click submit and — boom! — the page takes it, prompting your aisle-mate to restart the search for the Ethernet socket in and around his tiny personal space. And after you cough up 10 bucks for the internet access at the swanky motel later that night, your browser re-syncs the timesheets, uploading the data entered offline to the server. Now, that’s what I call impressive.

You can read more about the features and benefits of Gears on their site later. Right now, we have work to do (if you’d like, you can download the entire project and follow along).

Step 1: Create ASP.NET Application

First, we start with the ASP.NET part of the application, which is largely drag-n-drop:

  • Create new database, named YourTimesheets.
  • In this database, create a table Entries, with the following fields (you can also run this script):
    • ID int, this will also be our identity field
    • StartDateTime datetime
    • DurationMins int
    • Project nvarchar(100)
    • Billable bit
    • Comment ntext
  • In Visual Studio 2005, create an ASP.NET project, also named YourTimesheets.
  • In design mode, drag SqlDataSource from Toolbox bar onto the Default.aspx file.
  • Specify the connection string using SqlDataSource designer, pointing to the newly created database (figure 1). Save the connection string in Web.config (figure 2).
  • Then, drag GridView onto the same design surface and connect it to the data source (figure 3 and figure 4).
  • Now, the input form. I added mine by hand, but you can use the same drag-from-Toolbox technique to get yours (figure 5).
  • Hook up the form fields and SqlDataSource INSERT query (figure 6). You don’t have to do anything but point and click here.
  • Finally, go to the Default.aspx.cs code-behind file and make sure that the Postback saves form data by typing (gasp!) a C# statement (listing 1). Notice that I also added a self-redirect there to make sure that the page is always rendered as a result of a GET request. This may seem odd to you, but how many times have you hit Refresh in your browser and seen the woo wee! let’s repost the data you just entered, again! message? This one-liner prevents that situation.
  • At this point, you have a fully functional ASP.NET timesheet entry application, but let’s go ahead and add some styling to it (figure 7 and listing 2).

It is probably worth mentioning that no amount of styling will fix some obvious usability problems with the data entry in this particular piece of user interface, but hey, I didn’t call this article ASP.NET on Gears: A Production-ready Application, right?

Step 2: Develop Gears Application Concept

Moving on to the Gears stuff. This part of the show contains graphic hand-coding and conceptual thinking that may not be appropriate for those who build their stuff using the Toolbox bar and Design View. People who are allergic to Javascript should ask their doctor before taking this product. Just kidding! You’ll love it, you’ll see … I think.

For this tutorial, I chose to use the 0.2.2.0 build of Gears, which is not yet a production build, but from what I heard will be shortly. This build offers quite a bit more functionality for workers, such as the HttpRequest and Timer modules, and as you’ll see shortly, we’ll need them in this application.

Let’s first figure out how this thing will work. When connected (online), the application should behave as if Gears weren’t bolted on: entry submissions go directly to the server. Once the connection is severed (the application goes offline), we can use LocalServer to serve application resources so that the page still comes up. Obviously, at this point we should intercept form submission to prevent the application from performing a POST request (those are always passed through the LocalServer). As we intercept the submission, we put the submitted data into a Database table. Then, when back online, we replay the submissions to the server asynchronously, using WorkerPool and HttpRequest, reading from the Database table.

Speaking of back online, we’ll need some way to detect the state of the application. We’ll do this by setting up a WorkerPool worker that makes periodic HttpRequest calls to a URL that’s not registered with the LocalServer. When a request fails, we deem the state to be offline. When a request succeeds, we presume that things are online. Simple enough?

To keep our dear user aware of what’s going on, we’ll need to do quite a bit of DOM manipulation. No, not that Dom. This DOM. For instance, the data entered offline should be displayed for the user in a separate, clearly labeled table. We will also need to know of events like the user attempting to submit the form, so that we can intercept the submission and stuff it into the Database.

Oh, and there’s one more thing. Since we’re building this application to operate both offline and online, we can’t rely on server-based validation. For this task, I chose to write my own client-side validation, but you can try to tinker with the standard ASP.NET 2.0 validation controls and the crud they inject into your document.

To summarize, we need the following components (let’s go ahead and name them, because naming things is fun):

  • Database, to write and read entries, entered offline.
  • DOM, to intercept submits, changes of input values, writing offline entries table, and other things that involve, well, DOM manipulation.
  • Monitor, to poll server and detect when the application becomes offline and online.
  • Store, to keep track of the resources that will be handled by LocalServer when application is offline.
  • Sync, to relay submitted offline data back to the server.
  • Validator, to ensure that the field data is valid before it’s submitted, whether online or offline.

Step 3: Define Interfaces

Piece of cake! The only thing left is writing some code. Perhaps we should start by defining how these pieces of the puzzle will interact. To keep the code digestible and easy to hack on (it’s a tutorial, right?), we will make sure that these interactions are clearly defined. To do that, let’s agree on a couple of rules:

  • Each component exposes a consistent way to interact with it
  • A component may not call another component directly

It’s like an old breadboard from your science club tinkering days: each component is embedded into a block of non-conductive resin, with only inputs and outputs exposed. You plug the components into a breadboard and build the product by wiring those inputs and outputs (figure 8). In our case, since our components are Javascript objects, we’ll define an input as any Javascript object member, and an output as an onsomethinghappened handler, typical of DOM0 interfaces. And here we go, starting with the Database object, in alphabetical order:


// encapsulates working with Gears Database module
// model
function Database() {

    // removes all entries from the model
    this.clear = function() {}

    // opens and initializes the model
    // returns : Boolean, true if successful, false otherwise
    this.open = function() {}

    // reads entries and writes them into the supplied writer object
    // the writer object must have three methods:
    // open() -- called before reading begins
    // write(r, i, nextCallback) -- write entry, where:
    // r : Array of entry fields
    // i : Number current entry index (0-based)
    // nextCallback : callback function, which must be called
    // after the entry is written
    // close() -- called after reading has completed
    this.readEntries = function(writer) {}

    // writes new entry
    // params : Array of entry fields (StartDateTime, DurationMins,
    // Project, Billable, Comment, FormData)
    this.writeEntry = function(params) {}
}
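
For a taste of what’s behind these stubs, here’s a minimal, hypothetical sketch of the Database internals on top of the Gears Database module. The table and column names are my assumptions, and error handling is omitted:

// a minimal, hypothetical sketch of the Database internals;
// table and column names are assumptions, error handling omitted
function Database() {
    var db;

    this.open = function() {
        try {
            db = google.gears.factory.create('beta.database');
            db.open('yourtimesheets');
            db.execute('create table if not exists OfflineEntries' +
                ' (StartDateTime text, DurationMins int, Project text,' +
                ' Billable int, Comment text, FormData text)');
            return true;
        } catch(e) {
            return false;
        }
    }

    this.clear = function() {
        db.execute('delete from OfflineEntries');
    }

    this.writeEntry = function(params) {
        db.execute('insert into OfflineEntries values (?, ?, ?, ?, ?, ?)',
            params);
    }

    this.readEntries = function(writer) {
        var rs = db.execute('select rowid, * from OfflineEntries');
        var i = 0;
        writer.open();
        // walk the result set, handing one row at a time to the writer,
        // which calls us back when it's ready for the next one
        var step = function() {
            if (!rs.isValidRow()) {
                rs.close();
                writer.close();
                return;
            }
            var r = [];
            for (var f = 0; f < rs.fieldCount(); f++) {
                r.push(rs.field(f));
            }
            rs.next();
            writer.write(r, i++, step);
        }
        step();
    }
}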

It’s worth noting that the readEntries method mimics the archetypal Writer and asynchronous call patterns from the .NET framework. I hope you’ll think of them as familiar faces in this crowd. The DOM component has the most ins and outs, primarily because, well, we do a lot of things with the browser DOM:


// encapsulates DOM manipulation and events
// view
function DOM() {

    // called when the browser DOM is ready to be worked with
    this.onready = function() {}

    // called when one of the inputs changes. Sends as parameters:
    // type : String, type of the input
    // value : String, value of the input
    this.oninputchange = function(type, value) {}

    // called when the form is submitted.
    // if it returns Boolean : false, the submission is cancelled
    // submission proceeds, otherwise
    this.onsubmit = function() {}

    // hooks up DOM event handlers
    this.init = function() {}

    // loads (or reloads) entries, entered offline by creating
    // and populating a table just above the regular timesheets table
    // has the same signature as the writer parameter of the
    // Database.readEntries(writer)... because that's what it's being
    // used by
    this.offlineTableWriter = {
        open: function() {},
        write: function(r, i, nextCallback) {},
        close: function() {}
    }

    // provides capability to show an error or info message. Takes:
    // type : String, either 'error' or 'info' to indicate the type of
    // the message
    // text : String, text of the message
    this.indicate = function(type, text) {}

    // grabs relevant input values from the form inputs
    // returns : Array of parameters, coincidentally in exactly the
    // format that Database.writeEntry needs
    this.collectFieldValues = function() {}

    // returns : String, URL that is set in the form's action attribute
    this.getPostbackUrl = function() {}

    // removes a row from the offline table. Takes:
    // id : String, id of the entry
    this.removeRow = function(id) {}

    // remove the entire offline table
    this.removeOfflineTable = function() {}

    // enable or disable submit. Takes:
    // enable : Boolean, true to enable submit button, false to disable
    this.setSubmitEnabled = function(enable) {}

    // iterate through fields and initialize field values, according to type
    // Takes:
    // action : Function, which is given:
    // type : String, the type of the input
    // and expected to return : String, a good initial value
    this.initFields = function(action) {}
}

Monitor has a rather simple interface: start me and I’ll tell you when the connection changes:


// provides connection monitoring
// controller
function Monitor() {

    // triggered when connection changes
    // sends as parameter:
    // online : Boolean, true if connection became available,
    // false if connection is broken
    this.onconnectionchange = function(online) {};

    // starts the monitoring
    this.start = function() {}
}
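
Here’s a rough sketch of what could sit inside Monitor, using the Timer and HttpRequest modules mentioned earlier. The ping URL and the 10-second interval are invented, and for brevity this version polls from the page itself, though the same could be done from a WorkerPool worker:

// a rough, hypothetical sketch of the Monitor internals
function Monitor() {
    var self = this;
    var lastOnline; // undefined until the first poll completes

    // output: triggered when the connection state changes
    this.onconnectionchange = function(online) {};

    this.start = function() {
        var timer = google.gears.factory.create('beta.timer');
        timer.setInterval(poll, 10000); // 10 seconds is an arbitrary choice
        poll();

        function poll() {
            var request = google.gears.factory.create('beta.httprequest');
            // ping any URL that is not captured by the LocalServer
            request.open('GET', '/ping.txt');
            request.onreadystatechange = function() {
                if (request.readyState == 4) {
                    report(request.status == 200);
                }
            }
            try {
                request.send();
            } catch(e) {
                // no connection at all
                report(false);
            }
        }

        function report(online) {
            if (online !== lastOnline) {
                lastOnline = online;
                self.onconnectionchange(online);
            }
        }
    }
}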

Is this a simplicity competition? Because then Store takes the prize:


// encapsulates dealing with LocalServer
// model
function Store() {

    // opens store and captures application assets if not captured already
    // returns : Boolean, true if LocalServer and ResourceStore
    // instance are successfully created, false otherwise
    this.open = function() {}

    // forces refresh of the ResourceStore
    this.refresh = function() {}
}
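
And here’s a hypothetical sketch of Store’s innards. The asset list is an assumption; a real application would list everything the page needs to come up offline:

// a hypothetical sketch of the Store internals
function Store() {
    var STORE_NAME = 'your-timesheets';
    // everything the page needs to come up offline (assumed list)
    var ASSETS = ['Default.aspx', 'timesheets.css', 'timesheets.js'];
    var resourceStore;

    this.open = function() {
        try {
            var localServer = google.gears.factory.create('beta.localserver');
            resourceStore = localServer.createStore(STORE_NAME);
            resourceStore.capture(ASSETS, function(url, success, captureId) {
                // per-URL capture results arrive here; a production version
                // would surface failures instead of ignoring them
            });
            return true;
        } catch(e) {
            return false;
        }
    }

    this.refresh = function() {
        resourceStore.capture(ASSETS, function() {});
    }
}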

The synchronization algorithm in this tutorial is exceedingly simple: we basically just start it and wait for it to complete. As each entry is uploaded, the Sync component reports it, so that we can adjust our presentation accordingly:


// synchronizes (in a very primitive way) any entries collected offline
// with the database on the server by replaying form submissions
function Sync() {

    // called when a synchronization error has occurred. Sends:
    // message : String, the message of the error
    this.onerror = function(message) {}

    // called when the synchronization is complete.
    this.oncomplete = function() {}

    // called when an entry was uploaded to the server. Sends:
    // id : String, the rowid of the entry
    this.onentryuploaded = function(id) {}

    // starts synchronization. Takes:
    // url : String, the url to which to replay POST requests
    this.start = function(url) {}
}

Finally, the Validator. It’s responsible both for providing good initial values for the form and for making sure the user is entering something legible.


// encapsulates validation of values by type
function Validator() {

    // provides good initial value, given a type. Takes:
    // type : String, the type of the input, like 'datetime' or
    // 'number'
    // returns : String, initial value
    this.seedGoodValue = function(type) {}

    // validates a value of a specified type. Takes:
    // type : String, the type of the input.
    // value : String, value to validate
    // returns : Boolean, true if value is valid, false otherwise
    this.isValid = function(type, value) {}
}
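
Before we move on, here’s a rough sketch of how the breadboard wiring might look, honoring the two rules above: components talk only through their inputs and outputs, and only the wiring knows about all of them. This is illustrative; the real wiring ships with the project download:

// hypothetical breadboard wiring of the components
var database = new Database();
var dom = new DOM();
var monitor = new Monitor();
var store = new Store();
var sync = new Sync();
var validator = new Validator();

var applicationOnline = false;

dom.onready = function() {
    store.open();
    database.open();
    dom.init();
    // seed the form with good initial values
    dom.initFields(function(type) {
        return validator.seedGoodValue(type);
    });
    // show anything that was entered offline earlier
    database.readEntries(dom.offlineTableWriter);
    monitor.start();
}

dom.oninputchange = function(type, value) {
    dom.setSubmitEnabled(validator.isValid(type, value));
}

dom.onsubmit = function() {
    // when offline, swallow the POST and stash the entry locally
    if (!applicationOnline) {
        database.writeEntry(dom.collectFieldValues());
        database.readEntries(dom.offlineTableWriter);
        return false;
    }
}

monitor.onconnectionchange = function(online) {
    applicationOnline = online;
    dom.indicate('info', online ? 'back online' : 'working offline');
    if (online) {
        // replay the entries collected offline
        sync.start(dom.getPostbackUrl());
    }
}

sync.onentryuploaded = function(id) {
    dom.removeRow(id);
}

sync.oncomplete = function() {
    database.clear();
    dom.removeOfflineTable();
}

sync.onerror = function(message) {
    dom.indicate('error', message);
}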

Whew! Are we there yet? Almost.

Step 4: Write Code

This is where we roll up our sleeves and get to work. There’s probably no reason to offer a play-by-play of the actual coding process, but here are a couple of things worth mentioning:

  • Javascript is a dynamic, loosely-typed language. Enjoy it. Don’t square yourself into compile-time thinking. This is funk, not philharmonic.
  • Javascript is single-threaded. The trick that you might have learned with 0-timeout doesn’t actually start a new thread. It just waits for its opportunity to get back on the main thread.
  • Gears workers, on the other hand, are truly multi-threaded. There is some pretty neat plumbing under the hood that sorts out this dichotomy by queueing the messages, and you might want to be aware of that when writing the code. For instance, calling main thread with UI operations from a worker doesn’t make them asynchronous: the message handlers will still line up and wait for their turn. So, if your worker does a lot of waiting on the main thread, you may not see as much benefit from using the worker pools.
  • Gears currently lacks a database or resource store management console with a slick user interface (hint: you should perhaps join the project and lend a hand with that). But dbquery and webcachetool are good enough. For this project, I cooked up a small page that, upon loading, blows away all known state of the application, and that was pretty handy in development (listing 3).
  • There is a very simple way to simulate offline state on your local machine. It’s called iisreset. From command line, run iisreset /stop to stop your Web server and you’ll have a perfect simulation of a broken connection. Run iisreset /start to get the application back online.

Armed with these Gear-ly pearls of wisdom, you jump fearlessly on the interfaces above and get coding. Or… you can just see how I’ve done it (listing 3).

Step 5: Feed the Monkey

Feed the monkey? Wha… ?! Just wondering if you’re still paying attention. Technically, we’re done here. The application is working (to see for yourself, download the screencast or watch it all fuzzy on YouTube). As you may have gleaned from our coding adventure, Google Gears offers opportunities that weren’t available to front-end developers before: building Web applications that work offline or with an occasionally-available connection, adding real multi-threading to Javascript, and much more. What’s cool is that Gears is already available on many platforms and browsers (including Internet Explorer), and the list is growing quickly. Perhaps PC World is onto something, calling it the most innovative product of 2007. But don’t listen to me: I am a confessed Gearhead. Try it for yourself.