Dimitri Glazkov

Archive for the ‘Uncategorized’ Category

Going Forward Is Leaving Past Behind

Greetings, hypothetical Web app developer. So I wrote this thing. It’s for your eyes only. Your non-Web developer buddy will find my ramblings mostly trivial and nonsensical. But you… you are in for a trip. I think. Ready?

Prologue

It’s hard to part with the past. Things we’ve done, stuff we’ve learned, the experiences we’ve had — they aren’t just mental artifacts. They shape us, define us. Yet, they are also the scars we carry, the cold-sweat nightmares that keep us awake. And thus continues our struggle between embracing and shedding what has been.

In this way, a platform is somewhat like us. If it only embraces its past, it can’t ever evolve. If it only looks into the future, it’s no longer a platform. Striking the balance is a delicate game.

Progress

In order to survive, a platform has to move forward. A tranquil platform is a dead platform.

In particular, the Web platform had been caught napping. It awoke startled, facing the mobile beast that’s eating the world and went: Oh shit.

Turns out, our platform has gotten a bit plump. All those bells and whistles are now just flab in the way of scampering limbs. It’s time to get lean–or be lunch.

What’s worse, we aren’t even in the same league. While we’re still struggling to run without apoplexy, the other guy can fly and shoot lasers. We’re so screwed. Gotta get cranking on those lasers. And start losing weight.

Cost

Losing weight is hard work. As with anything that requires giving up our habits, the way we steel ourselves and go through with it is by thinking of the cost of not changing.

For the platform, the obvious one is the code size, which is really a proxy for the cost of complexity — the engineering and maintenance complexity, to be precise. Making a modern browser is an incredibly large task and adding urgency further skews the triangle to higher costs.

Then there’s this paradoxical-sounding thing:

The less often a feature is used, the more it costs.

This is the opportunity cost. The more complicated the system, the more confused the reasoning about the next steps becomes. At the limit, you can’t step forward at all — there’s always an artifact from your past in the way, be it the fun extra nested event loop, the thing you thought was cool back then, or the dead appendages you grew one day, just for the hell of it.

Here’s another way to put it (now with action figures!): You have a platform with features and users. Bob is a user of a feature. The cost of keeping this feature in the platform is evenly distributed among all users.

However, if Bob is the only user of the feature, something weird happens: all the users still pay the costs, but now they’re paying them to fund Bob’s habit.

As other users ask for new capabilities and polish, Bob’s feature slowly sinks to the bottom of priorities. Now a statistical wash, the feature’s code grows a beard and stops doing laundry. Bad smells and bugs creep in. With the rest of the code base moving on, the likelihood of a fire drill around this forgotten mess only goes up.

Time is finite and you spend non-zero time on every feature. There’s some feature work you’re not doing to keep this Bob-only feature.

At this point, all other users should be strongly motivated to make Bob stop using his feature. Bob’s dragging everyone down.

Budget

These are all pretty obvious thought experiments, I guess. Web platform engineers (aka browser builders) are a limited resource. For you, Web app developers, they are your home improvement budget.

Thus we arrive at the main point of this anecdote: how would you rather have this budget allocated?

The answer likely goes like this (pardon me for possibly putting words in your mouth):

I want you bastards to let me build things that are not terrible on mobile. You are moving too slowly and that’s highly annoying. My boss is telling me to build a native app, and I SWEAR I will, if you don’t start moving your goofy ass NOW. You know what? I am leaving now. Hear these footsteps?

I’m holding my breath, hoping you’re just faking those footsteps. And if you are (whew!), it seems fair to assume that you want most of your budget spent on making the browser leaner, meaner, and capable of flying while shooting lasers. Not maintaining the old stuff. Which means that we need to get serious about deprecation. And being serious means we need data.

Data

In Blink land, we have a fairly comprehensive usage measuring system. Despite some limitations, it provides a reasonable approximation of how widely a feature is used.

Just knowing how widely a feature is used isn’t enough. There are definitely some extra dimensions here. Despite their equally low usage, there’s a difference between a newly-launched feature and a forgotten one. Similarly, something that’s rarely used could be so well-entrenched in some enterprise front-end somewhere that removing it will be met with tortured screams of anguish.

We also need to give you, web developers, clear indicators of our past mistakes. There are plenty of platform features that are frequently used, but that we platform peeps can’t look at without a frown. It seemed like a good idea at the time. Then reality happened. Communicating this frown is sometimes difficult, but necessary for looking forward.

Action

Clearly, we have work to do. Arriving at a clear framework of deprecation principles will take trial, error, and likely yet more tears. But we know we need it. We’re working on it.

As for you, my dear web developer… I need your help.

Use feature metrics in planning your new work and refactoring. If you see a feature that’s falling behind in statistics, think twice about using/keeping it. Talk to your friends and explain the danger of relying on old, rarely used things.

Don’t be a bob. Don’t let your friends be bobs. Being a bob sucks. Not the actual person named “Bob”, of course. Being that Bob is totally awesome.

Written by Dimitri Glazkov

April 24, 2014 at 3:13 pm

Posted in Uncategorized

Wrap Your Head Around Gears Workers

Google I/O was a continuous, three-thousand-person mind meld, so I talked and listened to a lot of people last week. And more often than not I discovered that Gears is still mostly perceived as some wand that you can somehow wave to make things go offline. Nobody is quite sure how, but everyone is quite sure it’s magic.

It takes a bit of time to accept that Gears is not a bottled offlinifier for sites. It takes even more time to accept that it’s not even an end-user tool, or something that just shims between the browser and the server and somehow saves you from needing to rethink how you approach the Web to take it offline. The primitives offered by Gears are an enabling technology that gives you the capability to make completely new things happen on the Web, and it is your task, the developer’s, to apply it to solve problems specific to your Web application.

And it’s probably the hardest to accept that there is no one-size-fits-all solution to the problem of taking your application offline. Not just because the solution may vary depending on what your Web application does, but also because the actual definition of the problem may change from site to site. And pretty much any way you slice it, the offline problem is, um, hard.

It’s not surprising then that all this thinking often leaves behind a pretty cool capability of Gears: the workers. Honestly, workers and worker pools are like the middle child of Gears. Everybody kind of knows about them, but they’re prone to be left behind in an airport during a family vacation. Seems a bit unfair, doesn’t it?

I missed the chance to see Steven Saviano’s presentation on Google Docs, but judging by a hallway conversation, we share similar thoughts about Gears workers: it’s all about how you view them. The workers are not only for crunching heavy math in a separate thread, though that certainly is a good idea. The workers are also about boundaries and crossing them. With the cross-origin workers and the ability to make HTTP requests, it takes only a few mental steps to arrive at a much more useful pattern: the proxy worker.

Consider a simple scenario: your JavaScript application wants to consume content from another server (the vendor). The options are fairly limited at the moment — you either need a server-side proxy or use JSON(P). Neither solution is particularly neat, because the former puts undue burden on your server and the latter requires complete trust of another party.

Both approaches are frequently used today and mitigated by combinations of raw power or vendor’s karma. The upcoming cross-site XMLHttpRequest and its evil twin XDR will address this problem at the root, but neither is yet available in a released product. Even then, you are still responsible for parsing the content. Somewhere along the way, you are very likely to write some semblance of a bridge that translates HTTP requests and responses into methods and callbacks, digestible by your Web application.

This is where you, armed with the knowledge of the Gears API, should go: A-ha! Wouldn’t it be great if the vendor had a representative, who spoke JavaScript? We might just have a special sandbox for this fella, where it could respond to our requests, query the vendor, and pass messages back in. Yes, I am talking about a cross-origin worker that acts as a proxy between your Web application and the vendor.

As Steven points out in his talk (look for the sessions on YouTube soon — I saw cameras), another way to think of this relationship is the RPC model: the application and the vendor worker exchange messages that include procedure name, body, and perhaps even version and authentication information, if necessary.

Let’s imagine how it’ll work. The application sets up a message listener, loads the vendor worker, and sends out the welcome message (pretty much along the lines of the WorkerPool API Example):

// application.js:
var workerPool = google.gears.factory.create('beta.workerpool');
var vendorWorkerId;
// true when vendor and client both acknowledged each other
var engaged;
// set up application listener
workerPool.onmessage = function(a, b, message) {
  if (message.sender != vendorWorkerId) {
    // not vendor, pass
    return;
  }
  if (!engaged) {
    if (message.text == 'READY') {
      engaged = true;
    }
    return;
  }
  processResponse(message);
}
vendorWorkerId = workerPool.createWorkerFromUrl(
                                'http://vendorsite.com/workers/vendor-api.js');
workerPool.sendMessage('WELCOME', vendorWorkerId);

As the vendor worker loads, it sets up its own listener, keeping an ear out for the WELCOME message, which is its way to hook up with the main worker:

// vendor-api.js:
var workerPool = google.gears.workerPool;
// allow being used across origin
workerPool.allowCrossOrigin();
var clientWorkerId;
// true when vendor and client acknowledged each other
var engaged;
// set up vendor listener
workerPool.onmessage = function(a, b, message) {
  if (!engaged) {
    if (message.text == 'WELCOME') {
      // handshake! now both parties know each other
      clientWorkerId = message.sender;
      engaged = true;
      workerPool.sendMessage('READY', clientWorkerId);
    }
    return;
  }
  // listen for requests
  processRequest(message);
}

As an aside, the vendor can also look at message.origin as an additional client validation measure, from simple are you on my subdomain checks to full-blown OAuth-style authorization schemes.
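
For instance, a minimal origin check at the top of the vendor’s onmessage handler might look like this (the client hostname is, of course, a made-up example):

// vendor-api.js (sketch): reject messages from unknown origins
workerPool.onmessage = function(a, b, message) {
  // message.origin looks something like 'http://yourapp.example.com'
  if (message.origin.indexOf('http://yourapp.example.com') != 0) {
    return; // not a client we recognize -- ignore
  }
  // ... handshake and request processing continue as before
}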

Once both the application and the vendor worker acknowledge each other’s presence, the application can send request messages to the vendor worker and listen to responses. The vendor worker in turn listens to requests, communicates with the vendor server, and sends the responses back to the application. Instead of being rooted in HTTP, the API now becomes a worker message exchange protocol. In this setup, the respective processing functions, processRequest and processResponse, are responsible for handling the interaction (caution, freehand pseudocoding here and elsewhere):

// vendor-api.js
function processRequest(message) {
  var o = toJson(message); // play safe here, ok?
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch (o.command) {
    case 'public': // fetch all public entries
      // make a request to server, which fires specified callback on completion
      askServer('/api/feed/public', function(xhr) {
        var responseMessage = createResponseMessage('public', xhr);
        // send response back to the application
        workerPool.sendMessage(responseMessage, clientWorkerId);
      });
      break;
    // TODO: add more commands
  }
}

// application.js
function processResponse(message) {
  var o = toJson(message);
  if (!o || !o.command) {
    // malformed message
    return;
  }
  switch(o.command) {
    case 'public': // public entries received
      renderEntries(o.entries);
      break;
    // TODO: add more commands
  }
}
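
The freehand pseudocode above leans on a few helpers that aren’t spelled out: toJson, askServer, and createResponseMessage. Here’s one way they might be fleshed out on the vendor side, assuming JSON-formatted message text and a JSON library (say, json2.js) loaded into the worker (native JSON support wasn’t a given in 2008):

// vendor-api.js (sketch of the helpers used above)

// parse the message text as JSON, returning null on garbage input
function toJson(message) {
  try {
    return JSON.parse(message.text);
  } catch (e) {
    return null;
  }
}

// fire a GET at the vendor server; invoke the callback on success
function askServer(path, callback) {
  var request = google.gears.factory.create('beta.httprequest');
  request.open('GET', path);
  request.onreadystatechange = function() {
    if (request.readyState == 4 && request.status == 200) {
      callback(request);
    }
  };
  request.send();
}

// wrap a server response into message text the application understands
function createResponseMessage(command, xhr) {
  return JSON.stringify({
    command: command,
    entries: JSON.parse(xhr.responseText)
  });
}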

You could also wrap this into a more formal abstraction, such as the Pipe object that I developed for one of my Gears adventures.
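
I won’t reproduce the Pipe object here, but the general shape of such an abstraction is roughly this (a hypothetical sketch, with the same JSON caveat as above):

// sketch: a Pipe-like wrapper over the worker message exchange
function Pipe(workerPool, peerId) {
  var handlers = {}; // command name -> callback

  // register a handler for a given command
  this.on = function(command, callback) {
    handlers[command] = callback;
  };

  // send a command with a payload to the peer
  this.send = function(command, payload) {
    payload = payload || {};
    payload.command = command;
    workerPool.sendMessage(JSON.stringify(payload), peerId);
  };

  // route incoming messages to registered handlers
  workerPool.onmessage = function(a, b, message) {
    if (message.sender != peerId) {
      return;
    }
    var o = toJson(message);
    if (o && handlers[o.command]) {
      handlers[o.command](o);
    }
  };
}

With a Pipe on each side, processRequest and processResponse collapse into a set of small per-command handlers.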

Now the vendor goes, whoa! I have Gears. I don’t have to rely on dumb HTTP requests/responses. I can save a lot of bandwidth and speed things up by storing the most current content in a local database, and only querying for changes. And the fact that this worker continues to reside on my server allows me to continue improving it and offer new features, as long as the message exchange protocol remains compatible.

And so you and the vendor live happily ever after. But this is not the only happy ending to this story. In fact, you don’t even have to go to another server to employ the proxy model. The advantage of keeping your own server’s communication and synchronization plumbing in a worker is pretty evident once you realize that it never blocks the UI and provides a natural boundary around what you’d consider the Model part of your application. Your application could go offline and never realize it, because the proxy worker could handle both monitoring of the connection state and seamless switching between local storage and server data.

Well, this post is getting pretty long, and I am no Steve Yegge. Though there are still plenty of problems to solve (like gracefully degrading this proxy model to a non-Gears environment), I hope my rambling gave you some new ideas on how to employ the worker goodness in your applications and gave you enough excitement to at least give Gears a try.

Written by Dimitri Glazkov

June 2, 2008 at 7:38 pm

Posted in Uncategorized

.NET on Gears: A Tutorial

The google-gears group gets a lot of questions from .NET developers. So I decided to help out. In this tutorial, we will build a simple ASP.NET timesheet entry application (because, you know, everybody loooves timesheet entry). Then, we’ll bolt on Google Gears to eliminate any excuses for not entering timesheets.

Say, you’re sitting 15,000 feet above ground in cushy herd-class accommodations, with your laptop cracked about 60 degrees, keyboard pressed firmly against your chest, praying that the guy in front of you doesn’t recline. This is the point where Google Gears comes to save you from yet another round of Minesweeper. You fire up your browser, point it to your corporate Intranet’s timesheet entry page and — boom! — it comes right up. You enter the data, click submit and — boom! — the page takes it, prompting your aisle-mate to restart the search for the Ethernet socket in and around his tiny personal space. And after you cough up 10 bucks for the internet access at the swanky motel later that night, your browser re-syncs the timesheets, uploading the data entered offline to the server. Now, that’s what I call impressive.

You can read more about features and benefits of Gears on their site later. Right now, we have work to do (if you’d like, you can download the entire project and follow along).

Step 1: Create ASP.NET Application

First, we start with the ASP.NET part of the application, which is largely drag-n-drop:

  • Create new database, named YourTimesheets.
  • In this database, create a table Entries, with the following fields (you can also run this script):
    • ID int, this will also be our identity field
    • StartDateTime datetime
    • DurationMins int
    • Project nvarchar(100)
    • Billable bit
    • Comment ntext
  • In Visual Studio 2005, create an ASP.NET project, also named YourTimesheets.
  • In design mode, drag SqlDataSource from Toolbox bar onto the Default.aspx file.
  • Specify the connection string using SqlDataSource designer, pointing to the newly created database (figure 1). Save the connection string in Web.config (figure 2).
  • Then, drag GridView onto the same design surface and connect it to the data source (figure 3 and figure 4).
  • Now, the input form. I added mine by hand, but you can use the same drag from Toolbox technique to get yours (figure 5).
  • Hook up the form fields and SqlDataSource INSERT query (figure 6). You don’t have to do anything but point and click here.
  • Finally, go to the Default.aspx.cs code-behind file and make sure that the Postback saves form data by typing (gasp!) a C# statement (listing 1). Notice that I also added a self-redirect there to make sure that the page is always rendered as a result of a GET request. This may seem odd to you, but how many times did you hit Refresh on your browser and saw the woo wee! let’s repost the data you just entered, again! message? This one-liner prevents that situation.
  • At this point, you have a fully functional ASP.NET timesheet entry application, but let’s go ahead and add some styling to it (figure 7 and listing 2).

It is probably worth mentioning that no amount of styling will fix some obvious usability problems with the data entry in this particular piece of user interface, but hey, I didn’t call this article ASP.NET on Gears: A Production-ready Application, right?

Step 2: Develop Gears Application Concept

Moving on to Gears stuff. This part of the show contains graphic hand-coding and conceptual thinking that may not be appropriate for those who build their stuff using the Toolbox bar and Design View. People who are allergic to Javascript should ask their doctor before taking this product. Just kidding! You’ll love it, you’ll see … I think.

For this tutorial, I chose to use the 0.2.2.0 build of Gears, which is not yet a production build, but from what I heard will be shortly. This build offers quite a bit more functionality for workers, such as the HttpRequest and Timer modules, and as you’ll see shortly, we’ll need them in this application.

Let’s first figure out how this thing will work. When connected (online), the application should behave as if Gears weren’t bolted on: entry submissions go directly to the server. Once the connection is severed (the application goes offline), we can use LocalServer to serve application resources so that the page still comes up. Obviously, at this point we should intercept form submission to prevent the application from performing a POST request (those are always passed through the LocalServer). As we intercept the submission, we put the submitted data into a Database table. Then, when back online, we replay the submissions back to the server asynchronously, using WorkerPool and HttpRequest, reading from the Database table.

Speaking of back online, we’ll need some way to detect the state of the application. We’ll do this by setting up a WorkerPool worker, making periodic HttpRequest calls to a URL that’s not registered with the LocalServer. When the request fails, we deem the state to be offline. When the request succeeds, we presume that things are online. Simple enough?

To keep our dear user aware of what’s going on, we’ll need to do quite a bit of DOM manipulation. No, not that Dom. This DOM. For instance, the data entered offline should be displayed for the user in a separate, clearly labeled table. We will also need to know of events like the user attempting to submit the form, so that we can intercept the submission and stuff it into the Database.

Oh, and there’s one more thing. Since we’re building this application to operate both offline and online, we can’t rely on server-based validation. For this task, I chose to write my own client-side validation, but you can try to tinker with the standard ASP.NET 2.0 validation controls and the crud they inject into your document.

To summarize, we need the following components (let’s go ahead and name them, because naming things is fun):

  • Database, to write and read entries, entered offline.
  • DOM, to intercept submits, changes of input values, writing offline entries table, and other things that involve, well, DOM manipulation.
  • Monitor, to poll server and detect when the application becomes offline and online.
  • Store, to keep track of the resources that will be handled by LocalServer when application is offline.
  • Sync, to relay submitted offline data back to the server.
  • Validator, to ensure that the field data is valid before it’s submitted, whether online or offline.

Step 3: Define Interfaces

Piece of cake! The only thing left is writing some code. Perhaps we should start with defining how these pieces of the puzzle will interact. To keep code digestible and easy to hack on (it’s a tutorial, right?), we will make sure that these interactions are clearly defined. To do that, let’s agree on a couple of rules:

  • Each component exposes a consistent way to interact with it
  • A component may not call another component directly

It’s like an old breadboard from your science club tinkering days: each component is embedded into a block of non-conductive resin, with only inputs and outputs exposed. You plug the components into a breadboard and build the product by wiring those inputs and outputs (figure 8).

In our case, since our components are Javascript objects, we’ll define an input as any Javascript object member, and an output as an onsomethinghappened handler, typical for DOM0 interfaces. And here we go, starting with the Database object, in alphabetical order:


// encapsulates working with Gears Database module
// model
function Database() {

    // removes all entries from the model
    this.clear = function() {}

    // opens and initializes the model
    // returns : Boolean, true if successful, false otherwise
    this.open = function() {}

    // reads entries and writes them into the supplied writer object
    // the writer object must have three methods:
    // open() -- called before reading begins
    // write(r, i, nextCallback) -- write entry, where:
    // r : Array of entry fields
    // i : Number current entry index (0-based)
    // nextCallback : callback function, which must be called
    // after the entry is written
    // close() -- called after reading has completed
    this.readEntries = function(writer) {}

    // writes new entry
    // params : Array of entry fields (StartDateTime, DurationMins,
    // Project, Billable, Comment, FormData)
    this.writeEntry = function(params) {}
}
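
For flavor, here’s roughly what open and writeEntry boil down to against the Gears Database module (a sketch with made-up names; the downloadable project has the real thing):

// sketch: Database internals over the Gears Database module
function Database() {
    var db;

    this.open = function() {
        try {
            db = google.gears.factory.create('beta.database');
            db.open('your-timesheets'); // database name is illustrative
            // rowid comes for free, so we only declare the payload columns
            db.execute('create table if not exists Entries (' +
                'StartDateTime text, DurationMins int, Project text, ' +
                'Billable int, Comment text, FormData text)');
            return true;
        } catch (e) {
            return false;
        }
    };

    this.writeEntry = function(params) {
        db.execute('insert into Entries values (?, ?, ?, ?, ?, ?)', params);
    };
}

readEntries is more of the same: execute a select, then walk the ResultSet with isValidRow()/next(), feeding each row to the writer.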

It’s worth noting that the readEntries method mimics the archetypical Writer and asynchronous call patterns from the .NET framework. I hope you’ll think of them as the familiar faces in this crowd. The DOM component has the most ins and outs, primarily because, well, we do a lot of things with the browser DOM:


// encapsulates DOM manipulation and events
// view
function DOM() {

    // called when the browser DOM is ready to be worked with
    this.onready = function() {}

    // called when one of the inputs changes. Sends as parameters:
    // type : String, type of the input
    // value : String, value of the input
    this.oninputchange = function(type, value) {}

    // called when the form is submitted.
    // if it returns Boolean : false, the submission is cancelled
    // submission proceeds, otherwise
    this.onsubmit = function() {}

    // hooks up DOM event handlers
    this.init = function() {}

    // loads (or reloads) entries, entered offline by creating
    // and populating a table just above the regular timesheets table
    // has the same signature as the writer parameter of the
    // Database.readEntries(writer)... because that's what it's being
    // used by
    this.offlineTableWriter = {
        open: function() {},
        write: function(r, i, nextCallback) {},
        close: function() {}
    }

    // provides capability to show an error or info message. Takes:
    // type : String, either 'error' or 'info' to indicate the type of
    // the message
    // text : String, text of the message
    this.indicate = function(type, text) {}

    // grabs relevant input values from the form inputs
    // returns : Array of parameters, coincidentally in exactly the
    // format that Database.writeEntry needs
    this.collectFieldValues = function() {}

    // returns : String, URL that is set in the form action attribute
    this.getPostbackUrl = function() {}

    // removes a row from the offline table. Takes:
    // id : String, id of the entry
    this.removeRow = function(id) {}

    // remove the entire offline table
    this.removeOfflineTable = function() {}

    // enable or disable submit. Takes:
    // enable : Boolean, true to enable submit button, false to disable
    this.setSubmitEnabled = function(enable) {}

    // iterate through fields and initialize field values, according to type
    // Takes:
    // action : Function, which is given:
    // type : String, the type of the input
    // and expected to return : String, a good initial value
    this.initFields = function(action) {}
}

Monitor has a rather simple interface: start me and I’ll tell you when the connection changes:


// provides connection monitoring
// controller
function Monitor() {

    // triggered when connection changes
    // sends as parameter:
    // online : Boolean, true if connection became available,
    // false if connection is broken
    this.onconnectionchange = function(online) {};

    // starts the monitoring
    this.start = function() {}
}
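
A minimal implementation of this interface might use the Gears Timer and HttpRequest modules to poll a URL that LocalServer doesn’t capture. (For brevity, this sketch polls from the page directly, rather than from a worker as the tutorial’s code does; the ping URL and interval are placeholders.)

// sketch: poll the server to detect connection changes
function Monitor() {
    var self = this;
    var lastOnline; // undefined until the first poll completes

    this.onconnectionchange = function(online) {};

    this.start = function() {
        var timer = google.gears.factory.create('beta.timer');
        timer.setInterval(function() {
            var request = google.gears.factory.create('beta.httprequest');
            request.open('GET', '/ping.aspx'); // placeholder, not in the Store
            request.onreadystatechange = function() {
                if (request.readyState != 4) {
                    return;
                }
                report(request.status == 200);
            };
            try {
                request.send();
            } catch (e) {
                report(false); // send throws when there's no connection at all
            }
        }, 5000); // poll every 5 seconds
    };

    // fire onconnectionchange only when the state actually flips
    function report(online) {
        if (online !== lastOnline) {
            lastOnline = online;
            self.onconnectionchange(online);
        }
    }
}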

Is this a simplicity competition? Because then Store takes the prize:


// encapsulates dealing with LocalServer
// model
function Store() {

    // opens store and captures application assets if not captured already
    // returns : Boolean, true if LocalServer and ResourceStore
    // instance are successfully created, false otherwise

    this.open = function() {}
    // forces refresh of the ResourceStore
    this.refresh = function() {}
}
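
Under the hood, open amounts to creating (or reopening) a ResourceStore and capturing the application’s assets (a sketch; the store name and asset list are illustrative):

// sketch: Store internals over LocalServer and ResourceStore
function Store() {
    var STORE_NAME = 'your-timesheets-store';
    var ASSETS = [ 'Default.aspx', 'Timesheets.css', 'Timesheets.js' ];
    var resourceStore;

    this.open = function() {
        try {
            var localServer = google.gears.factory.create('beta.localserver');
            // createStore opens the existing store or creates a new one
            resourceStore = localServer.createStore(STORE_NAME);
            this.refresh();
            return true;
        } catch (e) {
            return false;
        }
    };

    this.refresh = function() {
        resourceStore.capture(ASSETS, function(url, success, captureId) {
            // per-URL capture callback; a real app would report failures
        });
    };
}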

The synchronization algorithm in this tutorial is exceedingly simple: we basically just start it and wait for it to complete. As each entry is uploaded, the Sync component reports it, so that we can adjust our presentation accordingly:


// synchronizes (in a very primitive way) any entries collected offline
// with the database on the server by replaying form submissions
function Sync() {

    // called when a synchronization error has occured. Sends:
    // message : String, the message of the error
    this.onerror = function(message) {}

    // called when the synchronization is complete.
    this.oncomplete = function() {}

    // called when an entry was uploaded to the server. Sends:
    // id : String, the rowid of the entry
    this.onentryuploaded = function(id) {}

    // starts synchronization. Takes:
    // url : String, the url to which to replay POST requests
    this.start = function(url) {}
}
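
The replay itself is just a loop over Database.readEntries, POSTing each saved FormData blob back at the form’s URL (a sketch with error handling pared down; the field positions in r are assumptions):

// sketch: replay offline submissions via HttpRequest
function Sync() {
    var self = this;

    this.onerror = function(message) {};
    this.oncomplete = function() {};
    this.onentryuploaded = function(id) {};

    this.start = function(url) {
        var db = new Database();
        if (!db.open()) {
            self.onerror('could not open the local database');
            return;
        }
        db.readEntries({
            open: function() {},
            write: function(r, i, nextCallback) {
                var request = google.gears.factory.create('beta.httprequest');
                request.open('POST', url);
                request.setRequestHeader('Content-Type',
                    'application/x-www-form-urlencoded');
                request.onreadystatechange = function() {
                    if (request.readyState != 4) {
                        return;
                    }
                    if (request.status == 200) {
                        self.onentryuploaded(r[0]); // assuming r[0] is the rowid
                        nextCallback(); // move on to the next entry
                    } else {
                        self.onerror('upload failed for entry ' + r[0]);
                    }
                };
                request.send(r[r.length - 1]); // assuming FormData is last
            },
            close: function() {
                self.oncomplete();
            }
        });
    };
}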

Finally, the Validator. It’s responsible both for providing good initial values for the form, as well as making sure the user is entering something legible.


// encapsulates validation of values by type
function Validator() {

    // provides good initial value, given a type. Takes:
    // type : String, the type of the input, like 'datetime' or
    // 'number'
    // returns : String, initial value
    this.seedGoodValue = function(type) {}

    // validates a value of a specified type. Takes:
    // type : String, the type of the input.
    // value : String, value to validate
    // returns : Boolean, true if value is valid, false otherwise
    this.isValid = function(type, value) {}
}

Whew! Are we there yet? Almost.

Step 4: Write Code

This is where we roll up our sleeves and get to work. There’s probably no reason to offer a play-by-play on the actual process of coding, but here are a couple of things worth mentioning:

  • Javascript is a dynamic, loosely-typed language. Enjoy it. Don’t square yourself into compile-time thinking. This is funk, not philharmonic.
  • Javascript is single-threaded. The trick that you might have learned with 0-timeout doesn’t actually start a new thread. It just waits for its opportunity to get back on the main thread.
  • Gears workers, on the other hand, are truly multi-threaded. There is some pretty neat plumbing under the hood that sorts out this dichotomy by queueing the messages, and you might want to be aware of that when writing the code. For instance, calling the main thread with UI operations from a worker doesn’t make them asynchronous: the message handlers will still line up and wait for their turn. So, if your worker does a lot of waiting on the main thread, you may not see as much benefit from using the worker pools. (The toy sketch after this list shows the basic shape of this division of labor.)
  • Gears currently lacks a database or resource store management console with a slick user interface (hint: you should perhaps join the project and lend a hand with that). But dbquery and webcachetool are good enough. For this project, I cooked up a small page that, upon loading, blows away all known state of the application, and that was pretty handy in development (listing 3).
  • There is a very simple way to simulate offline state on your local machine. It’s called iisreset. From command line, run iisreset /stop to stop your Web server and you’ll have a perfect simulation of a broken connection. Run iisreset /start to get the application back online.
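
And here is that toy sketch: the main page hands a chunk of heavy math to a worker and picks up the answer in its onmessage handler, which queues up like any other message (not from the tutorial project, just an illustration):

// toy sketch: offload a computation to a Gears worker
var workerPool = google.gears.factory.create('beta.workerpool');
workerPool.onmessage = function(text, sender, message) {
    // runs on the main thread, queued like any other message
    alert('sum: ' + message.text);
};
var workerId = workerPool.createWorker(
    'var wp = google.gears.workerPool;' +
    'wp.onmessage = function(text, sender, message) {' +
    '  var n = parseInt(message.text, 10);' +
    '  var sum = 0;' +
    '  for (var i = 1; i <= n; i++) { sum += i; }' + // the heavy math
    '  wp.sendMessage(String(sum), sender);' +
    '};');
workerPool.sendMessage('100000000', workerId);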

Armed with these Gear-ly pearls of wisdom, you jump fearlessly on the interfaces above and get coding. Or… you can just see how I’ve done it (listing 3).

Step 5: Feed the Monkey

Feed the monkey? Wha… ?! Just wondering if you’re still paying attention. Technically, we’re done here. The application is working (to see for yourself, download the screencast or watch it all fuzzy on YouTube). As you may have gleaned from our coding adventure, Google Gears offers opportunities that weren’t available to front-end developers before: to build Web applications that work offline or with an occasionally-available connection, to add real multi-threading to Javascript, and much more. What’s cool is that Gears is already available on many platforms and browsers (including Internet Explorer), and the list is growing quickly. Perhaps PC World is onto something, calling it the most innovative product of 2007. But don’t listen to me: I am a confessed Gearhead. Try it for yourself.

Written by Dimitri Glazkov

January 31, 2008 at 8:30 pm

Posted in Uncategorized

Back into the Future of Web: HTML5 SQL Player

Like a Three Stooges carpet-pull slapstick stunt, the HTML5 client-side storage spec changed drastically the night I released my Gears wrapper. Thump. Ow!

I am ok! I am ok! And better for it! This time, I am back with a vengeance. Why fight the change? Embrace it! Dear reader, allow me to present the HTML5 SQL Player, a tool that spec developers and curious bystanders alike can use to poke and prod the spec in action. Essentially, this is a Google Gears-based sandbox, in which a user can run Javascript code to query and test the interfaces implemented by the specification. If I were into that kind of thing, there would be a picture of a crazy-eyed Christopher Lloyd and some reference to the movie that doomed his career. Yes, my friends, this sandbox is a glimpse into the yet-to-be-implemented technology.

And as such, beware of the bleeding edge. Some things in the spec are somewhat under… erm… specified (like the mode of transaction and its effect on sequential calls of the transaction method) and some things in the sandbox are under… erm… implemented (like changeVersion or SQL sanitation). But regardless, this approach is still the best if you’re trying to evaluate the spec’s viability in an effort to make it better. And that’s what this is all about.
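
For reference, the shape of the API the player lets you poke at looked roughly like this at the time (argument lists kept churning as the draft evolved, so treat the details as illustrative):

// the HTML5 client-side database API, circa the late-2007 draft
var db = openDatabase('notes', '1.0');
db.transaction(function(tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS Notes (body TEXT)');
    tx.executeSql('INSERT INTO Notes VALUES (?)', ['hello, spec'],
        function(tx, result) {
            // result.insertId and result.rowsAffected live here
        });
});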

Written by Dimitri Glazkov

November 17, 2007 at 9:48 pm

Posted in Uncategorized

Jumping Off Audience Navigation Bandwagon

Future Endeavor has another insightful post, followed by an interesting UX example of the University of Virginia front door. I am a big fan of this blog and would highly recommend it to anyone involved in higher education Web development. This time, Tony Dunn talks about the future of the University Web site. I like his thinking and I feel that my thinking is mostly aligned with it. Where we diverge is on the future of the audience-based navigation.

The truth is, I no longer believe in the necessity (or usefulness) of audience-based navigation for a University. There, I said it. Having been an advocate for it for the last 8 years, I eventually came to realize that all it does is create an extra barrier for the user (umm, who am I? Which is the right door?) and is mostly ignored by the visitors, anyway (I am basing this on my observations and thought experiments).

Self-selection is a myth: as you probably know, the user commonly belongs to multiple or none of the offered audiences, and this artificial ritual of forcing the visitor to put the right hat on is not only confusing, it’s actually a little bit insulting.

What’s the alternative? Concentrate on three things:

  • Needs-based Clusters. Envelop topics relevant to specific needs (How do I become a student?) into cohesive (spit-and-polished!), limited-in-scope sites.
  • Lifeline Links. Identify 3-5 most desperate and immediate needs of your visitor (I have to check my grades) and by golly, put them on the home page.
  • Ambient Findability. Make sure that each page on your site carries a potential of getting the user closer to achieving their goals.

That’s all for now. I am eager to hear your thoughts and opinions on my little turn-about.

Written by Dimitri Glazkov

November 15, 2007 at 10:12 am

Posted in Uncategorized

Chewing on Open Social

So, the cat is out of the bag, in case you haven’t heard (and if you haven’t, what remote island are you living on?). I spent a bit of time this weekend, playing with the new toys, trying to analyze by immersion. Essentially, on the Javascript side, it’s one part existing Gadget API, one part new feature (you guessed it, named opensocial), and you’ve got yourself a whole new playing field to tinker with. Not being familiar with the Gadget API, I was learning both parts at the same time, which is never a bad thing.

After getting my sandbox permit, I hastily cooked up two simple gadgets, er… social applications, the Twitter and the OpenZombie. Both of these are skeletal proofs of concept, which I have no intention of developing further. So, feel free to borrow in parts or in whole — it’s public domain, baby! I intentionally tried to keep them light-weight, client-side-only. Both have been casually tested with Firefox and IE7. In other words, don’t call me if you have a problem running either.

The first application grabs data from Twitter using Gadget API calls and renders it to somewhat resemble a Twitter feed. It doesn’t actually use any of the OpenSocial API functionality and can be run in iGoogle. It does use the UserPrefs to ask for the Twitter username, and Orkut’s current way of dealing with this is rather jarring, so be prepared for that.

The second one is my 45-minute take on the ever-ridiculous Zombies application on Facebook. Except this one actually bites automatically. As soon as the user stumbles upon my profile page, they are bitten by the OpenZombie application (with the corresponding activity stream message), and offered to install the application themselves as a vengeance-laden consolation prize. No stats are kept (and that would be hard, given that the API doesn’t yet allow you to update the owner’s person data), and no blood-curdling imagery is displayed. I figured, the next guy will come along and make it pretty. And by pretty I mean despicably horrific.

Speaking of the next guy, here are a couple of tips that I have for you:

  • When debugging the application, appending &bpc=1 to the URL of the page itself will disable caching of the application. Someone already built a Greasemonkey script for that.
  • Modularize your development. Make your application a harness that calls scripts and styles remotely:
  • <Module>
    	<ModulePrefs [attributes go here]>
    		<Require feature="opensocial-0.5"/>
                    [more feature requirements go here]
    	</ModulePrefs>
            [user prefs, etc. go here]
    	<Content type="html">
    	<![CDATA[
    		<script type='text/javascript'
    			src='[absolute script url]'></script>
    		<style>
    			@import url('[absolute style url]');
    		</style>
    		<div id="message"></div>
    		<div id="panel"></div>
     	]]>
      </Content>
    </Module>
    

    Then, in your script, do something like this:

    _IG_RegisterOnloadHandler(function() {
            // your code goes here
    });
    
  • Once you have modularized your application, you can do another simple trick: edit your hosts file to temporarily point the hostname in the script and style URLs to localhost. Then make sure that these files are accessible from your local web server. Now you can edit them and see the changes without having to push the files to the server on which the application will eventually be hosted. Just don’t forget to remove the edits in the hosts file when you’re done developing.

Now, for a quick technology review of the OpenSocial Javascript API (can’t speak for the GData stuff, haven’t played with it). Contrary to the few negative reactions in the blogosphere, I find OpenSocial pretty impressive. I think the API is easy to learn and follow, the transparent authentication and identity data management model is neat, and there’s plenty of room to play, or even build something useful. Bringing application development into the Javascript domain is a good thing. Yeah, the sandbox squeaks and rattles, but that’s typical for an early release. Give it a little time.

The API itself is wordy and a bit inelegant, though this may be a viewpoint skewed by the laconic beauty of jQuery. I am guessing that its current shape is probably a result of being tailored toward the more arcane Javascript implementations. I can’t find any other explanation for the gratuitous global namespace pollution or things like API objects having accessible underscored methods/fields.

But my biggest beef is with the Gadget API. With its let’s start now, it’s so simple! approach, it practically encourages hacky, spaghetti-style Web development. Adding even a primitive asset management to the XML declaration would be a win-win: developers are nudged to separate behavior, presentation, and markup, and the server gets to know in advance what’s needed to render a gadget, thus providing opportunities for caching, embedding, or aggregating the assets:

<Assets>
    <Asset Type="js" Src="http://example.com/twitter.js" />
    <Asset Type="css"  Src="http://example.com/twitter.css" />
...

Another thing that stood out is the lack of user experience management. Facebook went a long way (they invented their own language!) to keep the consistency of the user interface by offering common primitives, like profile action or the freshly baked board. Walking from application to application, you can easily see where the primitives end and developer’s own creative aspirations begin (and believe me, in 8 cases out of 10, it ain’t pretty). But at least they tried. The only thing that Gadget API has in this regard is handling of user preferences. That’s it. The containing IFRAME is essentially an open canvas. This is something that has to be addressed, especially considering that some partners in the alliance are pretty good about keeping their UX noses clean.

I hesitate to draw any sort of conclusions in regard to the direction or viability of the project. Obviously, this is a very early developer’s preview, where it’s perfectly acceptable to come across matchsticks and duct tape. As it stands right now, OpenSocial is certainly not as well-oriented and focused as Facebook, and Orkut doesn’t make a good sandbox container, because… well, let’s just say it won’t win any usability awards. And certainly not visual design awards. Even with that, I can see fairly clearly what Google wants to become: they want to be the social networking plumbing. Just like their search became the Internet for many users, I can speculate that Google hopes to offer free, ubiquitous, and highly mashable pieces of infrastructure that power the majority of person- and community-centric software on the Web. Ultimately, I don’t believe it’s a move in a game of chess, but a tiny step in a strategy that reaches much farther and wider than everyone’s favorite blue-shaded time waster.

Written by Dimitri Glazkov

November 4, 2007 at 6:52 pm

Posted in Uncategorized

Slides from my IPSA presentation on HTML5 and Google Gears

Today, at the monthly IPSA meeting, I gave a presentation on Google Gears and HTML5 client-side storage part of the spec. As promised, I uploaded the slides to this blog.

… Yes, I am going slide-less from now on. Jeff Keeton and I have already done a couple of browser tabs-only presentations before, and the simple method works as well as, or better than, slides.

Written by Dimitri Glazkov

November 1, 2007 at 2:43 pm

Posted in Uncategorized