Coblogging: Spontaneous aggregation

In my fairly short experience with blogging, I’ve noticed one thing about these new creatures of the Web: they are both connecting and isolating. They are connecting in the sense that anyone can read your stuff and attach their opinions to it, but they are also isolating, because your stuff typically sits in its own blog silo, all alone until some aggregating service graciously decides to let your posts socially mingle with the others. Folks over at weblogs.asp.net and other blog-hosting establishments have this advantage in the form of the front page feed, but what about the rest of us?

Also, blogs are horrible for sustaining an online conversation: comments are given such third-world status that they are easily lost in contrast with the main post. As a solution, bloggers use pingbacks, but those are hard to track navigationally, and it takes some time to reconstruct the original flow of the discussion. Take our recent exchange with Bertrand, Nikhil, and Peter for example. We all published posts in our own blogs and responded in the comments of the others’ posts, which created a fairly messy pool of opinions with a hard-to-discern direction or conclusion (of course, you know that in this case Nikhil gets the final word, so backtracking might help :).

Wouldn’t it be great if there were a way to organize these posts into a forum-like threaded discussion?

Enter coblog: a way to bring blogs together — as needed. 

Imagine that at the time of writing a post, you can designate it as the coblog root. The coblog root looks and acts just like an ordinary post, except maybe it has a “coblog” icon next to it (or some other visual hint of this sort).

Suppose your (friend|colleague), after reading your insightful (prose|poem), gets all (upset|excited|riled-up|flabbergasted) and decides to open a discussion on the topic of your post.

At this point, this (great|well-known|total-nobody) acquaintance of yours opens their blog and writes up a (heated|supportive|contemplative) response to your original post. At the end, (he|she) specifies that this is not an ordinary post, but a coblog post.

A coblog post requires a URL to the coblog root or to any other coblog post. Using this URL, your buddy’s blog server determines the location of the coblog root and sends it a TrackBack to notify it of the new post in the coblog.

Instead of (or maybe in addition to) just sending a URL of the actual post in the TrackBack, the server sends the feed URL for all coblog posts that correspond to the specified coblog root on this server. Think of it as a variation on a category feed.
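To make the plumbing concrete, here is a rough sketch of such a ping from the sending server’s side. A standard TrackBack ping is a form-encoded HTTP POST; the coblog_feed_url parameter is purely hypothetical, my made-up extension for carrying the coblog feed URL.

using System.IO;
using System.Net;
using System.Text;
using System.Web;

public class CoblogPing
{
    public static void Send(string trackBackUrl, string postUrl, string coblogFeedUrl)
    {
        // Standard TrackBack "url" parameter, plus the hypothetical coblog extension.
        string body = "url=" + HttpUtility.UrlEncode(postUrl) +
                      "&coblog_feed_url=" + HttpUtility.UrlEncode(coblogFeedUrl);
        byte[] bytes = Encoding.UTF8.GetBytes(body);

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(trackBackUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = bytes.Length;

        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }

        // The TrackBack response (success or error) comes back as XML.
        request.GetResponse().Close();
    }
}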

Upon receiving a TrackBack, the coblog host (the server which hosts the coblog root) adds the URL sent with the TrackBack to the coblog roll. There may be some moderation functionality in place (no, I don’t accept a coblog feed from “MaskedBandit34252” — sorry, you look like a spammer to me) to prevent just any server from being able to feed posts to your coblog.

The coblog roll serves as a list of all potential sources of posts for the coblog. Each coblog root has its own coblog roll.

All of the coblog posts, including the coblog root, will be available as part of the aggregated feed, which always originates from the coblog host.

Now, after the discussion has started, both your and your comrade’s blogs will prominently feature a link to the coblog in the (sidebar|top bar|inconspicuous drop-down menu) of your home page. Clicking on this link will reveal the discussion as a chronological list of coblog posts, starting from the coblog root. Those participants who don’t have their own blogs could use comments as usual.

Naturally, coblogs should not be limited to just two people. The purpose of the coblogs is to facilitate discussion, and a good discussion often takes more than two (voices|flames).

So, what do you think? This is just a concept, but I think it shows a lot of promise.

Coding For Multithreading

Between the two of them, Ian Griffiths and Phil Haack have some good info on coding for multithreading. Developing multithreaded applications is hard. It takes experience, which comes in the most painful forms of debugging and learning from engineering mistakes — unless you are a multithreading genius who sports built-in support for multitasking and thread modeling directly in the brain.

Otherwise, every little bit of information helps. Here’s my contribution:

  • If you haven’t started coding yet, please spend some time at the whiteboard, modeling your threads and locks. No need to dive into a UML book — simple lines and arrows will suffice. Try to understand when a resource needs locking along the timeline, and what possibilities there are for deadlocks and race conditions. Multiple threads can be easily expressed in two dimensions — the easiest representation is a graph where one axis is the relative timeline and the other is application scope (local, thread, application, persisted data). Traverse your application data through it and study any point where it dips below the application scope. There are more elaborate methods of engineering and modeling multithreaded applications, but if you’ve got nothing to begin with, the whiteboard is your best friend.
  • If you are dealing with existing code that is not thread-safe, definitely do try replacing your “lock” statements with Phil’s TimedLock to trace potential deadlocks. You will still suffer extensive debugging therapy, but the TimedLock pill will help a little.
  • When writing a type that has to be thread-safe, pay very close attention to every member and how it is used. Consider marking members that aren’t changed through the lifespan of the instance as “readonly”. This is not going to affect your performance, but it will help you organize your type’s members into mutable and immutable piles.
  • Speaking of Immutable — this is a good pattern to use as far as thread safety is concerned. It may or may not apply to your scenarios, but it should be considered when designing types, especially as a complement to the Memento and State patterns.
  • Just one more on patterns — Proxy is a good way to encapsulate thread-safe access to data that is known to be continually updated.
  • Be twice as careful when writing a type that is instantiated statically. Some people mistakenly assume that once they’ve accomplished the feat of thread-safe instantiation, they are out of the woods. Just the opposite — static instances are the marked men of the “race condition police”.
  • Post-constructor initialization of a static member is a bad smell as far as thread safety is concerned. It may be unavoidable in certain circumstances, but it should be your last resort.
  • Lazy instantiation is not something you apply blindly to any type — it may not save you much performance or memory, but it will definitely make your architecture more complex. If you have to do lazy instantiation, try to use the lockless static instantiation (fifth version on the referenced page; see the sketch after this list).
  • According to the latest and greatest, a simple double-check lock is not enough: a MemoryBarrier call is required to ensure the correct ordering of reads and writes around the null comparisons (also sketched after this list).
  • Try to be conscious of where and why you place a lock — and what kind of a lock it is. Generally, you’ll use either a Monitor lock (or the lock statement) or the ReaderWriterLock. Ian warns against being too liberal with the use of ReaderWriterLock. Don’t just place a “lock brace” around your whole method, although you may be tempted to. The longer the body of the lock, the more likely it is that other methods and type instances are called within that body, and thus the more possibilities there are for a deadlock.
  • Just because a method is documented as “thread-safe” in the .NET Reference doesn’t mean that you can’t use it in a non-thread-safe way.
  • And last, but not least — make sure you test and debug your multi-threaded application on at least a dual-processor machine.
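To make the last few points concrete, here is a minimal sketch of both lazy instantiation flavors. The Settings type and its members are made up for illustration; the patterns are the lockless static instantiation and the double-check lock fortified with MemoryBarrier.

using System.Threading;

// A made-up immutable type: all members are readonly,
// so instances are safe to share across threads.
public sealed class Settings
{
    private readonly string connectionString;

    public Settings(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public string ConnectionString
    {
        get { return connectionString; }
    }
}

// Option 1: lockless static instantiation. The nested class defers
// creation until the first access of Instance, and the CLR guarantees
// thread-safe initialization of static fields.
public sealed class LocklessProvider
{
    private LocklessProvider() {}

    public static Settings Instance
    {
        get { return Nested.instance; }
    }

    private class Nested
    {
        static Nested() {}
        internal static readonly Settings instance = new Settings("...");
    }
}

// Option 2: double-check locking with an explicit memory barrier, so the
// reference is not published before the object is fully constructed.
public sealed class DoubleCheckProvider
{
    private static object syncRoot = new object();
    private static Settings instance;

    public static Settings Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        Settings settings = new Settings("...");
                        Thread.MemoryBarrier();
                        instance = settings;
                    }
                }
            }
            return instance;
        }
    }
}

Marking the instance field as volatile is a commonly suggested alternative to the explicit barrier, at the cost of volatile semantics on every read.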

Like I mentioned before, multithreaded programming is not an easy task. Hopefully, these tips will help you explore both .NET’s potential and your own in enterprise application development.

The Bogeyman of ASP.NET

Let me first start by saying that ASP.NET is an awesome framework. It is now my primary development platform and it is a joy to work with. It brings to the table something that is not built into most scripting platforms — a pattern-driven development paradigm, based on the “Page Controller” pattern mentioned in Martin Fowler’s “Patterns of Enterprise Application Architecture” book.

Unfortunately, not all is well in the current (ASP.NET 1.1) model. One of the framework’s most significant problems is that the rendering of a control is tightly bound to its implementation (I first mentioned this in one of my earlier posts). In my book, mashing functionality and presentation together is never a good thing. While the initial results may end up looking snazzy and easy to slap together, the long-term liability of not following the rules will sooner or later catch up with you.

The way controls are implemented, Render is one of the methods of the control itself, encouraging (sometimes forcing) developers to embed HTML and CSS code into the actual controls.
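For illustration, here is a made-up control of the sort the framework encourages. The names are mine, but the shape should look familiar: the presentation is baked right into the code, out of the designer’s reach.

using System.Web.UI;

// A hypothetical control: markup and styling are hard-coded
// into Render, so only a developer can change how it looks.
public class RoundedBox : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<div class=\"roundedBox\" style=\"border: 1px solid #999;\">");
        writer.Write("<img src=\"corner-tl.gif\" alt=\"\" />");
        RenderChildren(writer); // the control's actual content
        writer.Write("</div>");
    }
}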

Now, from the designer’s perspective, this effectively shreds the HTML output into a number of small nuggets that are hard-coded into each of the controls. With the proliferation of third-party controls, control over the look and feel of the site is placed firmly in the developer’s hands, leaving the designer with maybe only a banner and a footer to play with. That’s not a very good proposition.

As any good book on Web process methodology would suggest, there are many roles involved in the process of building a site. If you work for a large company, that means you have a team of people working on the project. If you are working alone, that means you have to wear many hats. It does not mean that you get to forego those roles and just hack your site together.

He who wears the developer hat should not be concerned whether the box corners are rounded. He who wears the designer hat should not be concerned whether the box with rounded corners is a control or just a literal markup, emitted by a larger control.

Whidbey made a few good steps forward with the introduction of Themes, but from what I understand, Themes stop short by simply being a CSS style manager, not a way to manage the actual markup rendering of controls.

Then there is a new layer of abstraction for rendering in the form of Adapters, but they are somewhat locked into the special case of browser/browsing-device support. I also doubt that Adapters will be powerful enough for a site of any reasonable complexity without the support of more flexible control selectors.

This may be too late for ASP.NET 2.0, but if I were developing the next iteration of the framework, I would do the following (this is a first stab, just trying to give a sense of direction):

  • Elevate Adapters to full-citizen status by making them the de facto rendering layer for controls (not just the “special” case rendering)
  • Integrate Themes and Adapters so that the markup can be controlled completely by a Theme.
  • Take the .skin files out of the Theme and into their own Skin category — skins control how a Theme applies to a particular application. In effect, Themes become truly interchangeable. Talk about an ISV market potential here!
  • Provide more flexible (maybe even CSS2-style) selectors for Skin controls.
  • Develop several built-in Themes for the developers.
  • Split the current development surface of Visual Studio (the “design” view) into three views along the lines of Web process roles.
  • The first new development surface will be the “proto form” view, for developers. You can drag and drop controls on this surface and arrange their order and parameters, but not their visual style.
  • The second development surface will be the designer view, which would allow designers to visually develop application skins (basically, connect Theme elements to actual controls in the application).
  • The third development surface will concentrate on the actual development of Themes and be focused toward more advanced developers.

By creating a well-defined layer of abstraction for rendering, the framework would create a more organized and process-oriented approach to Web application development and most importantly, a way to affect the look and feel of your application without having to change its code.

What’s interesting is that if all of this is in place, the development paradigm will not become more complex for beginners and Mort users — it will actually be easier. Instead of tweaking the colors on each individual control, they would have the capability to paint their application with large brush strokes, using Themes. And if they want to re-paint — just find another Theme, pick out cool gadgety thingies that blink and roll, and brush away.

My workplace

Since Cyrus is talking about his multi-monitor set up, I guess I want to brag, too. As many of my co-workers and friends know, I take my workplace very seriously. I like everything running just right, positioned at the right angle, and available when I need it.

I don't like cute utilities, screen savers, or backgrounds, because inevitably they take up memory and the precious milliseconds of my CPU/GPU time. I don't like applications or services running in the background if I don't need them. I work hard to eliminate items from my notification area (a.k.a. tray icons).

I generally don't like add-ins or plug-ins, because they tend to affect the performance of the application. I would rather run NUnit and Reflector as separate applications and not deal with the extra windows and behaviors of their corresponding Visual Studio add-ins. I find it easier to cluster my workspaces around a specific activity, rather than plop it all on one big palette of a Visual Studio workspace. That is why my Visual Studio Help is firmly switched to “External”, and the Clippy of Visual Studio — “Dynamic Help” — is banned forever from the view.

Every week, I take a little bit of time to make sure that my projects, documents, and applications are organized the way I want them. I don't believe in keeping an application installed when I haven't needed it for more than a couple of weeks. If it's not being used — it's gone.

I like working with multiple monitors. I find that having multiple screens helps me cluster my work around specific activities and also provides enough horizon for those activities that require multiple applications running at once (such as debugging, tweaking performance, CSS, etc.). Currently, I have four monitors surrounding me (from left to right):

  • An old Apple iBook — good enough for browser compatibility testing.
  • Two monitors on my main workstation: a 17" utility monitor, which usually hosts my Outlook and MSDN Library, and a 20" flat panel, which is where most of the work happens.
  • My mobile workstation with a 15" screen, which is primarily used for performance monitoring or remote access. I always take this one home (just in case).

In addition to the usual CAT5 hookup, the machines are also networked with FireWire. This allows me to test networking issues and do marginal load testing, as well as some other netrickery.

All of the screens are “stitched” together with Synergy, which is an absolute must if you have more than one machine sitting on your desk.

And finally, looking at the picture of the set up, you may notice that the laptops are held in their positions using riser doohickeys. A word of advice — before spending $130 on a laptop riser, stroll over to your neighborhood Staples store and pick up a book holder. It works just as well and costs around $10.

Credentials Screening: Windows Authentication without a Login Dialog Box

It is easy to turn on Windows (also known as NT) authentication on IIS:

  • Check “Integrated Windows authentication”
  • Uncheck the “enable anonymous access” box
  • Set permissions on the file or folder that you want to be the object of authentication.

Once authentication is turned on, anytime anonymous users access that file (or folder), they will see a login dialog box pop up, asking them to enter their credentials before letting them view the file. A slightly different situation awaits those users whose browser is configured to recognize the site as part of an Intranet domain — the browser will attempt to authenticate automatically, and if the authentication is successful, they will be let in without any dialog boxes popping up.

Wouldn’t it be nice if your site would not display a dialog box in either case?

  • If the user can be authenticated automatically, let them in as authenticated
  • Otherwise, let the user in as anonymous.

This type of authentication is often needed in Intranet Web sites, where the site provides degrees of customization based on the type of the user, screening every user for authentication, but never explicitly asking for credentials. Hence the term “credentials screening”: have valid credentials? Great! Don’t have any? That’s ok, too.

The biggest challenge in this scenario is the fact that most of the “behind the scenes” work of automatic (integrated) authentication happens outside the boundaries of your client or server applications, handled completely by the browser. In other words, there is no way to control how the process happens using either server-side or client-side code.

What we need here is a function of some sort (let’s call it the screening function) that would allow us to test whether the user can be authenticated, returning “true” or “false”. Also, this function needs to be called before authentication is first attempted. These two assumptions help us outline the following principles:

  • The first page that the user hits must have anonymous authentication enabled
  • The screening function must be contained in that first page
  • In order to return a result, the screening function must somehow attempt authentication
  • An authentication attempt is triggered by accessing a page which requires authentication
  • Accessing a page from a client-side function can be performed using a variety of methods. One of them is instantiating and using the Msxml2.DOMDocument object (there are similar methods for browsers other than Internet Explorer).

After looking at the list above, the implementation of the screening function becomes crystal clear:

function CanAuthenticate()
{
    try
    {
        // Request a document that requires authentication; the browser
        // will silently attempt integrated authentication on our behalf.
        var dom = new ActiveXObject("Msxml2.DOMDocument");
        dom.async = false;
        dom.load("RequiresAuthentication.xml");
    }
    catch(e)
    {
        // The load failed: the current user has no valid credentials.
        return false;
    }
    return true;
}

As you can see, this function attempts to open a document named “RequiresAuthentication.xml”. If this document has anonymous authentication disabled, the browser will automatically attempt authentication using the existing user credentials. No dialog box will be shown – if authentication fails, an exception will be thrown and the function will return “false”. Otherwise, the document will be opened successfully and the function will return “true”.

The only other issue is to make sure that this function is always called at the beginning of the user session. In ASP.NET, you can accomplish this by subscribing to an event fired by the built-in session state management module, System.Web.SessionState.SessionStateModule. The name of the event is “Start”, and the easiest way to subscribe is to:

  • Open your application’s Global.asax.cs file
  • Add your code to the body of a pre-built “Session_Start” method.

Because this method is called only once for each user session, you can write it to simply emit the HTML which contains the screening function and then end the response, in order to prevent the actual page from being displayed. Of course, your client-side code must initiate a page reload right after calling the screening function.
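Here is a minimal sketch of that approach. The details are my own choices for illustration: the result travels back to the server in a cookie, and “CanAuthenticate.js” is a hypothetical file holding the screening function shown above.

// Global.asax.cs: fires once per user session.
protected void Session_Start(object sender, EventArgs e)
{
    // Emit a tiny page whose only job is to run the screening function,
    // record its result in a cookie, and reload the requested page.
    Response.Write("<html><head>");
    Response.Write("<script type=\"text/javascript\" src=\"CanAuthenticate.js\"></script>");
    Response.Write("<script type=\"text/javascript\">");
    Response.Write("window.onload = function() {");
    Response.Write("  document.cookie = 'CanAuth=' + CanAuthenticate() + '; path=/';");
    Response.Write("  window.location.reload(true);");
    Response.Write("};");
    Response.Write("</script></head><body></body></html>");
    Response.End(); // keep the actual page from being rendered
}

On the reload, Session_Start no longer fires (the session now exists), so the requested page renders normally and can read the “CanAuth” cookie on the server side.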

Another way of doing it is to develop an HTTP module, which will do the same thing and leave the Global.asax.cs file in its pristine condition. Here you can download a sample implementation of such a module.
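If you prefer the module route, the only non-obvious part is the wiring. A rough sketch, assuming the session state module is registered under its default name, “Session” (this is the general shape, not the downloadable sample):

using System;
using System.Web;
using System.Web.SessionState;

// A bare-bones module that hooks the session Start event,
// leaving Global.asax.cs untouched.
public class CredentialsScreeningModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // "Session" is the default name of SessionStateModule in machine.config.
        SessionStateModule session =
            (SessionStateModule)application.Modules["Session"];
        session.Start += new EventHandler(OnSessionStart);
    }

    private void OnSessionStart(object sender, EventArgs e)
    {
        HttpResponse response = HttpContext.Current.Response;
        // Emit the screening page here, exactly as in the
        // Session_Start variant above, then end the response.
        response.Write("<!-- screening page goes here -->");
        response.End();
    }

    public void Dispose()
    {
    }
}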

A couple of installation instructions:

  • Make the “CredentialsScreening” folder a Web application
  • Disable anonymous access to the “RequiresAuthentication.xml” file and set its permissions according to your authentication needs.
  • To test, access the “Test.aspx” file with your browser.

Well, there you have it – a credentials screening solution. Hope you find it useful. If you do – drop me a note.

UPDATED 10/04/04: Referenced code was updated. For more details, look here.

Html-XPath project created on SourceForge.net

For those of you interested in using the DOM Level 3 XPath implementation for Microsoft Internet Explorer 5+ in your Web applications, I created a project on SourceForge.net:

http://sourceforge.net/projects/html-xpath/

The code is released under the LGPL license and provides a functionally complete implementation in its first release. The next project milestones are:

  • Allow passing instances of the Msxml2.DOMDocument object as the contextNode parameter of the evaluate function
  • Implement the ECMAScript binding model

If you would like to participate in the project, let me know.

The Objective of Information Architecture

I have recently stumbled upon a very interesting discussion, initiated by a fairly controversial article by Mark Hurst of Creative Good. In his piece, titled “The Page Paradigm”, Mark provides some radical reasoning regarding the fundamental principles of the Information Architecture process as we know it.

Here’s a Cliffs Notes version of the article:

  • Users come to a Web site only when they have a goal
  • The goal is very specific
  • On a page, users click whatever appears to take them closer to the fulfillment of their goal
  • If they don’t see anything of the sort, they click the Back button in their Web browser
  • Users don’t care about where they are on the website — they care about achieving their goal
  • Therefore:
    • Painstakingly fitting all of the pages into a neat hierarchy is not all that important
    • Consistency of the navigation is not important or necessary
  • What’s important is making sure each page takes the user closer to accomplishing their goal

Naturally, anything that challenges the two pillars of information architecture, content structure and navigation consistency, should (and did) cause a flurry of responses.

Some said that Mark oversimplified the problem. Some were conflicted. Others offered a “there is no spoon” philosophical exercise, suggesting that navigation actually doesn’t exist.

I must say, I was in a bit of a shock after reading the article. However, I have to admit that Mark’s argument is valid. Consistent navigation and a neat hierarchy are not the objective of the information architecture process. They are merely tools, two of many. The objective of the process is designing a user experience that aids the visitors of the site in achieving their goals.

In a way, this view is the opposite of oversimplification — it implies that a site with consistent navigation and impeccable structure may still fail at helping the users find what they want.

Excited about XInclude

Once again, Oleg’s article was published on MSDN. This time he talks about XInclude. Like any great article, it offers not only a good conceptual overview with examples and “what-if” scenarios, but also provides a very solid implementation of the XInclude specification (which is a recommendation at the moment), called XInclude.NET. Just as much as I enjoyed reading the article, I enjoyed reading the code, which I highly recommend as an educational exercise for everyone.

XPath, unleashed — coming to an Internet Explorer 5+ HTML DOM near you

While coding away in JavaScript, reshaping/augmenting your HTML code using DOM, have you ever wondered why there is no built-in support for XPath? Actually, there is — Mozilla has pretty solid support for DOM Level 3 XPath right at your fingertips through the document.evaluate method:

document.evaluate(expression, contextNode, resolver, type, result);

You’ll find the details of the implementation over at w3.org or mozilla.org, but for starters, expression is the XPath expression string and contextNode is the DOM node that you’d like to use as a root. The rest can be (and most often will be) specified as zeros and nulls. For instance, this expression will get you all div nodes that have the class attribute set to DateTime in your document:


var iterator = document.evaluate("//div[@class='DateTime']", document, null, 0, null);

By default, the method returns an iterator, which can be worked through like so:


var item;
while ((item = iterator.iterateNext()) != null)
{
    // do something with item
}

As you might’ve guessed, the iterator returns null once all items are exhausted. By modifying the type parameter, you can make the method return other types, such as string, boolean, number, and a snapshot. A snapshot is kind of like an iterator, except the DOM is free to change while the snapshot still exists; change the DOM under a live iterator, and it will throw an exception.

Well, I thought that it is mighty unfair that Internet Explorer does not support such functionality. I mean, you can very much do XPath in JavaScript, except it can only happen in two cases (that I know of):

1) As a call to an Msxml2.DOMDocument object, created using the new ActiveXObject() statement.

2) If an HTML document was generated as a result of a client-side XSL transformation from an XML file.

Neither case offers us a solution if we want to use XPath in plain-vanilla HTML. So, I decided to right the wrong. Here is the first stab at it — a JavaScript implementation of DOM Level 3 XPath for Microsoft Internet Explorer (all zipped up for your review). Here is a sample which should run in exactly the same way in IE and Mozilla.

Now counting all links in your document is just one XPath query:


var linkCount = document.evaluate("count(//a[@href])", document, null, XPathResult.NUMBER_TYPE, null).getNumberValue();

So is getting a list of all images without an alt attribute:


var imgIterator = document.evaluate("//img[not(@alt)]", document, null, XPathResult.ANY_TYPE, null);

So is finding the first LI element of all UL tags:


var firstLiIterator = document.evaluate("//ul/li[1]", document, null, XPathResult.ANY_TYPE, null);

In my opinion, having XPath in HTML DOM opens up a whole new level of flexibility and just plain coding convenience for JavaScript developers.

I must say, I haven’t been able to resolve all the implementation issues yet. For example, I couldn’t find a pretty way to implement the properties of XPathResult. How do you make a property accessor that may throw an exception in JScript? As a result, I had to fall back to the Java model of binding to properties.

So guys, take a look. I can post more on details of implementation, if you’d like. Just let me know.