My workplace

Since Cyrus is talking about his multi-monitor setup, I guess I want to brag, too. As many of my co-workers and friends know, I take my workplace very seriously. I like everything running just right, positioned at the right angle, and available when I need it.

I don't like cute utilities, screen savers, or backgrounds, because they inevitably take up memory and precious milliseconds of my CPU/GPU time. I don't like applications or services running in the background if I don't need them. I work hard to eliminate items from my notification area (a.k.a. tray icons).

I generally don't like add-ins or plug-ins, because they tend to affect the performance of the host application. I would rather run NUnit and Reflector as separate applications than deal with the extra windows and behaviors of their corresponding Visual Studio add-ins. I find it easier to cluster my workspaces around a specific activity than to plop it all on one big palette of a Visual Studio workspace. That is why my Visual Studio Help is firmly switched to “External”, and the Clippy of Visual Studio, “Dynamic Help”, is banned forever from view.

Every week, I take a little bit of time to make sure that my projects, documents, and applications are organized the way I want them. I don't believe in keeping an application installed if I haven't used it in more than a couple of weeks. If it's not being used, it's gone.

I like working with multiple monitors. I find that having multiple screens helps me cluster my work around specific activities and also provides enough room for those activities that require multiple applications running at once (such as debugging, performance tweaking, CSS work, etc.). Currently, I have 4 monitors surrounding me (from left to right):

  • An old Apple iBook — good enough for browser compatibility testing.
  • Two monitors of my main workstation: a 17" utility monitor, which usually hosts my Outlook and the MSDN Library, and a 20" flat panel, which is where most of the work happens.
  • My mobile workstation with a 15" screen, which is primarily used for performance monitoring or remote access. I always take this one home (just in case).

In addition to the usual CAT5 hookup, the machines are also networked with FireWire. This allows me to test networking issues and do marginal load testing, as well as some other netrickery.

All of the screens are “stitched” together with Synergy, which is an absolute must if you have more than one machine sitting on your desk.

And finally, looking at the picture of the setup, you may notice that the laptops are held in their positions using riser doohickeys. A word of advice: before spending $130 on a laptop riser, stroll over to your neighborhood Staples store and pick up a book holder. It works just as well and costs around $10.

Credentials Screening: Windows Authentication without a Login Dialog Box

It is easy to turn on Windows (also known as NT) authentication on IIS:

  • Check “Integrated Windows authentication”
  • Uncheck the “enable anonymous access” box
  • Set permissions on the file or folder that you want to be the object of authentication.

Once the authentication is turned on, any time anonymous users access that file (or folder), they will see a login dialog box pop up, asking them to enter their credentials before letting them view the file. A slightly different situation awaits those users whose browser is configured to recognize the site as part of an Intranet domain: the browser will attempt to authenticate automatically, and if the authentication is successful, they will be let in without any dialog boxes popping up.

Wouldn’t it be nice if your site would not display a dialog box in either case?

  • If the user can be authenticated automatically, let them in as an authenticated user
  • Otherwise, let them in as anonymous.

This type of authentication is often needed in Intranet Web sites, where the site provides degrees of customization based on the type of the user, screening every user for authentication, but never explicitly asking for credentials. Hence the term “credentials screening”: have valid credentials? Great! Don’t have any? That’s ok, too.

The biggest challenge in this scenario is the fact that most of the “behind the scenes” work of automatic (integrated) authentication happens outside the boundaries of your client or server applications, handled completely by the browser. In other words, there is no way to control how the process happens using either server-side or client-side code.

What we need here is a function of some sort (let’s call it the screening function) that would allow us to test whether the user can be authenticated and return “true” or “false”. Also, this function will need to be called before the authentication is first attempted. These two assumptions help us outline the following principles:

  • The first page that the user hits must have anonymous authentication enabled
  • The screening function must be contained in that first page
  • In order to return a result, the screening function must somehow attempt authentication
  • An authentication attempt is triggered by accessing a page that requires authentication
  • Accessing a page from a client-side function can be performed using a variety of methods. One of them is instantiating and using an Msxml2.DOMDocument object (there are similar methods for browsers other than Internet Explorer).

After looking at the list above, the implementation of the screening function becomes crystal clear:

function CanAuthenticate()
{
    try
    {
        var dom = new ActiveXObject("Msxml2.DOMDocument");
        dom.async = false;
        // Loading a document that requires authentication triggers the
        // browser's integrated authentication. A failed load either
        // throws or returns false, so handle both.
        return dom.load("RequiresAuthentication.xml");
    }
    catch(e)
    {
        return false;
    }
}

As you can see, this function attempts to open a document named “RequiresAuthentication.xml”. If this document has anonymous authentication disabled, the browser will automatically attempt to authenticate using the existing user credentials. No dialog box will be shown: if authentication fails, the load will fail (either by throwing an exception or by returning false) and the function will return “false”. Otherwise, the document will be opened successfully and the function will return “true”.

The only remaining issue is to make sure that this function is always called at the beginning of the user session. In ASP.NET, you can accomplish this by subscribing to an event fired by the built-in session state management module, System.Web.SessionState.SessionStateModule. The name of the event is “Start”, and the easiest way to subscribe to it is to:

  • Open your application’s Global.asax.cs file
  • Add your code to the body of a pre-built “Session_Start” method.

Because this method is called only once for each user session, you can write it to simply emit the HTML which contains the screening function and then end the response, in order to prevent the actual page from being displayed. Of course, your client-side code must initiate a page reload right after calling the screening function, along the lines of the sketch below.
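
Here is a minimal sketch of what that emitted client-side code might do. How the screening result gets reported back to the server is up to you; the “screened” query-string flag below is purely hypothetical, and the actual mechanism in the downloadable sample may differ:

// Called from the page emitted by Session_Start, right after
// the CanAuthenticate() function defined above.
function ReloadAfterScreening()
{
    var url = window.location.href;
    var separator = (url.indexOf("?") == -1) ? "?" : "&";
    // Reload the page, telling the server whether the user could be
    // authenticated ("screened" is a made-up flag for this sketch).
    window.location.replace(url + separator + "screened=" +
        (CanAuthenticate() ? "1" : "0"));
}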

Another way of doing it is to develop an HTTP module, which will do the same thing and leave the Global.asax.cs file in its pristine condition. Here you can download a sample implementation of such a module.

A couple of installation instructions:

  • Make the “CredentialsScreening” folder a Web application
  • Disable anonymous access to the “RequiresAuthentication.xml” file and set its permissions according to your authentication needs.
  • To test, access the “Test.aspx” file with your browser.

Well, there you have it – a credentials screening solution. Hope you find it useful. If you do – drop me a note.

UPDATED 10/04/04: Referenced code was updated. For more details, look here.

Html-XPath project created on SourceForge.net

For those of you interested in using the DOM Level 3 XPath implementation for Microsoft Internet Explorer 5+ in your Web applications, I created a project on SourceForge.net:

http://sourceforge.net/projects/html-xpath/

The code is released under the LGPL license and provides a functionally complete implementation in its first release. The next project milestones are:

  • Allow passing instances of the Msxml2.DOMDocument object as the contextNode parameter of the evaluate function
  • Implement the ECMAScript binding model

If you would like to participate in the project, let me know.

The Objective of Information Architecture

I have recently stumbled upon a very interesting discussion, initiated by a fairly controversial article by Mark Hurst of Creative Good. In his piece, titled “The Page Paradigm”, Mark provides some radical reasoning regarding the fundamental principles of the information architecture process as we know it.

Here’s a Cliffs Notes version of the article:

  • Users come to the Web site only when they have a goal
  • The goal is very specific
  • On a page, users click something that appears to take them closer to the fulfillment of their goal
  • If they don’t see anything of the sort, they click the Back button on their Web browser
  • Users don’t care about where they are on the website — they care about achieving their goal
  • Therefore:
    • Painstakingly fitting all of the pages into a neat hierarchy is not all that important
    • Consistency of the navigation is not important or necessary
  • What’s important is making sure each page takes the user closer to accomplishing their goal

Naturally, anything that challenges the two pillars of information architecture, content structure and navigation consistency, should (and did) cause a flurry of responses.

Some said that Mark oversimplified the problem. Some were conflicted. Others offered a “there is no spoon” philosophical exercise, suggesting that navigation actually doesn’t exist.

I must say, I was in a bit of shock after reading the article. However, I have to admit that Mark’s argument is valid. Consistent navigation and neat hierarchy are not the objective of the information architecture process. They are merely tools, two of the many. The objective of the process is designing a user experience that aids the visitors of the site in achieving their goals.

In a way, this view is the opposite of oversimplification — it implies that a site with consistent navigation and impeccable structure may still fail at helping the users find what they want.

Excited about XInclude

Once again, Oleg’s article was published on MSDN. This time he talks about XInclude. Like any great article, his offers not only a good conceptual overview with examples and “what-if” scenarios, but also provides a very solid implementation of the XInclude specification (which is a recommendation at the moment), called XInclude.NET. Just as much as I enjoyed reading the article, I enjoyed reading the code, which I highly recommend as an educational exercise for everyone.

XPath, unleashed — coming to Internet Explorer 5+ HTML DOM near you

While coding away in JavaScript, reshaping/augmenting your HTML code using DOM, have you ever wondered why there is no support for XPath built-in? Actually, there is — Mozilla has a pretty solid support of DOM Level 3 XPath right at your fingertips through the document.evaluate method:

document.evaluate(expression, contextNode, resolver, type, result);

You’ll find the details of the implementation over at w3.org or mozilla.org, but for starters, expression is the XPath expression string and contextNode is the DOM node that you’d like to use as a root. The rest can be (and most often will be) specified as zeros and nulls. For instance, this expression will get you all div nodes that have the class attribute set to DateTime in your document:


var iterator = document.evaluate("//div[@class='DateTime']", document, null, 0, null);

By default, the method returns an iterator, which can be worked through like so:


var item;
while (item = iterator.iterateNext())
{
    // do something with item
}

As you might’ve guessed, the iterator returns null once all items are exhausted. By modifying the type parameter, you can make the method return other types, such as string, boolean, number, and a snapshot. Snapshot is kind of like an iterator, except the DOM is free to change while the snapshot still exists. If you try to do the same with the iterator, it will throw an exception.
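
For example, here is how walking a snapshot looks with the standard Mozilla binding (the XPath expression is just an illustration):

var snapshot = document.evaluate("//a[@href]", document, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < snapshot.snapshotLength; i++)
{
    // unlike with an iterator, modifying the DOM inside this loop
    // does not invalidate the snapshot
    var link = snapshot.snapshotItem(i);
}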

Well, I thought that it was mighty unfair that Internet Explorer does not support such functionality. I mean, you can very much do XPath in JavaScript, except it can only occur in two cases (that I know of):

1) As a call to an Msxml2.DOMDocument object, created using the new ActiveXObject() statement.

2) If an HTML document was generated as a result of a client-side XSL transformation from an XML file.
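
Case 1, for reference, looks something like this (the inline XML is just a stand-in; note that older MSXML versions default to the XSLPattern selection language, so XPath has to be requested explicitly):

// case 1: XPath against a free-standing MSXML document
var doc = new ActiveXObject("Msxml2.DOMDocument");
doc.setProperty("SelectionLanguage", "XPath");
doc.loadXML("<root><item/><item/></root>");
var nodes = doc.selectNodes("//item");  // nodes.length == 2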

Neither case offers us a solution if we want to use XPath in plain-vanilla HTML. So, I decided to right the wrong. Here is the first stab at it: a JavaScript implementation of DOM Level 3 XPath for Microsoft Internet Explorer (all zipped up for your review). Here is a sample which should run in exactly the same way in IE and Mozilla.

Now counting all links in your document is just one XPath query:


var linkCount = document.evaluate("count(//a[@href])", document, null, XPathResult.NUMBER_TYPE, null).getNumberValue();

So is getting a list of all images without an alt attribute:


var imgIterator = document.evaluate("//img[not(@alt)]", document, null, XPathResult.ANY_TYPE, null);

So is finding the first LI element of all UL tags:


var firstLiIterator = document.evaluate("//ul/li[1]", document, null, XPathResult.ANY_TYPE, null);

In my opinion, having XPath in HTML DOM opens up a whole new level of flexibility and just plain coding convenience for JavaScript developers.

I must say, I haven’t been able to resolve all implementation issues yet. For example, I couldn’t find a pretty way to implement properties of XPathResult. How do you make a property accessor that may throw an exception in JScript? As a result, I had to fall back to the Java model of binding to properties.
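
In practice, this means that what is a property accessor in the standard ECMAScript binding becomes a method call in this implementation (result here being any XPathResult):

// standard ECMAScript binding (works in Mozilla): a property
var count = result.numberValue;

// Java-style binding used by this implementation: a method call
var count = result.getNumberValue();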

So guys, take a look. I can post more on details of implementation, if you’d like. Just let me know.

Dare’s Holy Grail vs. Rory’s “Huh?”

Dare and Rory and others have been conversing on a subject very close to my heart (and my work): mapping XML to objects.

In his post, Dare demonstrates how having dynamically typed properties would make it easier (and more elegant) for developers to write code vs. doing the same thing in a statically typed language. He then laments that there is no system today that would combine both strongly and dynamically typed features.

Although JavaScript is not a strongly typed language, it is a dynamically typed one. I thought it might be interesting for y’all to continue exploring the building of a type dynamically. I make a weak attempt at strengthening the type system, but it should not mislead you into thinking that the JS type system is suddenly “cured”.

Consider the following code fragment (complete code and results of its execution can be seen here):

var entry = new entryType("Happy New Year!", "1/1/2004");
entry.subject = new typedXmlElement("Happy New Year, Yay!", "xs:string", "dc");
entry.requireModeration = true;
entry.subject.longDesc = new typedXmlElement("Common holiday with gifts and stuff", "xs:string", "myns");
entry.subject.longDesc.modifiedOn = new Date();
// write complete XML of this entry
var xml = entry.serialize();
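
To make the mechanics more concrete, here is a minimal, purely hypothetical sketch of what a typedXmlElement constructor might look like; the real implementation is in the linked code and differs in detail:

// hypothetical sketch; see the linked code for the real thing
function typedXmlElement(value, xsType, prefix)
{
    this.value = value;
    this.xsType = xsType;
    this.prefix = prefix;
}

typedXmlElement.prototype.serialize = function(name)
{
    var xml = "<" + this.prefix + ":" + name +
        " type=\"" + this.xsType + "\">" + this.value;
    // any typed elements attached to this instance after construction
    // (like subject.longDesc above) become child elements
    for (var child in this)
    {
        if (this[child] instanceof typedXmlElement)
        {
            xml += this[child].serialize(child);
        }
    }
    return xml + "</" + this.prefix + ":" + name + ">";
};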

In my humble opinion, the beauty of dynamic types is that they elevate a type instance to the level of a type and bring a sense of evolution to types. One can see it as bringing more “realism” to OOP: I am an instance of a Person, but, though sharing the same traits, a generic Person and I are different. I am a semantic singleton, a one-of-a-kind type-and-instance-in-one. As I evolve, I gain (or lose) methods and properties, yet remain the same instance that I was born as.

I may be wrong (or under-educated), but it seems that SOA, the “next big thing” looming on the horizon, would benefit from a dynamically and strongly typed language of some kind.

DevDays 2004, Atlanta

Sittin’ in the hotel room, looking through the freshly acquired DevDays materials… Although the folks who organized it managed to put together a good collection of information and links to more information, I can’t shake the feeling that I’ve watched a trailer rather than a full-length feature. It seems that the speakers, having only an hour to talk, sped right past the meat of their topics and just ran through the basics. I guess I was really expecting more from the threat modeling talk, but I understand that it can’t be covered within one hour.

On a positive note, I did have a good conversation with Jeff Prosise about the process of book writing. Also met Doug Turnure, who also recently started his blog and seems to be having the same “non-commental” issues. I can’t offer any good advice, except, to paraphrase Don Park’s post:

 “Keep feeding your Tamagotchi”.

Crashing Through Deadlocks

Although concurrency management is somewhat represented in C# with the lock statement, there’s plenty more to be found in the System.Threading namespace: from the Monitor class, which is what the lock statement is based on, to ReaderWriterLock and Mutex. There’s even an Interlocked class, which provides very nice facilities for atomic operations.

In debugging deadlocks, I found myself using the Monitor.TryEnter method in order to determine the actual place of a deadlock. It operates just like your standard Monitor.Enter, except that after a specified time span it times out and returns false. Otherwise, when the lock is acquired, it returns true.

Inspired by a neat hack by Peter Golde, I created a simple wrapper — the ImpatientLock — for the TryEnter method, which allows adding the deadlock debugging statement with minimal modification to the existing code:

// lock(obj)
using(ImpatientLock.Try(obj))
{
   // .. do stuff
}

Except that in the case of ImpatientLock, an exception is thrown when the lock cannot be acquired in time, documenting the stack trace and thus pointing out the location of the deadlock. Here’s the complete source code for the ImpatientLock, just in case you find it useful.

using System;
using System.Threading;

public class ImpatientLock
{
  // how long to wait before giving up and throwing
  private static readonly TimeSpan DefaultTimeSpan = TimeSpan.FromMinutes(5);

  public static IDisposable Try(object obj)
  {
    return new DisposableLock(obj, DefaultTimeSpan);
  }

  public static IDisposable Try(object obj, TimeSpan timeSpan)
  {
    return new DisposableLock(obj, timeSpan);
  }

  private class DisposableLock : IDisposable
  {
    private readonly object Obj;

    public DisposableLock(object obj, TimeSpan timeSpan)
    {
      Obj = obj;
      // unlike Monitor.Enter, TryEnter gives up after the specified
      // time span; the exception's stack trace points at the deadlock
      if (!Monitor.TryEnter(obj, timeSpan))
      {
        throw new ImpatientLockException();
      }
    }

    public void Dispose()
    {
      Monitor.Exit(Obj);
    }
  }

  private class ImpatientLockException : Exception
  {
    public ImpatientLockException()
      : base("Your time's up. I am quittin'!")
    {
    }
  }
}

In case you are battling deadlock issues, there are a couple of really good articles on the subject — most notably this one by Dr. GUI, and this one by unknown (to me) MS folks. There are also some pretty intriguing applications that attempt detection of potential deadlocks by analyzing the call tree of your code, although I found those too simplistic to detect anything realistically serious.

UPDATE: Ian Griffiths has more information about this technique over at his blog, enhanced and updated with the help of Eric Gunnerson.

Regrettable Software, Tossed Salad, and Web Development

I’ve done some reading (and posting) on the excellent ASP.NET forum, and by far the most frequent cause of the issues people have with their code seems to be confusion between the server side and the client side of a Web application. I am not exactly sure why. Maybe the problem stems from the fact that most of today’s Web development platforms allow (and encourage) the “tossed salad” style of declaring server side and client side: your server-side code is mixed together with your client-side code.

ASP, ASP.NET, ColdFusion, JSP, PHP, and other scripting language platforms all attempt to marry DHTML and server-side logic in a single file. I am not saying that they are all doing it wrong; there are far too many of them, and only one of me. But looking at posts full of confusion and hacking burnout makes me wonder if the cow of architecture and strategic thinking was sacrificed to the gods of convenience.

It is worth asking whether it would be better to engineer your development platform in a way that encourages architectural thinking, rather than aiding the hapless member of a cargo cult in quickly hacking something together. Something that inevitably becomes a piece of “regrettable software”: software that is painful to build, painful to use, and painful to maintain.

In the meantime, there’s definitely a market for “know thy (server|client) side” training. Speaking of which, trying to learn by coding in one of the “tossed salad” platforms may lead to a Gordian Knot of concepts in the student’s head. So, if you are trying to learn a Web development platform from scratch, be that ASP.NET or ColdFusion MX, make sure to keep the layers separate:

  • Start from HTML
  • Grok CSS
  • Learn JavaScript
  • Move on to the server side language

Just not all at once.