Feelin’ all AJAXy: How Prince of Salamis Became the Moniker for Stuff Done Before

Developers of the MSDN Library site, I feel your pain. You perfected a technology years ago (and I am not talking dog or Internet years!), quietly enjoying its success, only to have all of its coolness taken away by Google. Then some hotshot comes along, slaps a four-letter moniker on it, and bam! You are so out of the picture, it’s not even funny.

For those who haven’t been paying attention (Watchful Mary Jo and even the Noble Scoble included, methinks), Microsoft has been using AJAX on production sites for many, many years. How many? Well, let’s just say it was back before Mozilla had properly implemented the XmlHttpRequest spec, and certainly before the term AJAX was coined. Observe Exhibit A, the tree menu of the MSDN Library site: asynchronously loaded using JavaScript and XML, beta in 2001, live in early 2002.

What can you do, Microsoft dudes? You just don’t know how to coin cool four-letter acronyms. Although I have a feeling you could think of a few four-letter words right now…

Progressive Enhancement: Couldn’t have said it better myself

Jeremy Keith has a very nice and complete summary of the pitfalls of using augmented client-server interaction on a page (the term Ajax seems to be catching on).

In short, just because you’ve mastered XmlHttpRequest or IFRAME tricks doesn’t mean that your page has suddenly overcome all the traditional limitations of the Web page browsing paradigm and you are now free to do whatever you feel like. All this Ajax stuff does is offer you the capability to enhance the user experience; actually applying it to achieve a better user experience is a far more difficult task.

In fact, this task is now more complex than it was before Ajax existed. You have to carefully balance the old-fashioned page paradigm against the new functionality and make sure that neither steps on the other’s toes.
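
To make that concrete, here is a minimal sketch of the balancing act, in plain HTML and JavaScript (the element IDs and URL are made up for illustration). The link is a perfectly ordinary link that works in any browser; the script only takes over when the browser can actually deliver the enhancement.

<div id="news"><a href="/news.html" id="news-link">Latest news</a></div>

function getXmlHttp() {
  // Return an XMLHttpRequest object if the browser can make one, null otherwise.
  if (window.XMLHttpRequest) return new XMLHttpRequest();
  if (window.ActiveXObject) return new ActiveXObject("Microsoft.XMLHTTP");
  return null;
}

if (document.getElementById && getXmlHttp()) {
  var newsLink = document.getElementById("news-link");
  newsLink.onclick = function() {
    var xhr = getXmlHttp();
    xhr.open("GET", newsLink.href, true);
    xhr.onreadystatechange = function() {
      if (xhr.readyState == 4 && xhr.status == 200) {
        // Swap in the fragment instead of navigating away.
        document.getElementById("news").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
    return false; // cancel normal navigation only when the enhancement is active
  };
}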

Here are two more examples of poor progressive enhancement:

  • GMail. Don’t get me wrong, I love my GMail, but have you looked at the source code of this thing lately? Don’t; it will blind you. It’s not even valid HTML. GMail now offers a “less-fancy” version for “less-supported” browsers. Well, there’s a novel idea: you might not have needed to create a separate version of your service if you had started your coding with the progressive enhancement model in mind.
  • MSN Spaces. It seems that while the rest of Microsoft is slowly catching up with the structural markup movement and starting to understand the importance of valid and accessible HTML, Spaces seems to be stuck in the 1999 “World o’ Tables”. I won’t go into details (but I can, if you ask me :), but their user experience could have been dramatically better if they had used progressive enhancement and structural markup.

Complete Credentials Screening

Here’s the latest iteration of the Credentials Screening module (for more information, see this post and this one). It further improves the way credentials screening operates:

  • A Windows authentication cookie reminder, because Explorer sometimes “forgets” that it has already authenticated with the site and issues a new challenge.
  • Better browser detection: since no other browser supports silent authentication, only MSIE will invoke the credentials screening process.
  • Corrected support for Forms authentication.
  • Last, but not least, an Authenticate event, which your application can subscribe to simply by implementing a CredentialsScreening_Authenticate method in the Global.asax.cs class. The signature of your method should look like this:

protected void CredentialsScreening_Authenticate(Object sender, EventArgs e) { /* body of your method */ }

This method will be called any time the Credentials Screening module successfully authenticates a user. Neat, huh? Oh, by the way, I almost forgot:

If you want to make the login dialog pop up explicitly (for example, you have a “Login” button for those who couldn’t be authenticated through screening), use this static method to retrieve the URL that triggers it:

public static string ScreeningModule.GetManualLoginUrl(HttpRequest request);
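
To put the pieces together, here is roughly what the wiring might look like. The handler name and signature come straight from above; the page-level control name is a placeholder of my own.

// Global.asax.cs: the module picks this handler up by naming convention.
protected void CredentialsScreening_Authenticate(Object sender, EventArgs e)
{
    // Runs every time the module silently authenticates a user; a good place
    // to load profile data, log the visit, and so on.
    Context.Items["ScreenedLogin"] = true;
}

// Somewhere in a page: a manual "Login" link for users that screening could
// not authenticate (loginLink is a hypothetical HyperLink control on the page).
loginLink.NavigateUrl = ScreeningModule.GetManualLoginUrl(Request);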

Well, this about sums it up. Let me know if you find any more bugs or opportunities for improvement.

Of Men with the Brain on Both Sides and a Utopian Cult

Has it really been that long since my last post?

Nikhil Kothari updated his already cool site and I just had to leave a comment to compliment him on the great design work that he’s done. It is rare that a function person (i.e., a software developer) has such a good handle on form (visual presentation). In my experience, most people have either the function side or the form side developed: you are either a brilliant software developer who can only draw stick figures or a design genius whose eyes glaze over as soon as you hear the word “polymorphism”.

Unfortunately, the modern software development process outright requires you to possess both in order to be successful. Software design is half user experience, and the rising expectations of a quality user experience are what make form suddenly so important. Users no longer find stark green-on-black TTY terminals acceptable; the PC has spoiled them with point and click. They no longer want to fill out ten-page forms and type in dates; the Expedias and Amazons of the world have taught them to expect a single click to get the reward. They seem to gravitate, sometimes subconsciously, toward professionally designed interfaces. The user wants to feel comfortable, and the bar of “comfortable” slides up continuously.

Those who bridge the gap between form and function can keep up with this slide more easily, simplifying the development process by keeping both parties in the same brain. These types are the real gems that companies need to hunt for if they want to stay in the software development business. Soon enough, if you don’t have one of them, you won’t be able to keep up.

One other thing I had in my comment was an offer to convert Nikhil to the “structural markup” religion. I neglected to mention that it’s actually more of a cult, where we share unyielding, radical beliefs about how HTML markup should be done, shave our heads, and sing creepily monotonous chants. Well, we’re not doing the last two yet; the logistics of choosing the right chants are insurmountable, given our fondness for standards and the W3C’s reputation for quickness.

However, the “structural markup” people do have strong beliefs about markup. In its most abbreviated form, the idea of structural markup is that the HTML code should reflect the structure of the content, not its presentation. The presentation rests entirely on the shoulders of CSS and client-side scripting; the markup is only there to capture the logical structure of the content.

For instance, my article about Piped Lists is an example of how you should think of a horizontal list of links, separated by pipe characters, only in terms of its structure: it’s a list of links, which is expressed using UL, LI, and A elements. The “horizontal” and “separated by pipe characters” parts are immaterial to the content structure and must be expressed by presentational means, CSS in this case. Similarly, the TABLE element should only be used to express two-dimensional relationships in the content, not to control how the content is laid out. I am sure you’ve heard of “table-less” layouts before.
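
As a tiny illustration of the difference (the markup here is generic, not lifted from any particular site), compare a presentational fragment with its structural equivalent:

<!-- Presentational: the markup dictates how things should look -->
<font size="4" color="navy"><b>Latest headlines</b></font><br>
News item one<br>
News item two<br>

<!-- Structural: the markup says what things are; CSS decides how they look -->
<h2>Latest headlines</h2>
<ul>
  <li>News item one</li>
  <li>News item two</li>
</ul>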

What are the benefits of structural markup? Well, a quick googling reveals lots of existing articles attempting to list them. In my mind, it all comes down to the separation of content from presentation. With structural markup, we have an opportunity to make a clean separation between the structure of a Web page and its look and feel, a requirement for any modern Web application, driven by the need for better code maintainability, accessibility, and even performance.

Making your way into the structural markup world is not easy; the road, while marked, is quite bumpy. The uneven support of CSS and the disparities in DHTML DOM implementations across browsers make for some hair-raising rides, frustrating enough to turn away even the most stubborn of followers. Even so, the promise of a future where Web content is just content and nothing more is good enough for me to keep going.

Better Credentials Screening

In my first post about credentials screening, I offered sample code that implements the solution. Being the quickly-thrown-together piece of code that it was, the sample had a couple of quirks. Oh, nothing big, just things like getting stuck in an endless loop when you access it with a browser that has cookies disabled. So, here’s the new, improved rendition:

  • Provides more graceful handling of various browsers. The actual screening will only work on IE, but at least other browsers will not get stuck in a loop.
  • Handles browsers with no scripting enabled — while this is not a perfect solution, at least the script-less users will see a message asking them to click through to the actual page.
  • Silently supports Forms authentication. If Forms authentication is enabled, successful screening will result in user authentication. Otherwise, the system will revert to the login screen.
  • Thanks to a tip from Craig Andera, the module no longer requires a special RequiresAuthentication.xml file. This is almost a complete drag-and-drop solution now; you still have to register it in Web.config (see the sketch right below).
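
For reference, the registration is the usual HttpModule entry in Web.config; the namespace and assembly names below are placeholders for wherever you compiled the module.

<configuration>
  <system.web>
    <httpModules>
      <!-- type is "Namespace.ClassName, AssemblyName" -->
      <add name="CredentialsScreening"
           type="MySite.Security.ScreeningModule, MySite.Security" />
    </httpModules>
  </system.web>
</configuration>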

Take a look — let me know how it works for you. I welcome any suggestions, especially on how to improve no-scripting support.

NOTE: The code in this post is deprecated by the code in this one. The link to the old code is for historical purposes only.

Implementing Piped List in CSS

In one of my previous posts, I offered a couple of suggestions to my colleagues over at Microsoft.com. One of them had to do with rendering lists of links separated by a vertical bar (pipe), also known as piped lists.

I suggested that since the vertical bars in those lists are purely decorative, they don’t need to appear as text content of the page.

Here is a simple implementation of such a list.
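
In its simplest form, it goes something like this (the class name and measurements are arbitrary). The markup is nothing but a list of links:

<ul class="piped">
  <li><a href="/">Home</a></li>
  <li><a href="/downloads">Downloads</a></li>
  <li><a href="/support">Support</a></li>
</ul>

And the CSS lays the items out horizontally, drawing the “pipes” as borders so they never show up as text content:

ul.piped {
  margin: 0;
  padding: 0;
  list-style: none;               /* no bullets */
}
ul.piped li {
  display: inline;                /* horizontal layout */
  padding: 0 0.5em;
  border-left: 1px solid #999;    /* the decorative pipe */
}
ul.piped li:first-child {
  border-left: none;              /* no pipe before the first item */
}

Screen readers and search engines see a plain list of links; the pipes live entirely in the presentation layer.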

PHTML: Dumbing It Down For Sanity’s Sake

If you have built large sites, you have faced a familiar maintenance problem: content authors, either unwittingly or in a fit of creative genius, add horrific boogers of markup to the site. After a year of such maintenance, the site is no longer the perfectly lean and mean, standards-driven machine, the all-or-nothing DHTML/CSS perfection capable of invoking the jealousy of the greatest Web minds. No, it’s more like Frankenstein’s monster on a high-carb diet, the Joseph Merrick of the Web, covered in the disgusting muck of Office-specific tags, with the repulsive smell of FONT and O tags emanating from it.

Sure, there’s Tidy. And there are ways to delay the impending “markup junk-up” crisis, but philosophically speaking, the problem stems from the fact that HTML casually mixes the content and the context of a page into one nice tag soup, and in doing so it discourages content developers from thinking of Web content and Web context as two separate things.

What if we address the problem head-on? What if we “disable” the context features of HTML for content authors in some organized fashion? What do you think about Primitive HTML, a subset of HTML designed to prevent the introduction of unwanted coding by authors? Call it PHTML, if you will.

Here’s what I am thinking. Disallow tags like FONT, CENTER, SCRIPT, OBJECT, IFRAME, MAP, TITLE, ISINDEX, BASE, all of the HEAD tags (including the HEAD tag itself), and of course the BODY, FRAMESET, and HTML tags. PHTML is only used to create fragments of Web content, not complete documents, so there is no need for style or meta tag declarations.

Disable STYLE, ALIGN, ID, and all other attributes that may affect style. I would leave the CLASS attribute, so that authors could pick from predefined styles in the page stylesheet.

The only tags allowed in PHTML are those used to create content and provide semantic distinctions within it, not to style it. In fact, a PHTML author wouldn’t (and shouldn’t) worry about, or even know, how the content will end up appearing on the site; that’s why it’s called content. The designers will make it “pretty”.
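
To make the idea a bit more tangible, here is a rough sketch of what a brute-force enforcement tool might look like. The class name is made up, the regular expressions only approximate the rules above, and a real tool would parse the markup properly instead of pattern-matching it (this is an illustration, not a security sanitizer for untrusted input).

using System;
using System.Text.RegularExpressions;

public static class PhtmlFilter
{
    // Elements PHTML disallows: presentational, scripting, and document-level tags.
    static readonly Regex disallowedTags = new Regex(
        @"</?(font|center|script|object|iframe|map|title|isindex|base|head|meta|link|style|body|frameset|html)\b[^>]*>",
        RegexOptions.IgnoreCase);

    // Attributes that affect presentation; CLASS is deliberately left alone.
    static readonly Regex disallowedAttributes = new Regex(
        @"\s(style|align|id|width|height|bgcolor|color|face|size)\s*=\s*(""[^""]*""|'[^']*'|[^\s>]+)",
        RegexOptions.IgnoreCase);

    public static string Filter(string markup)
    {
        string result = disallowedTags.Replace(markup, String.Empty);
        return disallowedAttributes.Replace(result, String.Empty);
    }
}

Run a fragment through Filter before it lands in the content store, and the FONT muck never makes it in.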

It is a very idealistic thing to say, but if PHTML were standardized, and tools were built to support it (authoring, conversion, and so on), the World Wide Web of markup might just be a little better place to work and live in.