Feelin’ all AJAXy: How Prince of Salamis Became the Moniker for Stuff Done Before

Developers of the MSDN Library site, I feel your pain. You perfected a technology years ago (and I am not talking dog or Internet years!), quietly enjoying its success, only to have all of its coolness taken away by Google. Then some hotshot comes along, slaps a four-letter moniker on it, and bam! — you are so out of the picture, it’s not even funny.

For those who haven’t been paying attention (Watchful Mary Jo and even the Noble Scoble included, methinks), Microsoft has been using AJAX on production sites for many, many years. How many? Well, let’s just say that back then Mozilla didn’t yet have a properly implemented XmlHttpRequest — and this was certainly before the term AJAX was coined. Observe Exhibit A, the tree menu of the MSDN Library site: asynchronously loaded using JavaScript and XML, in beta in 2001, live in early 2002.

What can you do, Microsoft dudes? You just don’t know how to coin cool four-letter acronyms. Although I have a feeling you could think of a few four-letter words right now…

Progressive Enhancement: Couldn’t have said it better myself

Jeremy Keith has a very nice and complete summary of the pitfalls of using augmented client-server interaction on a page (the term Ajax seems to be catching on).

In short, just because you’ve mastered XmlHttpRequest or IFRAME tricks doesn’t mean that your page has suddenly overcome all the traditional limitations of the Web page browsing paradigm and you are now free to do whatever you feel like. All this Ajax stuff does is offer you the capability to enhance the user experience; actually applying it to achieve a better user experience is a far more difficult task.

In fact, this task is now more complex than it was prior to Ajax’s existence. Now you have to carefully balance the old-fashioned page paradigm against the new functionality, and make sure that neither steps on the other’s toes.
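
To make the balance concrete, here is a sketch of progressive enhancement in the small (my illustration, not Jeremy’s; the element IDs and the news.html URL are made up). The link works as a plain page load everywhere, and script upgrades it to an asynchronous update where it can:

<a id="newsLink" href="news.html">Latest news</a>
<div id="news"></div>

<script type="text/javascript">
var link = document.getElementById("newsLink");
if (link && (window.XMLHttpRequest || window.ActiveXObject)) {
  link.onclick = function () {
    var xhr = window.XMLHttpRequest
      ? new XMLHttpRequest()
      : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200)
        document.getElementById("news").innerHTML = xhr.responseText;
    };
    xhr.open("GET", "news.html", true);
    xhr.send(null);
    return false; // cancel normal navigation only when the script runs
  };
}
</script>

Without script, or in a browser with no XmlHttpRequest flavor, the user gets ordinary navigation; the enhancement is purely additive.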

Here are two more examples of poor progressive enhancement:

  • GMail. Don’t get me wrong, I love my GMail, but have you looked at the source code of this thing lately? Don’t — it will blind you. It’s not even valid HTML. GMail now offers a “less-fancy” version for “less-supported” browsers. Well, there’s a novel idea — you might not have needed to create a separate version of your service if you had started your coding with the progressive enhancement model in mind.
  • MSN Spaces. It seems that while the rest of Microsoft is slowly catching up with the structural markup movement and starting to understand the importance of valid and accessible HTML code, Spaces seems to be stuck in the 1999 “World’o’Tables”. I won’t go into details (but I can, if you ask me :), but their user experience could have been dramatically better had they used progressive enhancement and structural markup.

Complete Credentials Screening

Here’s the latest iteration of the Credentials Screening module (for more information, see this and this post). It further improves the way credentials screening operates, introducing things like a Windows authentication cookie reminder (because sometimes Explorer “forgets” that it has already authenticated with the site and issues a new challenge), better browser detection (since no other browser supports silent authentication, only MSIE will invoke the credentials screening process), corrected support for Forms authentication, and, last but not least, an Authenticate event, which your application can subscribe to by simply implementing a CredentialsScreening_Authenticate method in the Global.asax.cs class. The signature of your method should look like this:

protected void CredentialsScreening_Authenticate(Object sender, EventArgs e) { /* body of your method */ }

This method will be called any time the Credentials Screening module successfully authenticates a user. Neat, huh? Oh, by the way, almost forgot:

If you want to explicitly make the login dialog pop up (for example, you have a “Login” button for those who couldn’t authenticate through screening), use this static method to retrieve its URL:

public static string ScreeningModule.GetManualLoginUrl(HttpRequest request);
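
For example, wiring up that “Login” button could look like this (a sketch; LoginLink is a hypothetical HyperLink control on the page):

// In the page's code-behind: point the manual login link at the URL
// the module provides. LoginLink is a made-up control name.
private void Page_Load(object sender, EventArgs e)
{
    LoginLink.NavigateUrl = ScreeningModule.GetManualLoginUrl(Request);
}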

Well, this about sums it up. Let me know if you find any more bugs or opportunities for improvement.

Of Men with the Brain on Both Sides and a Utopian Cult

Has it really been that long since my last post?

Nikhil Kothari updated his already cool site and I just had to leave a comment to compliment him on the great design work that he’s done. It is rare that a function person (i.e., a software developer) has such a good handle on form (visual presentation). In my experience, most people have either the function side or the form side developed: you are either a brilliant software developer who can only draw stick figures, or a design genius whose eyes glaze over at the word “polymorphism”.

Unfortunately, the modern software development process outright requires you to possess both in order to be successful. Software design is half user experience, and the rising expectations of a quality user experience are what make form suddenly so important. Users no longer find stark green-on-black TTY terminals acceptable — the PC has spoiled them with point-and-click. They no longer want to fill out ten-page forms and type in dates — the Expedias and Amazons of the world have taught them to expect a single click to get the reward. They seem to gravitate, sometimes subconsciously, toward professionally designed interfaces. The user wants to feel comfortable, and the bar of “comfortable” slides up continuously.

Those who bridge the gap between form and function can keep up with this slide more easily, simplifying the complexity of the development process by keeping both parties in the same brain. These types are the real gems that companies need to hunt for — if they want to stay in the software development business. Soon enough, if you don’t have one of them, you won’t be able to keep up.

One other thing I offered in my comment was to convert Nikhil to the “structural markup” religion. I neglected to mention that it’s actually more of a cult, where we share unyielding, radical beliefs about how HTML markup should be done, shave our heads, and sing creepily monotonous chants. Well, we’re not yet doing the last two — the logistics of choosing the right chants are insurmountable, given our fondness for standards and the W3C’s famous reputation for quickness.

However, the “structural markup” people do have strong beliefs about markup. In its most abbreviated form, the idea of structural markup is to make sure that the HTML code reflects the structure of the content, not its presentation. Presentation must rest entirely on the shoulders of CSS and client-side scripting; the markup is only there to reflect the logical structure of the content.

For instance, my article about Piped Lists is an example of thinking of a horizontal list of links separated by pipe characters strictly in terms of its structure — it’s a list of links, expressed using UL, LI, and A elements. The “horizontal” and “separated by pipe characters” parts are immaterial to the content structure and must be expressed using presentational means — CSS in this case. Similarly, the TABLE element should only be used to express two-dimensional relationships in content, not how the content is laid out. I am sure you’ve heard of “table-less” layouts before.

What are the benefits of structural markup? Well, quick googling reveals lots of existing articles attempting to list them. In my mind, it all comes down to the separation of content from presentation. With structural markup, we have an opportunity to make a clean separation between the structure of a Web page and its look and feel — a requirement for any modern Web application, driven by the need for better maintainability of code, accessibility, and even performance.

Making your way into the structural markup world is not easy — the road, while marked, is quite bumpy. The uneven support of CSS and the disparities in DHTML DOM implementations across browsers make for some hair-raising rides, frustrating enough to turn away even the most stubborn of followers. Even so, the promise of a future where Web content is just content and nothing more is good enough to keep me going.

Better Credentials Screening

In my first post about credentials screening, I offered sample code implementing the solution. Being the quickly-thrown-together piece of code that it was, the sample had a couple of quirks. Oh, nothing big — just things like getting stuck in an endless loop when accessed by a browser that has cookies disabled. So, here’s the new, improved rendition:

  • Provides more graceful handling of various browsers. The actual screening will only work on IE, but at least other browsers will not get stuck in a loop.
  • Handles browsers with no scripting enabled — while this is not a perfect solution, at least the script-less users will see a message asking them to click through to the actual page.
  • Silently supports forms authentication. If forms authentication is enabled, successful screening will result in user authentication. Otherwise, the system will revert to the login screen.
  • Thanks to a tip provided by Craig Andera, the module no longer requires a special RequiresAuthentication.xml file. This is almost completely a drag-n-drop solution now — you still have to register it in Web.config (see the sketch right after this list).
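
For reference, the Web.config registration would look something like this (the namespace and assembly names here are made up; substitute your own):

<configuration>
  <system.web>
    <httpModules>
      <!-- type is "Namespace.ClassName, AssemblyName" -->
      <add name="CredentialsScreening"
           type="MySite.Security.ScreeningModule, MySite.Security" />
    </httpModules>
  </system.web>
</configuration>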

Take a look — let me know how it works for you. I welcome any suggestions, especially on how to improve no-scripting support.

NOTE: The code in this post is deprecated by the code in this one. The link to the old code is for historical purposes only.

Implementing Piped List in CSS

In one of my previous posts, I offered a couple of suggestions to my colleagues over at Microsoft.com. One of them had to do with rendering lists of links separated by a vertical bar (pipe), also known as piped lists.

I suggested that since vertical bars in those lists are purely decorative elements, they don’t need to appear as text content of the page.

Here is a simple implementation of such a list.
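
Something along these lines (a sketch; the class names are mine). The markup stays a plain list of links, and CSS draws the pipes:

<ul class="piped">
  <li class="first"><a href="/">Home</a></li>
  <li><a href="/products/">Products</a></li>
  <li><a href="/support/">Support</a></li>
</ul>

ul.piped {
  margin: 0;
  padding: 0;
}
ul.piped li {
  display: inline;             /* lay the items out horizontally */
  list-style: none;
  padding: 0 0.5em;
  border-left: 1px solid #000; /* the "pipe" is pure presentation */
}
ul.piped li.first {
  border-left: none;           /* no pipe before the first link */
  padding-left: 0;
}

The first class stands in for the :first-child pseudo-class, which not every browser supports; either way, the pipes never enter the content.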

PHTML: Dumbing It Down For Sanity’s Sake

If you have built large sites, you have faced a familiar maintenance problem: content authors, either unwittingly or in a fit of creative genius, adding horrific boogers of markup to the site. After a year of such maintenance, the site is no longer the perfect lean-and-mean standards-driven machine, the all-or-nothing DHTML/CSS perfection capable of invoking the jealousy of the greatest Web minds. No, it’s more like Frankenstein’s monster on a high-carb diet, the Joseph Merrick of the Web, covered in the disgusting muck of Office-specific tags, with the repulsive smell of FONT and O tags emanating from it.

Sure, there’s Tidy. And there are ways to delay the impending “markup junk-up” crisis, but philosophically speaking, the problem stems from the fact that HTML casually mixes the content and the context of a page into one nice tag soup, and in doing so discourages content developers from thinking of Web content and Web context as two separate things.

What if we address the problem head-on? What if we “disable” the context features of HTML for content authors in some organized fashion? What do you think about Primitive HTML, a subset of HTML designed to prevent the introduction of unwanted coding by authors? Call it PHTML, if you will.

Here’s what I am thinking. Disallow tags like FONT, CENTER, SCRIPT, OBJECT, IFRAME, MAP, TITLE, ISINDEX, BASE, all of the HEAD tags (including the HEAD tag itself), and, of course, the BODY, FRAMESET, and HTML tags. PHTML is only used to create fragments of Web content, not complete documents, so there is no need for style or meta tag declarations.

Disallow STYLE, ALIGN, ID, and all other attributes that may affect style. I would leave the CLASS attribute, so that authors could pick from pre-defined styles in the page stylesheet.

The only tags allowed in PHTML are those used to create content and provide semantic distinction in it, not to style it. In fact, the author of PHTML wouldn’t (and shouldn’t) worry about, or even know, how the content will end up appearing on the site — that’s why it’s called content. The designers will make it “pretty”.
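
To make the idea concrete, here is a naive sketch of a PHTML filter in C#. The whitelist, the decision to keep only the class and href attributes, and the regex approach are all my assumptions; a real implementation would want an actual parser (a regex won’t, for example, remove the text inside a stripped SCRIPT element).

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class PhtmlFilter
{
    // Hypothetical whitelist: tags that structure content, nothing that styles it.
    static readonly HashSet<string> Allowed = new HashSet<string>(
        new[] { "p", "em", "strong", "ul", "ol", "li", "a", "blockquote",
                "h1", "h2", "h3", "pre", "code", "table", "tr", "td", "th" },
        StringComparer.OrdinalIgnoreCase);

    public static string Clean(string fragment)
    {
        return Regex.Replace(fragment, @"<(/?)([a-zA-Z][a-zA-Z0-9]*)([^>]*)>",
            delegate(Match m)
            {
                string name = m.Groups[2].Value;
                if (!Allowed.Contains(name))
                    return String.Empty; // drop disallowed tags outright

                // Keep only class (pre-defined styles) and href (links need it).
                string kept = "";
                foreach (Match attr in Regex.Matches(m.Groups[3].Value,
                    @"\s(class|href)\s*=\s*(""[^""]*""|'[^']*')",
                    RegexOptions.IgnoreCase))
                    kept += attr.Value;

                return "<" + m.Groups[1].Value + name.ToLowerInvariant() + kept + ">";
            });
    }
}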

It is a very idealistic thing to say, but if PHTML were standardized, and tools were built to support it (authoring, conversion, etc.), the World Wide Web of markup may just become a slightly better place to work with and live in.


Microsoft.com Redesign

Oh well, count me into the critics’ crowd. Microsoft.com has redesigned its home page. As Douglas Bowman at Stopdesign notes, this is a step in the right direction. It looks like a lot of work has been done in “de-cluttering” the page and organizing links into clusters — I smell a card-sort or two. As the page layout got simpler, the usability improved as well — gone are the drop-downs, the concept of audience silos is much better articulated, and while the home page is full of links, the design keeps it “breathy” and legible. The page loads quickly and conveys a nice tactile feel, inviting further site exploration. Good job on that — it takes a lot of effort to make something as complex as the Microsoft Web presence seem so simple and straightforward.

Things are less sunny under the hood. While Douglas praises the improvement (and I join him on that), I’d like to keep the developers focused on things that still need to be fixed (in no particular order, and probably an incomplete list):

  • Please eliminate obvious markup errors, such as the lack of a DOCTYPE declaration, block elements nested inside inline elements, atavisms such as WBR, etc. Strive to make the page at least XHTML 1.0 Transitional compliant. It’s not that hard.
  • Consider removing tables altogether. This is not a very complex layout; you don’t really need them.
  • Move your inline style declarations into a separate stylesheet. Having some styles inline and some in the stylesheet is a maintenance nightmare.
  • Speaking of CSS, clean that up, too: there are some obvious errors, such as unitless padding and margin declarations, and even plain misspelled properties (xheight).
  • Consider organizing ALL of your lists of links into… er, lists, using ul and li elements. A perfect candidate: your bottom navigation bar. Semantically, the vertical bars that separate the links have no meaning; therefore, they shouldn’t be in your markup.
  • The subsection headings in the main content area — “Popular Downloads”, “Popular Destinations”, etc. — look pretty as graphics, but do they really need to be graphics? I would suggest replacing them with list items as well. If you’d like to retain the “prettiness”, use one of the image replacement techniques (see the sketch right after this list).
  • Finally, let’s remove the CSS filters (the gradients up at the top) from the home page. Why? Because there’s really no reason for them. Your page has a fixed-width layout, and replacing the IE-only gradient with a background image will go a long way toward making your site look the same in all browsers.
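
For the record, the classic image-replacement trick looks roughly like this (a sketch; the class name, dimensions, and image URL are invented):

h3.popular-downloads {
  width: 180px;
  height: 24px;
  background: url(/images/popular-downloads.gif) no-repeat; /* the pretty version */
  text-indent: -9999px; /* shove the real heading text off-screen */
  overflow: hidden;
}

The real heading text stays in the markup for search engines and screen readers, while the prettiness stays in CSS, where it belongs.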

Your Web site’s home page is your company’s face, and in your case, keeping that face well-shaven is critical. If your company made toasters or lightbulbs, I would probably not care as much about the quality of the HTML/CSS code. However, given that Microsoft produces the world’s dominant browser and world-class Web development tools, and given how much flak Microsoft takes daily in regards to Web standards support, squeaky-clean markup on the home page is a foundational practice with as much marketing and evangelism power as a dozen boisterous dudes in MSDN t-shirts — at a fraction of the cost.

Coblogging: Spontaneous aggregation

In my fairly short experience with blogging, I’ve noticed one thing about these new creatures of the Web: they are both connecting and isolating. They are connecting in the sense that anyone can read your stuff and attach their opinions to it, but they are also isolating, because your stuff typically sits in its own blog silo, all alone, until some aggregating service graciously decides to let your posts socially mingle with the others. Folks over at weblogs.asp.net and other blog-hosting establishments have this advantage in the form of the front page feed, but what about the rest of us?

Also, blogs are horrible at sustaining an online conversation: comments are given such third-world status that they are easily lost next to the main post. As a solution, bloggers use pingbacks, but those are hard to track navigationally, and it takes some time to reconstruct the original flow of discussion. Take our recent exchange with Bertrand, Nikhil, and Peter, for example. We all wrote posts in our own blogs and responded in the comments of the others’ posts, which created a fairly messy pool of opinions with a hard-to-discern direction or conclusion (of course, you know that in this case Nikhil gets the final word, so backtracking might help :).

Wouldn’t it be great if there was a way to organize these posts into a forum-like threaded discussion?

Enter coblog: a way to bring blogs together — as needed. 

Imagine that at the time of writing a post, you can designate it as the coblog root. The coblog root looks and acts just like an ordinary post, except maybe it has a “coblog” icon next to it (or some other visual hint of this sort).

Suppose your (friend|colleague), after reading your insightful (prose|poem), gets all (upset|excited|riled-up|flabbergasted) and decides to open a discussion on the topic of your post.

At this point, this (great|well-known|total-nobody) acquaintance of yours opens their blog and writes up a (heated|supportive|contemplative) response to your original post. At the end, (he|she) specifies that this is not an ordinary post, but a coblog post.

A coblog post requires a URL to the coblog root or to any other coblog post. Using this URL, your buddy’s blog server determines the location of the coblog root and sends it a TrackBack to notify it of the new post in the coblog.

Instead of (or maybe in addition to) just sending the URL of the actual post in the TrackBack, the server sends the feed URL for all coblog posts that correspond to the specified coblog root on this server. Think of it as a variation on a category feed.
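
In code, the ping is just a standard TrackBack POST with one extra field. Here is a sketch: the title, url, and blog_name fields come from the TrackBack spec, while the coblog_feed parameter is my hypothetical extension.

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Web;

class CoblogPing
{
    static void Send(string trackbackUrl, string postUrl, string feedUrl, string title)
    {
        // Standard TrackBack fields plus the hypothetical coblog_feed extension.
        string body =
            "title=" + HttpUtility.UrlEncode(title) +
            "&url=" + HttpUtility.UrlEncode(postUrl) +
            "&blog_name=" + HttpUtility.UrlEncode("My Blog") +
            "&coblog_feed=" + HttpUtility.UrlEncode(feedUrl);

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(trackbackUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream stream = request.GetRequestStream())
            stream.Write(bytes, 0, bytes.Length);

        // TrackBack servers reply with a small XML success/error document.
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}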

Upon receiving a TrackBack, the coblog host (the server that hosts the coblog root) adds the URL sent with the TrackBack to the coblog roll. There may be some moderation functionality in place (no, I don’t accept coblog feeds from “MaskedBandit34252” — sorry, you look like a spammer to me) to prevent just any server from being able to feed posts into your coblog.

The coblog roll serves as the list of all potential sources of posts for the coblog. Each coblog root has its own coblog roll.

All of the coblog posts, including the coblog root, will be available as part of an aggregated feed, which always originates from the coblog host.

Now, after the discussion has started, both your blog and your comrade’s will prominently feature a link to the coblog in the (sidebar|top bar|inconspicuous drop-down menu) of the home page. Clicking on this link will reveal the discussion as a chronological list of coblog posts, starting from the coblog root. Participants who don’t have their own blogs could use comments as usual.

Naturally, coblogs should not be limited to just two people. The purpose of the coblogs is to facilitate discussion, and a good discussion often takes more than two (voices|flames).

So, what do you think? This is just a concept, but I think it holds a lot of promise.

Coding For Multithreading

Between the two of them, Ian Griffiths and Phil Haack have some good info on coding for multithreading. Developing multithreaded applications is hard. It takes experience, which comes in the most painful forms of debugging and learning from engineering mistakes — unless you are a multithreading genius who sports built-in support for multitasking and thread modeling directly in the brain.

Otherwise, every little bit of information helps. Here’s my contribution:

  • If you haven’t started coding yet, please spend some time at the whiteboard, modeling your threads and locks. No need to dive into a UML book — simple lines and arrows will suffice. Try to understand when a resource needs locking along the timeline, and where the possibilities for deadlocks and race conditions lie. Multiple threads can be easily expressed in two dimensions — the easiest is a graph where one axis is the relative timeline and the other is application scope (local, thread, application, persisted data). Trace your application data through it and study any point where it dips below the application scope. There are more elaborate methods of engineering and modeling multithreaded applications, but if you’ve got nothing to begin with, the whiteboard is your best friend.
  • If you are dealing with existing code that is not thread-safe, definitely do try replacing your “lock” statements with Phil’s TimedLock to trace potential deadlocks. You will still suffer extensive debugging therapy, but the TimedLock pill will help a little.
  • When writing a type that has to be thread-safe, pay very close attention to every member and how it is used. Consider marking members that don’t change throughout the lifespan of the instance as “readonly”. This is not going to affect your performance, but it will help you organize your type’s members into mutable and immutable piles.
  • Speaking of immutability — Immutable is a good pattern to use as far as thread safety is concerned. It may or may not apply to your scenarios, but it should be considered when designing types, especially as a complement to the Memento and State patterns.
  • Just one more on patterns — Proxy is a good way to encapsulate thread-safe access to data that is known to be continually updated.
  • Be twice as careful when writing a type that is instantiated statically. Some people mistakenly assume that once they’ve accomplished the feat of thread-safe instantiation, they are out of the woods. Just the opposite — static instances are the marked men of the “race condition police”.
  • Post-constructor initialization of a static member is a bad smell as far as thread safety is concerned. It may be necessary in certain circumstances, but it should be avoided whenever possible.
  • Lazy instantiation is not something you apply blindly to every type — it may not save you much performance or memory, but it will make your architecture more complex. If you have to do lazy instantiation, try to use the lock-less static instantiation (the fifth version on the referenced page).
  • According to the latest and greatest, a simple double-check lock is not enough: a MemoryBarrier is required to ensure the correct ordering around the null comparisons (see the sketch right after this list).
  • Try to be conscious of where and why you place a lock — and what kind of lock it is. Generally, you’ll use either a Monitor lock (or the lock statement) or the ReaderWriterLock. Ian warns against being too liberal with the use of ReaderWriterLock. Don’t just place a “lock brace” around your whole method, however tempted you may be. The longer the body of the lock, the more likely it is that other methods and type instances are called within that body, and thus the more possibilities there are for a deadlock.
  • Just because a method is documented as “thread-safe” in the .NET Reference doesn’t mean that you can’t use it in a non-thread-safe way.
  • And last, but not least — make sure you test and debug your multi-threaded application on at least a dual-processor machine.
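
Here is the double-check-plus-barrier pattern from the bullet above, sketched out (the class and member names are mine):

using System;
using System.Threading;

public sealed class Singleton
{
    static object syncRoot = new object();
    static Singleton instance;

    Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)         // first check, without the lock
            {
                lock (syncRoot)
                {
                    if (instance == null) // second check, under the lock
                    {
                        Singleton temp = new Singleton();
                        // Ensure the constructor's writes complete before
                        // the reference becomes visible to other threads.
                        Thread.MemoryBarrier();
                        instance = temp;
                    }
                }
            }
            return instance;
        }
    }
}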

Like I mentioned before, multithreaded programming is not an easy task. Hopefully, these tips will help you explore both .NET’s potential and your own in enterprise application development.