DHTML Utopia

Bruce Lawson submits the review below of Stuart Langridge's DHTML Utopia: Modern web design using JavaScript and DOM, writing "Don't be put off by the title: the DHTML here bears no resemblance to the stupid Web tricks of the late 90s that allowed animated unicorns to follow your cursor or silly Powerpoint-like transitions between Web pages." Read on for the rest.
DHTML Utopia: Modern web design using JavaScript and DOM
Author: Stuart Langridge
Pages: 300
Publisher: SitePoint
Rating: 8
Reviewer: Bruce Lawson
ISBN: 0957921896
Summary: Excellent guide to creating dynamic web pages; scalable and sensible.


This book is the opening salvo in the latest battle in the Web Standards war -- the battle for unobtrusive JavaScripting, or Unobtrusive DOMscripting as many call it, in order to rid it of all the negative connotations that "DHTML" and "JavaScript" bring. Combined with the non-standard XMLHttpRequest object, it's sometimes referred to as "Ajax". Terminology aside, though, what are the substantive differences between the old-skool and the "modern" of the title?
  • Graceful degradation. A great example of this is Google Suggest, in which the DOMscripting enhances functionality by making the page feel more responsive; but if you don't have JavaScript for some reason, the page still works.
  • Separation of structure, presentation and behaviour. The DOMscript deals with the behaviour in the same way as CSS defines the presentation in the brave new Web standards world, and the three remain separate. The html has no JavaScript in it at all -- everything is handled in separate code files.
  • No browser sniffing. This aims to future-proof code by testing for features rather than sniffing for browser name and version. So, before using the TimeTravelCureCancer method, the current browser is tested to see whether it's supported. If it is, the script continues. If it isn't, the script silently fails with graceful degradation (a minimal sketch of such a feature test follows this list).
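
A rough illustration of what such a feature test might look like (a generic sketch, not code from the book; initEnhancements is an invented stub, while document.getElementById and document.createElement are the real DOM methods being tested for):

    function initEnhancements() {
        // the DOM-scripted extras would be set up here (illustrative stub)
    }

    // Test for the features the script actually needs, not for a browser name.
    if (document.getElementById && document.createElement) {
        initEnhancements();
    }
    // If the test fails, nothing runs and the plain HTML page still works --
    // the graceful degradation described above.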

Theory sounds cool -- how's the damn book?

I was pleasantly surprised by how quickly the book gets to work. Even before page numbering begins, the introduction has a lucid and compelling argument for using HTML 4.01 rather than XHTML as the markup language of choice.

Chapter 1 has a brief (6-page) overview of the importance of valid code and separating presentation into CSS, and a short description of the unobtrusive nature of Langridge's scripts: no script in the mark-up at all; instead, the .js files contain "event listeners." The reasons why this is desirable are promised later.
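
To make "no script in the mark-up" concrete, here's a rough, generic sketch (not the book's code; the element id helpLink is made up) of how a .js file can attach behaviour to the page from the outside:

    // behaviour.js -- the HTML contains nothing but a <script src="behaviour.js"> tag
    function addListener(elem, evt, fn) {
        if (elem.addEventListener) {            // W3C event model
            elem.addEventListener(evt, fn, false);
        } else if (elem.attachEvent) {          // Internet Explorer's model
            elem.attachEvent("on" + evt, fn);
        }
    }

    addListener(window, "load", function () {
        var link = document.getElementById("helpLink");   // hypothetical element
        if (!link) return;                                 // no element: degrade gracefully
        addListener(link, "click", function () {
            alert("Hello from an unobtrusive event listener");
        });
    });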

Chapters 2-4: The basics

Now that document.write in the html is no longer needed, you need to know the "proper" way to add text or elements to a Web document. So Langridge gives us a tour of the DOM, showing how to walk the DOM tree and create, remove and add elements to it. It's methodical, and just as I was beginning to get a bit tired of theory and thinking that they'd have to prise document.write out of my cold, dead hands, we get an "Expanding form" which lets you expand a form ad infinitum to sign up as many friends as you want to receive free beer, without ever going back to the server. (You can see such a thing in action in Gmail, when you want to attach multiple documents to an email.)
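
For flavour, a stripped-down sketch of the DOM calls such an expanding form relies on (the field and element names here are mine, not the book's):

    // Add one more "friend" text box to the sign-up form -- no page reload required.
    function addFriendField() {
        var list = document.getElementById("friendList");      // hypothetical <ul> inside the form
        var count = list.getElementsByTagName("input").length;
        var input = document.createElement("input");
        input.type = "text";
        input.name = "friend" + (count + 1);
        var item = document.createElement("li");
        item.appendChild(input);
        list.appendChild(item);   // the DOM tree grows; the server is never consulted
    }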

I started to warm to the author and his style. 33 pages into the book, and we get a real-world working example to examine (I like my theory liberally garnished with practice). I also feel a kinship with authors who fantasise about mad millionaire philanthropists giving away beer.

By chapter 3, we've really got going. Apart from one rather pedantic edict (the event is mouseover, the event handler is onmouseover, and we should keep the nomenclature separate even though it makes no practical difference), the focus here is on real-life browsers. And, as we all know, in Web dev books real-life browsers mean grotesque exceptions to our ideal-world rules. Strangely -- and oddly satisfyingly to this PC user -- the culprit isn't only the perennially despised IE/Win; shiny Safari comes in for a good bit of stick!

The real-world example here is a data table that highlights the whole row and column of any cell that's being moused over. Now, in any modern browser except for IE/Win, the row could be given a :hover pseudo-class (IE/Win only allows :hover on anchors). But as there is (weirdly) no HTML construct for a column, this effect can only be achieved through DOMscripting. What the script does is dynamically append a class name to every cell in the row and column at run time -- and the pre-defined CSS file determines the styling of that class.
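
In outline, the technique looks something like this sketch (the class name and table structure are illustrative; the chapter's real code is far more careful about browser quirks and about removing the highlight again on mouseout):

    function highlightRowAndColumn(cell) {
        var row = cell.parentNode;                 // the <tr> containing the cell
        var table = row.parentNode.parentNode;     // up through <tbody> to the <table>
        var col = cell.cellIndex;
        var i;
        for (i = 0; i < row.cells.length; i++) {
            row.cells[i].className += " hi";       // the "hi" class is styled in the CSS file
        }
        for (i = 0; i < table.rows.length; i++) {
            if (table.rows[i].cells[col]) {
                table.rows[i].cells[col].className += " hi";
            }
        }
    }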

Herein lies an advantage of Unobtrusive DOMscripting: you could just take this script and plug it into a Web site without changing any of the html (except to add a link to the script file in the head). But the script is relatively complex for a newbie to code, and for the techniques to be widely used, I suspect that the billion old-skool cut'n'paste JavaScript sites will need to be replaced with a single, canonical library of modern scripts for people to cut and paste from. For those who find CSS challenging, JavaScript is probably even more complex.

Chapters 5-7: blurring the division between Web UI and application UI

It's a truism that the Web has set back UI development some years -- in fact, back to the old dumb-terminal paradigm of filling in a screen full of data, pressing the button to send it back to the mainframe and waiting for the next page to be sent -- or the old one returned with errors noted.

Langridge shows that we can make the experience smarter than this, going beyond traditional JavaScript client-side validation. Two examples that struck me: adding animation so that text can fade in and out over time, and styling tooltips to be sexier than the default yellow box so that they gently appear into view rather than snapping on and off as the browser default does.
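
A bare-bones version of such a fade, stepping the CSS opacity with a timer (a generic sketch; IE of that era wanted its proprietary alpha filter instead of style.opacity, which this ignores):

    function fadeIn(elem) {
        var level = 0;
        elem.style.opacity = 0;
        var timer = setInterval(function () {
            level += 0.1;
            if (level >= 1) {
                level = 1;
                clearInterval(timer);    // stop once fully visible
            }
            elem.style.opacity = level;
        }, 50);                          // ten steps, roughly half a second in total
    }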

When I first read these, I thought they were cheesy gimmicks -- the modern equivalent of the cursor-following unicorn -- until I considered more deeply and realised that many of the UI elements we enjoy in modern desktop apps are precisely these small, cosmetic effects: abrupt transitions, lack of transparency and sharp edges to UI widgets all feel like old operating systems or clunky Web pages.

It's not all touchy-feely; we get auto-complete text entry, degradable calendar pop-ups, flyout menus and lessons in OOP, encapsulating code for re-usability, and avoiding Internet Explorer memory leaks.
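
On the memory-leak point, the gist is to avoid leaving circular references between DOM nodes and the closures attached to them. A common defensive pattern of the period (a generic sketch, not necessarily the book's exact advice; the element id is invented) was to unhook handlers when the page unloads:

    // IE 6 could leak memory when a DOM node and a JavaScript closure referenced
    // each other. Breaking the circle at unload time was a widely used workaround.
    window.onunload = function () {
        var button = document.getElementById("saveButton");   // hypothetical element
        if (button) {
            button.onclick = null;   // sever the node-to-closure reference
        }
    };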

Chapters 8-10: seamlessly working with the server

So far, so client-side. Where Unobtrusive DOMscripting really gets developers' juices flowing is the ability to communicate with the server without obviously refreshing the page. Chapter 8 takes you through a variety of methods. Some, like the hacky iframe method or the hideous 204 piggyback method, are so gruesome that I breathed a sigh of relief loud enough to wake the cat when I finally turned the page to read "XMLHTTP". This method (which is non-standard and was introduced by Microsoft) has ushered in the Next Great Web Thing: asynchronous communication with the server. Langridge walks through using the Sarissa library to make a user registration form that checks whether the user name you choose is taken, and if so, suggests some alternatives without refreshing the page.
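
For reference, the raw object underneath all of this looks roughly like the sketch below; the book leans on Sarissa to hide the browser differences, so treat this as a generic illustration rather than its actual code (the URL and element id are invented):

    function createRequest() {
        if (window.XMLHttpRequest) {                       // Mozilla, Safari, Opera
            return new XMLHttpRequest();
        } else if (window.ActiveXObject) {                 // IE's ActiveX flavour
            return new ActiveXObject("Microsoft.XMLHTTP");
        }
        return null;                                       // no support: fall back to a normal form post
    }

    var req = createRequest();
    if (req) {
        req.onreadystatechange = function () {
            if (req.readyState == 4 && req.status == 200) {
                var status = document.getElementById("nameStatus");   // hypothetical element
                if (status) {
                    // e.g. "that name is taken -- how about one of these?"
                    status.innerHTML = req.responseText;
                }
            }
        };
        req.open("GET", "/check-username?name=bruce", true);   // asynchronous: the page stays usable
        req.send(null);
    }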

There are a lot of unresolved accessibility problems with the Ajax method (how does a screenreader alert the listener to the fact that something new has appeared on the page? How do they navigate and hear the new stuff in context?) and while it is laudable that Langridge notes that these issues exist, I'd hoped he would suggest some solutions. He doesn't, but as he's a member of the Web Standards Project's DOMscripting task force I'm guessing it's being worked on.

The project that really kicks ass in this section is a file manager, like the one in most people's Web site control panels, where you can actually drag and drop the icons, like an operating system, and the server does the work. Langridge carefully goes through all of the steps, all of the pitfalls and all of the code needed to make this happen in any modern browser.

It doesn't take a lot of imagination to realise just how this could revolutionise the Web experience. Drag and drop products into a shopping cart. Drag the shopping cart to the checkout icon. Moving money around bank accounts in some integrated internet banking application. The possibilities are huge.

Conclusion

The whole technique of unobtrusive DOMscripting needs further research before it's ready for prime time -- particularly from an accessibility point of view, but then as an accessibility bore you'd expect me to say that. I think it's beyond question that there are ideas in here that radically enhance the usability of Web-based applications by making them more intuitive and more like the drag-and-drop interfaces we know from the desktop.

This is a good-humoured, thoroughly-researched book that combines theory with practical learn-by-doing examples. To this reviewer, the code appears scalable and sensible. This book is never going to appeal to the quivering aesthete designers -- probably because it's fundamentally about code. But precisely because it proposes a complete separation of code and design, it facilitates the advancement of the Web.


You can purchase DHTML Utopia: Modern web design using JavaScript and DOM from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Comments Filter:
  • Unicorns (Score:5, Funny)

    by onion2k ( 203094 ) on Monday August 01, 2005 @05:07PM (#13217497) Homepage
    stupid Web tricks of the late 90s that allowed animated unicorns to follow your cursor

    If my boss reads Slashdot I think I know what I'll be asked to add to the corporate website tomorrow morning..
  • Birth of Ajax (Score:4, Informative)

    by webword ( 82711 ) on Monday August 01, 2005 @05:07PM (#13217504) Homepage
    For your reference, this is how "Ajax" got started. I'd say about 75% of the article is useless, but about 25% is actually useful. The main value of the article can be summed up in two words: eye candy. Jesse James Garrett is good with graphics.

    Ajax: A New Approach to Web Applications [adaptivepath.com]
    • Well, when I worked at Experian 3-4 years ago we were doing all of that with JavaScript, XSL, CSS, XML, Cobol for the mainframe and no server-side web scripting. On a typical page we would run a number of dynamic transactions that just modified the DOM instead of posting the page. Everything was based on standards where support was available (e.g. the IE hack-around code wasn't standards-based), everything had to pass Bobby accessibility and it all had to be simple to use (The police were one of our clients).

      A
  • Why do it yourself? (Score:5, Interesting)

    by MosesJones ( 55544 ) on Monday August 01, 2005 @05:10PM (#13217528) Homepage
    The challenge with Ajax, and complex DHTML is that a slight error produces big problems. It's a shame that the book doesn't look at the tooling approaches, for instance the Ajax plug-in [java.net] that Sun have released (via Open Source). DHTML and these active elements can be great, but as a practice I'd be more inclined to have a few people invest time in developing components that the majority of people can use, rather than having lots of people trying to understand the complexities, and buggering it up.

    Interesting technology, but too easy to use REALLY badly. It would have been nice if the book had covered how to build SOLUTIONS using DHTML, rather than focusing on how an individual can use it.

    • While plugins like the one that Sun released are nice, the battle of course is with flexibility. That's why people build their own content management or forum systems, even though there are plenty of choices like Mambo and PHPBB. Sure, plugins are at a lower level, but the point stands--they can't always provide the flexibility necessary.

      The challenge with Ajax, and complex DHTML is that a slight error produces big problems.

      I think the challenge with any program is that errors produce problems. That'
    • The challenge with Ajax, and complex DHTML is that a slight error produces big problems.

      I think that slight errors in any framework or programming language can cause big problems; that's not something particular to AJAX.

    • but as a practice I'd be more inclined to have a few people invest time in developing components that the majority of people can use, rather than having lots of people trying to understand the complexities, and buggering it up.

      Did you RTFA or even the post? An Open Source cross-browser implementation called Sarissa [sourceforge.net] does that.

      Sarissa Overview:

      Sarissa is an ECMAScript library acting as a cross-browser wrapper for native XML APIs. It offers various XML related goodies like Document instantiation, XML loa

  • Only in the past few years has client-side web scripting come out of its 'kiddie' version to demonstrate some really cool hacks (many Google examples come to mind) which -- coupled with CSS -- have changed how we interact with web sites and how they are built. Standards compliance and strict separation of content and presentation make CSS+'AJAX' seem like a leap forward in technology when it is actually just a prudent and smart use of old technology. It is a matured technology and with the latest news of IE7
  • Please... Don't (Score:4, Insightful)

    by esconsult1 ( 203878 ) on Monday August 01, 2005 @05:14PM (#13217559) Homepage Journal
    It doesn't take a lot of imagination to realise just how this could revolutionise the Web experience. Drag and drop products into a shopping cart. Drag the shopping cart to the checkout icon. Moving money around bank accounts in some integrated internet banking application. The possibilities are huge.
    I'd really hate having to do the above. What would be better and smarter (IMHO), is to:
    1. click on the "Add to Cart" button
    2. AJAX adds the product to the cart without refreshing the page.
    It seems to me that AJAX works wonderfully in limited cases that get rid of the annoying browser refresh problem.

    Expect to see annoying over-use of AJAX flooding the internets(tm) over the next few months, in every single way that programmers can manage -- even if they don't need to.

    • I'm working on something like that - check this out: https://inkjet.ie/shoppingcart?catid=22 [inkjet.ie]

      It's not quite perfect yet, and is not accessible yet, but the functionality is there.

      When I'm happy with the cart's functionality, I'll be writing an article explaining exactly how it was all done, and what problems I came across while building it (uploading files through AJAX is a pain, for example).

  • Ruby on Rails is really sweet for that AJAX stuff.
    • Rather than invite flames with a useless comment (as your other replies do), why not tell us why this is so?
      How about explaining your statement.
      Give a small example.
      Saying RoR is sweet doesn't help the Ruby community or the Slashdot community. I could say using modperl or IIS/ASP is sweet for AJAX, but it would serve no purpose.
      Why not add a comment that informs and helps the reader?
      Why not respond to this post saying how Ruby on Rails and AJAX can work together? That might provoke interesting and informative dis
      • Why not respond to this post saying how Ruby on Rails and AJAX can work together?

        Ruby on Rails has clean, built-in support for template code that generates the client-side Javascript for using the prototype.js library.

        It makes it next to trivial to create, for example, Google Suggest text fields because the developer focuses on using Ruby syntax to indicate the server action to invoke, based on what client behavior, and what to do when the results come back to the browser. The annoying little details

  • Bookpool has this book for $24.95 with no club to join. Bn.com has it for $39.95 (save $5 if you're a club member).
  • I give up (Score:2, Interesting)

    by Locke2005 ( 849178 )
    Why is HTML 4.01 better than XHTML?
    • It's not. DHTML (if this is the same DHTML of old) is a leftover of the browser pissing-contests of the 90's, and represents an obstacle to standards adoption.

      Let's drop DHTML like the steaming sack of decaying garbage it is and focus on writing quality, structurally sound, and accessible code.
      • I'm sorry, that makes no sense. DHTML has nothing to do with browser pissing contests (those just made it hell to implement.) DHTML is the aggregation of (X)HTML/Javascript/CSS to make a page less static.

        The lack of standards has made pretty much anything implementing DHTML a steaming pile, but DHTML itself is just a technique. And if you're willing to ignore four year old browsers, the state of affairs has gotten to the point that you can write decent code properly separated from your presentation.

        • But add XMLHttpRequest to it and suddenly you don't have a standard. Wooo!
          • XMLHttpRequest is pretty much a de facto standard in the major modern browsers. All the modern browsers are the same except (of course) for IE, which doesn't implement it internally and requires an ActiveX instantiation. However, that is very easy to code around and then you have a cross-browser implementation of XMLHttp such as Sarissa [sourceforge.net].

            As long as you don't need to support 4-5 year old browsers, XMLHttp works fine. If you need to support 4-5 year old browsers, either force your users to upgrade (by not s

            • You're forgetting Konqueror. It's not 4-5 years old. Just because it doesn't run on Windows doesn't mean it's not a major browser, as it's pretty damned major in the Unix world.

              Websites should use the freaking standards. Doesn't ANYONE remember the browser wars?
              • Konqueror on my boxes supports XMLHttpRequest just fine, at least in my experience.

                And I wouldn't call it a major browser by any means, I don't know anyone who uses it...
                • I don't know anyone who uses it...

                  Of course you won't find any Windows users who use it! Duh! But it's the default browser for KDE, which I understand is a wildly popular desktop for Unix.

                  As for XMLHttpRequest, I only know what I've been told. And I've been told that the only reason all the new Google apps don't support Konqueror is the lack of that non-standard item.
                  • I'm curious. At what point does something become a standard? After all, I'm pretty sure Sir Berners-Lee didn't worry about all that standardization stuff when he released WorldWideWeb in 1990 - he simply told people how to write HTML that his code could read. This is approximately four years before W3C even existed.

                    XMLHttpRequest is available, in a mostly identical form, in the current stable versions of Internet Explorer, Gecko-based browsers (including Firefox, Netscape, Mozilla, and Camino), Opera, an
                    • There are two kinds of standards. One is the defacto just-because-everyone-uses-it standard. Such as Windows. Windows is a standard because 90% of the people use it. Not fair to all other operating systems? Fine, then another example is GCC. It's free, but has extensions that no other compiler has, to the point that some major open source projects cannot be built with any other because they were silly enough to use them.

                      The other kind of standard is the formal standard. It does NOT need to be a government m
                    • You may have answered my question, but I don't see it. Who decides that the W3C is "the formal standards body for the web?" We could just as easily put Microsoft, Apple, or Mozilla in that role. The only reason W3C defines the standards is that the browser manufacturers and content providers decide it does.

                      Is your complaint that XMLHttpRequest isn't documented? I argue that it is [microsoft.com].

                      RSS has been through several revisions. Is 2.0 a "formal standard"? If so, what makes XMLHttpRequest different? If not, wh
            • you can dynamically load content without XMLHttp and without frames [dhtmlcentral.com], but everything has its limits and i think most of us are sick of supporting netscape 4.7 either way :)
    • simple (Score:2, Funny)

      by Anonymous Coward
      Two years ago, the "Valid XHTML" button was leet as all hell. Now that everyone and their grandma's pantyhose produces valid XHTML (well, mostly (cough)), it's not cool. Since XHTML 2.0 isn't supported anywhere, people had no choice but to declare HTML 4.0 to be the new leetness.

      I think slashdot is smart for sticking with terribly invalid HTML 3.2. In another year or so, they'll be leet as fuck.

      I'll be launching my new markup language soon, which is going to be so leet you'll shit your friends' pants.

      <web
    • Re:I give up (Score:3, Informative)

      by gusnz ( 455113 )
      Long story short, to make effective use of XHTML you have to serve it as "text/html" to the browser, as IE doesn't support its proper MIME type (thanks, Microsoft!). Here's a good summary [hixie.ch] of the issues surrounding XHTML vs HTML 4.

      Personally, I don't mind using XHTML 1.0 as text/html, as although it's not quite "ivory-tower" perfect, it's still IMHO a little cleaner and more elegant. Either way, (X)HTML+CSS still beats the living daylights out of "any-old-HTML + tables".
      • Content negotiation is an old concept. In Apache with mod_rewrite:

        RewriteEngine on
        RewriteBase /
        # the client says it accepts XHTML...
        RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml
        # ...and hasn't marked it as unwanted with q=0
        RewriteCond %{HTTP_ACCEPT} !application/xhtml\+xml\s*;\s*q=0
        # only for .html requests made over HTTP/1.1
        RewriteCond %{REQUEST_URI} \.html$
        RewriteCond %{THE_REQUEST} HTTP/1\.1
        # serve the same resource, but typed as XHTML
        RewriteRule .* - [T=application/xhtml+xml]

        will serve application/xhtml+xml to any client that claims to accept it. Mozilla does, IE doesn't. Safari just sends */* as its accept string, so it gets text/html.

        • Content negotiation is an old concept.

          It also reduces your cache hits, increasing server load and bandwidth costs, and slowing down your website.

          Your code is broken, BTW. You need to transmit a Vary header when you vary content based upon request headers, otherwise you can screw up caches so they send application/xhtml+xml to Internet Explorer.

      • So HTML 4.01 is better than XHTML because IE is broken? Isn't that like saying a Kia is better than a Ferrari because there is a 55mph speed limit?
    • Re:I give up (Score:2, Informative)

      by brianmf ( 571620 )
      Because XHTML is a pointless and overhyped waste of time and energy at this point.

      Read more of Stuart's thoughts on his website:

      http://www.kryogenix.org/days/2002/11/28/whats [kryogenix.org]

      http://www.kryogenix.org/days/2005/02/21/xhtmlHtml [kryogenix.org]
  • browser problem (Score:4, Interesting)

    by MatD ( 895409 ) on Monday August 01, 2005 @05:39PM (#13217727)

    A lot of the annoyance of 'web apps' comes from the fact that browsers can't just refresh a simple tag on the page from the server. They have to re-render the entire page, causing a jarring visual experience for the user.

    Browsers should be able to realize that since the url is the same, diff the previous stream and the current one, and modify the current page inline.

    As it stands now, web developers have to jump through a lot of hoops to get that sort of functionality. They shouldn't.

    • Sounds good. Let's see the code.
    • That *might* work to reduce rendering time depending on the engine. Some engines are slow with innerHTML calls, so it could be a wash.

      Most of the slowness with a page refresh is typically network latency and bandwidth bottlenecks. Browser tricks won't help with that.
    • A lot of the annoyance of 'web apps' comes from the fact that browsers can't just refresh a simple tag on the page from the server. They have to re-render the entire page, causing a jarring visual experience for the user.

      Yeah, like:

      Chapters 8- 10: seamlessly working with the Server

      Where Unobtrusive DOMscripting really gets developers juices flowing is the ability to communicate with the server without obviously refreshing the page. Chapter 8 takes you through a variety of methods. Some, like the h
    • Browsers should be able to realize that since the url is the same, diff the previous stream, and the current one, and modify the current page inline.

      Why don't you submit the code to re-parse the DOM tree and update the page in-line? You could submit it as BSD-licensed code so all browsers could use it, you would be famous!

      As it stands now, web developers have to jump through a lot of hoops to get that sort of functionality. They shouldn't.

      AJAX is _really_ not that hard as another poster pointed out. S

    • A full refetch of the page text (whether or not the screen is redrawn) couldn't let the user carry on with field input while a server fetch is pending. Not keeping the user waiting while network round trips are happening makes a much better "responsive" impression.

      Plus, with larger pages and/or slower networks a full refetch would become a lot slower than DOM modifications.
    • Actually, I think IE has already been doing something like that for some time now. I don't know the exact details of how it works, but it definitely does partial refreshes of only specific areas of the screen in some cases. I think it may actually have something to do with optimization when IE is communicating with IIS, but I could be wrong...
  • ... stupid Web tricks of the late 90s that allowed animated unicorns to follow your cursor or silly Powerpoint-like transitions between Web pages.

    Wow! I've been running my website [creimer.ws] since 1998 and I never came across a web book with stupid web tricks like that. Did I miss something?
  • Granted, the reach is good and for some tasks there's nothing better than web UI, but as soon as you find yourself in need of building something more than just a bunch of web pages with links it all turns into mind-bogglingly weird hackery. You start using javascript for layout. You need to manage state. The most trivial UI things become a fucking mess of JS and CSS "hacks", etc., etc. It's also a pain to turn all this crap into components for future reuse, so chances are next time you do something similar yo
    • Sure the *web* was originally designed around the idea of hyperlinking documents.
      But what's wrong with building rich applications for the browser? Is this somehow more evil than hard-to-maintain, bloated, crappy VisualBasic/Swing/Delphi/Powerbuilder applications that will be very platform-specific?
      A mess of code will happen in any language on any platform. It's up to the developer to prevent it.

      The browser is just a tool. Why not exploit that its much easier to build/test an app for a handful of browsers tha
      • 1. You need to maintain state (and re-parse it on every request, or execute dozens of SQL queries to fetch it from the DB).
        2. Any user action that alters state (and often even those that do not) requires a round-trip to the server, including #1.
        3. Creating a quality UI has become a voodoo-like thing where you have to rely on all kinds of unwritten laws because CSS2 is not flexible enough beyond simple document formatting. Place a login UI in the center of the screen in XHTML1.1 Strict without using Javascript
  • I haven't read the book myself, but well done on a coherent review that covers the resurgence of unobtrusive scripting. The author [kryogenix.org] is quite well respected in the community, and I can only hope books like these begin to replace the "omg dhtml netscape 4!!11one" fare usually found on shelves.

    I think AJAX et al. could be a bit of a diversion, though, from the ideals of "unobtrusive scripting". Many sites using XMLHttpRequest and similar techniques aren't easy to degrade in older, non-JS-supporting browse
  • Can anyone tell me why Microsoft's "XML Data Islands [w3schools.com]" didn't take off in the mainstream given that IE has had a 90% browser share? For external XML they are simple and neat and don't require any javascript whatsoever for basic fetch-and-display use (but you can enhance functionality using javascript).

    Here's a demo [w3schools.com] (use IE5 or later). I figure they must be in use somewhere because there's even a Mozilla article [mozilla.org] on getting them working in Mozilla.
    • Cause Mozilla wasn't doing it too.

      Kind of like XSL/XML/CSS display bullshite circa 2003:

      In the Firefox/Mozilla camp, CSS was the only way for a long time to manipulate XML for display.

      In the Internet explorer camp, XSL was the only way to manipulate XML for display.

      Hopefully we can get XML to display with CSS 100% in IE7, cause you can't in IE6. I think Mozilla has XSL embedded in the browser now (I hope).

  • Websites are now using these instead of regular popups to get by the popup blockers.


  • Animated Unicorns. [chrismullins.net]
  • For an example of drag and drop in use, check out Panic's website [panic.com]
  • Try this small DHTML thingy we made :-)

    http://assembly.mbnet.fi/asm05/compos/browser_demo/pure_javascript_demo_by_IKU.zip [mbnet.fi]

    If you use 800 x 600 resolution you can see it fullscreen. It placed third in the Assembly 2005 demo competition. IE recommended for watching, though it runs on Firefox as well.
    • umm is it written in assembly?

      IE recommended... now there's something you hope not to see.
    • Quite impressive, especially the polygon rendering. How did you do that?
      • I am not the coder of this demo - but I am very impressed by what I have seen.

        Basically, this demo uses javascript and DHTML in some interesting ways, but the code behind the scenes is pretty "standard".

        Essentially, they render the sides of the cube using a scanline renderer, which simply renders the sides as two separate triangles, built by drawing horizontal lines top to bottom (hence "scanline"), with the widths dependent on the slopes of the sides of the polygon (triangle). This is accomplished via a si

    • HOLY SHIT!

      This is very impressive - it sorta runs under Mozilla (I had to load index2.html directly; for some reason the popup redirect in index.html didn't work) - and it was kinda jerky on my box (but my box isn't the latest, greatest, or anything like that) - but I am impressed nonetheless! I really loved the ending spinny-twisty thing! Great job!

  • I'm so far ahead of this guy it hurts...

    My pages are nothing but a single, empty html and body tag... and EVERYTHING is dynamically cross format focused in Nashian triplicate (yeah! it goes there) on the back-end.

    You cannot touch this!
  • Whenever I see books like this, I always wonder why people do not use the dynlayer api. DynLayer Api at SourceForge [sourceforge.net] This has been around for quite some time. Handles drags, moves, events, layers, z-index, etc. Everything. True, it is now a bit big, but do like I do and remove what you do not need and away you go. You can AJAX as a separate piece.
    • > Whenever I see books like this, I always wonder why people do not use the dynlayer api. DynLayer Api at SourceForge

      Lots of reasons.

      Firstly from a philosophical stand-point it is always best to have more than one way to do things.

      Secondly because not everyone's experiences of dynapi are positive.

      I found it:

      1.) awkward to use.
      2.) able to cause even well specced machines to grind to a halt.
      3.) a pain to extend.
      4.) clunky, overwrought, fairly incomprehensible.

      Also I'm not sure that it entirely separates t
  • Neither B&N nor Amazon see fit to display the TOC for this book. If somebody knows of somewhere it's listed could you post that here? I'd especially like to know if 1/3 of the book is appendices we don't need.

    Good review, BTW. Too bad about the book's cover. What's up with that? Although it did make me think of the Brazil dream sequences. Perfect for those dreary winter afternoons in the cubicle farm.

  • Funny, this book arrived today at my house. I was hoping the section on AJAX would be more extensive, but it's relatively slim.

    IMO, the layout of the book is good and it's written well - from first glance anyways.

    Can anybody point out any strong online resources on AJAX development? I guess it's not that difficult to grasp, but I would like a little more of a foundation.
  • If '404 not found' is graceful degradation I don't want to see ungraceful degradation.
  • by Elixon ( 832904 )
    I love AJAX. It's fast for development and powerful. JS+XUL=POWER ;-)
  • ... as a herd of corporate lawyers stumble over each other to file a new non-obvious technology: drag & drop... but on the internet!

    It doesn't take a lot of imagination to realise just how this could revolutionise the Web experience. Drag and drop products into a shopping cart. Drag the shopping cart to the checkout icon. Moving money around bank accounts in some integrated internet banking application. The possibilities are huge.

  • The one part that I don't get about the "new Javascript" philosophy that has been going around is why people are so obsessed with keeping the scripts optional.

    Why not use the full power of DHTML and build applications that totally rely on it?

    The one piece of the Ajax puzzle that I could see being optional is XMLHttpRequest, as it is only a de facto standard. ECMAScript, CSS, and the DOM are standards, and browsers can be expected to support them.

    Attempting to maintain support for non-conforming browsers s
