Book Reviews

Building the Realtime User Experience

rheotaxis writes "Many professional web developers have spent years building dynamic, database-driven web applications, but some of us, like myself, want to make the user experience more interactive and instantaneous. The book Building the Realtime User Experience, by Ted Roden, is an introduction to some of the new techniques making that happen now. New web servers like Cometd and Tornado power solutions that keep HTTP connections open until data is available for the clients requesting it, a technique called 'long-polling.' This means web developers can provide a real-time user experience using HTTP for all sorts of client devices now connecting to the Internet, not just web browsers but mobile devices as well." Read below for the rest of rheotaxis's review.
Building the Realtime User Experience
Author: Ted Roden
Pages: 320
Publisher: O'Reilly Media
Rating: 8/10
Reviewer: rheotaxis
ISBN: 0596806159
Summary: Shows you how to build realtime user experiences by adding features on your site without making big changes to the existing infrastructure
This book covers SUP and PubSubHubbub syndication, messaging with the Bayeux protocol and Cometd, and asynchronous Python using Tornado, contrasting these with well-known client-side JavaScript methods. It then demonstrates how long-polling can implement and integrate chat, IM, SMS, and analytics. The last chapter wraps up with an example using all these technologies: a multi-user, real-time, interactive game using geo-location with mobile clients.

Ted's writing style is concise and to the point, focused on the challenges presented and solved in each chapter, with just enough detail for experienced programmers to download and set up the software tools being used, including the Google App Engine. The code samples are straightforward, but be forewarned: they will be easier for readers with some experience in a server-side language like PHP, Python, or Java, and a database server like MySQL. On the other hand, even if you've never used Google App Engine before, that's OK, because Ted covers it in enough detail to get you started quickly.

The sample code wasn't yet available on the O'Reilly web site, so you'll need to type in the code samples to try them. Check the O'Reilly errata page for the book to get a head start making the code work. (Full disclosure: I posted some of the errata.) The sample code for Cometd and Tornado ran easily on my laptop (HP 2.2 GHz with Windows Vista), and should be fine on Linux or Mac. Everything you need is open-source and easily downloaded.

The author explains that real-time web development puts the user at the center of all web interactions, and that developers have struggled with solving the push versus pull problem. The pull method requires multiple, periodic queries for updates from server information feeds, something that wastes server CPU and bandwidth when no changes have occurred, and is compounded by the number of different users making these queries. The push method allows the servers to contact the clients when information feeds have been updated, saving CPU and bandwidth.

RSS was designed for easy syndication of information feeds, but it suffers from the limitations of the pull methodology. While several push technologies have been proposed to solve this problem, only Simple Update Protocol (SUP) and PubSubHubbub are covered in detail here. Both are demonstrated with PHP code, so they should be easy to implement on a hosted web account with PHP and MySQL. The author explains that while SUP isn't a real push methodology, it does address some of the CPU and bandwidth issues. PubSubHubbub, a true push methodology unlike SUP, is described with an equal amount of detail.
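The book's PubSubHubbub examples are in PHP; the subscriber-side verification handshake at the protocol's heart is small enough to sketch here in Python instead. This is a hypothetical illustration (the endpoint and feed URL are made up, and it is not the book's code): the hub calls back your endpoint with a hub.challenge value, and you must echo it to prove the subscription request was genuine.

```python
# Subscriber-side verification for PubSubHubbub: when you ask a hub to
# subscribe you to a feed, the hub sends a GET to your callback URL
# with hub.mode, hub.topic, and hub.challenge; you echo the challenge
# back only for topics you actually requested.
from urllib.parse import parse_qs

SUBSCRIBED_TOPICS = {"http://example.com/feed.xml"}  # hypothetical feed

def verify_subscription(query_string):
    """Return (status, body) for a hub verification GET."""
    params = parse_qs(query_string)
    mode = params.get("hub.mode", [""])[0]
    topic = params.get("hub.topic", [""])[0]
    challenge = params.get("hub.challenge", [""])[0]
    if mode in ("subscribe", "unsubscribe") and topic in SUBSCRIBED_TOPICS:
        return 200, challenge          # echo the challenge back
    return 404, ""                     # refuse topics we never asked for

status, body = verify_subscription(
    "hub.mode=subscribe&hub.topic=http://example.com/feed.xml&hub.challenge=abc123")
print(status, body)  # 200 abc123
```

The topic check is what stops a malicious third party from subscribing your callback to arbitrary feeds.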

Next, the book covers techniques already familiar to JavaScript programmers who have experience building AJAX-enabled web pages. Skim the text and glance at the code and diagrams in Chapter 3 if you already have this experience. The subtitle for this chapter is "Widgets in Pseudorealtime", and the key takeaway is that client-side JavaScript can be used with pull or push technologies, depending upon the server-side implementation. If you don't yet have experience with AJAX, be sure you can follow these code examples, because AJAX is used in all the other chapters.

Have you ever wished your blog could send live updates to your readers the moment you post them? You'll learn how, using the Bayeux protocol, Java, Cometd, and the Jetty web server. The sample code lets you grasp how long-polling works with modern browsers. Once a client browser opens an HTTP connection to a web server using a POST method, the server leaves this connection open until it has data to deliver to the client. This chapter suggests using Firebug, a Firefox plugin for debugging web applications from the client side, to discover and track long-polling sessions.
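The mechanics are easy to see in miniature. The chapter's implementation uses Java, Cometd, and Jetty; the following stdlib-only Python sketch (not the book's code) shows just the core trick: each request is parked on an event, and the connection stays open until something is published.

```python
# A minimal long-polling sketch: the handler blocks on an Event until
# publish() supplies data, so the HTTP connection stays open until
# there is actually something to deliver.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

message = {"text": ""}
message_ready = threading.Event()

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Park the request until a message arrives (or a 30 s timeout,
        # after which a real client would simply reconnect).
        if message_ready.wait(timeout=30):
            body = message["text"].encode()
            self.send_response(200)
        else:
            body = b""
            self.send_response(204)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

def publish(text):
    """Deliver `text` to every parked request. (A real server would
    re-arm the event and queue messages per client.)"""
    message["text"] = text
    message_ready.set()

server = ThreadingHTTPServer(("127.0.0.1", 0), LongPollHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
threading.Timer(0.2, publish, args=("hello, realtime",)).start()

port = server.server_address[1]
reply = urlopen(f"http://127.0.0.1:{port}/updates").read().decode()
print(reply)   # the GET blocks ~0.2 s on the open connection, then: hello, realtime
```

The client never polls: it issues one request and the server answers the instant data exists, which is exactly what distinguishes long-polling from periodic AJAX polling.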

Do you need to handle a large amount of incoming data, and then redisplay it on client browsers with almost no delay? Tornado, the Python web server, provides a solution. Tornado was created by FriendFeed, and made open source after FriendFeed was acquired by Facebook. Kudos to Facebook for making Tornado available. Read Chapters 5 and 6 together, since they both explain how the Tornado server works. The sample code starts with Python threads that cache a Twitter feed, process and filter it, then send it out to web browsers already connected to Tornado using long-polling and asynchronous callbacks. Tornado is then used to implement a peer-to-peer chat system using long-polling. Again, each client stays connected to the Tornado server until messages are ready to deliver to each chat participant. Taken together, Chapters 5 and 6 lay the groundwork for the more advanced Tornado web applications covered later in the book.

This is followed by two chapters using the Google App Engine to support real-time user experiences, even though the Google App Engine does not support long-polling. If you have never used Google App Engine before, don't worry: the author spends 10 pages explaining how to sign up. Then you build an application in the cloud and connect with your IM client instead of a web browser. You can make your IM server accept commands and respond with information from other web services. The section "Setting Up an API" gives you a tantalizing glimpse of possibilities explored later in the book. After adding Python code from the next chapter, you have SMS capabilities. Why would you want to do this? Because it allows users to stay informed while they're away from the web, making SMS another part of the real-time user experience.

Once you have implemented and deployed your real-time application, you can add analytics that give you immediate feedback about user interactions with your site. Instead of paying for a service, you can build your own custom web analytics using Tornado and client-side JavaScript. I especially like the author's approach of summarizing all the incoming web usage data into a single, super-simple HTML template that is immediately updated as web usage changes. It should satisfy your curiosity to watch users interact with your web site in real time, and you can make it track the IM and SMS traffic connecting to your server too. Finally, the last chapter demonstrates how all the know-how from the rest of the book can be combined in new and highly imaginative ways. The author provides all the details you need to set up a location-based, multi-user, real-time, interactive game played by users with mobile web devices.
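The aggregation step at the heart of such an analytics page can be sketched independently of Tornado. This is a hypothetical Python reduction of incoming page-view events into the kind of single summary a template would re-render on each update (the event shape and names are made up, not the book's code):

```python
# A toy version of realtime analytics aggregation: fold a stream of
# (timestamp, path, visitor_id) events into one summary dict that a
# template could re-render every time new traffic arrives.
from collections import Counter

def summarize(events):
    """events: iterable of (timestamp, path, visitor_id) tuples."""
    paths = Counter()
    visitors = set()
    for ts, path, visitor in events:
        paths[path] += 1          # hits per page
        visitors.add(visitor)     # distinct visitors
    return {
        "pageviews": sum(paths.values()),
        "unique_visitors": len(visitors),
        "top_pages": paths.most_common(3),
    }

events = [
    (0, "/", "alice"), (1, "/book", "bob"),
    (2, "/", "alice"), (3, "/book", "carol"),
]
summary = summarize(events)
print(summary["pageviews"], summary["unique_visitors"])  # 4 3
```

In a long-polling setup the server would push the recomputed summary to connected browsers instead of waiting for them to refresh.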

This book would be good for anyone who needs to quickly learn how to use Tornado and integrate it with other web services. It's also helpful for people who want to integrate the Google App Engine with other web services. Whether you're going to build a real-time web experience from the ground up, or just add a few more dynamic features to an existing site, the lessons in this book will help you.

You can purchase Building the Realtime User Experience. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.



Comments Filter:
  • by iONiUM ( 530420 ) on Wednesday October 06, 2010 @02:28PM (#33813842) Journal

    Look, I'm going to be honest here, I didn't read the whole summary; it's long. But I did read the main part of it, at the beginning, and what it's suggesting is keeping open an HTTP connection so "real-time polling" (which is sort of an oxymoron to begin with) can occur.

    I don't know how others feel about this, but to me it feels like putting two things together that don't belong. HTTP was never really designed for this sort of thing. Why don't they use straight sockets and TCP/IP? Why does it have to be HTTP?

    • by clone53421 ( 1310749 ) on Wednesday October 06, 2010 @02:30PM (#33813906) Journal

      Why don't they use straight sockets and TCP/IP? Why does it have to be HTTP?

      Straight sockets and TCP/IP in client-side Javascript?

      The point is to make it work with Javascript/AJAX. Since you can’t open an arbitrary port and listen, you open an XMLHttpRequest and wait for data.

      • by iONiUM ( 530420 )

        Doesn't HTML5 support WebSocket? Or was this specifically speaking of HTML4?

        • Shows you how to build realtime user experiences by adding features on your site without making big changes to the existing infrastructure

          Without making big changes to the existing infrastructure. It also mentions that the Tornado web server was acquired by Facebook and made open source. This is stuff that works right now, not whenever browsers get around to supporting HTML5.

          • by iONiUM ( 530420 )

            Well that's annoying, because when I posted that it's not surprising that W3C says not to use HTML5, I got a +5 response to my comment saying we should use it anyways. So which is it?

            • by clone53421 ( 1310749 ) on Wednesday October 06, 2010 @02:51PM (#33814346) Journal

              It’s both. Use HTML5, but also build a way that works right now on existing infrastructure so you don’t drop compatibility for the users who don’t support HTML5 yet. As the referred-to +5 poster said:

              As long as people are putting in "safe" fall-backs, then this really isn't a problem.

            • Re: (Score:3, Informative)

              by nacturation ( 646836 ) *

              Well that's annoying, because when I posted that it's not surprising that W3C says not to use HTML5, I got a +5 response to my comment saying we should use it anyways. So which is it?

              The three people who modded up that comment first ran a poll of the entire Slashdot population, verified that 95% of users agree with it, and only then proceeded to mod up that comment. This way we can be sure that those three moderators and the opinions expressed in that comment fully represent all Slashdot users (except the remaining 5%, who are pussies).

          • Re: (Score:2, Interesting)

            by fatius ( 245729 )

            Well said. That's exactly what I'm trying to do with this book (I wrote it). This is a very practical book, in that you should be able to start implementing these technologies at scale in any modern browser and working with your app's existing ecosystem.

            While I would have loved to put in a chapter on web sockets, it wasn't practical in terms of time. However, the 2nd edition (fingers crossed) will definitely cover it.


          • to be fair, all browsers support websockets. Well, all but IE of course.

            For intranet type apps, there's no reason not to use them. For internet apps, you're probably best off sticking with flash.

    • We will be able to use web sockets in the future, but until then we'll have to use this workaround.
    • Re: (Score:3, Insightful)

      HTTP 1.1 supports keepalive natively, so yes -- it was made for this. The bigger question is: are the current HTTP daemons (and clients) made for it? I think most of them are optimized for high throughput on short-lived connections. Some of them even have a per-connection process or thread, which is definitely not going to work with large numbers of idle connections.

      • HTTP 1.1 supports keepalive natively, so yes -- it was made for this.

        Wasn't something like that used by the Netscape fishcam way the hell back when? I seem to recall it kept the connection open and updated the image on the fly that way.

        • The US National Institute of Standards and Technology had something like that not too long ago to display a .gif clock that updated every second for x number of seconds (specified in a URL parameter). They got rid of that, though: now they have either a Java applet or a static (non-updating) gif.


      • by phek ( 791955 )

        You apparently don't understand what keepalive is for.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      At least in browser based Java, for security reasons, a socket can only be opened back to the same host as the page was loaded from (a good idea).

      Due to firewalls, especially corporate firewalls, the connection needs to go out on port 80.

      Combine port 80 with the originating host, and you are stuck with the web server accepting the new connection and expecting it to be HTTP.

      Due to proxies (again, especially corporate), and some firewalls, it is a good idea if the port 80 traffic looks like HTTP.

      To cover the lar

    • If there's one thing that annoys the nerd in me it's the use of the term "real time". To truly be real time, monumental breakthroughs in physics would need to occur. IMO "real time" should really be reserved for something which has so much hardware behind it that an individual would find it impossible to purchase on their own (stock exchanges, modern flight tracking, nuclear simulations, etc). But even then the definition doesn't stick so well.

      Basically, I just hate the term "real time". Good n
      • by BoneyM ( 209824 )
        IMO real-time has a perfectly good definition. It means a system which doesn't miss any of its deadlines. So an ABS system *has* to update the current in the solenoids every x ms +/- y us; if it meets that, its real-time target is met. A human-oriented system may have to respond within 10ms to a keypress so as not to appear sluggish. Real-time has no numerical value; it's about meeting timing specifications. Then there's hard-real-time and soft-real-time, which is about how badly things go wrong *if*
    • Re: (Score:3, Interesting)

      by RAMMS+EIN ( 578166 )

      ``HTTP was never really designed for this sort of thing.''

      Exactly. The World Wide Web is good at what it was designed for: static pages with text and hyperlinks. Go to your favorite search engine, type in some keywords to describe what you're looking for, and it will find you pages with information about that. The Web does this better than any other system I know of.

      If you want to build interactive, responsive, graphical applications, HTML and HTTP aren't the right tools for the job. This isn't what they we

      • by Harik ( 4023 )

        Ugh, don't say 'pixel perfect', I utterly despise pixel-perfect designs. What DPI? 100? 80? 120? 600 for print?

        pixel-perfect designs that break when I set a minimum font size are obnoxious as hell, when I have to select text that flows under other elements and paste it somewhere to read it. Ugh. Seriously, don't even think about giving designers that kind of control until they've proven themselves able to design for something other than their own monitor.

    • by geggo98 ( 835161 )

      Look, I'm going to be honest here, I didn't read the whole summary; it's long. But I did read the main part of it, at the beginning, and what it's suggesting is keeping open an HTTP connection so "real-time polling" (which is sort of an oxymoron to begin with) can occur.

      It is even harder than that. You have to deal with proxy servers, connection limits and the missing "flush" support in HTTP.

      You can find a good summary of the problems in the GWT Server Push FAQ.


      1. Use only one connection for the event notification. Multiplex all events on this single connection. (Reason: browsers usually limit you to about 3 connections to the same server.)
      2. Close the connection after each event. (Reason: no flush in HTTP.)
      3. About every 50 seconds, close the connection and create a new one. (Reason: Ti
  • ...and the topic is interesting. This said, I am starting to find the Internet a less pleasant place to be day after day. All the dynamism somehow makes the experience more stressful, and whenever I am just looking for some plain information I feel "bombed" with banners, moving stuff, colors, etc. To the point that I ended up working with a computer disconnected from the Internet to keep focused and relaxed. The Internet is becoming more and more entertainment and less and less focused processing of infor
    • by fatius ( 245729 )

      Hi, I'm the author (of the book, not the review). I totally agree with your sentiment. I spend some time in the book (and tell whoever will listen) arguing that you shouldn't just dump more and more stuff in front of the user; you have to be smart about it.

      • by AHuxley ( 892839 )
        Well thanks for the great book, you will help many smart people do interesting and creative things with the web :)
      • by phek ( 791955 )

        wouldn't the best way to be smart about it be to allow users to grab more stuff when they wanted, kind of like a tug of war situation where you tug in order to get more stuff at which point it may tug back to take some of the no longer needed stuff. If only there was a name for this methodology. I think I'm going to have to patent this idea and call it the tug method! from hence point forward if any application grabs more stuff as needed it should be called the tug method. Perhaps I'll write up an RFC fo

  • I'm not convinced that "tricking" an HTTP connection into staying open really buys you all that much over polling your system every 5 or 10 seconds and seeing if any of your applications need updating. A previous poster mentioned using a regular socket, which seems the right way to go about it if you really do need a persistent connection. I've written applets that do this, and it's not a big deal.

    • Re: (Score:3, Insightful)

      by clone53421 ( 1310749 )

      When you try to scale it up, the overhead becomes significant.

      Of course, as I understand it every open HTTP connection normally means a corresponding thread running on the server, which scales even less well. But that was what Facebook did with Tornado, IIRC: broke the one-to-one relationship so that one thread on the server could handle multiple idle HTTP connections.

      • by AuMatar ( 183847 ) on Wednesday October 06, 2010 @03:03PM (#33814556)

        There's never been any reason for threads:connections to be 1 to 1, it's just an easy way to program it. Most serious servers haven't used 1:1 in years. The way around it is to use select() or poll() and wait for any socket to have data, then process it. Common implementations are 1 thread for all connections, or a pool of threads each of which handles N connections.
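That select()-style pattern is easy to demonstrate. Here is a hypothetical single-threaded Python sketch (not from the book or any particular server) that multiplexes any number of sockets through one selector; idle connections just sit registered and cost no thread each.

```python
# One thread servicing many sockets via the select()/poll() family,
# using Python's selectors module: only sockets with data ready get
# dispatched, so thousands of idle connections are nearly free.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data.upper())   # toy "processing": echo upper-cased
    else:                            # empty read means the peer closed
        sel.unregister(conn)
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)

def serve_once(timeout=1.0):
    """One iteration of the event loop: dispatch every ready socket."""
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)

# Demo: one client connects, sends, and gets an echo back, all driven
# by explicit loop iterations in a single thread.
client = socket.create_connection(listener.getsockname())
client.sendall(b"hi")
serve_once()              # accepts the connection
serve_once()              # reads "hi", replies "HI"
reply = client.recv(4096)
print(reply)              # b'HI'
```

A thread pool handling N connections each is the same idea with the selector split across a few workers.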

        • With NPTL 1 thread per client is actually faster than select/poll.

          • by AuMatar ( 183847 )

            For some definitions of fast and at some scales. Things like that are hard to make a blanket statement on. Eventually the context switching will kill you. Definitely easiest to write that way though.

        • by geggo98 ( 835161 )
          While the event based approach (usually based on select) was state of the art for a while, recent research shows that the 1:1 approach can have advantages for high-concurrency servers.

          The reasons for this are mainly improvements in the thread handling in modern operating systems. With the event based approach you must handle the states for multiple sessions all by yourself. Usually the space for state handling is stored on the heap. Communication between the sessions must be implemented by hand. And whe

    • Re: (Score:3, Insightful)

      by NevarMore ( 248971 )

      Like most engineering problems, it depends. Regardless of protocol, many small connections or transactions are often a sign that you're doing too little in one connection and could either create a persistent connection or start combining payloads.

      There is some overhead to create and allocate an HTTP connection, send the headers around, authenticate it, etc. If you know that you're going to need to be feeding the user a stream of data, it's not a bad option.

      I do agree with other posters that HTTP may not be the

    • by qdotdot ( 816516 ) on Wednesday October 06, 2010 @02:50PM (#33814328)

      Having worked at a large firm that is one of the local market leaders in social networking, I can tell you that polling your system every 5 or 10 seconds can cost you a lot of money in hardware and bandwidth.

      Even an empty HTTP request is about 1kB with all the overheads (browsers sending cookies, etc). If you've got 1M visits daily, each spending 60 mins on your site, that easily gets you into the ballpark of 720GB of traffic *daily*.

      This is in addition to all the server workload - the best server can roughly deliver 1000 requests/s. Your 500k peak simultaneous users would then require 50 servers just to handle their idling.

      At the same time, a long-polled connection can stay open at least 3 minutes, and often enough well over a day, sending small (60-byte) keep-alive packets every few minutes.

      This is much, much cheaper.
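Those numbers are easy to reproduce (assuming one poll every 5 seconds and a decimal kilobyte per empty request, as the comment's figures imply):

```python
# Back-of-envelope check of the traffic estimate above: 1 kB per empty
# poll, one poll every 5 seconds, 1M daily visits of 60 minutes each.
polls_per_visit = 60 * 60 // 5                    # 720 polls per hour-long visit
daily_bytes = 1_000_000 * polls_per_visit * 1000  # 1 kB (decimal) per request
print(daily_bytes // 10**9)                       # 720 (GB per day)

# And the server count: 500k peak users polling every 10 s is 50k req/s;
# at ~1000 req/s per server, that is 50 servers doing nothing but idling.
servers = (500_000 // 10) // 1000
print(servers)                                    # 50
```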

      • OP Here. Good to know. But it seems like there's a downside to thousands of open sockets sitting around doing nothing but sending keep-alives.

        • Re: (Score:2, Interesting)

          by qdotdot ( 816516 )

          Well, the trick is keeping the connections open for longer than the polling interval would be. Most server software (unless you use very inefficient code/a very inefficient server) tends to be CPU- and not memory-bound (noteworthy exception: cheap virtual machines).

          Besides, creating a connection is more costly than actually maintaining it for longer, even if done at the same frequency. You might run out of file descriptors, which you might want to mitigate; besides that, a single open stalled connection is maybe 4-32kB of RAM, depen

  • If you want a fast, reliable interface, use a thick client. Even after all these years, the web application remains a vastly inferior choice in almost all respects.
    • by Tei ( 520358 )

      Still, you probably use a lot of AJAX websites. And AJAX means polling, and polling is bad, lame, evil, and dumb.
      Comet and the like mean server-initiated communication, so you remove polling from the equation. So all your AJAX-powered websites become faster, less dumb, more responsive, waste fewer resources, etc...

      Do I really need to make an argument for why polling is bad? The web has reasons to get away from polling, and Comet and stuff like that is how to do it.

    • Re: (Score:2, Insightful)

      by TelavianX ( 1888030 )
      I totally agree. It sounds to me like "web developers" just do not want to branch out. They want to do everything in the web. I have designed desktop applications for years and the interactivity and responsiveness is well beyond what the web can offer. With new deployment methods like ClickOnce or even Silverlight, the web (HTML, AJAX, JS) is a very bad idea for application development.
      • by Cidolfas ( 1358603 ) on Wednesday October 06, 2010 @03:22PM (#33814900)

        I couldn't disagree more. Writing desktop apps (even using your new deployment methods) perpetuates the number-one issue that designing an app for the web tries to solve: platform dependence. If you work in a Windows ecosystem, live in a Windows ecosystem, you might not understand the problem, but when I write an app for the web I know that it'll work on any modern platform (even if I have to do workarounds for IE), including modern smartphones.

        No other platform can say that.

        While I need to optimize the experience on different screen resolutions/devices/etc, that's a lot easier to do than cross compiling for every platform I want people to use my application on. HTML/AJAX/JS is a brilliant way to do this. Yes, to get additional functionality you have to be a bit hacky about protocols until the full-featured replacements become ubiquitous (I'm looking at you WebSockets!), but in the end it works better for the kind of apps that lend themselves to the web. Trust me, it's not because we don't want to branch out - it's because we have a reason to work with this platform.

        • by mini me ( 132455 )

          Native platform independence was already a solved problem through OpenStep. In fact, the web itself was born out of OpenStep.

          What is most interesting is that most of the modern web app frameworks (Cappuccino, SproutCore, etc.) are based on the work of OpenStep. Funny how we continually reinvent the wheel.

        • by PJ6 ( 1151747 )

          HTML/AJAX/JS is a brilliant way to do this

          You have no real understanding of software development if you can say this and believe it.

          • Really? Why? What are your metrics for evaluation and decision-making?

            • by PJ6 ( 1151747 )
              Do you mean, what criteria do I use to decide if an application needs to be on the web? That's rather different than asking me why the HTML/JS combination is so laughably bad in terms of programming, UI, and all the other dimensions of development.
        • Are you sure you don't have to craft different views for different platforms? A web page designed for a large screen doesn't work so well on a small screen, much less a tiny screen.

          • That's why I said I still need to optimize the experience on different resolutions, but when I do my job right and keep my HTML semantic, it's as simple as using @media-like css (the actual @media doesn't work well except for print css) to change the output to look better on a smaller screen. Then I don't have to change my php/ruby code based on browser sniffing - which is just a pain to maintain - and still get my site to look good across devices.

      • Re: (Score:3, Insightful)

        by 2megs ( 8751 )

        It's *users* who want it, not developers. Getting a potential customer to download and install a thick client is a big hurdle compared to just providing the experience directly from the web page. It's not just the extra work required from the user in the first place, although the number of clicks required to become a customer is a huge factor in conversions. It's also, from the user's perspective, all the potential problems adware, malware, bundled toolbars, DLL conflicts, and applications that don't uninst

      • by phek ( 791955 )

        The problem is two-fold. One, as others have said, web applications are cross-platform (and have been for the past 10 years). Second, the user doesn't have to install software. Either one of those on its own doesn't hold much ground, but combined, it's hard to compete with. Cross-platform runtimes such as the JRE fail in that not only do you have to install the software, but you also have to install the JRE. If you could compile JRE applications so that they didn't need the whole java library installed as we

  • Real-time is two words, hyphenated. </pedant>

  • HTML5 (Score:5, Insightful)

    by Kickboy12 ( 913888 ) on Wednesday October 06, 2010 @02:48PM (#33814284) Homepage

    I'd rather just wait for HTML5 WebSockets. I've done a few demos on Google Chrome using node.js, and it's very fast, efficient, and simple to use. Much more practical than "long-polling", since it is a truly persistent bi-directional connection with the server.

    Any attempt to do this with HTTP is just hacking the protocol to do something it was never intended to do.

    • Re:HTML5 (Score:5, Insightful)

      by gmurray ( 927668 ) on Wednesday October 06, 2010 @03:10PM (#33814662)

      Any attempt to do this with HTTP is just hacking the protocol to do something it was never intended to do

      just like most http/javascript programming.

    • I agree on WebSockets, but you don't get paid if you don't ship code, and I don't feel like putting off eating until WebSockets is deployed enough to use.
      • by jd ( 1658 )

        Personally, I'd have thought it better to be able to daemonize a Java applet and have it feed commands to Javascript. The reason is that Java already supports sockets, both Java and Javascript are already in use, and protocols are better if not weighed down. You could do a lot more with a Java controller sitting on the webpage, new methods inevitably create new vulnerabilities, and those programmers able to use communications safely already know Java sockets whereas they don't know WebSockets.

        • In theory, that would be better. But every time I've seen that method used it tends to create more issues. Whether this is because of shoddy Java, a shoddy Javascript-Java interface, or because the idea is flawed in a way I can't figure out, I tend to groan when a page loads up Java without holding a visible applet. But from what I've done playing around with WebSockets, it seems to not have those problems. If I had to guess, I'd say it's because someone much better than I am with sockets wrote it, and it let

  • So, what they're saying is that we simply need applets so Web sites can just run their client application locally on the client machine without needing a lot of setup first. Hmm, I wonder however could we do that?

    When trying to figure out how to best put in a screw, the first step is to put down the hammer and go get a screwdriver.

    • It seems to me that people have totally forgotten how to run things locally. Did computers exist before the web?
      • by qdotdot ( 816516 )

        Javascript runs locally :)

        This is a yet another rehash of the thin client paradigm. Once some of your data is remote, why not make it all remote, and keep your local machine as thin as possible?

        • Yes. The Web browser is today's iteration of the IBM 3270 workstation. You'll notice we abandoned the 3270 because it wasn't nearly as flexible as the character-oriented interactive terminals that replaced it.

      • by Urkki ( 668283 )

        It seems to me that people have totally forgotten how to run things locally. Did computers exist before the web?

        Well, data is moving / has moved to the cloud and is accessible from anywhere, so the applications to use that data need to be too. And when you just *have* to work locally (for example when files are too big for your bandwidth, or when there's no suitable online application), it becomes apparent that "locally" is not so good. I can't count how many times I have wished for online application support for DWG (the Autodesk CAD format), for example. Being tied to a specific machine with the correct software is a pain!

        • by Nursie ( 632944 )

          Is data really moving to 'the cloud'?

          Because I still only hear marketing speak whenever anyone mentions said cloud.

          And I see no evidence that there's anything much to it other than a drive to sell more servers and virtual machines. The term itself seems to be used to cover everything from remote storage to SaaS, with a lot in between.

          So is it really happening and can anyone tell me if it's actually anything useful?

          I just don't see it. I especially don't see how CAD work would benefit from it. Don't you have

          • by Urkki ( 668283 )

            I just don't see it. I especially don't see how CAD work would benefit from it. Don't you have a main workstation?

            Well, that was just a personal example. I don't do CAD work, but I do have CAD drawings done for me, and surprisingly often I need to make decisions based on those drawings, or check things based on those drawings. I mean, case in point, just today I had to say on the phone that I don't have the drawings with me, I'll have to get back to you, even though I was sitting in front of an internet-connected computer.

            I guess I could convert them to PDF, but not only would that be a lot of work, also normal PDF view

    • by Fizzl ( 209397 )

      It seems that in the last two years, the brainless masses that started studying IT in hopes of big moaneyz have graduated and unleashed their clueless wits upon the industry.
      I think that anyone who didn't disassemble and (successfully?) reassemble their toys when they were 5 should stay the fuck out of IT.

    • Actually, depending on the type of screw and material being used, the first step is to put down the hammer and pick up a DRILL (creating a pilot hole).

  • by Nadaka ( 224565 ) on Wednesday October 06, 2010 @03:00PM (#33814524)

    Sounds like this completely ignores the cost of keeping a socket open. For a lot of servers, the number of simultaneous open connections is a far more limited resource than bandwidth available for the occasional request.

    • by fatius ( 245729 ) on Wednesday October 06, 2010 @03:45PM (#33815320)

      Hi, Ted Roden here, author of this book.

      I definitely spend a lot of time talking about that issue in the book. The reviewer (understandably) didn't write about all of the pros/cons and content in the book.

      I talk about how many servers (Apache) are really bad at keeping these types of connections open and suggest using servers that were specifically designed for this (Tornado). I also spend a while talking about how to get them all working together so you don't have to rearchitect everything just to get something set up.


      • by janeuner ( 815461 ) on Wednesday October 06, 2010 @04:36PM (#33816184)

        It is the protocol itself. HTTransferP and TransmissionCP are designed to transport data, not sit around and wait for it. Yes, Tornado is better at handling multiple connections than Apache, but it will still result in poor server performance when you have tens or hundreds of thousands of socket descriptors sitting in swapped-out virtual memory.

        If you want to build a real-time, instant notification application, you need to forget HTTP and use a protocol suited to the requirement. The best solution I can think of right now is the SUBSCRIBE/NOTIFY model that was designed for SIP. The next best choice would be a straightforward application of SMTP. Both of these deliver data on-demand. Both have been around for several years. Both of them will scale into the millions without using a relay or a masquerade.

        Popularity != Good design

  • It's basically a telnet GUI, right?
  • You have no "experience" while browsing the Web. When I pick you up by your lapels, and slam you against a wall, I'll announce at about 80 or 100 decibels in your face that you're "having an experience".

    And as for browsing speed: overwhelmingly, websites are designed and built by folks on their own systems, with the fastest, hottest thing they can afford. Try to find some that were actually tested over the 'Net, using a computer one or more generations old.

    Then look at how big those idiot pictures are (hint:
