GPU Gems

Martin Ecker writes "Following other entrants in the successful series of graphics and game programming-related "Gems" books, Randima Fernando of NVIDIA has recently released GPU Gems - Programming Techniques, Tips, and Tricks for Real-Time Graphics through Addison-Wesley. As the title indicates, GPU Gems contains a collection of tips and tricks for real-time graphics programming with the graphics processing units (GPUs) found on modern graphics adapters." Read on for the rest of Ecker's review, and for a few more notes on the book.
GPU Gems – Programming Techniques, Tips, and Tricks for Real-Time Graphics
Author: Randima Fernando (Editor)
Pages: 816
Publisher: Addison-Wesley Publishing
Rating: 9
Reviewer: Martin Ecker
ISBN: 0321228324
Summary: An excellent book containing many "gems" for real-time shader developers.

The book is intended for an audience already familiar with programmable GPUs and high-level shading languages and is divided into six parts that concentrate on particular domains of graphics programming. Each part contains between five and nine chapters, for a total of 42 chapters in the entire book. Each chapter was written by a different renowned expert from a gaming company, tool developer, film studio, or the academic community; about half of the contributors are from NVIDIA's Developer Technology group. The chapters focus on effects and techniques that help developers get the most out of current programmable graphics hardware. With approximately twenty pages per chapter, the contributors are able to describe the various effects and techniques in depth, as well as delve into the required mathematics.

All the shaders in the book are written in the high-level shading languages Cg and HLSL. The demo programs on the CD-ROM that accompanies the book use either Direct3D or OpenGL as the graphics API, depending on each author's preference. Even though the shaders are in Cg and HLSL, it should be fairly straightforward for OpenGL programmers who prefer the recently released OpenGL Shading Language to port them, as the syntax is very similar.
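
To make the porting claim concrete, here is a minimal Cg-style fragment shader (an illustrative sketch, not code from the book) with the corresponding GLSL spellings noted in comments; beyond the names of types and intrinsics, the two languages read almost identically:

    // Minimal diffuse-texturing fragment shader in Cg (illustrative sketch, not from the book).
    // Porting to GLSL is largely a matter of renaming types and intrinsics.
    float4 main(float2 uv             : TEXCOORD0,    // GLSL: varying vec2 uv
                float3 normal         : TEXCOORD1,    // GLSL: varying vec3 normal
                uniform sampler2D diffuseMap,         // GLSL: uniform sampler2D diffuseMap
                uniform float3 lightDir) : COLOR      // GLSL: uniform vec3 lightDir
    {
        float3 n    = normalize(normal);              // identical in GLSL
        float  d    = saturate(dot(n, -lightDir));    // GLSL: clamp(dot(n, -lightDir), 0.0, 1.0)
        float4 base = tex2D(diffuseMap, uv);          // GLSL: texture2D(diffuseMap, uv)
        return base * d;                              // GLSL: gl_FragColor = base * d
    }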

The first part of the book deals with natural effects and contains chapters on rendering realistic water surfaces, water caustics, flames, and grass. Two chapters look behind the scenes of NVIDIA's Dawn demo, which shows a dancing fairy with realistically lit skin. There is also a chapter on the improved version of Perlin noise and its implementation on GPUs, written by Ken Perlin himself.
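
For readers who haven't seen what the "improved" part refers to: the key change is replacing the original cubic interpolant with the quintic fade curve 6t^5 - 15t^4 + 10t^3, which has zero first and second derivatives at the lattice points and so removes visible grid artifacts. The sketch below (not from the book) shows where that fade curve slots into a 2D noise evaluation; the gradient lookup is a hypothetical stand-in for the permutation and gradient textures a GPU implementation would actually use:

    // Quintic fade curve from improved Perlin noise: 6t^5 - 15t^4 + 10t^3.
    float fade(float t)
    {
        return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
    }

    // Hypothetical gradient lookup: a texture holding hashed unit gradients per
    // lattice cell (stands in for the permutation/gradient textures).
    uniform sampler2D permGradTex;

    // Dot product of the pseudo-random gradient at lattice point p with the
    // offset vector from p to the sample position x.
    float gradDot(float2 p, float2 x)
    {
        float2 g = tex2D(permGradTex, p / 256.0).xy * 2.0 - 1.0;
        return dot(g, x - p);
    }

    // 2D noise skeleton: blend the four corner gradients using the fade curve.
    float noise2D(float2 x)
    {
        float2 p = floor(x);
        float2 f = x - p;
        float2 u = float2(fade(f.x), fade(f.y));
        float n00 = gradDot(p,                x);
        float n10 = gradDot(p + float2(1, 0), x);
        float n01 = gradDot(p + float2(0, 1), x);
        float n11 = gradDot(p + float2(1, 1), x);
        return lerp(lerp(n00, n10, u.x), lerp(n01, n11, u.x), u.y);
    }

    // Entry point: visualize the noise over screen-space texture coordinates.
    float4 main(float2 uv : TEXCOORD0) : COLOR
    {
        float n = noise2D(uv * 8.0) * 0.5 + 0.5;   // remap roughly to [0,1]
        return float4(n, n, n, 1.0);
    }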

The second part of the book concentrates on lighting and shadows. There are chapters from people at Pixar Animation Studios that describe some of the lighting and shadow techniques used in their computer-generated movie productions, as well as a chapter on managing visibility for per-pixel lighting. In the shadow department, the two predominant ways of rendering shadows in real time, shadow mapping and shadow volumes, are discussed along with possible optimizations and improvements. The chapter by Simon Kozlov on methods to improve perspective shadow maps presents some especially interesting new material on the topic.
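
For context, the shadow mapping half of that discussion builds on a very simple core test: project the fragment into the light's view and compare its depth against the depth the light recorded in the shadow map. A minimal Cg sketch of that lookup follows (illustrative names, OpenGL-style [-1,1] depth range assumed); the chapters are largely about making this basic test robust, alias-free, and efficient:

    // Minimal shadow-map test (illustrative sketch, not from the book).
    float4 main(float4 lightSpacePos : TEXCOORD0,   // fragment position in the light's clip space
                float3 diffuseColor  : COLOR0,
                uniform sampler2D shadowMap) : COLOR
    {
        // Perspective divide, then remap from [-1,1] clip space to [0,1] texture space.
        float3 proj = lightSpacePos.xyz / lightSpacePos.w;
        float2 uv   = proj.xy * 0.5 + 0.5;

        // Depth the light recorded, plus a small bias against "shadow acne".
        float storedDepth = tex2D(shadowMap, uv).r;
        float bias        = 0.0015;

        // Lit if the fragment is no farther from the light than what the map saw.
        float lit = (proj.z * 0.5 + 0.5) - bias < storedDepth ? 1.0 : 0.0;
        return float4(diffuseColor * lit, 1.0);
    }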

The third part of the book covers materials, with chapters on subsurface scattering, ambient occlusion, image-based lighting, and spatial BRDFs, and how to use them efficiently in real time. Part four describes various techniques for image processing, which is being used more and more frequently in computer games, mostly in the form of post-processing filters. The chapters in this section deal with various depth-of-field techniques, a number of filtering techniques using shaders, and the real-time glow effect seen in many newer games, especially Tron 2.0. Not surprisingly, one of the authors of that chapter is John O'Rorke from Monolith Productions, the developer of the game. Contributors from Industrial Light & Magic introduce the OpenEXR file format used for storing high-dynamic-range images (see openexr.org).
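
The glow technique in particular reduces to a familiar post-processing recipe: extract the bright parts of the frame, blur them, and add the result back over the original image. A hedged sketch of the bright-pass stage is shown below (parameter names are illustrative; a full implementation would follow it with separable horizontal and vertical blur passes before compositing):

    // Bright-pass stage of a glow/bloom post-filter (illustrative sketch).
    // Pixels below the threshold contribute nothing; the result is then blurred
    // and added back onto the original frame in later passes.
    float4 main(float2 uv : TEXCOORD0,
                uniform sampler2D sceneTex,
                uniform float threshold) : COLOR      // e.g. a threshold around 0.8
    {
        float4 c = tex2D(sceneTex, uv);
        float luminance = dot(c.rgb, float3(0.299, 0.587, 0.114));
        return luminance > threshold ? c : float4(0, 0, 0, 0);
    }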

Part five, titled "Performance and Practicalities," is a collection of chapters that deal more with the software engineering aspects of developing software that uses shaders. In particular, there are chapters on optimizing performance and detecting bottlenecks, using occlusion queries efficiently, integrating shaders into applications and content creation packages (in particular Cinema 4D), and developing shaders with NVIDIA's FX Composer tool. There is also an interesting chapter on converting shaders written in the RenderMan shading language, a language for offline rendering, into real-time shaders; it uses a fur shader from the movie "Stuart Little" to demonstrate the conversion. With the large increase in GPU processing power, more shaders from the offline rendering world will enter the realm of real-time graphics, and it will be useful to reuse existing resources, such as RenderMan shaders.

The final part of the book deals with a topic that has recently received a lot of attention from graphics researchers: general-purpose GPU (GPGPU) programming, i.e. using the GPU for things other than rendering triangles. This part comprises chapters on performing computations, in particular fluid dynamics, on the GPU, chapters on volume rendering, and a nice chapter on generating stereograms on the GPU. As a side note, the website gpgpu.org deals exclusively with news from the GPGPU community.
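
In practice, "using the GPU for things other than rendering triangles" usually means drawing a full-screen quad and letting a fragment shader act as the compute kernel, with input data stored in textures and results written to a render target. As a rough illustration (not taken from the book), here is one Jacobi relaxation step of the kind a GPU fluid solver runs many times per frame, ping-ponging between two textures; all names and parameters are illustrative:

    // One Jacobi iteration for a diffusion/pressure solve, written as a fragment
    // shader applied over a full-screen quad (illustrative GPGPU sketch).
    // xTex holds the field being solved for, bTex the right-hand side;
    // texelSize is 1 / texture resolution; alpha and rBeta are the Jacobi coefficients.
    float4 main(float2 uv : TEXCOORD0,
                uniform sampler2D xTex,
                uniform sampler2D bTex,
                uniform float2 texelSize,
                uniform float alpha,
                uniform float rBeta) : COLOR
    {
        float4 left   = tex2D(xTex, uv - float2(texelSize.x, 0));
        float4 right  = tex2D(xTex, uv + float2(texelSize.x, 0));
        float4 bottom = tex2D(xTex, uv - float2(0, texelSize.y));
        float4 top    = tex2D(xTex, uv + float2(0, texelSize.y));
        float4 b      = tex2D(bTex, uv);

        // Weighted average of the neighbours; one pass per iteration, writing the
        // result to the other render target of a ping-pong pair.
        return (left + right + bottom + top + alpha * b) * rBeta;
    }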

The book contains many images that show the presented effects in action, as well as plenty of diagrams and illustrations that explain the more complicated techniques in detail. Unlike Randima Fernando's previously released book, The Cg Tutorial, which I have also reviewed on Slashdot in the past, this book and all of its illustrations and images are printed entirely in color. The large number and high quality of the illustrations are probably among the book's best features, making even the more advanced effects easy to comprehend.

The book comes with a CD-ROM that contains sample applications for most of the chapters in the book. Some of these applications include the full source code, whereas others, such as NVIDIA's Dawn demo (also described in some of the book's chapters), are included as executables only. It must be noted that all applications run exclusively on Windows, even though some of the samples that are available in source code form and use OpenGL could probably be built to run on other operating systems as well. Furthermore, about half of the samples require what Fernando and Kilgard in The Cg Tutorial call a fourth-generation graphics card, in particular an NVIDIA GeForce FX card. Note that most samples that require a GeForce FX will not run on comparable ATI hardware, which comes as no surprise since GPU Gems is predominantly an NVIDIA book. It should be noted, however, that the techniques, effects, and shaders presented in the book's text are generally applicable to programmable GPUs and are equally useful when working with graphics hardware from vendors other than NVIDIA.

This is a great book that every programmer involved in game development and/or real-time computer graphics should have on his/her shelf. For the game programmer it is critical to stay up-to-date with the latest and greatest effects available with modern GPUs in order to remain competitive when creating the gaming experience. For the graphics developer, it is interesting to see how the immense processing power of current graphics hardware can be exploited in graphics applications. This book offers insight on both of these topics and more, and I highly recommend it.

Reader akalgonov contributes a few more thoughts on the book:

"The sample programs and demos require shader support, Cg, OpenGL, or the latest version of DirectX to run. On the plus side, the majority of the companion topics included pre-compiled binaries (but not the runtime dynamic link libraries) or an AVI illustrating the subject in addition to the source code. While the CD contains over 600 MB of examples from the text, it provided only 23 of the 42 topics covered in the book. Since most of the articles provide an overview and references to a topic, additional material on the CD would have been beneficial.

I found the wide range of subjects quite interesting, and it was refreshing that the topics actually seemed "ahead of the curve" in terms of hardware requirements. To provide more depth, however, the text could have been split into two volumes, giving the existing chapters room to expand. As the material is just enough to get one started, the treatment may disappoint readers who want to apply the clever and unique techniques presented in the book directly, or those hoping to use the book as an opportunity to learn some of the advanced features of a programmable graphics processing unit."


Martin Ecker has been involved in real-time graphics programming for more than nine years, works as a developer of arcade games, and contributes to the open source project XEngine. You can purchase GPU Gems -- Programming Techniques, Tips, and Tricks for Real-Time Graphics from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.
  • gems? (Score:4, Funny)

    by lawngnome ( 573912 ) on Tuesday June 01, 2004 @02:14PM (#9305841)
    no wonder high end cards are expensive!
  • by millahtime ( 710421 ) on Tuesday June 01, 2004 @02:15PM (#9305853) Homepage Journal
    Get it cheaper here [textbookx.com]
  • Yawn... (Score:2, Interesting)

    by AKAImBatman ( 238306 )
    Call me when NVidia and ATI open up their specs so I can finally code that real time raytracing engine I've been dreaming of. Otherwise, you're just tweaking OpenGL or DirectX until the cows come home.

    Actually, I'm a bit surprised that the big names haven't started looking at raytracing. Sure, it has a reputation for being slow, but graphics technology has grown by leaps and bounds. Combined with about 5 billion caching and approximation tricks, and the fact that ray tracing is a highly parallel operation,
    • I'm thinking that we should already have games that are raytraced.

      Why bother? All that computational power is being put to use dealing with the real challenges of interactive real-time 3D games: collision, animation, physics, AI.
      • He's talking about offloading the raytracing to the GPU. AFAIK, collision, physics and AI aren't dealt with there. (Animation is iffy...you could do some of it on the GPU, but, AFAIK, most is still done on the CPU.)
        • Collision, physics, AI, and animation are all dependent on transformations, which can now be done quickly by the GPU.

          However, raytracing still requires the same transformations, so GPUs as they work now are no more useful for physics, etc. than a raytracing GPU. In fact, with per-pixel shading, modern GPUs practically *are* raytracers.

          Can someone point out exactly what differentiates a per-pixel polygon shader from a raytracing engine from a practical point of view? I'd be interested to know.

          • In fact, with per-pixel shading, modern GPUs practically *are* raytracers.

            Which is basically my problem with current cards. Programming them has become exceedingly complex because they stick to a polygon/raster model instead of simply declaring rays outright.

            • Re:Yawn... (Score:3, Insightful)

              This is like saying "Programming CPUs has become exceedingly complex because they stick to a 'floating-point' model instead of declaring vectors outright."

              I'm not even asking you to do it from scratch yourself - borrow liberally from people like Purcell, and from GPGPU.org, and from BrookGPU and from other stream-processing-on-GPU sources.

              When you say that you want a new "driver," I think you should really consider using a wrapper layer like BrookGPU - or just figure out how to do things the way Purcell did
          • Re:Yawn... (Score:2, Interesting)

            by KewlPC ( 245768 )
            The difference is that, with pixel shaders, you aren't necessarily tracing rays of light.

            A per-pixel polygon shader is just that: a small program that gets run for every pixel of that polygon on the screen. That says absolutely nothing about what lighting method is used.

            Now, that pixel shader can do raytracing, but simply being a pixel shader doesn't mean that raytracing is being done. The pixel shader could instead do shadow mapping or something.

            Raytracing is just what it sounds like: you literally trac
    • Re:Yawn... (Score:4, Interesting)

      by Anonymous Coward on Tuesday June 01, 2004 @02:33PM (#9306089)
      Ray tracing can be done in real time today, at around a million rays per second on a P4-class host CPU. The real bottleneck is not the CPU, but memory bandwidth. It turns out that conventional PC "random-access" memory does not like to be accessed randomly, but that's exactly what a ray tracer needs to do. Memory performance has become seriously dependent on caching over the past 10-15 years, and ray tracers are about the least cache-friendly class of algorithms in existence.

      So don't look to CPU or GPU manufacturers for help with ray tracing... you want to bitch at the short-bus-riding DRAM people instead.
        • 1 million rays per second on a high-end CPU does not seem all that impressively useful for games. It may get you about 320x240 at 13fps, which is not good at all for games. Hardly real time. For 640x480 at 60fps you need closer to 18 million rays per second. That would qualify as realtime. So I take it that means memory needs to get about 20 times faster than it is today. That's going to be a while.
        • Re:Yawn... (Score:1, Insightful)

          by Anonymous Coward
          Right. It's not useful for games, which is why few if any games use real-time ray tracing yet.

          With another 10X performance improvement, that may change. But my point is, the 10X improvement is going to have to occur on the memory side, not the CPU and GPU side where it usually happens.
      • Re:Yawn... (Score:3, Informative)

        by Speare ( 84249 )
        Ray tracing can be done in real time today, at around a million rays per second on a P4-class host CPU.

        I fail to see how one million rays per second is "real time" for most images people associate with ray-tracing. Even at one ray per pixel, you're limited to a single 1000x1000 image per second. But the value of ray tracing is the recursion: one ray hits an object, and anywhere between 2 and 200 rays result (counting for any subsequent recursions, lights and diffusions).

        Your budget: 1000000 rays per

      • Re:Yawn... (Score:3, Informative)

        by Ann Coulter ( 614889 )

        There are serious investigations into making cache-optimized algorithms. For example, the matrix transposition and array index bit reversal algorithms have been investigated in two papers. Bailey's 4-step and 6-step FFT algorithms are also cache efficient. The latter example shows that a complex algorithm such as an FFT can be made cache efficient with the sacrifice of only a few extra computations. Perhaps it would be prudent to use a hybrid ray-tracer/polynomial renderer to section each portion of th

        • Has anyone told you that your choice of nickname is distracting? The real Ann Coulter has a very limited range of output that can be trivially replicated by Markovian string techniques. This leads most readers to skip over anything under her name, because the content is entirely predictable.
    • Re:Yawn... (Score:5, Interesting)

      by bradkittenbrink ( 608877 ) on Tuesday June 01, 2004 @02:35PM (#9306116) Homepage Journal

      Actually, I'm a bit surprised that the big names haven't started looking at raytracing. Sure, it has a reputation for being slow, but graphics technology has grown by leaps and bounds. Combined with about 5 billion caching and approximation tricks, and the fact that ray tracing is a highly parallel operation, I'm thinking that we should already have games that are raytraced.

      I'm not sure that's gonna happen. The fact of the matter is that current graphics hardware is fast approaching the point where raytracing will be irrelevant. The lighting algorithms that can be coded on GPUs will one day match the complexity of raytracers and you won't know the difference. Scan conversion is not actually mathematically inferior to raytracing as a rendering technique; it's just a way to quickly generate the first recursive step of the raytracer. That advantage isn't going to go away. In actuality, the end result will probably be something of a hybrid between raytracing and traditional scan conversion techniques and you won't really be able to identify it as one or the other.

      • Re:Yawn... (Score:2, Informative)

        I'm not sure that's gonna happen. The fact of the matter is that current graphics hardware is fast approaching the point where raytracing will be irrelevant.

        Actually, AFAIK the opposite is true.
        Raytracers scale very nicely with geometric complexity: O(log n). So as the virtual environments continue to grow, raytracing should gain popularity over scan conversion. Have a look at this [uni-sb.de] - that's 50 million triangles raytraced at 4-5 fps!

        Most of the current interactive raytracing is still done on parallel
        • Re:Yawn... (Score:3, Interesting)

          I realize 250 Mtriangles/sec aren't quite the 380 stated by ATI for their current generation GPU (Radeon 9800 Pro), but the paper I linked to is from 2001.
          The hardware raytracing site has a nice video of their FPGA-based system rendering about 187 million triangles at about 15 - 40 fps (512x384, 90MHz FPGA).
        • Re:Yawn... (Score:3, Interesting)

          by captaineo ( 87164 )
          Ray tracing does seem to have some on-paper theoretical advantages, but I've always found that it's a render time killer for my scenes. While the asymptotic running time of ray tracing is good, the coefficient is so much higher that "polygon splatting" has won every time so far.

          You also need to consider that the O(log N) figure for ray tracing does not include the cost of building a ray-acceleration data structure, and it also assumes the entire scene fits in RAM. Polygon splatting is O(N), but the coeffi
    • Re:Yawn... (Score:3, Informative)

      by gr8_phk ( 621180 )
      "I'm thinking that we should already have games that are raytraced."

      google for rtChess.

      The ray tracing engine has since seen a 40% performance boost and has added photon mapping and scales nicely with more CPUs - I just haven't written a game with it since. I don't think a GPU implementation will be much faster. nVidia seems to think they make general purpose processors now - HAH what a laugh.

      • I'm not sure why no one has mentioned gpgpu.com (been posted on slashdot before). They have real time ray tracers based on GPU processing. I'm pretty sure that most people doing generic stream processing on a GPU have done so without the behest of nvidia. The only thing they've said is that their chip is a super scalar design, which is true. If you want to see real uses of crazy graphics take a look at the million particle demo (gdc 2004 paper I think), and see if you can implement that in real time on
      • Nethack is open source, so you _could_ add raytracing to it, but I'm not sure that it'd buy you that much :-)

        You hit the Umber Hulk -more-
        His pixels shimmer gracefully -more-
        The Umber Hulk hits -more-
        You die -more-
        You leave a good-looking corpse

    • Stop yawning and start reading. http://graphics.stanford.edu/papers/rtongfx http://graphics.stanford.edu/papers/tpurcell_thesis/ http://graphics.stanford.edu/papers/photongfx
    • Re:Yawn... (Score:5, Informative)

      by Viking Coder ( 102287 ) on Tuesday June 01, 2004 @02:49PM (#9306278)
      Two points:

      First, Why? Most people don't even make movies that are raytraced.

      Second, they already are doing raytracing on the GPU. Purcell [stanford.edu] had one working in 2002. There was a presentation on it, in a course at SIGGRAPH 2003. The GPU is maybe a little faster than the CPU, right now, for raytracing.

      "Tweaking OpenGL" is kind of like saying "tweaking the CPU", any more. It's fairly close to a generalized stream processor. And their specs already are open enough to have figured this out. Look at GPGPU [gpgpu.org] and read some more about how people are doing amazing stuff on the GPU today. No need to wait for ATI and NVidia to open up any specs - they already did. Cg and GLSlang are fully up to the task.

      And, photon mapping and similar techniques are much more sophisticated than raw raytracing.
      • First, Why? Most people don't even make movies that are raytraced.

        Because current methods are getting too complex. The sheer number of details in writing a modern 3D engine is daunting, even to an experienced 3D coder. A raytracer would allow you to hit a big reset button and go back to times that were simpler. As a bonus, quality could eventually be taken much farther than today's polygon/shader methods.

        And, photon mapping and similar techniques are much more sophisticated than raw raytracing.

        Raw ra
        • [M]any of these features can be planned for by an artist rather than a coder.

          That's naive in the extreme. A coder will always have to be involved. If for no other reason than for optimizing performance, which is honestly no easier than writing a "modern 3D engine" as you described it.

          Now, if your point is that dog-slow rendering is "better" than fast rendering, then pick your fight elsewhere. But don't blame the GPU for being fast, especially since it is now just as capable of high-accuracy rendering
          • Now, if your point is that dog-slow rendering is "better" than fast rendering, then pick your fight elsewhere. But don't blame the GPU for being fast, especially since it is now just as capable of high-accuracy rendering and the full richness of a software raytracer!

            What am I blaming the GPU for? I just want to reprogram it and make everyone's lives easier. Sure, the scene will need to be optimized by a coder who understands, but the artist should be capable of deciding what effects will work and which w
            • I simply want to reprogram it for use as a Ray Tracer instead of a polygon rasterizer.

              SOMEONE ELSE HAS ALREADY DONE IT.

              How many times do I have to say this?

              Timothy Purcell [stanford.edu] at Stanford University did it two years ago.

              So stop wishing 'if it were only possible' to do something that people have already done. Read my link, and if you want to be polite, thank me for showing you where to find exactly the kind of information that you were complaining didn't exist.
    • Re:Yawn... (Score:4, Informative)

      by hawkstone ( 233083 ) on Tuesday June 01, 2004 @02:54PM (#9306354)
      Open up their specs so you can write a real-time raytracer? Why can't you use Cg or HLSL like others have done? Why do you need to write to the video card directly? You have full access to the programmability of the GPU through these languages. If not, program the damned thing in their version of assembler through the DirectX or OpenGL APIs. Unless by "tweaking OpenGL or DirectX" you mean "programming the GPU", your statement seems flat-out wrong.

      Don't believe you can do it? Here's a link to some projects that do real-time raytracing, radiosity, photon mapping, and subsurface scattering [gpgpu.org], all on GPUs. These GPUs are programmable without them opening up their specs.

      (The desire for them to open up their specs is for other reasons, not because they are hiding some functionality from you.)
      • I'll grant you that I haven't spent too much time investigating all the new shader languages. However, I have poked at Cg, and it simply wasn't general purpose enough to meet the needs of a high performance raytracer. For maximum performance, a general purpose, real-time raytracing engine would need to be able to reprogram and reuse all of the card's pipelines, not just the vertex shader.

          • Ah, yes, things have probably progressed pretty quickly in this arena. The vertex shaders are not nearly as flexible and powerful as the pixel shaders. A common technique is to draw a single quadrilateral across the entire framebuffer, and with the right mapping every pixel will be visited once in the fragment (pixel) program. This fragment program is where you write the raytracer.

          (Simplified concept, of course, but you get the point.)
        • simply wasn't general purpose enough to meet the needs of a high performance raytracer

          You don't know what you're talking about. Reread the specs, and go and read the Purcell papers that people keep pointing you to. And learn what a stream processor is, so maybe you can understand why the mathematics actually are general purpose enough to meet those needs. Just because you don't understand the capabilities doesn't mean that they don't exist.

          not just the vertex shader

          They're using pixel shaders to do
          • And learn what a stream processor is, so maybe you can understand why the mathematics actually are general purpose enough to meet those needs.

            It's not about whether the GPU can do the math or not. It's about whether the GPU is programmed to do the math for raytracing or not. Many RayTracing engines have twisted OpenGL a bit to get their ray tracing operations done. Which is fine since it gives them a performance boost. But this boost is insignificant compared to what could be achieved with dedicated G
            • AKAImBatman: "Gee, thanks Viking Coder for telling me about GPGPU and the Purcell paper."

              Viking Coder: "No problem, AKAImBatman."

              It's about whether the GPU is programmed to do the math for raytracing or not.

              This is a nonsense statement. I'm sorry, but it really is. That's like saying that, "the CPU is programmed to do the math for raytracing or not."

              I understand your argument, in that back in the day, a generalized CPU needed an FPU to do mathematics operations that the CPU could do in software... .
    • Actually, I'm a bit surprised that the big names haven't started looking at raytracing. Sure, it has a reputation for being slow, but graphics technology has grown by leaps and bounds.

      To match the quality of anti-aliased triangle rendering, you need at least 4 samples per pixel. Then you need to support full-screen resolutions (2048x1536). To be officially real-time, you need at least 15 frames/second, if not the full 60/80 that most games provide now.
      That would give you a budget requirement of at least 2
      • Re:Yawn... (Score:3, Interesting)

        by AKAImBatman ( 238306 )
        Remember the three magic words to making high speed 3D graphics work: "Cheat like hell" I'd actually done some research into this area not so long ago (most of which I can't remember) and I found that about 95% of calculations can be stored in lookup tables, or calculated once for all rays. I don't remember all the details of my evil plan (I really need to start writing this stuff down) but I had pivoted the calculations in such a way as to make multiple, pipeline friendly passes.

        The first stage or two got
        • I had a go some time back. The camera rays were pre-calculated. A bounding box was calculated for each triangle in image space. These were ordered by starting row and column. Lists of the possibly visible triangles were maintained as the image space was scanned. Groups of geometry had bounding spheres. I was getting around 15 seconds/frame.

          However, there seem to be many open source real-time ray-tracing projects going on:

          OpenRT [openrt.de], with its own FAQ [acm.org]. This project seems to have several games written for it.
          • I had a go some time back. [snip] I was getting around 15 seconds/frame.

            I never said it was easy. :-) You have to carefully limit your rays as much as possible. With extremely complex scenes, you may even have to render only some of the pixels for each frame. However, things get much better when you get to the GPU. Most of today's cards have at least two pipelines. Some even have 16! Now raytracing is a highly parallel operation, and GPUs tend to have very deep pipelines with excellent floating point supp
      • Remember that those shiny Pixar movies you see generally don't bother with ray-tracing (I think Finding Nemo did a little bit of it; the previous ones didn't) because, as has been said, you can get 'good enough' using rules-of-thumb and hacks, most of the time.

    • Re:Yawn... (Score:2, Insightful)

      by obelixn13 ( 515027 )
      Many of the 'pretty' effects that come with raytracing, such as reflections and highlights, are easily approximated in most games using cheap hacks, e.g. environment and normal mapping.

      We have become so used to these in games now, that I dare say if you did produce a real-time raytracer you would be hard-pushed to explain to the average gamer what was so cool about it.

      The bar has been raised significantly since ray-tracing was first presented in the 70s. And we've long since started looking beyond what raytr
    • another poster mentioned memory bandwidth as a bottleneck... the link between an AGP card and the CPU has even less bandwidth, and the amount of memory on graphics cards is usually well below system RAM, making this not really seem like the best way to do things currently...
      • the link between an AGP card and the CPU has even less bandwidth, and the amount of memory on graphics cards is usually well below system RAM, making this not really seem like the best way to do things currently...

        The necessary geometry and graphics for RayTracing works out to pretty much the same cost as polygon stuff (sometimes even smaller). Given that today's cards have 64-128MB of RAM, the memory on the card is not the issue. The bandwidth can be an issue, but no more than today's graphic
  • by tcopeland ( 32225 ) * <tom AT thomasleecopeland DOT com> on Tuesday June 01, 2004 @02:20PM (#9305924) Homepage
    ...if only to give an appreciation for how hard it is to write 3D games/engines these days. An article on A* will start off with a paragraph or two saying "of course you know A*, and you've read the three papers on A* optimizations, so here's a fourth optimization you may not have seen before".

    A lot of the articles are practical, too, if you're working in the field. When I was fiddling with some fuzzy logic [rubyforge.org] stuff, the articles from Game Programming Gems II were very helpful.
  • Perlin (Score:4, Funny)

    by happyfrogcow ( 708359 ) on Tuesday June 01, 2004 @02:23PM (#9305968)
    There is also a chapter on Perlin noise (improved version) and its implementation on GPUs that was written by Ken Perlin himself.

    Wow.. there's a person behind Perlin noise? I always thought it was a random noise generator based on the chaos found in Perl programs. Thus, the noise was generated by an http client that has "gone perlin'" -- which means to crawl the web in search of arbitrary bits of Perl.

    who knew!?

    • Ken Perlin actually sang a song at SIGGRAPH 2002 before he presented his "Improving Noise" paper, and he didn't fail to be funny; sadly I can't find the text anymore, but it was hilarious. This guy manages to bring technical stuff to a tired audience and get the whole crowd laughing with his witty lyrics, on a subject as interesting as noise.

      Ken Perlin is also the guy who has brought together much of the talent that is responsible for the ongoing success of Pixar. I guess you could
  • I love how shaders have turned what used to be a very hard step into a much easier one. I can tell you about the days before shaders, when doing something like fur was just unthinkable. Now, thanks to Pixar, et al., you can practically make a whole character from a shader, and never have to make anything but spheres with cylinders sticking out of them. I am actually anxious to see what happens when any shader can be a real-time shader!
  • Also Check Out... (Score:3, Informative)

    by th1ckasabr1ck ( 752151 ) on Tuesday June 01, 2004 @02:32PM (#9306080)
    If you're interested in this stuff, also check out Real Time Rendering by Tomas Moller and Eric Haines. It's one of my favorites and contains an amazing amount of information.
  • Does anybody remember the Commodore VIC-II chip used in the C-64? 16 beautiful colors, 320x200 (monochrome) or 160x200 (four-color) modes, bitmap, character graphics and up to eight fully independent sprites!

    And don't even get me started on the clear, crisp sounds of the SID chip!

    • You'd be surprised what people still get done with these specs. Keep an eye on C64.sk [c64.sk], for example, to see some demos. Being able to code to the bare metal certainly has its advantages over today's video cards.
  • by tloh ( 451585 ) on Tuesday June 01, 2004 @02:49PM (#9306287)
    This post reminds me of a question that I haven't thought about since high school. I was taking programming classes right around the time I was discovering the gaming phenomenon. The dizzying pace of hardware evolution at the time (still going strong as ever, many would say) prompted me to ask my computer teacher if computer video hardware was designed in such a way that when graphics were not being processed, the GPU could be used for general number crunching. In other words, if it is possible to do load balancing between the GPU and the CPU. I seem to recall reading something (possibly on /.) about someone investigating this exact thing I was wondering about so long ago. I should probably STFW, but if someone could point me in the proper direction, I would be as grateful as anyone would be to have a long-irritated itch finally scratched.
    • if computer video hardware was designed in such a way that when graphics were not being processed, the GPU could be used for general number crunching. In other words, if it is possible to do load balancing between the GPU and the CPU.

      While it would probably be possible to use a GPU for general purpose number crunching, I believe it would make the GPU unable to send a signal to your monitor at the same time.

      I asked the same question back in the days of RC5-64 and I was told that it was not feasible for ju
      • by roystgnr ( 4015 )
        Or, at least you're wrong about modern programmable GPUs; you might have been right about the first generations of 3D cards.

        See this paper [in.tum.de] for some examples which not only use the GPU simultaneously for graphics and number crunching, but which use the graphics to give real-time output of computational fluid results.

        The only remaining problem I remember is that the bandwidth to current video cards is very asymmetric, which is fine for video games that just push a lot of data to the video card but not so go
        • you might have been right about the first generations of 3D cards.

          But regarding the first generation of 3D cards, the question is irrelevant, because those cards had NO 2D output ability, so you always had a separate 2D card running anyhow.
      • The readback rate on AGP is abysmally slow. That's why it's only for graphics cards.

        It's got an amazingly high downstream rate, GBps, but reading back from the card can be as low as 256 KB/sec in some models.

        Far too slow to do any kind of processing on a high-bandwidth stream. Although the circuitry of a GPU (matrix optimizations) would be useful in crypto, the rate at which data could be returned from the card would choke the stream, the buffer would fill up, and you'd start losing data.
        • One thing it could be very handy for is compression. Video compression is, of course, the first thing that springs to mind, but I guess other types of compression could work too, as long as there is a data path back out of the GPU, to the hard drive or wherever else you want that compressed data to go.

          For applications like that, the back channel isn't that much of an issue, because the data coming out of the process is so very much smaller, ie - a lot of data is being thrown away in the GPU

          Conversely, on
      • If you could utilise the GPUs of 2/3/4 PCI video cards, leaving your AGP card of choice to cope with the monitor, would that work?

        I have my doubts (but love playing devil's advocate), the overhead of managing everything may well negate any benefit of farming out work to the other cards. It would also shift the bottle-neck to the various communication channels, by increasing the traffic between components I guess.

        Not to mention the effort inherent in setting up something like that...
      • If you've got a fast machine with a high-end GaM3Rz GPU, you've probably got an older machine lying around which was your former cutting-edge game platform. So fire it up with X Windows, and let your apps run on your fast machine where you've got the GPU, assuming you've got some kind of Unix OS on the fast machine (Linux, *bsd, etc.) (This trick is unlikely to work if you're running Windows on the fast machine - most of the solutions like VNC, Carbon Copy, etc. are likely to require you to be running th
    • Why do you think there was such a frenzy over terrorists stealing Playstation2s and Xboxs? They were afraid of the computing power of one of these high-end gaming systems being used to provide cheap powerful computing solutions against us.
    • Try GPGPU.org [gpgpu.org] - "General-Purpose Computation Using Graphics Hardware". Useful clearinghouse for this sort of thing.
  • lol... Why did they bother to use Cg at all? Could it be because nVidia is putting this book out? Some conflict of interest? Hehe. There are books on HLSL and OSL that are more valuable than this one.
    • Cg is still very useful if you intend to develop cross-platform shader-driven graphics apps. Plus, it's also API-independent, which makes it the only viable alternative to rewriting all shaders for each API if you are about to write some API-independent graphics code. Remember, GLSL support is still not widespread. Heck, even the ARB FPrograms aren't supported on cards older than a radeon9500/geforceFX. If you do not want to develop half a dozen different codepaths, use Cg.
      • It is ridiculous, imho, to claim that something has value for being API-independent when it is HARDWARE dependent (which is much, much worse.) If you're going to learn something (the purpose of this book) and GLSL is out and available for use, use it. If you're looking to put code into production, you're not going to want to use Cg in any case unless you can say "you MUST use nVidia hardware to run my application."
        • Dude, I'm using Cg. And I do have a RADEON 9600 PRO. Wow, am I using some magic? Or is it using the OpenGL ARB vertex/fragment program extensions? So much for your hardware dependence.
          • Dood, apparently you're not aware that Cg optimizations (when producing HLSL or vertex/fragment programs) are HEAVILY nVidia focused and tend to produce rather poor code for the Radeon series (even, mysteriously, when converting to HLSL which has the same syntax, hmmm?) Ergo, calling it hardware independent is ridiculous.

            In any case, the nVidia Cg compiler produces much more inefficient code than you would get from the HLSL compiler; ergo, if you're actually interested in producing games, learn GLSL/OSL an
            • Fine, but this isn't a production environment. These are examples in a book. Your original statement is absolutely ridiculous.

              lol... Why did they bother to use Cg at all? Could it be because nVidia is putting this book out? Some conflict of interest? Hehe. There are books on HLSL and OSL that are more valuable than this one.

              This is the dumbest thing anyone has said about this book. The book is bad because you disagree with their choice of shading language? Hardware shaders are not complex things, an

              • Yes, the book is bad because it purports to be a book about 'GPU programming' "Gems", when it should be titled "Cg Programming" with a little mention of other shader languages for consideration. Talk about dumb...
                • No, you're wrong. It's not about Cg programming. It's about the underlying algorithms and techniques. They had to choose languages to implement the techniques in, and it apparently happens to be the language you don't like. Oh, "boo hoo" for you!

                  This is a Gems book, written by numerous people. These people tell the editor, "Yeah, I have a cool GPU technique that would make a good 'gem' for the book." and they say, "Okay, give us an implementation." and that happens to be in either HLSL or Cg.

                  The fa

                  • Actually, the guy who comes along and says "It should be called Programming games in C++" is correct. You seem to think it is pedantic to believe that many book titles seem to be misleading (even if only somewhat.)

                    As for why they used Cg, surely you're not stupid enough to believe they just 'picked one', or (from reading your post) perhaps you are. Do you know who the author works for? Do you know what company pushes Cg? It isn't some 'conspiracy theory' but it IS a conflict of interest given that it V
                    • Actually, the guy who comes along and says "It should be called Programming games in C++" is correct. You seem to think it is pedantic to believe that many book titles seem to be misleading (even if only somewhat.)

                      I guess we'll just have to disagree on this one, because I think a book can be written about a topic that is not the language used to implement the topic of discussion. I don't believe every book needs to advertise on the title what language they use to implement the thing they're interested in

                    • I guess we'll just have to disagree on this one, because I think a book can be written about a topic that is not the language used to implement the topic of discussion. I don't believe every book needs to advertise on the title what language they use to implement the thing they're interested in discussing. I don't see anything wrong with it when a book does say "Doing such and such in C++", but usually those books are very interested in the details of implementing the topic in that language. There isn't
                    • I think a book can be written about a topic that is not the language used to implement the topic of discussion. What this has to do with our discussion I don't know because you seem to think I have a problem with the fact that the book uses Cg to implement ideas, I do not.

                      You gave every indication before that you thought this book's content was not valuable, and it seemed that your reasoning was simply because it was written in Cg. From a previous post:

                      lol... Why did they bother to use Cg at all? There a

                    • Good points, I guess I wasn't very clear in the beginning that what I didn't like about the book was the (imho) feel that they were intentionally making the book sound vendor neutral and part of the 'gems' series. It has value, just as Shader X does, as a reference for shading in general (because ideas are not language specific.) I'm sure it does have use, I just took issue with the presentation of the book as such.
                    • Good points, I guess I wasn't very clear in the beginning that what I didn't like about the book was the (imho) feel that they were intentionally making the book sound vendor neutral and part of the 'gems' series. It has value, just as Shader X does, as a reference for shading in general (because ideas are not language specific.) I'm sure it does have use, I just took issue with the presentation of the book as such.

                      Yeah, I understand. But at the same time, from their perspective as publishers and writers

            • Currently GLslang is not an interesting option because almost no drivers support it yet.
              HLSL, well, I don't use D3D, so it's of no use to me.
              Cg is a valid choice I have if I want my shader code to work on my ti4400 and on my radeon9600. The only other option is to rewrite ALL shaders for both cards, which is a real pain in the ass (especially the fragment shader on the ti4400, which has to be constructed with NV texture shaders + register combiners). Fortunately, the ARB vertex programs are supp
              • I'm not particularly anti Cg; however, what is it that you actually use it for? In house tools or applications, no problem using it imho. Public applications and commercial games? It is a bad choice afaic.

        My original complaint about the book is that a book is being published which purports to be a guide to programming GPUs and yet, rather than use GLSL or HLSL, it uses a private corporation's shading language.

                It is like someone producing a book on C++ and making Microsoft friendly examples rather than mai
    • I've looked over this book and I'm planning on getting a copy of it, even though I prefer vendor-neutral APIs (and use ATI hardware myself) when I actually do real-time stuff. From what I've seen, yes, most of the pieces of example code are Cg or nVidia oriented. The concepts are pretty universal, however, and shouldn't be too difficult to adapt.
  • B&N? Ripoff! (Score:4, Informative)

    by TastyWords ( 640141 ) on Tuesday June 01, 2004 @03:47PM (#9307127)
    Why does everyone insist on considering Amazon and B&N to be the only online bookstores? I have news for you folks: it's almost always cheaper, including shipping, to get a book from AddAll or BookPool than from Amazon or B&N.

    In the case of this book, I've taken the liberty of making your life easier by providing you with URLs that will take you directly to the price list for the book. For future reference: AddAll is a shopping 'bot, looking at thirty-six stores. AddAll [addall.com] Results and BookPool [bookpool.com]

    Now, if you insist upon paying Amazon and B&N prices, let me know. You can PayPal the money to me and I'll order the book for you from AddAll or BookPool and have it shipped to you. (Of course, I'll keep the difference. After all, you were willing to pay the extra price!) If you're willing to waste your money, I'd rather collect the waste than Amazon or B&N.

    p.s. Remember this the next time you see someone post a message saying, "it's -this price- at Amazon!"

    p.p.s.
    Here's [google.com] the listing from Froogle [google.com] (just in case you haven't used it yet)
  • Cheka gems, NKVD gems, MVD gems and KGB gems.

    Cue "in Soviet Russia" jokes...
