Book Review: PostgreSQL 9.0 High Performance

eggyknap writes "Thanks in large part to the oft-hyped 'NoSQL' movement, database performance has received a great deal of press in the past few years. Organizations large and small have replaced their traditional relational database applications with new technologies like key-value stores, document databases, and other systems, with great fanfare and often great success. But replacing a database system with something radically different is a difficult undertaking, and these new database systems achieve their impressive results principally because they abandon some of the guarantees traditional database systems have always provided." Keep reading for the rest of eggyknap's review.
PostgreSQL 9.0 High Performance
author Gregory Smith
pages 468
publisher Packt Publishing
rating 8/10
reviewer eggyknap
ISBN 184951030X
summary takes the reader step-by-step through the process of building an efficient and responsive database using "the world's most advanced open source database"
For those of us who need improved performance but don't have the luxury of redesigning our systems, and even more for those of us who still need traditional transactions, data integrity, and SQL, there is an option. Greg Smith's book, PostgreSQL 9.0 High Performance, takes the reader step-by-step through the process of building an efficient and responsive database using "the world's most advanced open source database".

Greg Smith has been a major contributor to PostgreSQL for many years, with work focusing particularly on performance. In PostgreSQL 9.0 High Performance, Smith starts at the lowest level and works through a complete system, sharing his experience with systematic benchmarking and detailed performance improvement at each step. Despite the title, the material applies not only to PostgreSQL's still fairly new 9.0 release, but to previous releases as well. After introducing PostgreSQL, briefly discussing its history, strengths and weaknesses, and basic management, the book dives into a detailed discussion of hardware and benchmarking, and doesn't come out for 400 pages.

Databases vary, of course, but in general they depend on three main hardware factors: CPU, memory, and disks. Smith discusses each in turn, and in substantial detail, as demonstrated in a sample chapter available from the publisher, Packt Publishing. After describing the features and important considerations of each aspect of a database server's hardware, the book introduces and demonstrates powerful and widely available tools for testing and benchmarking. This section in particular should apply easily not only to administrators of PostgreSQL databases, but to users of other databases, or indeed other applications, wherever CPU, memory, or disk performance is a critical factor. Do you know, for instance, the difference between "write-through" and "write-back" caching on a disk, and why it matters to a database? Or that disks perform better or worse depending on which part of the physical platter they're reading? Or how memory performance compares across common CPUs as their architectures have evolved?
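
None of the specific commands below come from the book; they are only a rough sketch of the kind of measurement being discussed. One crude way to get a feel for sequential write throughput without leaving a psql session, assuming a scratch database you are free to write to:

```sql
-- Illustrative only: the table name is made up, and a hardware write-back
-- cache can make the numbers look better than the disk can really sustain.
\timing on
CREATE TABLE scratch_write_test AS
    SELECT i, repeat('x', 500) AS payload     -- roughly 500 bytes per row
    FROM generate_series(1, 1000000) AS i;    -- about half a gigabyte of row data
DROP TABLE scratch_write_test;
\timing off
```

Dedicated benchmarking tools give far more trustworthy numbers, but even a throwaway test like this makes the caching question concrete: if the reported throughput is faster than the drive could physically sustain, a cache somewhere is absorbing the writes.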

At every step, Smith encourages small changes and strict testing, to ensure optimum results from your performance efforts. His discussion includes methods for reducing and correcting variability, and sticks to easily obtained and interpreted tools, whose output is widely understood and for which support is readily available. The underlying philosophy has correctly been described as "measure, don't guess," a welcome relief in a world where system administrators often make changes based on a hunch or institutional mythology.

Database administrators often limit their tools to little more than building new indexes and rewriting queries, so it's surprising to note that those topics don't make their appearance until chapters 9 and 10 respectively, halfway through the book. That said, they receive the same detailed attention given earlier to database hardware, and later on to monitoring tools and replication. Smith thoroughly explains each of the operations that may appear in PostgreSQL's often overwhelming query plans, describes each index type and its variations, and goes deeply into how the query planner decides on the best way to execute a query.
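
As a small taste of that material (this example is not from the book; the table, index, and values are invented), PostgreSQL's EXPLAIN command is how the query plans being described are obtained:

```sql
-- Hypothetical schema, just to have something for the planner to work with.
CREATE TABLE orders (
    id        serial PRIMARY KEY,
    customer  integer NOT NULL,
    placed_at timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX orders_customer_idx ON orders (customer);

-- EXPLAIN ANALYZE runs the query and reports the planner's choices:
-- sequential scan versus index scan, estimated versus actual row counts,
-- and per-node timing.
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer = 42
ORDER BY placed_at DESC
LIMIT 10;
```

Reading output like that fluently is exactly the skill those chapters aim to build.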

Other chapters cover such topics as file systems, configuration options suitable for various scenarios, partitioning, and common pitfalls, each in depth. In fact, the whole book is extremely detailed. Although the tools introduced for benchmarking, monitoring, and the like are well described and their use nicely demonstrated, this is not a book a PostgreSQL beginner would use to get started. Smith's writing style is clear and blessedly free of errors and confusion, as is easily seen by his many posts on PostgreSQL mailing lists throughout the years, but it is deeply detailed, and the uninitiated could quickly get lost.

This is also a very long book, and although not built strictly as a reference manual, it's probably best treated as one, after an initial thorough reading. It covers each topic in such detail that each must be absorbed before further reading can be beneficial. Figures and other non-textual interruptions are, unfortunately, almost nowhere to be found, so despite the author's clear and easy style, it can be a tiring read.

It is, however, one of the clearest, most thorough, and best presented descriptions of the full depth of PostgreSQL currently available, and doubtless has something to teach any frequent user of a PostgreSQL database. Those planning a new database will welcome the straightforward and comprehensive presentation of hardware-level details that are difficult or impossible to change after a system goes into production; administrators will benefit from its discussion of configuration options and applicable tools; and users and developers will embrace its comprehensive description of query planning and optimization. PostgreSQL 9.0 High Performance will be a valuable tool for all PostgreSQL users interested in getting the most from their database.

You can purchase PostgreSQL 9.0 High Performance from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


This discussion has been archived. No new comments can be posted.


  • Good impression (Score:5, Informative)

    by GooberToo ( 74388 ) on Wednesday February 09, 2011 @03:24PM (#35154200)

    I've not read the book but have read comments from several developers who contribute to PostgreSQL. All comments I've read on this book give it a strong thumbs up.

    Take it or leave it, but based on feedback from the people who know the internals of PostgreSQL, this book is worth owning if PostgreSQL is important to you in the least.

  • by Escaflowne ( 199760 ) on Wednesday February 09, 2011 @03:27PM (#35154236)

    I realize this is off-topic, but I have e-mailed the admins and gotten no response so I figured a fellow slashdot member might be able to help me.

    Ever since the new design went live, slashdot seems to ignore my preferences. For example, I have "book reviews" filtered so I do not see the entire article (since I am not interested) and yet they still pop up on the front page with full text. It's a minor annoyance, but I was wondering if this is just a problem in the new design or something I'm doing wrong.

    Similarly, the exclusion by keyword does not seem to be working beyond a single word (in my case "idle"). If I put in multiple keywords (like "bookreview") it makes my entire front page empty. I have tried with spaces, commas, semi-colons, etc.

    Again, I realize this is grossly off-topic, but I am hoping a fellow user could help me make my slashdot experience better (or back to how I enjoyed it).

    Thanks!

    • by Pseudonym Authority ( 1591027 ) on Wednesday February 09, 2011 @03:52PM (#35154498)
      Everything is broken, even italics. Now you have to use the shitty <em> tag. And the "This resource is no longer available" error is still broken, quotes are broken half the time, the "fetch more" button for the front page rarely actually gets more stories, and you can no longer click a comment ID and see only that comment.

      Anyway Brolstoy, my point is that your problem is just a tiny part of a huge torrent of problems that they keep introducing, and it isn't going to be fixed. Ever. So just get used to filtering book reviews out manually.
      • This is another ignorant question, but how do you filter them out manually? I thought setting them to "brief best only" in the Sections option was doing just that?

        Or did you simply mean filtering them manually in my head? If you mean that then well..that sucks :(.

        Deep down I know you're right and it won't be fixed, but here's hoping. I've read /. for over 10 years and it's upsetting that the site just gets more "broken" or rather, doesn't allow users to even manually edit it to their liking. The same thi

        • Re: (Score:2, Flamebait)

          Or did you simply mean filtering them manually in my head? If you mean that then well..that sucks :(.

          Yup, that's about it. You could probably write some userJS, but that seems a bit impractical, and they will probably break it when Slashdot 4.0 hits. They seem to think that the slashcode has no bugs, only that it lacks features.

          THIS IS WHAT HAPPENS WHEN YOU CODE IN PERL.

      • I'm also missing, on the front page, the number of replies to articles so far.

        And my main gripe: on meta-moderate, I see the comments, but there is NO + or - I can use to rate the comments, much less mark them insightful, overrated, etc.

        Impossible to meta-mod...at least on Firefox.

        • by equex ( 747231 )
          the number of replies to articles so far.

          Aye. And there are small formatting/CSS problems all over.
      • Like this? [slashdot.org]

        • No, not like that. Used to, it would take you exactly to that post, which was extremely useful if the thread had gone on too long and had stopped being embedded. Now there is no way to get around that.
      • Something horrible has also happened to the articles' previews in newsfeeds. The previous version used to be usable, but now not so much. I know I haven't upgraded NetNewsWire or changed its configuration in the meantime, so it must be Slashdot that's at fault here.
      • What is all the fuss about using "em" instead of "i" tags? It's not something that you see so much of here that the doubling of letters is a serious waste of time, surely?
    • Hey, it also ignores settings like "low bandwidth" and "simple". All I can see is a bright light, with occasional letters scattered around, and my mouse wheel is on fire from trying to read text at a normal speed.
  • by Anonymous Coward

    While relational and non-relational databases have been on the rise over the last thirty years, largely owing to their ease-of-use, portability, and documentation, there is something to be said for going directly to the source.

    The fastest possible way to store and retrieve data of all shapes and sizes is B-trees. These are commonly used in video games but can also sometimes be found in business applications. The fastest possible code is assembler, hand-optimized by those who are good in assembler; not coi

    • Database programmers work in queries per millisecond. Their product has a hell of a lot of know-how built into it. I would rather trust that than homebrew.
      If you are going to create a mission-critical system in assembler, your supply of assembler programmers becomes mission critical. Who will be able to fix bugs and add features later? People who breathe assembler have better things to do than yet another .com.
      The product of performance and maintainability is a constant. Choose your sweet spot...

    • by Anonymous Coward on Wednesday February 09, 2011 @03:47PM (#35154442)

      Point 1:
      Databases already use B-trees for their indexes.

      Point 2:
      Coding business logic for performance instead of verifiability, testability or stability is beyond retarded.

      • I was thinking the same thing... the switch in some instances to NoSQL from RDBMS-SQL is a paradigm shift, as most situations don't need every feature of an ACID RDBMS, and having structured data and faster/distributed lookups can be more important. Mostly-read scenarios benefit greatly from NoSQL, but when you need transactions and absolute compliance, RDBMSes have their place.

        In business development, clear understandable code etc (your point 2) is far more important... hardware gets faster, more obtus
    • by DarkOx ( 621550 )

      I have my doubts. I don't think anyone could write a multi-user ACID storage engine in assembly and have it be faster than one a similarly competent developer writes in C, C++, or even Java. That is not to say there are no gains to be had profiling and then optimizing some of the compiler's assembler output here and there, but to suggest that a human developer can produce a non-trivial program working only in asm which is faster than the output of a modern compiler is questionable.

      Chips these days are just to c

    • by ratboy666 ( 104074 ) <fred_weigel@ho[ ]il.com ['tma' in gap]> on Wednesday February 09, 2011 @04:13PM (#35154728) Journal

      Um...

      I considered modding, but "wrong" is not a mod.

      B-Trees are certainly not the fastest way to store and retrieve data. You may want to investigate those strange things called "hashes". Fortunately for you, most database systems know how.

      As well, "assembler" isn't the fastest coding system. It is so very slow and tedious to use that algorithm tends to get overlooked. Back to your "B-Tree" again -- have you tried coding a B-Tree in assembler?

      To give a hard example -- back in the days of OS/2, the filesystem was coded in assembler. I imagine that it was considered the right way. After being recoded in C, the thing ran faster -- mostly because the developers could concentrate on better ways of doing things, rather than the drudge work of getting the assembler correct.

      Another example -- replacing a "C" coded inner loop in a console game with a Scheme-interpreted version, which ended up being faster because it could fit into cache.

      Sometimes upstream is simply better, even for performance.

      Database systems are in the same category. It is rather difficult to optimize specifically, and, by the time you did this, typically the data store would be "out of touch" with the actual business requirements. So, most of us use a good database, with a query optimizer, and let it take care of that part of the problem. Yes, it may need some tweaking, but those tweaks will tend to be independent of the business logic. This can give rise to specialists who can be called on if needed.

      Using your model, a custom-coded, assembler-tuned B-Tree, there would be no paladins available to assist in the tuning. Indeed, moving to another architecture would be difficult (or impossible). It would be ridiculously expensive, and would likely never be upgraded. Hey, CICS and IMS are still in use, aren't they? And THEY are more portable than your suggestion. (CICS and IMS just turned 40.)

      PS. There are CICS and IMS paladins around. There wouldn't be any for your "solution". Also, IMS can use direct storage links, and, because data in IMS is hierarchical, IMS can even outperform a B-Tree.
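
      For what it's worth, the hash-versus-B-tree point above can be tried directly in PostgreSQL, which exposes hash indexes through ordinary SQL. A minimal sketch (table and column names are invented; note that hash indexes in the 9.0 era were not WAL-logged, one reason B-tree remained the usual default):

```sql
-- Hash index: useful only for equality lookups, not range scans
-- (illustrative names, not from the book or the comment above).
CREATE TABLE sessions (token text, data text);
CREATE INDEX sessions_token_hash ON sessions USING hash (token);

-- An equality predicate like this one is the case a hash index serves;
-- queries using <, >, or BETWEEN still want a B-tree.
SELECT data FROM sessions WHERE token = 'abc123';
```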

    • Please don't feed the trolls.

      Please moderate parent post into oblivion.

    • Have you seen compilers lately? In most scenarios, the compiler can do better optimization than a human. In most data situations, B-Trees are good, but in certain applications, there are other structures that yield better results, particularly if your data set is smaller than your RAM size. Both DBAs and software engineers know that an inappropriate data structure or join order can produce order of magnitude differences in performance.

    • There's more to a database than simply speed. There's transactions, rich data types, index types that support those rich data types (you can only use a B-tree when your data types come from a metric space, hence, for instance, PostgreSQL's GIN and GiST index types), a standardized (if somewhat arcane and problematic) query language...
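
      A small illustration of that point, with invented table and column names: a GIN index over a tsvector expression supports full-text search operators that a plain B-tree cannot.

```sql
-- Expression index using GIN for full-text search (hypothetical schema).
CREATE TABLE docs (id serial PRIMARY KEY, body text);
CREATE INDEX docs_fts_idx ON docs
    USING gin (to_tsvector('english', body));

-- The @@ operator matches documents against a text-search query and can
-- use the GIN index defined above.
SELECT id
FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'database & performance');
```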
    • by Z34107 ( 925136 )

      The fastest possible way to store and retrieve data of all shapes and sizes is B-trees

      You're kidding, but you haven't heard of MUMPS [wikipedia.org]. It's time-shared B-trees, and is a lot more widespread than you'd believe.

      (Though if you're in marketing, you'd call it "The World's Fastest Object Database" or "post-relational" rather than "lots of arrays.")

    • by mws ( 170981 )

      The fastest possible way to store and retrieve data of all shapes and sizes is B-trees.

      So, you should have a look at http://use-the-index-luke.com/ [use-the-index-luke.com] to learn how SQL databases use B-Trees.

  • by Cutting_Crew ( 708624 ) on Wednesday February 09, 2011 @03:36PM (#35154324)
    Unfortunately for me, I am in the process of using Postgres with the PostGIS add-on to do spatial analysis of all types in the database, instead of, let's say, using Java to do all your spatial awareness, intersections and so forth, or writing your own code to do all of it. This leads to better code maintenance, performance and other optimizations. I am part of the Postgres/PostGIS mailing list and the number of questions regarding spatial queries in the database has risen immensely. Too bad this book doesn't tackle this.
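
    For readers unfamiliar with that approach, a hedged sketch of what pushing spatial work into the database looks like (the table and column names are invented and assumed to hold PostGIS geometry data; ST_Intersects and GiST indexes are standard PostGIS/PostgreSQL features):

```sql
-- Assumes parcels(id, geom) and regions(name, geom) already exist with
-- PostGIS geometry columns. The GiST index lets the spatial join below
-- use an index scan instead of comparing every geometry pair.
CREATE INDEX parcels_geom_idx ON parcels USING gist (geom);

-- Let the database find intersecting geometries instead of looping in Java.
SELECT p.id
FROM parcels AS p
JOIN regions AS r ON ST_Intersects(p.geom, r.geom)
WHERE r.name = 'Downtown';
```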
    • Re: (Score:3, Informative)

      by Anonymous Coward

      You need this book: PostGIS in Action [manning.com]

    • by Anonymous Coward

      I wasn't going to tackle GIS knowing that PostGIS In Action was pending. There was enough material to keep me busy way past the expected publication length without touching it. Once Regina and Leo's book is out there, I do plan to revisit what topics in this area haven't been fully covered yet.

  • by AchilleTalon ( 540925 ) on Wednesday February 09, 2011 @03:54PM (#35154520) Homepage
    Am I the only one to notice that nothing in the summary box for the book, except the title and author, is shown in the right place with the right label?

    Summary: 8/10
    Pages: Packt Publishing
    Publisher: 468
    ...

  • NoSQL hype indeed (Score:3, Informative)

    by Anonymous Coward on Wednesday February 09, 2011 @04:12PM (#35154712)

    NoSQL is a great solution for some problems, but it's hardly a panacea. Gavin Roy did a presentation at the last PgCon comparing Pg to a number of the NoSQL options. I thought it was interesting reading.

    http://www.pgcon.org/2010/schedule/events/219.en.html

    • Re:NoSQL hype indeed (Score:5, Informative)

      by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday February 09, 2011 @05:22PM (#35155380) Homepage

      Clickable link [pgcon.org]. The summary of that publicly available benchmark was that just turning off the normal data integrity features in PostgreSQL, specifically its aggressive use of the fsync system call, was enough to make PostgreSQL run about as fast as or faster than any of the popular NoSQL approaches. Some of the NoSQL alternatives had significantly lower data loading times, however. But as a whole, only MongoDB really had any significant performance gain; everything else was hard pressed to keep up with boring old Postgres when comparing fairly--MongoDB certainly doesn't use fsync [google.com], for example.

      There are some intermediate steps between "no integrity" and "full transactional integrity" available in PostgreSQL as well, so you can adjust how much you're willing to pay per commit on a per-transaction basis. Combine this with the fact that a key-value store [postgresql.org] with good performance is available, and you can get most of what NoSQL promises when that's appropriate, while still having full database features available when you need those too.
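
      A brief sketch of both ideas, with invented table names (synchronous_commit and the hstore module are standard PostgreSQL features; the CREATE EXTENSION syntax mentioned in a comment arrived in 9.1, while on 9.0 hstore is installed from its contrib SQL script):

```sql
-- Pay less per commit for just this transaction: the database stays
-- consistent after a crash, but the most recent commits may be lost.
CREATE TABLE clickstream (user_id integer, url text);
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO clickstream (user_id, url) VALUES (42, '/index.html');
COMMIT;

-- Key-value storage inside an ordinary table via the hstore module
-- (install with "CREATE EXTENSION hstore" on 9.1+, or the contrib script on 9.0).
CREATE TABLE profiles (id serial PRIMARY KEY, attrs hstore);
INSERT INTO profiles (attrs) VALUES ('color => "blue", size => "L"');
SELECT attrs -> 'color' FROM profiles WHERE id = 1;
```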

    • I think of this whenever [youtube.com] I hear the NoSQL crowd.

      People who bash SQL are usually MySQL users who have never used a real database, and they bash everything SQL because of one bad imitation. Then they go to one that is even less. lol

      PostgreSQL is good if you know how to use triggers and foreign keys. It can speed up your queries if you know what you are doing, and it can scale.

  • by GooberToo ( 74388 ) on Wednesday February 09, 2011 @04:32PM (#35154914)

    Time and time again, the question of Oracle-like hints for PostgreSQL pops up on the PostgreSQL mailing lists. I thought I'd share some links as I find the topic fairly interesting. Hopefully the DBAs out there will too.

    Why PostgreSQL Doesn't Have Query Hints [toolbox.com]
    Why PostgreSQL Already Has Query Hints [j-davis.com]
    Plan Tuner - Ripped from the above link [sai.msu.su]

    And in case you don't know, this is a great place to stay current with PostgreSQL development and technology. [postgresql.org]

    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday February 09, 2011 @04:46PM (#35155024) Homepage

      There's also Hinting at PostgreSQL [2ndquadrant.com], by the author of the book being reviewed here (me), covering what hinting mechanisms are available. And finally Why the F&#% Doesn't Postgres Have Hints?!?! [xzilla.net], suggesting why some feel that still isn't enough.

      • Ahh Greg, thanks! I quickly tried to find some of the other links but I couldn't remember the article names. Good job!

        I'd already read those posts. Thanks for rounding this out.

        P.S. I frequently enjoy your blog posts. I look forward to reading whatever the winds bring.

    • Thx for the links.

      I am in the camp that says no hints is the lesser of two evils; yet I am of the view that the SQL language standard is deeply flawed and this whole argument persists because of the flaws in SQL itself. Recognizing and correcting those flaws will close the argument.

      SQL implicitly requires a cost-based planner, for everything. CBP is great for reporting and analytics, but CBP on primary-key OLTP operations keeps me awake at night. Some things you just want the software to always use a pr

      • SQL databases are just too complicated for the average IT professional, let alone the average person. And their proliferation into even desktop software, such as various accounting packages, is a development that will keep our industry on its toes for some time to come.

        Well, there are DBAs with lots of stripes, and then there are DBAs by title. A decade ago I found many a professional DBA with whom I could rub shoulders. These days, I rarely find a DBA who isn't one in title only. Seemingly, more and more DBAs exist not by skill and deep knowledge but rather by their ability to install the corresponding RDBMS. With the advent of MySQL, seemingly many a web developer fancies themselves a DBA. Made worse, most of these guys don't even know the difference between a b-tree or h

      • SQL databases are just too complicated for the average IT professional, let alone the average person. And their proliferation into even desktop software, such as various accounting packages, is a development that will keep our industry on its toes for some time to come.

        The alternative is worse.

  • More free samples (Score:4, Informative)

    by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday February 09, 2011 @05:05PM (#35155230) Homepage

    Thanks to Joshua for the nice review here. There are actually a few more samples from the book than just the one chapter; here's a full list of them:

    In addition to this one and the customer reviews at Amazon, there have been two other reviews by notable PostgreSQL contributors: Buy this book, now [planetpostgresql.org] and PostgreSQL 9 High Performance Book Review [postgresonline.com].

    As alluded to in the intro, the book tries to cover PostgreSQL versions from 8.1 through 9.0, with a long listing of what has changed between each version to help you figure out what material does and doesn't apply. So most of the advice applies even if you're running an older version. There is also a companion volume of sorts, PostgreSQL 9 Admin Cookbook [packtpub.com], that was written at the same time and coordinated such that there's little overlap between the two titles. That one focuses more on common day-to-day administration challenges, less on the theory.

  • ..explain how to pronounce "PostgreSQL".

    Or at least it should.. I don't think I've met two people who say it the same way.

  • Last time I checked, key-value stores were not a new technology. We are talking about arrays here.

    Frankly, I'm beginning to suspect that the only reason the editors post database stories is that they enjoy the catfights between the SQL and NoSQL crowds, and it fills time on days when there are no "Apple [does something awful]" or "Microsoft [screws up something]" stories to fill the space between announcements of the latest minor revisions of Firefox and Ubuntu.

  • ...because it's one of the ONLY Postgres books ever. Not a lot of competition out there unless you want a foreign language reprint of the online documentation or a "programmers guide". Definitely little to nothing useful in the admin category.
    • "...because it's one of the ONLY Postgres books ever."

      Uh!? What's that "Practical PostgreSQL" (the mammoth) from O'Reilly that I've owned since 2002, then?

    • Ever seen the Postgres wiki? Some of the best documentation of any software (closed/open) anywhere. The documentation is probably one reason why you don't see a lot of books on the subject. Any time I've had a question, I've usually found the answer in the documentation.

      Unfortunately, the project I'm working on now is MySQL-based because that's what is readily available for proof of concept. But we're still in the rapid development/proof-of-concept stage. I also made damn sure to stay away from anything MySQL specifi

      • Oh that sig.

        Try: "The trouble with Capitalism is that eventually you run out of ... other people's money" - Lehman Brothers.

"Imitation is the sincerest form of television." -- The New Mighty Mouse

Working...