Book Review: PostgreSQL 9.0 High Performance
eggyknap writes "Thanks in large part to the oft-hyped 'NoSQL' movement, database performance has received a great deal of press in the past few years. Organizations large and small have replaced their traditional relational database applications with new technologies like key-value stores, document databases, and other systems, with great fanfare and often great success. But replacing a database system with something radically different is a difficult undertaking, and these new database systems achieve their impressive results principally because they abandon some of the guarantees traditional database systems have always provided." Keep reading for the rest of eggyknap's review.
For those of us who need improved performance but don't have the luxury of redesigning our systems, and even more for those of us who still need traditional transactions, data integrity, and SQL, there is an option. Greg Smith's book, PostgreSQL 9.0 High Performance, takes the reader step-by-step through the process of building an efficient and responsive database using "the world's most advanced open source database".

PostgreSQL 9.0 High Performance | |
author | Gregory Smith |
pages | 468 |
publisher | Packt Publishing |
rating | 8/10 |
reviewer | eggyknap |
ISBN | 184951030X |
summary | takes the reader step-by-step through the process of building an efficient and responsive database using "the world's most advanced open source database" |
Greg Smith has been a major contributor to PostgreSQL for many years, with work focusing particularly on performance. In PostgreSQL 9.0 High Performance, Smith starts at the lowest level and works through a complete system, sharing his experience with systematic benchmarking and detailed performance improvement at each step. Despite the title, the material applies not only to PostgreSQL's still fairly new 9.0 release, but to previous releases as well. After introducing PostgreSQL, briefly discussing its history, strengths and weaknesses, and basic management, the book dives into a detailed discussion of hardware and benchmarking, and doesn't come out for 400 pages.
Databases vary, of course, but in general they depend on three main hardware factors: CPU, memory, and disks. Smith discusses each in turn, and in substantial detail, as demonstrated in a sample chapter available from the publisher, Packt Publishing. After describing the various features and important considerations of each aspect of a database server's hardware, the book introduces and demonstrates powerful and widely available tools for testing and benchmarking. This section in particular should apply easily not only to administrators of PostgreSQL databases, but to users of other databases, or indeed other applications as well, where CPU, memory, or disk performance is a critical factor. Did you know, for instance, the difference between "write-through" and "write-back" caching in disks, and why it matters to a database? Or did you know that disks perform differently depending on which part of the physical platter they're reading? How does memory performance compare across common CPUs through the evolution of their different architectures?
At every step, Smith encourages small changes and strict testing, to ensure optimum results from your performance efforts. His discussion includes methods for reducing and correcting variability, and sticks to easily obtained and interpreted tools, whose output is widely understood and for which support is readily available. The underlying philosophy has correctly been described as "measure, don't guess," a welcome relief in a world where system administrators often make changes based on a hunch or institutional mythology.
Database administrators often limit their tools to little more than building new indexes and rewriting queries, so it's surprising to note that those topics don't make their appearance until chapters 9 and 10 respectively, halfway through the book. That said, they receive the same detailed attention given earlier to database hardware, and later on to monitoring tools and replication. Smith thoroughly explains each of the operations that may appear in PostgreSQL's often overwhelming query plans, describes each index type and its variations, and goes deeply into how the query planner decides on the best way to execute a query.
Other chapters cover such topics as file systems, configuration options suitable for various scenarios, partitioning, and common pitfalls, each in depth. In fact, the whole book is extremely detailed. Although the tools introduced for benchmarking, monitoring, and the like are well described and their use nicely demonstrated, this is not a book a PostgreSQL beginner would use to get started. Smith's writing style is clear and blessedly free of errors and confusion, as is easily seen by his many posts on PostgreSQL mailing lists throughout the years, but it is deeply detailed, and the uninitiated could quickly get lost.
This is also a very long book, and although not built strictly as a reference manual, it's probably best treated as one, after an initial thorough reading. It covers each topic in such detail that each must be absorbed before further reading can be beneficial. Figures and other non-textual interruptions are, unfortunately, almost nowhere to be found, so despite the author's clear and easy style, it can be a tiring read.
It is, however, one of the clearest, most thorough, and best presented descriptions of the full depth of PostgreSQL currently available, and doubtless has something to teach any frequent user of a PostgreSQL database. Those planning a new database will welcome the straightforward and comprehensive presentation of hardware-level details that are difficult or impossible to change after a system goes into production; administrators will benefit from its discussion of configuration options and applicable tools; and users and developers will embrace its comprehensive description of query planning and optimization. PostgreSQL 9.0 High Performance will be a valuable tool for all PostgreSQL users interested in getting the most from their database.
You can purchase PostgreSQL 9.0 High Performance from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Good impression (Score:5, Informative)
I've not read the book but have read comments from several developers who contribute to PostgreSQL. All comments I've read on this book give it a strong thumbs up.
Take it or leave it, but based on feedback from the people who know the internals of PostgreSQL, this book is worth owning if PostgreSQL is important to you in the least.
Re:Good impression (Score:5, Informative)
I have heard the same from a postgres developer.
I realize this is off-topic... (Score:3, Interesting)
I realize this is off-topic, but I have e-mailed the admins and gotten no response so I figured a fellow slashdot member might be able to help me.
Ever since the new design went live, slashdot seems to ignore my preferences. For example, I have "book reviews" filtered so I do not see the entire article (since I am not interested) and yet they still pop up on the front page with full text. It's a minor annoyance, but I was wondering if this is just a problem in the new design or something I'm doing wrong.
Similarly, the exclusion by keyword does not seem to be working beyond a single word (in my case "idle"). If I put multiple keywords (like "bookreview") it makes my entire front page empty. I have tried with spaces, commas, semi-colons, etc.
Again, I realize this is grossly off-topic, but I am hoping a fellow user could help me make my slashdot experience better (or back to how I enjoyed it).
Thanks!
Re:I realize this is off-topic... (Score:5, Informative)
Anyway Brolstoy, my point is that your problem is just a tiny part of a huge torrent of problems that they keep introducing, and it isn't going to be fixed. Ever. So just get used to filtering book reviews out manually.
Re: (Score:1)
This is another ignorant question, but how do you filter them out manually? I thought setting them to "brief best only" in the Sections option was doing just that?
Or did you simply mean filtering them manually in my head? If you mean that then well..that sucks :(.
Deep down I know you're right and it won't be fixed, but here's hoping. I've read /. for over 10 years and it's upsetting that the site just gets more "broken" or rather, doesn't allow users to even manually edit it to their liking. The same thi
Re: (Score:2, Flamebait)
Or did you simply mean filtering them manually in my head? If you mean that then well..that sucks :(.
Yup, that's about it. You could probably write some userJS, but that seems a bit impractical, and they will probably break it when Slashdot 4.0 hits. They seem to think that the slashcode has no bugs, just that it lacks features.
THIS IS WHAT HAPPENS WHEN YOU CODE IN PERL.
Re: (Score:2)
And my main gripe..on the meta-moderate, I see the comments, but there is NO + or - I can use to rate the comments, much less mark insightful, overrated...etc.
Impossible to meta-mod...at least on Firefox.
Re: (Score:1)
Aye. And there are small formatting/CSS problems all over.
Re: (Score:2)
Like this? [slashdot.org]
Cut out the middleman (Score:2, Funny)
While relational and non-relational databases have been on the rise over the last thirty years, largely owing to their ease-of-use, portability, and documentation, there is something to be said for going directly to the source.
The fastest possible way to store and retrieve data of all shapes and sizes is B-trees. These are commonly used in video games but can also sometimes be found in business applications. The fastest possible code is assembler, hand-optimized by those who are good in assembler; not coi
Re: (Score:2)
hash tables are faster than B-trees for single lookups.
This depends on tons of factors. B-trees are almost invariably O(log n) unless they're totally braindead. Hash tables can be O(1) if they're perfect, but with a lot of collisions they can be O(n). And, of course, that's just the big-O; if the hash function is particularly slow then a single lookup could be a lot slower, even with better big-O performance.
And "for single lookups" is a big caveat; hash tables usually have terrible locality, which is so
Re: (Score:2)
Only assuming your hash function returns a perfect distribution and your hash table is the exact size of your data in all cases, which in a database table would be rarely the case. Add to that that most queries aren't single lookups, and require sorting, and your single instance O(1) hash table becomes about as useful as a lead balloon.
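The tradeoff this thread is circling can be sketched in a few lines of Python — a toy analogy using a dict and a sorted list, not how any real database implements its indexes:

```python
import bisect

# Hash-style index: a dict gives O(1) average-case point lookups,
# assuming a decent hash function and load factor.
rows = {i: f"row-{i}" for i in range(10_000)}
print(rows[4242])  # a single lookup: one hash probe, no ordering needed

# B-tree-style index: binary search over sorted keys is O(log n) per
# lookup, but the ordering also supports range scans and sorted output,
# which a hash table cannot provide without scanning everything.
keys = sorted(rows)

def range_scan(lo, hi):
    """Return all keys in [lo, hi] using two binary searches."""
    left = bisect.bisect_left(keys, lo)
    right = bisect.bisect_right(keys, hi)
    return keys[left:right]

print(range_scan(100, 104))
```

The "for single lookups" caveat shows up directly: the dict wins on a point query, but only the ordered structure can answer the range query, which is why B-trees remain the general-purpose default.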
Re: (Score:2)
Given that many databases implement hash tables specifically because they are faster than b-trees in many use cases.
Different algorithms have ideal applications. B-trees are a good, general purpose algorithm. When used properly, hash tables can be considerably faster than the fastest B-tree. Then again, used improperly, hash tables can be extremely slow; as you rightly pointed out.
Issues such as ram, cpu cache, and locality, and quality of the hash generator (perfect hash or as close as is possible), frequ
Re: (Score:2)
Database programmers work in queries per millisecond. Their product has a hell of a lot of know-how built into it. I would rather trust that than homebrew.
If you are going to create a mission-critical system in assembler, your supply of assembler programmers becomes mission critical. Who will be able to fix bugs and add features later? People who breathe assembler have better things to do than yet another .com.
The product of performance and maintainability is a constant. Choose your sweet spot...
Re:Cut out the middleman (Score:5, Insightful)
Point 1:
Databases already use B-trees for their indexes.
Point 2:
Coding business logic for performance instead of verifiability, testability or stability is beyond retarded.
Re: (Score:3)
In business development, clear understandable code etc (your point 2) is far more important... hardware gets faster, more obtus
Re: (Score:2)
I have my doubts. I don't think anyone could write a multi-user ACID storage engine in assembly and have it be faster than the one a similarly competent developer writes in C, C++, or even Java. That is not to say there are no gains to be had profiling and then optimizing some of the compiler's assembler output here and there, but to suggest that a human developer can produce a non-trivial program working only in asm which is faster than the output of a modern compiler is questionable.
Chips these days are just to c
Re:Cut out the middleman (Score:5, Insightful)
Um...
I considered modding, but "wrong" is not a mod.
B-Trees are certainly not the fastest way to store and retrieve data. You may want to investigate those strange things called "hashes". Fortunately for you, most database systems know how.
As well, "assembler" isn't the fastest coding system. It is so very slow and tedious to use that algorithm choice tends to get overlooked. Back to your "B-Tree" again -- have you tried coding a B-Tree in assembler?
To give a hard example -- back in the days of OS/2, the filesystem was coded in assembler. I imagine that it was considered the right way. After being recoded in C, the thing ran faster -- mostly because the developers could concentrate on better ways of doing things, rather than the drudge work of getting the assembler correct.
Another example -- replacing a "C" coded inner loop in a console game with a scheme interpreted version, that ended up being faster because it could fit into cache.
Sometimes upstream is simply better, even for performance.
Database systems are in the same category. It is rather difficult to optimize specifically, and, by the time you did this, typically the data store would be "out of touch" with the actual business requirements. So, most of us use a good database, with a query optimizer, and let it take care of that part of the problem. Yes, it may need some tweaking, but those tweaks will tend to be independent of the business logic. This can give rise to specialists who can be called on if needed.
Using your model, the "assembler tuned B-Tree" custom coded, there would be no paladins available to assist in the tuning. Indeed, moving to another architecture would be difficult (or impossible). It would be ridiculously expensive, and would likely never be upgraded. Hey, CICS and IMS are still in use, aren't they? And THEY are more portable than your suggestion. (CICS and IMS just turned 40).
PS. There are CICS and IMS paladins around. There wouldn't be any for your "solution". Also, IMS can use direct storage links, and, because data in IMS is hierarchical, even IMS can outperform "B-Tree".
Re: (Score:3)
Please don't feed the trolls.
Please moderate parent post into oblivion.
Re: (Score:2)
Have you seen compilers lately? In most scenarios, the compiler can do better optimization than a human. In most data situations, B-Trees are good, but in certain applications, there are other structures that yield better results, particularly if your data set is smaller than your RAM size. Both DBAs and software engineers know that an inappropriate data structure or join order can produce order of magnitude differences in performance.
Re: (Score:2)
The fastest possible way to store and retrieve data of all shapes and sizes is B-trees
You're kidding, but you haven't heard of MUMPS [wikipedia.org]. It's time-shared B-trees, and is a lot more widespread than you'd believe.
(Though if you're in marketing, you'd call it "The World's Fastest Object Database" or "post-relational" rather than "lots of arrays.")
Re: (Score:1)
The fastest possible way to store and retrieve data of all shapes and sizes is B-trees.
So, you should have a look at http://use-the-index-luke.com/ [use-the-index-luke.com] to learn how SQL databases use B-Trees.
I don't see anything about the PostGIS extension.. (Score:3)
Re: (Score:3, Informative)
You need this book: PostGIS in Action [manning.com]
Re: (Score:1)
I wasn't going to tackle GIS knowing that PostGIS In Action was pending. There was enough material to keep me busy way past the expected publication length without touching it. Once Regina and Leo's book is out there, I do plan to revisit what topics in this area haven't been fully covered yet.
Summary box for the book completely mixed-up (Score:3)
Summary: 8/10
...
Pages: Packt Publishing
Publisher: 468
Re: (Score:2)
Well, Slashdot does store their information in a MySQL database...
NoSQL hype indeed (Score:3, Informative)
NoSQL is a great solution for some problems, but it's hardly a panacea. Gavin Roy did a presentation at the last PgCon comparing Pg to a number of the NoSQL options. I thought it was interesting reading.
http://www.pgcon.org/2010/schedule/events/219.en.html
Re:NoSQL hype indeed (Score:5, Informative)
Clickable link [pgcon.org]. The summary of that publicly available benchmark was that just turning off the normal data integrity features in PostgreSQL, specifically its aggressive use of the fsync system call, was enough to make PostgreSQL run about as fast as or faster than any of the popular NoSQL approaches. Some of the NoSQL alternatives had significantly lower data loading times, however. But as a whole, only MongoDB really had any significant performance gain; everything else was hard pressed to keep up with boring old Postgres when comparing fairly--MongoDB certainly doesn't use fsync [google.com], for example.
There are some intermediate steps between "no integrity" and "full transactional integrity" available in PostgreSQL as well, so you can adjust how much you're willing to pay per commit on a per transaction basis. Combine this with the fact that a key-value store [postgresql.org] with good performance is available, and you can get most of what NoSQL promises with it when that's appropriate, while still having full database features available when you need those too.
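A rough feel for why fsync dominates the cost of a durable commit can be had with a small Python timing sketch. This simulates appending to a write-ahead log; the file name and iteration count are arbitrary, and it is of course not PostgreSQL code:

```python
import os
import tempfile
import time

def commit(path, record, durable):
    """Append one record; optionally force it to stable storage,
    as a durable database COMMIT must."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, record)
        if durable:
            os.fsync(fd)  # block until the storage acknowledges the write
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "wal")
for durable in (True, False):
    start = time.perf_counter()
    for _ in range(200):
        commit(path, b"record\n", durable)
    elapsed = time.perf_counter() - start
    print(f"durable={durable}: 200 commits in {elapsed:.3f}s")
```

On ordinary hardware the durable run is typically much slower, which is exactly the gap that "turn off fsync" exploits -- at the price of possibly losing acknowledged commits in a crash. The per-transaction settings described above let you choose that tradeoff commit by commit instead of globally.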
Re: (Score:1)
I think of this whenever [youtube.com] I hear nosql crowd.
People who bash SQL usually are mysql users who have never used a real database and they bash all that is sql because of one bad imitation. Then they go to one that is even less. lol
PostgreSQL is good if you know how to use triggers and foreign keys. It can speed up your queries if you know what you are doing, and it can scale.
Not a review comment, but interesting PostgreSQL (Score:4, Interesting)
Time and time again, the question of Oracle-like hints for PostgreSQL pops up on the PostgreSQL mailing lists. I thought I'd share some links, as I find the topic fairly interesting. Hopefully the DBAs out there will too.
Why PostgreSQL Doesn't Have Query Hints [toolbox.com]
Why PostgreSQL Already Has Query Hints [j-davis.com]
Plan Tuner - Ripped from the above link [sai.msu.su]
And in case you don't know, this is a great place to stay current with PostgreSQL development and technology. [postgresql.org]
Re:Not a review comment, but interesting PostgreSQ (Score:5, Informative)
There's also Hinting at PostgreSQL [2ndquadrant.com], by the author of the book being reviewed here (me), covering what hinting mechanisms are available. And finally Why the F&#% Doesn't Postgres Have Hints?!?! [xzilla.net], suggesting why some feel that still isn't enough.
Re: (Score:1)
Ahh Greg thanks! I quickly tried to find some of the other links but I couldn't remember the article name. Good job!
I'd already read those posts. Thanks for rounding this out.
P.S. I frequently enjoy your blog posts. I look forward to reading whatever the winds bring.
Re: (Score:2)
Thx for the links.
I am in the camp that says no hints is the lesser of two evils; yet I am of the view that the SQL language standard is deeply flawed and this whole argument persists because of the flaws in SQL itself. Recognizing and correcting those flaws will close the argument.
SQL implicitly requires a cost-based planner, for everything. CBP is great for reporting and analytics, but CBP on primary-key OLTP operations keeps me awake at night. Some things you just want the software to always use a pr
Re: (Score:2)
SQL databases are just too complicated for the average IT professional, let alone the average person. And their proliferation into even desktop software, such as various accounting packages, is a development that will keep our industry on its toes for some time to come.
Well, there are DBAs with lots of stripes and then there are DBAs by title. A decade ago I found many a professional DBA with whom I could rub shoulders. These days, I rarely find a DBA who is not one in title only. Seemingly, more and more DBAs exist not by skill and deep knowledge but rather by one's ability to install the corresponding RDBMS. With the advent of MySQL, seemingly many a web developer fancies themselves a DBA. Made worse, most of these guys don't even know the difference between a b-tree or h
Re: (Score:1)
SQL databases are just too complicated for the average IT professional, let alone the average person. And their proliferation into even desktop software, such as various accounting packages, is a development that will keep our industry on its toes for some time to come.
The alternative is worse.
More free samples (Score:4, Informative)
Thanks to Joshua for the nice review here. There are actually a few more samples from the book than just the one chapter; here's a full list of them:
In addition to this one and the customer reviews at Amazon, there have been two other reviews by notable PostgreSQL contributors: Buy this book, now [planetpostgresql.org] and PostgreSQL 9 High Performance Book Review [postgresonline.com].
As alluded to in the intro, the book tries to cover PostgreSQL versions from 8.1 through 9.0, with a long listing of what has changed between each version to help you figure out what material does and doesn't apply. So most of the advice applies even if you're running an older version. There is also a companion volume of sorts available, PostgreSQL 9 Admin Cookbook [packtpub.com], written at the same time and coordinated so that there's little overlap between the two titles. That one focuses more on common day-to-day administration challenges, less on the theory.
The first four chapters.. (Score:2)
..explain how to pronounce "PostgreSQL".
Or at least it should.. I don't think I've met two people who say it the same way.
Re:The first four chapters.. (Score:4, Informative)
Official pronunciation recording [postgresql.org]. Postgres-Q-L. "Postgres" is also fine.
Re: (Score:2)
I don't think I've met two people who say it the same way.
Classic. The first two replies each offer two different ways of saying it, with all four being different from each other. "Postgres" is my favorite.
New technologies? (Score:2)
Last time I checked, key-value stores were not a new technology. We are talking about arrays here.
Frankly, I'm beginning to suspect that the only reason the editors post database stories is that they enjoy the catfights between the SQL and NoSQL crowds, and it fills time on days when there are no "Apple [does something awful]" or "Microsoft [screws up something]" stories to fill the space between announcements of the latest minor revisions of Firefox and Ubuntu.
The BEST Postgres book ever... (Score:1)
Re: (Score:2)
"...because it's one of the ONLY Postgres books ever."
Uh!? What's that "Practical PostgreSQL" (the mammoth) from O'Reilly that I've owned since 2002, then?
Re: (Score:2)
Ever seen the postgres wiki? Some of the best documentation of any software (closed or open) anywhere. The documentation is probably one reason why you don't see a lot of books on the subject. Anytime I've had a question, I've usually found the answer in the documentation.
Unfortunately, the project I'm working on now is MySQL based because that's what is readily available for proof of concept. But we're still in the rapid development/proof of concept. I also made damn sure to stay away from anything MySQL specifi
Re: (Score:1)
Oh that sig.
Try: "The trouble with Capitalism is that eventually you run out of ... other people's money" - Lehman Brothers.