
Book Review: How Google Tests Software

MassDosage writes "Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage, when large numbers of bugs had to be arduously found and fixed by hand. The next step was nervously releasing the code without the safety net of a test bed and having no idea whether one had introduced regressions or new bugs. When I first came across unit testing I ardently embraced it, and I am a huge fan of testing in its various forms — from automated unit and smoke tests to performance and load tests to end-user and exploratory testing. So it was with much enthusiasm that I picked up How Google Tests Software, written by some of the big names in testing at Google. I was hoping it would give me fresh insights into testing software at "Google scale" as promised on the back cover, hopefully coupled with some innovative new techniques and tips. While partially succeeding on these fronts, the book as a whole didn't quite live up to my expectations and feels like a missed opportunity." Read below for the rest of MassDosage's review.
How Google Tests Software
author: James Whittaker, Jason Arbon, Jeff Carollo
pages: 281
publisher: Addison Wesley
rating: 6/10
reviewer: Mass Dosage
ISBN: 978-0321803023
summary: Testing at Google scale
The book is written in an informal, easy-to-read manner and organized so that readers can tackle the chapters in any order or focus only on the parts that interest them. One annoying layout choice is to highlight and repeat certain key sentences (as is often done in magazines), resulting in one reading the same thing twice, often only words away from the original sentence. Thankfully this is only the case in the first two chapters, but it highlights the variable quality of the book — possibly a result of the authors having worked separately on different chapters. How Google Tests Software isn't a book for people new to testing or software development. The authors assume you know a fair amount about the software development lifecycle, where testing fits into it and what different forms testing can take. The book is also largely technology neutral, referring to the specific testing tools Google uses only to illustrate concepts.

After a brief introduction to how testing has evolved over time at Google, the book devotes a chapter to each of the key testing-related roles in the company: the 'Software Engineer in Test' (SET), the 'Test Engineer' (TE) and the 'Test Engineering Manager' (TEM). SETs are coders who focus on writing tests, or frameworks and infrastructure to support other coders in their testing. The TE has a broader, less well-defined role and is tasked with looking at the bigger picture: the product in question, its impact on users, and how it fits into the broader software ecosystem. These two sections form the bulk of the book in terms of pages and interesting content. The TEM is essentially what the name says — someone who manages testers and testing and coordinates these activities at a higher level within Google.

The descriptions of each of these testing roles highlight the ways Google's thinking about testing has matured and show how some of its approaches differ from those of other companies. There are also explanations of the tools and processes that people in these roles use and follow, and this, for me, was the most interesting part of the book. Topics covered include specific bug tracking and test plan creation tools; risk analysis; test case management over time; and automated testing. Of particular note are the discussions on using bots to test web pages and detect differences between software releases, cutting down on the amount of human interaction required, as well as the opposite approach — using more humans via "crowd-sourced testing" among first internal users and then select groups of external users. The tools Google uses to simplify testers' jobs by recording the steps needed to reproduce bugs and streamlining bug reporting and management sound very useful. Many of the tools described in the book are open source (or soon to be open sourced) and are probably worth investigating if this is what you do for a living.
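The bot-based diffing approach is easy to picture in code. Here is a minimal sketch in Python of comparing the same page across two releases and only escalating to a human when the pages diverge suspiciously; the URLs, threshold, and function names are illustrative assumptions on my part, not the tooling the book actually describes.

    # Minimal sketch of bot-style release diffing. The endpoints and the
    # similarity threshold are hypothetical examples, not Google's tools.
    import difflib
    import urllib.request

    def fetch_lines(url):
        """Download a page and split it into lines for diffing."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace").splitlines()

    def release_diff(old_url, new_url, threshold=0.95):
        """Return True if the candidate page is close enough to the old one;
        otherwise print a unified diff for a human tester to review."""
        old, new = fetch_lines(old_url), fetch_lines(new_url)
        ratio = difflib.SequenceMatcher(None, "\n".join(old), "\n".join(new)).ratio()
        if ratio < threshold:
            diff = difflib.unified_diff(old, new, "previous-release", "candidate")
            print("\n".join(diff))
            return False
        return True

    if __name__ == "__main__":
        # Hypothetical endpoints serving the previous and candidate builds.
        ok = release_diff("https://prod.example.com/", "https://staging.example.com/")
        print("PASS" if ok else "DIFFS NEED HUMAN REVIEW")

A real bot would render pages in a browser and compare the DOM or screenshots rather than raw HTML, but the economics are the same either way: machines discard the pages that haven't changed so humans only look at genuine differences.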

In addition to the main body of text, most chapters include interviews with Google staff on various testing-related topics. Some of these are genuinely interesting and give the reader a good idea of how testing is tackled at Google on a practical level. However, some of the interviews fall into the navel-gazing camp (especially when the authors interview one of their own number) and feel more like filler material. I enjoyed the interviews with Google's hiring staff the most — their take on how they recruit people for testing roles, the types of questions they ask and the qualities they look for makes a lot of sense. The interview with the Gmail TEM was also good, illustrating how the concepts described in the book play out in practice. The interviews are clearly marked and can thus easily be skipped or skim-read, but one wonders what more useful material could have been included in their place.

The book wraps up with a chapter on how Google intends to improve its testing in the future. The most valuable point here is that testing as a separate function could "disappear" as it becomes part and parcel of the product being developed, like any other feature, and thus the responsibility of everyone working on the product rather than a separate activity. Another point made throughout the book is that the state of testing at Google is constantly in flux, which makes sense in such a fast-moving and innovative company but leaves one questioning how much of this book will still be relevant in a few years' time.

How Google Tests Software isn't a bad book, but neither is it a great one. It has some good parts and will be worth reading for those interested in "all things Google." For everyone else, I'd recommend skipping to the parts that grab your attention and skimming the rest.

You can purchase How Google Tests Software from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • by Anonymous Coward on Wednesday June 06, 2012 @04:58PM (#40237175)

    I thought that was Microsoft, except they charge you to boot.

  • by game kid ( 805301 ) on Wednesday June 06, 2012 @05:09PM (#40237273) Homepage

    After a brief introduction as to how testing has evolved over time at Google the book devotes a chapter to each of the key testing-related roles in the company: the 'Software Engineer in Test' (SET), the 'Test Engineer' (TE) and the 'Test Engineering Manager' (TEM). SETs are coders who focus on writing tests or frameworks and infrastructure to support other coders in their testing. The TE has a broader, less well-defined role and is tasked with looking at the bigger picture of the product in question and its impact on users and how it fits into the broader software ecosystem. These two sections form the bulk of the book in terms of pages and interesting content. The TEM is essentially what the name says — someone who manages testers and testing and coordinates these activities at a higher level within Google.

    I see...so they get rid of bugs by boring them to death with explanations of their bureaucratic structure, and threatening to add additional layers of management!

    Shit, if I was a bug, I'd leave the affected program voluntarily, just to avoid that TPS-report-fest in the making and give the lower employees time to breathe.

  • by a2wflc ( 705508 ) on Wednesday June 06, 2012 @05:17PM (#40237359)

The '80s really sucked when I worked on a Unix kernel. We had unit tests, integration tests, system tests, stress tests, performance tests, compatibility tests (AT&T, BSD, SunOS, DB, major apps, Orange Book/security tests, various CPUs & devices, with builds from both commercial and GNU compilers), and others.

    In addition to working on the kernel, I managed our testing. I had to manually start the tests each morning (after the automatic nightly builds that took 10 hours). Then I had to manually start emacs toward the end of the day and load the result files (which were fortunately analyzed in lisp) rather than looking at a desktop widget, then manually send an email to anyone who caused a problem.

And to make matters worse (as if it could get worse than 10-20 minutes of my time a day) I didn't have lots of people raving about my cool test setup (they all thought it was just a standard, trivial part of software development).

    And don't make me go off on the pain of alpha and beta tests. I had to email an ftp location to our major customers using !-notation.

"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_

Working...