
Book Review: How Google Tests Software

MassDosage writes "Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage and the large number of bugs that had to be arduously found and fixed manually. The next step was nervously releasing the code without the safety net of a test bed and having no idea if one had introduced regressions or new bugs. When I first came across unit testing I ardently embraced it and am a huge fan of testing of various forms — from automated to smoke tests to performance and load tests to end user and exploratory testing. So it was with much enthusiasm that I picked up How Google Tests Software — written by some of the big names in testing at Google. I was hoping it would give me fresh insights into testing software at "Google Scale" as promised on the back cover, hopefully coupled with some innovative new techniques and tips. While partially succeeding on these fronts, the book as a whole didn't quite live up to my expectations and feels like a missed opportunity." Read below for the rest of MassDosage's review.
How Google Tests Software
author: James Whittaker, Jason Arbon, Jeff Carollo
pages: 281
publisher: Addison Wesley
rating: 6/10
reviewer: Mass Dosage
ISBN: 978-0321803023
summary: Testing at Google scale
The book is written in an informal, easy-to-read manner and organized in such a way that readers can read chapters in any order or just focus on the parts that interest them. One annoying layout choice is to highlight and repeat certain key sentences (as is often done in magazines), which results in reading the same thing twice, often only words away from the original sentence. Thankfully this is only the case in the first two chapters, but it highlights the variable quality of this book — possibly due to the authors having worked separately on different chapters. How Google Tests Software isn't a book for people new to testing or software development. The authors assume you know a fair amount about the software development lifecycle, where testing fits into it and what different forms testing can take. The book is also largely technology neutral, using specific examples of the testing software Google uses only to illustrate concepts.

After a brief introduction as to how testing has evolved over time at Google the book devotes a chapter to each of the key testing-related roles in the company: the 'Software Engineer in Test' (SET), the 'Test Engineer' (TE) and the 'Test Engineering Manager' (TEM). SETs are coders who focus on writing tests or frameworks and infrastructure to support other coders in their testing. The TE has a broader, less well-defined role and is tasked with looking at the bigger picture of the product in question and its impact on users and how it fits into the broader software ecosystem. These two sections form the bulk of the book in terms of pages and interesting content. The TEM is essentially what the name says — someone who manages testers and testing and coordinates these activities at a higher level within Google.

The descriptions of each of these testing roles highlight the ways Google's thinking about testing has matured and also show how some of these approaches differ from those of other companies. There are also explanations of the tools and processes that people in these roles use and follow, and this for me was the most interesting part of the book. Topics covered include: specific bug tracking and test plan creation tools; risk analysis; test case management over time; and automated testing. Particularly of note are discussions on using bots to test web pages and detect differences between software releases, cutting down on the amount of human interaction required, as well as the opposite approach: using more humans via "crowd sourced testing", first among internal users and then among select groups of external ones. The tools that Google uses to simplify testers' jobs by recording steps to reproduce bugs and simplifying bug reporting and management sound very useful. Many of the tools described in the book are open source (or soon to be opened) and are probably worth following up on and investigating if this is what you do for a living.
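
The bot idea in particular is easy to picture with a toy example. The sketch below is my own illustration, not code from the book or from Google's tooling: it fetches the same pages from two releases of a site, strips volatile noise, and flags the pages that changed enough to deserve a human look. The URLs, page list and 2% threshold are all invented for the example.

    # Minimal sketch of a release-diffing bot (illustrative only, not Google's tooling).
    # Assumptions: OLD_RELEASE/NEW_RELEASE URLs, the PAGES list and DIFF_THRESHOLD
    # are made up for this example.
    import difflib
    import re
    import urllib.request

    OLD_RELEASE = "https://old.example.com"   # assumed: last known-good release
    NEW_RELEASE = "https://new.example.com"   # assumed: candidate release
    PAGES = ["/", "/search", "/settings"]     # assumed: pages the bot checks
    DIFF_THRESHOLD = 0.02                      # flag pages that changed by more than 2%

    def fetch(base, path):
        """Download a page and strip volatile noise before diffing."""
        with urllib.request.urlopen(base + path, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        html = re.sub(r"\s+", " ", html)                   # collapse whitespace
        html = re.sub(r'nonce="[^"]*"', 'nonce=""', html)  # drop per-request tokens
        return html

    def page_diff_ratio(old_html, new_html):
        """Return the fraction of the page that changed between the two releases."""
        matcher = difflib.SequenceMatcher(None, old_html, new_html)
        return 1.0 - matcher.ratio()

    def main():
        for path in PAGES:
            old_html = fetch(OLD_RELEASE, path)
            new_html = fetch(NEW_RELEASE, path)
            ratio = page_diff_ratio(old_html, new_html)
            status = "DIFF" if ratio > DIFF_THRESHOLD else "ok"
            print(f"{status:4} {path:12} changed={ratio:.1%}")

    if __name__ == "__main__":
        main()

The point of a bot like this is not to judge whether a change is a bug, but to narrow down where humans (or crowd-sourced testers) should spend their attention.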

In addition to the main body of text most chapters also include interviews with Google staff on various testing-related topics. Some of these are genuinely interesting and give the reader a good idea of how testing is tackled at Google on a practical level. However some of the interviews fall into the "navel gazing" camp (especially when the authors interview one of themselves) and feel more like filler material. I enjoyed the interviews with Google hiring staff the most — their take on how they recruit people for testing roles and the types of questions they ask and qualities they look for make a lot of sense. The interview with the Gmail TEM was also good and illustrated how the concepts described in the book are actually put into practice. The interviews are clearly marked and can thus be easily skipped or skim-read, but one wonders what more useful text could have been included in their place.

The book wraps up with a chapter that attempts to describe how Google intends to improve its testing in the future. The most valuable point here is how testing as a separate function could "disappear" as it becomes part and parcel of the product being developed, like any other feature, and thus the responsibility of everyone working on the product rather than a separate activity. Another key point made throughout the book is that the state of testing at Google is constantly in flux, which makes sense in such a fast-moving and innovative company but leaves one questioning how much of this book will still be relevant in a few years' time.

How Google Tests Software isn't a bad book but neither is it a great one. It has some good parts and will be worth reading for those who are interested in "all things Google." For everyone else I'd recommend skimming through to the parts that grab your attention most and glossing over the rest.

You can purchase How Google Tests Software from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Comments:
  • crowdsourcing? (Score:5, Insightful)

    by peter303 ( 12292 ) on Wednesday June 06, 2012 @04:46PM (#40237051)
    I thought they call it "beta", release it and let the users find the bugs.
    • Re: (Score:3, Insightful)

      by msobkow ( 48369 )

They may be beta-happy, but Google's initial release of a product usually gives me less grief than most companies' 2.0s. (Or even 2.1s.)

      Plus the "beta" thing really did kind of die off after the multi-year gMail beta. Personally I thought they left the "beta" tag on more for the sake of being able to flip the bird at anyone running an obscure and broken browser than because it wasn't properly tested.

  • by rgbrenner ( 317308 ) on Wednesday June 06, 2012 @05:03PM (#40237213)

    Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage

    Umm.. wtf are you talking about? Extreme Programming is 13 years old, and it wasn't first. Even the waterfall model has testing, and it's 40 years old:

    1. Requirements specification
    2. Design
    3. Construction (AKA implementation or coding)
    4. Integration
    5. Testing and debugging (AKA Validation)
    6. Installation
    7. Maintenance

    Just because you didn't know how to test your software back then doesn't mean testing didn't exist.

    • I don't think he's suggesting testing didn't exist; just that it wasn't so formalized as it is now with QC teams that get hold of software before it leaves, people making their careers in QC, books written and courses given on the subject. Not that long ago, the dev'r was generally wholly responsible for testing the software vs now where the dev'r is expected to test, but the QC is there to really put software through its paces.
      • I don't think he's suggesting testing didn't exist; just that it wasn't so formalized as it is now with QC teams that get hold of software before it leaves, people making their careers in QC, books written and courses given on the subject.

        Software QA/QC wasn't even remotely new 15 years ago.

        Not that long ago, the dev'r was generally wholly responsible for testing the software vs now where the dev'r is expected to test, but the QC is there to really put software through its paces.

        The distinction between unit

well...late 80's...so 20-25 years ago? That's "not that long ago" in my book. Anyway, I never meant to imply that these things (careers, books, courses) weren't done before 2010, simply that they weren't as ubiquitous as they are now in, say, the early 90's (which was actually the time frame I was thinking of).

          WARNING - Anecdotal, personal experience follows! Not to be taken as scientific fact! - WARNING
          Maybe I simply wasn't as aware of it as I am now, but I don't remember seeing many job postings even in the mid-/

      • Without putting words into his mouth, I think he's referring to "before testing was all the rage".

        I agree. wtf? Testing wasn't "the rage" if you were a hack. If you were writing software for any use, I'm sorry, testing was *always* "the rage".
      • by Darinbob ( 1142669 ) on Wednesday June 06, 2012 @06:01PM (#40237799)

        There has been formalized testing almost from the start. Just because some areas of software appear to have been populated by amateurs does not mean it was always this way. And by formalized it means that the developer has zero ability to get software shipped before a testing team validates it. I have been at places where software testing was lax, but it was also a very shoddy company in many ways. Other than that the only times there was little testing in my experience was when the software was for in-house use only or for research, etc. Except for one exception, anything shipped to a customer had a testing team and that testing team had the authority to halt a release.

Do you think NASA didn't do any formalized, detailed and exhaustive software testing before they put a man on the moon? Which I presume is before your "not that long ago" period.

    • Having developed software for nearly fifteen years, I remember the dark days before testing was all the rage

      Umm.. wtf are you talking about? Extreme Programming is 13 years old, and it wasn't first. Even the waterfall model has testing, and it's 40 years old:

      1. Requirements specification
      2. Design
      3. Construction (AKA implementation or coding)
      4. Integration
      5. Testing and debugging (AKA Validation)
      6. Installation
      7. Maintenance

      Just because you didn't know how to test your software back then doesn't mean testing didn't exist.

      He didn't say testing didn't exist. It definitely wasn't as prevalent or mandatory as now. He's right. Testing often wouldn't be done until the end of the cycle, and was frequently shortened or skipped entirely to meet deadlines. That doesn't seem to happen any more.

      • by DragonWriter ( 970822 ) on Wednesday June 06, 2012 @05:22PM (#40237409)

        Testing often wouldn't be done until the end of the cycle, and was frequently shortened or skipped entirely to meet deadlines. That doesn't seem to happen any more.

        This is what happens when you mistakenly generalize from what happens now where you work to what happens now generally.

        Let me assure you, there are all too many (which, come to think of it, is satisfied by "greater than zero") places where testing isn't done until the end of the cycle and is known to be shortened or skipped entirely to meet deadlines.

        Testing has for decades been recognized as important in the literature on software development and has for decades been practiced consistently at the places with the better development cultures.

        Also, for decades, it's been skimped on and/or skipped entirely in environments with less organizational maturity or institutional understanding of software quality and long-term costs.

      • This depends entirely on the company and product. There was never a universal standard. There is not even one today. Every company is unique. Granted more and more companies are following the latest fads of course, but fads will change. And yes, _today_ there is software being done without certain types of testing (I don't do unit testing, it doesn't work well with certain operations and it bulks up the code). Every industry has a certain level of allowance for quality; in the medical industry there i

    • by LSD-OBS ( 183415 )

      Thank you. I just spent almost 10 minutes in acute spasmodic facepalm-mode at that comment.

  • by game kid ( 805301 ) on Wednesday June 06, 2012 @05:09PM (#40237273) Homepage

    After a brief introduction as to how testing has evolved over time at Google the book devotes a chapter to each of the key testing-related roles in the company: the 'Software Engineer in Test' (SET), the 'Test Engineer' (TE) and the 'Test Engineering Manager' (TEM). SETs are coders who focus on writing tests or frameworks and infrastructure to support other coders in their testing. The TE has a broader, less well-defined role and is tasked with looking at the bigger picture of the product in question and its impact on users and how it fits into the broader software ecosystem. These two sections form the bulk of the book in terms of pages and interesting content. The TEM is essentially what the name says — someone who manages testers and testing and coordinates these activities at a higher level within Google.

    I see...so they get rid of bugs by boring them to death with explanations of their bureaucratic structure, and threatening to add additional layers of management!

    Shit, if I was a bug, I'd leave the affected program voluntarily, just to avoid that TPS-report-fest in the making and give the lower employees time to breathe.

    • Shit, if I was a bug, I'd leave the affected program voluntarily, just to avoid that TPS-report-fest in the making and give the lower employees time to breathe.

      See? The standard marketing spiel that intense process-driven testing drives out defects is absolutely supportable.

    • I see...so they get rid of bugs by boring them to death with explanations of their bureaucratic structure, and threatening to add additional layers of management!

      Interestingly, the excerpt the above refers to contains description of exactly one layer of management, and two different engineering roles -- one of which is a software developer focussed on developing tests and testing tools, and the other of which is -- loosely -- a software architect focussed on testing.

      I hardly see having these three defined t

    • People see a successful company and think "they must do something right, we must do this also!" So just sticking the name of that company on a book increases the market value even if it's useless. It's almost like this cult of personality but with corporations. Google's Little Red Book. Now sometimes a book may be good but it is not necessarily due to the company (ie, Code Complete is from Microsoft Press but has nothing to do with Microsoft or their badly written software).

      As for Google, they have a so

  • Is there anything on the (automated) verification techniques used at Google? Microsoft has one of the best verification research labs around and they develop a handful of great tools that are freely available. Is there a Google analogue?
  • by a2wflc ( 705508 ) on Wednesday June 06, 2012 @05:17PM (#40237359)

    The 80's really sucked when I worked on a Unix kernel. We had unit tests, integration tests, system tests, stress tests, performance tests, compatibility tests (AT&T, BSD, SunOS, DB, major apps, Orange Book/security tests, various CPUs & devices, with builds from both commercial and GNU compilers), and others.

    In addition to working on the kernel, I managed our testing. I had to manually start the tests each morning (after the automatic nightly builds that took 10 hours). Then I had to manually start emacs toward the end of the day and load the result files (which were fortunately analyzed in lisp) rather than looking at a desktop widget, then manually send an email to anyone who caused a problem.

    And to make matters worse (as if it can get worse than 10-20 minutes of my time a day), I didn't have lots of people raving about my cool test setup (they all thought it was just a standard and trivial part of software development).

    And don't make me go off on the pain of alpha and beta tests. I had to email an ftp location to our major customers using !-notation.

  • release it for public use, put "Beta" in the logo, monitor the complaints.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...