The Art of Unit Testing

FrazzledDad writes "'We let the tests we wrote do more harm than good.' That line from the preface of Roy Osherove's The Art of Unit Testing with Examples in .NET (AOUT hereafter) wraps up a frank description of a failed project Osherove was part of. The goal of AOUT is teaching you great approaches to unit testing so you won't run into similar failures on your own projects." Keep reading for the rest of FrazzledDad's review.
The Art of Unit Testing with Examples in .NET
author: Roy Osherove
pages: 296
publisher: Manning
rating: 9/10
reviewer: FrazzledDad
ISBN: 1933988274
summary: Soup-to-nuts unit testing with examples in .NET
AOUT is a well-written, concise book walking readers through many different aspects of unit testing. Osherove's book has something for all readers, regardless of their experience with unit testing. While the book's primary focus is .NET, the concepts apply to many different platforms, and Osherove covers a few Java tools as well.

Osherove has a long history of advocating testing in the .NET space. He's blogged about it extensively, speaks at many international conferences, and leads a large number of Agile and testing classes. He's also the chief architect at TypeMock, maker of an isolation framework you may use in your own testing efforts – and he's very up front about that involvement when discussing isolation techniques in the book. He does a very good job of not pushing his specific tool and covers several others as well, leaving me feeling there wasn't any bias toward his product whatsoever.

AOUT does a number of different things really, really well. First off, it focuses solely on unit testing. Early on Osherove lays out the differences between unit and integration tests, but he quickly moves past that and stays with unit tests for the rest of the book. Secondly, Osherove avoids pushing any particular methodology (Test Driven Development, Behavior Driven Development, etc.) and stays focused on the critical concepts around unit testing.

I particularly appreciated that latter point. While I'm a proponent of *DD, it was nice to read through the book without having to filter out any particular dogma. I think that mindset makes this book much more approachable and useful to a broader audience – dive into unit testing and learn the fundamentals before moving on to the next step.

I also enjoyed that Osherove carries one example project through the entire book. He takes readers on a journey as he builds a log analyzer and uses that application to drive discussion of specific testing techniques. There are other examples used in the book, but they're all specific to certain situations; the bulk of his discussion remains on the one project, which helps keep readers focused on the concepts Osherove's laying out.

The book's first two chapters are the obligatory introduction to unit testing frameworks and concepts. Osherove quickly moves through discussions of "good" unit tests, offers up a few paragraphs on TDD, and lays out a few points about unit test frameworks in general. After that he's straight into his "Core Techniques" section, where he discusses stubs, mocks, and isolation frameworks. The third part, "The Test Code," covers hierarchies and pillars of good testing. The book finishes with "Design and Process," which hits on getting testing solidly integrated into your organization, plus has a great section on dealing with testing legacy systems. There are a couple of handy appendices covering design issues and tooling.

Osherove uses his "Core Techniques" section to clearly lay out the differences between stubs and mocks, plus he covers using isolation frameworks such as Rhino.Mocks or TypeMock to assist with implementing these concepts. I enjoyed reading this section because too many folks confuse the concepts of stubbing and mocking. They're not interchangeable, and Osherove does a great job emphasizing where you should use stubs and mocks to deal with dependencies and interactions, respectively.
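
The stub/mock distinction the review describes can be sketched in a few lines of Python (the book's examples are in .NET; the LogAnalyzer, StubFileChecker, and MockNotifier names here are hypothetical, loosely echoing the review's description of the running example):

```python
# Hypothetical log analyzer with two dependencies: one we stub, one we mock.
class LogAnalyzer:
    def __init__(self, file_checker, error_notifier):
        self.file_checker = file_checker      # dependency: fed canned answers (stub)
        self.error_notifier = error_notifier  # interaction: verified afterwards (mock)

    def analyze(self, filename):
        if not self.file_checker.is_valid(filename):
            self.error_notifier.notify(f"invalid log: {filename}")
            return False
        return True

# A stub returns canned answers; the test asserts on the unit's own result,
# never on the stub.
class StubFileChecker:
    def __init__(self, result):
        self.result = result
    def is_valid(self, filename):
        return self.result

# A mock records calls; the test asserts that the interaction happened.
class MockNotifier:
    def __init__(self):
        self.messages = []
    def notify(self, message):
        self.messages.append(message)

mock = MockNotifier()
analyzer = LogAnalyzer(StubFileChecker(False), mock)
assert analyzer.analyze("app.log") is False       # state-based check, via the stub
assert mock.messages == ["invalid log: app.log"]  # interaction check, via the mock
```

The rule of thumb the book drives home: stub the dependencies you need to control, mock the interactions you need to verify, and never assert against a stub.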

The walkthrough of splitting out a dependency and using a stub is a perfect example of why this book's so valuable: Osherove clearly steps through pulling the dependency out to an interface, then shows you different methods of using a stub for testing via injection by constructors, properties, or method parameters. He's also very clear about the drawbacks of each approach, something I find critical in any design-related discussion – let me know what things might cause me grief later on!
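
The three injection seams can be sketched in Python (again, hypothetical names, not the book's actual .NET code), each with the trade-off Osherove flags:

```python
class StubChecker:
    def is_valid(self, filename):
        return True

# 1. Constructor injection: the dependency is explicit and mandatory,
#    but every caller must supply it.
class AnalyzerCtor:
    def __init__(self, checker):
        self.checker = checker
    def analyze(self, filename):
        return self.checker.is_valid(filename)

# 2. Property injection: allows a default implementation and optional
#    override in tests, but callers can forget to set it.
class AnalyzerProp:
    def __init__(self):
        self.checker = None  # a real default would normally go here
    def analyze(self, filename):
        return self.checker.is_valid(filename)

# 3. Method-parameter injection: maximum per-call flexibility,
#    at the cost of noisier call sites.
class AnalyzerParam:
    def analyze(self, filename, checker):
        return checker.is_valid(filename)

stub = StubChecker()
assert AnalyzerCtor(stub).analyze("a.log") is True
prop = AnalyzerProp()
prop.checker = stub
assert prop.analyze("a.log") is True
assert AnalyzerParam().analyze("a.log", stub) is True
```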

While the discussion on mocking, stubbing, and isolation was informative and well-written, I got the most out of chapters 6 ("Test hierarchies and organization") and 7 ("The pillars of good tests"). The hierarchy discussion in particular caused me to re-think how I've been organizing an evolving suite of Selenium-based UI tests. I was already making use of DRY and refactoring out common functionality into factory and helper methods; however, Osherove's discussion led to me re-evaluating the overall structure, resulting in some careful use of base class and inheritance. His concrete examples of building out a usable test API for your environment also changed how I was handling namespaces and general naming.

If you're in an organization that's new to testing, or if you're trying to get testing around legacy software, then the last two chapters of the book are must-read sections. Changing cultures inside organizations is never easy, and Osherove shows a number of different tools you can use when trying to drive the adoption of testing in your organization. My own experience has shown you'll need to use combinations of many of these, including finding champions, getting management buy-in, and most importantly learning how to deal with the folks who become roadblocks.

The Art of Unit Testing does a lot of things really well. I didn't feel the book did anything poorly, and I happily include it in my list of top software engineering/craftsmanship books I've read. All software developers, regardless of their experience with unit testing, stand to learn something from it.

You can purchase The Art of Unit Testing with Examples in .NET. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.



The Art of Unit Testing

  • This will fit nicely beside my MSBuild book collecting dust on my desk. Jokes aside, we do tons of unit testing and I have never seen a book solely on unit testing for .NET with TDD, mocking, etc. I'm stoked!
  • When I first saw the article's title, I thought that this was the UNIT it was referring to. Says a lot about the type of people I hang out with, doesn't it?
    • Re: (Score:3, Funny)

      by Hatta ( 162192 )

      When I read the blurb, I was wondering why they hadn't moved to ELF.

    • That's funny, when I saw the book cover thumbnail, I thought it was a picture of a timelord, then realized there was no ceremonial headpiece, and thought it must be a Gallifrey Citadel Guard.

      Apparently I am a giant hulking geek.

  • xUnit Test Patterns (Score:5, Informative)

    by Nasarius ( 593729 ) on Wednesday February 10, 2010 @03:47PM (#31089842)
    For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.

    The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled. I still struggle to follow TDD in many scenarios, especially where I'm closely interacting with system APIs, but just reading xUnit Test Patterns has given me tons of ideas that improved my code.
    • The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

      I'd add to this that testable code often, not always but often, is well planned, well defined, and/or well managed code and this is what makes it fundamentally better. One might say that testable code is well engineered code.

      (Disclaimer: Haven't read that book yet, this is just an off the cuff remark from experiencing some of the best and some of the worst levels of unit testing and beyond)

    • Re: (Score:3, Insightful)

      by prockcore ( 543967 )

      but that testable code is fundamentally better because it needs to be loosely coupled.

      I disagree. It builds a false sense of security, and artificially increases complexity. You end up making your units smaller and smaller in order to keep each item discrete and separate.

      It's like a car built out of LEGO, sure you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces, but how you put them together.. and you aren't testing that if you're only doing unit testing.

      • by msclrhd ( 1211086 ) on Wednesday February 10, 2010 @04:35PM (#31090314)

        Kevlin Henney makes the following distinction:

        1. A unit test is a test that can fail if (a) the code under test is wrong, or (b) the test itself is wrong.

        2. An integration test is a test that can fail if (a) the code under test is wrong, (b) the test itself is wrong, or (c) the system environment has changed (e.g. the user does not have permission to write a file to a specific folder).
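
That distinction is easy to see in code. A minimal Python sketch (hypothetical word_count function, not from the book): the first test depends only on the code and the test; the second also depends on the filesystem, so a permissions or disk problem can fail it even when the code is right.

```python
import os
import tempfile

def word_count(text):
    # Code under test: pure function, no environment involved.
    return len(text.split())

# Unit test: can only fail if word_count or the assertion itself is wrong.
assert word_count("two words") == 2

# Integration test: can additionally fail if the environment changes
# (no write permission, disk full, missing temp directory, ...).
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w") as f:
    f.write("two words")
with open(path) as f:
    assert word_count(f.read()) == 2
```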

        John Lakos refers to individual things under test as components. In his model, there are layers of components that build on each other and interact with each other, but these are well-defined components that just happen to depend on other components.

      • by Lunix Nutcase ( 1092239 ) on Wednesday February 10, 2010 @04:47PM (#31090426)

        It's like a car built out of LEGO, sure you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces, but how you put them together.. and you aren't testing that if you're only doing unit testing.

        And that's why you do integration testing too.

    • by MobyDisk ( 75490 )

      The idea is not only that automated testing is good, but that testable code is fundamentally better

      One of the main goals of TypeMock is to eliminate that. TypeMock allows you to mock objects that were not designed to be mocked, and are not loosely coupled.

    • by jgrahn ( 181062 )

      For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.

      Aargh! They managed to mention unit tests, patterns and refactoring in the same title!

      Also, I really dislike xUnit, as I've seen it wedged into Python's unittest module and CPPUnit (C++). It's a horrible design which just gets in the way -- I don't understand what valid reasons a book has to rely on it (except buzzwords).

      • by Lunix Nutcase ( 1092239 ) on Wednesday February 10, 2010 @05:19PM (#31090884)

        And loosely coupled code is fundamentally better *why*? "Because it can be easily unit tested" is the only argument I can swallow ...

        Because if the modules of your system have low to no coupling between themselves, you can more easily make changes to individual modules of the system. In a highly coupled system, changes to one part can force you to subsequently change numerous other pieces of the system as a consequence. This is eliminated or greatly reduced if your modules have little to no dependency on the others. Even if you do no unit testing, having a highly modular and loosely coupled system just makes subsequent maintenance work so much easier.

      • Re: (Score:3, Insightful)

        by geekoid ( 135745 )

        Smaller pieces are easier to test, easier to maintain, easier to document, and severely reduce the chance of introducing a new bug when changes need to be made.

        Unit testing helps enforce small code pieces.

        "of lots of overly-general and vague code."

        If that's true, then you have dealt with some extremely poor programmers. I suggest working with software engineers instead of programmers.

        Re-use of common pieces is a good thing, and loosely coupled code makes that easier to do as well.

      • And loosely coupled code is fundamentally better *why*?
        "Because it can be easily unit tested" is the only argument I can swallow ...

        On the past few systems I have worked on I have had the "fun" job of adding new features to existing legacy code. Adding features to the existing tightly coupled code was a nightmare, finding what did exactly what took ages, some functionality was partially performed in several different locations - each relying on the previous part - and the slightest spec change would need the whole thing to be re-done yet again. The exact same spec changes (e.g. a new element in a message) were trivial to do in the application

    • Re: (Score:3, Insightful)

      The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

      Which is a faulty assumption. Coming from this perspective, you want to unit test everything, and so you need to make everything loosely coupled. But the latter is not free, and sometimes the cost can be hefty - where a simple coding pattern would do before (say, a static factory method), you now get a mess, with an interface for every single class in your program and abstract factories everywhere (or IoC/DI with its maze of XML configs).

      Ultimately, you write larger amounts of code that is harder to follow and h

      • by bondsbw ( 888959 )

        A good chunk of your post assumes that the ideas of interface-based decoupling, IoC, etc. are all unnatural. My guess is that those things are not your enemy, but that design is your problem.

        It's probably true that most programmers decide, from day one of learning how to program, that tightly coupled code defers the difficult task of design until the very last possible minute. But that doesn't mean that decoupling is unnatural, and it is certainly not bad. It means that we need to teach programmers these

        • by shutdown -p now ( 807394 ) on Thursday February 11, 2010 @02:04AM (#31096312) Journal

          A good chunk of your post assumes that the ideas of interface-based decoupling, IoC, etc. are all unnatural.

          No, it doesn't. It assumes that they're not always natural, and that it's not always worth it.

          Sometimes it is right and proper for two classes to be tightly coupled. Sometimes, we want to decouple them, but that decoupling doesn't necessarily have to take the form of interface per class and IoC.

          By the way, I would argue that IoC is very unnatural in many things. Its use should be an exception rather than a rule. Among other things, it tends to replace proper object-oriented design with service-centric one.

          I don't see this at all. Maintainability is the point of loose coupling.

          It's at best a side effect (when it's there). The primary point of loose coupling is to be able to independently substitute parts - that is, extensibility, and testability to the extent that testing frameworks use that (rather than backdoors).

          Case in point... I have been working on a data entry system for a few years now that, through previous design and my own old habits, has become very tightly coupled. Unit testing probably won't ever happen. I once needed to add a field in section 2. I did and released an update. A few days later, we noticed that data had been half-entered in hundreds of records. It took days to track down the issue... it turns out that I didn't find all the places that my field needed to be updated, and because of consistency errors, anytime a button was pressed in section 5, any future attempts to save the record were lost.

          What you've described is a problem with code duplication, not tight coupling.

          Also, the problem would have been solved by unit tests (which do not require decoupling).

          Why do you have a "maze" of IoC configs?

          By that I mean that it's often entirely non-obvious where things come from, just looking at one particular piece of code. It's actually a problem with OOP at large, to some extent - it's what you inevitably get with decoupling - but IoC takes this to the extreme, where it actually becomes very noticeable.

          Let me try to give an analogy. A monolithic design is a single "brick" where everything is interconnected. A modular one is when you have several bricks, each doing its own thing. If those bricks are made such that you can only put them together, and cannot replace any brick, the design is tightly coupled. If you can freely replace any brick with a similar one (no matter what it's made of - so long as it's made to the spec), it's loosely coupled.

          The problem is that we, as programmers, don't see the system as a whole - we see individual bricks, and have to mentally reconstruct the whole thing. When there are too many of them (because they're too small), and they're so generic and interchangeable, it's not entirely obvious where any particular one fits without looking at many others.

          It's not an unsurmountable problem, and one can certainly train oneself to handle it. The problem, as with any "purist" approach, be it OO, FP, or anything else, is that at some point, the return on investment is negative - you spend a lot of time learning to put tiny bricks together, and then actually putting them together, while the problem can be solved by a less experienced programmer using smaller and cruder bricks, for cheaper, and pretty much just as good from a pragmatic point of view. The only thing that is left for your design is that it's more "elegant", but it's not a business goal in and of itself.

          It's important to maintain that balance. Slip too much to one side, and your design becomes an unmaintainable, unreadable mess of tightly coupled spaghetti code. Slip too much to another one, and it's an unmaintainable, unreadable elegant mess of tiny classes with single-liner methods, wired together by IoC, where all bits together produce the desired result, but no-one really knows how. I've seen both. Both are very painful to maintain, debug, and extend (though that said, I usually still prefer the latter - at least it's more amenable to refactoring).

          • by bondsbw ( 888959 )

            I can definitely agree with the non-purist point of view. You can take everything to an extreme. When decoupling, you can pull so much out of your classes that they become anemic, and really you have something that no longer gains the benefits of OOP.

            Among other things, it [IoC] tends to replace proper object-oriented design with service-centric one.

            I disagree. I'm sure you could take it to that extreme, but I would say it tends to promote OO design. IoC relies on inheritance of interfaces and base classes. Without IoC, I've many times seen entire programs created without any inheritance at all (excep

            • No, it's a problem with coupling. I had practically everything in one class (a single form... again, bad practice).

              Well, that's not quite coupling, either. It's the infamous "magic button" anti-pattern, where all logic gets shoved directly into event handlers.

              I guess you could call it coupling in some sense, but to me, coupling is about dependencies between seemingly distinct components (classes etc). When it's a single component that "does everything", it's a more fundamental code organization and OO design problem.

              That's the thing... this code is not unit testable. Let me take that back. I probably could try, but every test would have database hits or web service hits

              The trick is that something like TypeMock lets you mock e.g. ADO.NET or ASP.NET web service APIs directly

    • For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios,

      I think that all slashdot readers fall into this category....

  • by Tanman ( 90298 ) on Wednesday February 10, 2010 @03:51PM (#31089870)

    When I was a young man in the program, they tested the unit by having us march shoeless through 2 miles of uphill, mine-ridden, barbed-wire-laced snow! The unit got tested, and tested HARD! The program didn't allow for no pansy-ass pussy-footers. And did the unit in the program pass its tests? By God it did! You youngsters got it easy just havin to do some stupid vocabulary test to test your unit in the program. Plugging in words. HAH! Try plugging in the gaping hole left by the bark of an exploding tree!

  • by syousef ( 465911 ) on Wednesday February 10, 2010 @03:53PM (#31089888) Journal

    The only part that is an "art" is working out how to successfully isolate the component that you're trying to test. For simple components at lower layers (typically data CRUD) it's not so hard. Once you find you're having to jump through hoops to set up your stubs, faking them successfully becomes a more error-prone and time-consuming process. It can also be difficult if there's security in the way: the very checks you've put in to prevent security violations now have to be worked around or bypassed for your unit tests. There's also a danger of becoming too confident in your code because it passes the test when run against stub data. You may find there's a bug specific to the interfaces you've stubbed. (For example, a bug in a vendor's database driver, or a bug in your data access framework that doesn't show up against your stub.)

    All of those distracting side issues and complications aside, we are dealing with fundamental engineering principles. Build a component, test a component. Nothing could be simpler, in principle. So it's disappointing when developers get so caught up in the side issues that they resist unit testing. There does come a point where working around obstacles makes unit testing hard, and you have to weigh benefit against cost and ask yourself how realistic the test is. But you don't go into a project assuming every component is too hard to unit test. That's just lazy and self-defeating. It comes down to the simple fact that many programmers aren't very good at breaking down a problem. In industries where their work was more transparent, they wouldn't last long. In software development, where your code is abstract and the fruit of your work takes a long time to get to production, bad developers remain.

    • Indeed. As a recovering premature optimizer, I think efficiency is a big reason people avoid breaking something into smaller parts that can be tested independently. Plus things like the Singleton pattern result in designs that are harder to test, because there are some states you cannot "rewind" back to without restarting the program.
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Singletons are pretty easy to test as long as you don't use the *antipattern* of the class that enforces its own, uh, singletonicity. If you have .getInstance() methods, you have that antipattern. Yes, it's in the GoF book, but frankly the GoF is just wrong on that point. It's a lifecycle pattern, and lifecycles like that should be taken care of by the context, like a class factory or a DI container. If you have a DI container, testing a singleton is an absolute snap and in fact *easier* than non-singletons.
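
The "lifecycle belongs to the context" point can be sketched in Python (hypothetical Config/Container names; a real DI container would do much more than this minimal sketch):

```python
# Anti-pattern: the class enforces its own singleton-ness, so tests can
# never get a fresh instance or substitute a fake.
class Config:
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

# Lifecycle managed by the context instead: Config stays a plain class,
# and a tiny container decides there is only one. Each test can build
# its own container and get a fresh, isolated "singleton".
class Container:
    def __init__(self):
        self._singletons = {}

    def get(self, cls):
        if cls not in self._singletons:
            self._singletons[cls] = cls()
        return self._singletons[cls]

prod = Container()
assert prod.get(Config) is prod.get(Config)       # one instance per context
test = Container()
assert test.get(Config) is not prod.get(Config)   # fresh state for the test
```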

    • by msclrhd ( 1211086 ) on Wednesday February 10, 2010 @04:55PM (#31090522)

      When testing a system, if you cannot put a given component under test (or can only do so by "faking" its dependencies -- e.g. the things that talk to the database), then the architecture is wrong.

      I strive never to have any "fake" parts of the system in a test. Faking makes the tests harder to maintain (e.g. changing some of the real components will break the tests), you cannot easily change the data you are testing with or have a method generate an error for a specific test, and you are not really testing the proper code; not all of it, at any rate.

      You should implement interfaces at the interface boundaries, and have it so that the code under test can be given different implementations of that interface. This means that you don't need to fake any part of your codebase -- you are testing it with different data and/or interface behaviours (e.g. exceptions) that are designed to exercise the code under test. The code under test should not need modification in order to run (aside from re-architecting the system to make it testable).

      The main goal of testing is to have the maximum coverage of the code possible to ensure that any changes to the code don't change expected behaviour or cause bugs. Ideally, when a bug is found in manual testing, it should be possible to add a test case for that bug so that it can be verified and so that future work will not re-introduce that bug.

      Start where you can. If you have a large project, put the code that you are working on under test first to verify the existing behaviour. This also works as an exploratory phase for code that you don't fully understand.

      Also remember that tests should form part of the documentation. They are useful for verifying an interface contract (does a method accept a null string when the contract says it does? does the foo object always exist like the document says it does?)
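
That "tests as documentation" idea is easy to illustrate. A minimal Python sketch (hypothetical normalize_name function, invented for illustration): the assertions record the contract in executable form, so a future change that breaks the null-handling promise fails immediately.

```python
def normalize_name(name):
    """Contract: accepts None and returns ''; otherwise trims and title-cases."""
    if name is None:
        return ""
    return name.strip().title()

# These tests double as documentation of the contract above: anyone reading
# them knows null input is allowed and what the output shape is.
assert normalize_name(None) == ""
assert normalize_name("  ada lovelace ") == "Ada Lovelace"
```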

      • by syousef ( 465911 )

        What you're arguing against is mocks. In theory what you strive for is fantastic. In practice you don't always get to determine the architecture. You get to choose the best of a bad bunch. Combine this with the thinking in some circles that mocks are the best thing since sliced bread, and you can sure end up in a mess. I happen to agree with you in principle: mocks are wasteful and can be dangerous. In practice, sometimes you have no choice, because setting up with real objects can be too complex or time consuming.

        • by Taevin ( 850923 )
          I think you and msclrhd are conflating unit testing and integration testing. In order to test components as a single unit (at least for components with dependencies), mocks are critical. If component A fails when working with component B, did it fail because of a bug in A or because B is not behaving according to its contract? In other words, without mocking, unit tests end up being integration tests which are very important to verify the overall function of your application, but tell you nothing about your
      • Re: (Score:3, Insightful)

        by wrook ( 134116 )

        This is a really good post. I wish I could moderate you up. Like some people, I've become less enamoured with the word "test" for unit tests. It implies that I am trying to find out if the functionality works. This is obviously part of my effort, but actually it has become less so for me over time. For me, unit tests are used for telling me when something has changed in the system that needs my attention. I liken it to a spider's web. I'm not trying to find all the corner cases or prove that it works

    • You seem to think that "art" refers to something that is fundamentally mysterious. A lot of art is, but that's not an intrinsic feature. The word itself has a lot of different meanings. Here are some of the most fundamental, from the Oxford English Dictionary.

      1. Skill in doing something, esp. as the result of knowledge or practice.
      2. Skill in the practical application of the principles of a particular field of knowledge or learning; technical skill. Ob

      • by syousef ( 465911 )

        It does depend on your definition of the word "art" but I'm not the only one who uses the word in the context you describe as erroneous. At best the word art is ambiguous and should be avoided. The word engineering is less ambiguous and more accurate.

        • by fm6 ( 162816 )

          I'm not saying your usage is erroneous. In some contexts it does make sense. This just isn't one of them. When you use language, you need to be sensitive to context, you can't just blindly plug in whatever definition suits you.

          Unless you're in politics, of course...

          • by syousef ( 465911 )

            I'm not saying your usage is erroneous. In some contexts it does make sense. This just isn't one of them. When you use language, you need to be sensitive to context, you can't just blindly plug in whatever definition suits you.

            What you did was take a contrary definition and insist that it is the only one that applies. Do you even understand the irony here?

            Unless you're in politics, of course.

            Pot. Kettle. Black.

            • by fm6 ( 162816 )

              We're in flame mode, I see. When you grow up, you'll discover that people can disagree with you without attacking you.

              • by syousef ( 465911 )

                Take a close look at your own post before you accuse me of descending into flame mode. I don't have a problem with you disagreeing with me. I have a problem with you telling me my argument doesn't make sense unless I'm in politics. Then you add to the irony with the whole "when you grow up" routine.

                • by fm6 ( 162816 )

                  My quip about politics was meant as a joke. It was not meant as a personal attack. I'm sorry if it offended.

                  In the future, you might consider saying "I take offense at" instead of going on the offensive yourself.

        • by sjames ( 1099 )

          The problem IS one of semantics. Too many of the people who want to remove the word art (or worse, the ones who insist that 'art' is inferior) believe that so long as correct procedures are followed at each step, even drooling morons can crank out perfect programs (design and all) just like workers on an assembly line (in whatever country has the cheapest labor at the moment). They don't want "in my experience..."; they want the result from a magic formula to cover their ass. After all, you can hardly be blamed

  • Tiring to read (Score:5, Interesting)

    by noidentity ( 188756 ) on Wednesday February 10, 2010 @03:53PM (#31089894)

    I read this book recently and found it tiring. Much of it reads like a blog, and like many books, the author randomly switches stances. He'll refer to the reader as "the reader", "you", "we", and in the third person. This is the kind of book where it's hard to keep a clear idea of what the author is talking about, because he doesn't have a clear idea of what he's trying to communicate.

    When I think of tiring books like this, Steve McConnell's Code Complete always comes to mind (first edition; I haven't looked at the second edition yet). Reading that book is like having your autonomy assaulted, because the author constantly tries to get you to accept the things he's claiming, by whatever means necessary, rather than presenting them along with rational arguments and letting you decide when to apply them. I'm not saying Osherove's book is that bad, just that it has that same unenjoyable aspect that makes it a chore to read and get useful information from.

    I recently also read Kent Beck's Test-Driven Development and highly recommend it, if you simply want to learn about unit testing and test-driven development. It's concise and enjoyable to read. Unfortunately it doesn't cover as many details, and I don't have any good alternatives to books like Osherove's (and I've read many at my local large university library).

    • by weicco ( 645927 )

      Reading that book is like having your autonomy assaulted, because the author constantly tries to get you to accept the things he's claiming

      This is exactly what I'm looking for in books, blogs etc. I can read all the technical information about different design/coding/testing/project-leading techniques I want from Wikipedia, but I want to read how these things are done in real life as well.

      Let's take an example. I've been recently focused on MS Sql Server and T/SQL. Couple of weeks ago I read everything t

    • As is Beck's book... (Score:1, Interesting)

      by Anonymous Coward

      Interesting that you should mention Kent Beck's book, as I too have read it recently and found it to be the shittiest pile of steaming turd that I've ever seen put into book form. It was *SO* slow going, so condescending, and it was so sorely lacking in the way of cohesive rational arguments that if I hadn't been sold TDD through other means, I would have abandoned the concept altogether. It might be okay as an introduction to a complete novice programmer, but if you've had any experience in the industry a

  • Read it; Loved it. (Score:3, Interesting)

    by fyrie ( 604735 ) on Wednesday February 10, 2010 @03:56PM (#31089924)
    I'm fairly experienced with unit testing, and I've read several books on the subject. This is by far the best introduction to unit testing I have read. The book, in very practical terms, explains in 300 pages that which took me about five years to learn the hard way. I think this book also has a lot of value for unit testers that got their start a decade or more ago but haven't kept up with recent trends.
  • by noidentity ( 188756 ) on Wednesday February 10, 2010 @04:03PM (#31090006)

    I've read several unit testing books recently, and another I found somewhat useful is Michael Feathers' Working Effectively with Legacy Code []. It has all sorts of techniques for testing legacy code, i.e. code that wasn't designed for testability and to which you want to make as few modifications as possible. So he gets into techniques like putting a local header file to replace the normal one for some class used by the code, so that you can write a replacement class (mock) that behaves in a way that better exercises the code. Unfortunately Feathers' book is also somewhat tiring to read, due to a verbose writing style and rough editing, but I don't know anything better.

  • by CxDoo ( 918501 ) on Wednesday February 10, 2010 @04:35PM (#31090310)

    I work on distributed real-time software (financial industry) and can tell you that unit tests for components I write are either

    1. trivial to write, therefore useless
    2. impossible to write, therefore useless

    I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing.

    tl;dr - check return values, catch exceptions and dump them in your logs (and use state machines so you know where exactly you were, and so on...)
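A minimal sketch of that discipline — checking values, catching exceptions, and dumping state into the log — might look like this in Python (the state names and message format are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feed")

# Hypothetical states for a message handler, so the log always
# records *where* in the state machine a failure occurred.
STATES = ("IDLE", "CONNECTED", "SUBSCRIBED")

def handle_message(state, payload):
    """Process a payload; log failures with state context instead of crashing."""
    if state not in STATES:
        log.error("unknown state %r", state)
        return None
    try:
        value = int(payload)  # check/convert the incoming value
    except ValueError:
        # dump the exception (with traceback) and the current state to the log
        log.exception("bad payload %r in state %s", payload, state)
        return None
    log.info("state=%s value=%d", state, value)
    return value
```

A malformed payload ends up as a log entry with the machine's state attached, rather than an unhandled exception three months later in the wild.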

    • Re: (Score:2, Funny)

      by chromatic ( 9471 )

      Fortunately, reliable software is not a werewolf.

    • A lot of the methodologies were designed by people who have experience only in writing MOR (Middle Of the Road) code and in many cases haven't written any production code in years.

      So it's not surprising that it's a bad fit for most specialty projects.

    • by lena_10326 ( 1100441 ) on Wednesday February 10, 2010 @05:05PM (#31090678) Homepage

      Unit tests should be trivial for the majority of classes. Good OO design will cause many of your classes to be single purpose and simplistic; therefore the unit tests will also be simplistic. That's the point of OOD (or even modular design)--breaking down complex problems into many simpler problems*.

      Maybe you should consider that unit testing is not just for validating the current set of objects but also validating that future revisions do not break compatibility. In other words it makes regression testing possible or easier with automation.

      Writing the unit tests also serves to prove to your teammates that you've thought about boundary conditions and logic errors. When you're forced to think about them in a structured way, you're in a better position to catch code bugs while writing the unit tests. Many times you'll find them before even executing the test code.

      Note: If anyone responds with something along the lines of "complex problems cannot always be simplified" I will literally punch you--repeatedly.
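For illustration, the kind of trivial test described above — a single-purpose class whose tiny test doubles as a regression guard — could look like this (the class and its rounding rule are invented; run with `python -m unittest`):

```python
import unittest

class PriceRounder:
    """Single-purpose class: round a price to whole cents."""
    def round_cents(self, price):
        return round(price * 100) / 100

class PriceRounderTest(unittest.TestCase):
    # Trivial to write, but it pins the behavior down so a future
    # revision that changes rounding fails loudly in regression.
    def test_rounds_down_below_half_cent(self):
        self.assertEqual(PriceRounder().round_cents(1.234), 1.23)

    def test_rounds_up_above_half_cent(self):
        self.assertEqual(PriceRounder().round_cents(1.236), 1.24)

    def test_zero_boundary(self):
        self.assertEqual(PriceRounder().round_cents(0), 0)
```

The test is simplistic because the class is; the value is in keeping it around for release N+1.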

      • simplistic

        I think you mean simple, or perhaps very simple. Simplistic means too simple or over-simplified. If your unit tests are simplistic then they are not adequate for the job.

      • I don't think you understand the problem the parent described. Unit tests can't help you to diagnose multi-threaded and time-related issues. When you have a bug which only reproduces in the wild once every 3 months, just saying "unit tests" won't allow you to reproduce the bug, fix it and add the test to reproduce this bug to regression. At best you need to create the tools to reproduce the bug yourself, and with certain systems and certain bugs, this can be far from trivial to develop. Multi-threaded and
        • Of course I understand what the parent poster said. I've worked on real-time kernel based clustered applications. First off, you're assuming unit tests will find all bugs. Bad assumption. It's just another tool in the toolbox; it's not perfect. Nothing is. Second, if one is saying they cannot test a real-time application then they're not building scaffolding code right. Third, it is true there are synchronization and hardware scenarios that cannot be tested with unit tests because the scenario only exists i
    • I'll say this much.

      Unit testing has two big uses.
      1. it formalizes the testing you do anyways and keeps that test. Just today, I had to write a tricky regexp to split some logging apart. I used the unit test just to formalize the testing I'd do anyways (feed in some dummy strings) to verify it works.

      2. It forces you to write better code.

      2 is a bit flaky... if someone writes crappy code, unit testing isn't going to make them a better coder. Yet, it does keep me in check. There are countless times you j
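The habit in point 1 — turning the dummy strings you'd feed in anyway into a permanent test — might be sketched like this (the log format and regexp are invented for illustration):

```python
import re

# Hypothetical log line format: "LEVEL [component] message"
LOG_RE = re.compile(r"^(\w+) \[([^\]]+)\] (.*)$")

def split_log_line(line):
    """Split a log line into (level, component, message), or None if malformed."""
    m = LOG_RE.match(line)
    return m.groups() if m else None

# The ad-hoc dummy strings become assertions that outlive the debugging session:
assert split_log_line("ERROR [auth] login failed") == ("ERROR", "auth", "login failed")
assert split_log_line("INFO [db] connected") == ("INFO", "db", "connected")
assert split_log_line("garbage with no brackets") is None
```

Same manual check you'd do once at the REPL, except it now reruns on every build.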

    • Re: (Score:3, Insightful)

      by geekoid ( 135745 )

      That just means you are horrible at your job and that you think no one else will ever work on it.

      "I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing."
      THAT is a separate issue, that you should ALSO do.

      I suspect you have no clue why you should be designing and using unit tests.

      • by CxDoo ( 918501 )

        So the answer to my fairly self-evident assertion that unit tests are not useful everywhere is that I am at best ignorant, and at worst an idiot?

        1. 'Units' we deal with are very simple. Their relationships are not.
        2. A good portion of 'units' we deal with are not written by us. Sometimes we get a usable specification, sometimes not. Code never. These are black boxes with often unpredictable behavior.
        3. We work real time, so we don't call methods, we send messages. What should I mock and test there - message

    • 1. trivial to write, therefore useless
      2. impossible to write, therefore useless

      This has been my experience as well.

  • by Zoxed ( 676559 ) on Wednesday February 10, 2010 @05:16PM (#31090840) Homepage

    Rule #1 of all testing: The purpose of testing is not to prove that the code works: the purpose of testing is to *try to break* the program.
    (A good tester is Evil: extremes of values, try to get it to divide by 0 etc.)
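A small sketch of that evil-tester mindset, probing an invented `average` function with the empty case (the hidden divide-by-zero) and extreme values:

```python
def average(values):
    """Mean of a non-empty list; raises ValueError on empty input."""
    if not values:
        raise ValueError("average of empty list")
    return sum(values) / len(values)

# Evil inputs: the empty list is the divide-by-zero waiting to happen.
try:
    average([])
except ValueError:
    pass  # the defensive check caught it, as it should
else:
    raise AssertionError("empty list slipped through")

# Extremes and negatives, not just comfortable mid-range values.
assert average([-(10**18), 10**18]) == 0
assert average([5]) == 5
```

The happy-path test proves nothing new; the empty-list probe is the one that earns its keep.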

    • by theguru ( 70699 )

      This may be the purpose of manual testing, but the idea of having high code coverage with automated testing is to prevent regressions, minimize time to release, and even act as documentation via example usage.

      It's less about release N, and more about release N+1.

    • Uh no, it's to demonstrate that the code "works". The problem here is what it means "to work". Part of the usefulness of TDD is that you might not fully understand what it means "to work" yet, and the tests help you flesh that out.

      Let me clarify, so you don't think I'm 100% ditching what you're saying versus stating it a different way. A test suite will tend to have BOTH tests for what the correct behavior *is* and also tests for what the correct behavior *is not*. In other words, what you're doing is defin
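That "is / is not" split could be sketched as a pair of tests for a hypothetical `validate_username` function (the rules are invented for illustration; run with `python -m unittest`):

```python
import unittest

def validate_username(name):
    """Accept 3-12 character alphanumeric names (illustrative rules only)."""
    return name.isalnum() and 3 <= len(name) <= 12

class UsernameSpec(unittest.TestCase):
    # Tests for what the correct behavior *is*:
    def test_accepts_plain_alphanumeric(self):
        self.assertTrue(validate_username("alice42"))

    # Tests for what the correct behavior *is not*:
    def test_rejects_too_short(self):
        self.assertFalse(validate_username("ab"))

    def test_rejects_punctuation(self):
        self.assertFalse(validate_username("bob!"))
```

Writing the rejection cases is often where you discover you hadn't fully decided what "works" means yet.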

  • by TheCycoONE ( 913189 ) on Wednesday February 10, 2010 @05:28PM (#31091002)

    I was at Dev Days in Toronto a few months ago, and one of the speakers brought up a very good point relating to different software engineering methodologies. He said that despite all the literature written on them, and the huge amount of money involved, there have been very few good studies on the effectiveness of various techniques. He went on to challenge the effectiveness of unit testing and 'agile development.' The only methodology for which he had found studies demonstrating significant effectiveness was peer code review.

    This brings me to my question. Does this book say anything concrete with citations to back it up, or is it all the opinion of one person?

    • by Cederic ( 9623 ) on Wednesday February 10, 2010 @07:35PM (#31092632) Journal

      Does your speaker have anything concrete with citations to back his assertions up, or is he happily dismissing one of the few genuine advances in software engineering in the last decade?

      we found that the code developed using a test-driven development practice showed, during functional verification and regression tests, approximately 40% fewer defects than a baseline prior product developed in a more traditional fashion. The productivity of the team was not impacted by the additional focus on producing automated test cases. This test suite will aid in future enhancements and maintenance of this code.

      -- []

      A Spring 2003 experiment examines the claims that test-driven development or test-first programming improves software quality and programmer confidence. The results indicate support for these claims

      -- []

      Experimental results, subject to external validity concerns, tend to indicate that TDD programmers produce higher quality code because they passed 18% more functional black-box test cases.

      -- []

      We observed a significant increase in quality of the code (greater than two times) for projects developed using TDD compared to similar projects developed in the same organization in a non-TDD fashion.

      -- []

      My apologies for the rough and ready citations, I only picked the ones I could find on the first fucking page of Google search results.

      • He did, and I wish I could find his slides to better present what he was saying. I believe he said there was a lack of scientifically rigorous studies of the kind that would be necessary to adopt a practice in other disciplines (e.g. business). Your first citation, for example, is a study of fewer than two dozen people. The second I can't read, but in general you'll notice that while they reach the same conclusions, the actual numbers vary quite wildly, which brings into doubt the methods and the conclusions.

        • Actually I found his slides: [] The slides themselves don't touch unit testing, and should be combined with his talk. I never meant to refute unit testing in the first place though, I just wanted to ensure before I spent the time and money going through the above book that it provided empirical evidence that his methods were better.

          • by Cederic ( 9623 )

            Ignore the book for a moment, and read the thoughts of leading software engineers.

            What do Kent Beck, Martin Fowler, Alistair Cockburn, Erich Gamma, Steve McConnell, Scot Ambler, Rob Martin, Andy Hunt and Dave Thomas all say? Hit Google, do some 'free' exploration and reading.

            Then read up on people's experience on these things. There are various mail lists, where people have tried these (and other techniques) and report their experiences.

            You can reach the point fairly quickly where you can decide whether you

    • Re: (Score:3, Interesting)

      by Aladrin ( 926209 )

      I have never seen any scientific studies on it, but I use Unit Testing as a tool to help me code and debug better and it works a LOT better than anything I tried prior to that. And when I break some of my old code, I know exactly what's breaking with just a glance.

      Also, I have occasionally been charged with massive changes to an existing system, and Unit Testing is the only thing I know of that lets me guarantee the code functions exactly the same before and after for existing uses.

      tl;dr - I don't need a sci

    • by wrook ( 134116 )

      The problem with measuring the effectiveness of programming techniques is that it is very difficult. It is quite valid to say that there are few studies to back up the effectiveness of various "agile" techniques. But I will point out that this is true of every programming technique.

      The problem with measuring this is that it is impossible to get a baseline. There is a huge difference in productivity based simply on individual talent. This has been shown. So you will need thousands of programmers to test

  • Somebody likes their Anime a bit too much.

  • [waves hand in front of face]
  • I think many are getting caught up in terminology and forgetting (or perhaps they never knew) that the overall general purpose of any testing is to eliminate assumptions. Do the requirements really reflect what the customer wants? Does the system really meet the specs? Does component X really perform its job? Or are these things just assumed to be true? Testing gives one the ability to find out.

    Now the decision of what to test and what to ignore is an important one, and ultimately it comes down to recognizi

  • and unit specification early on in my career with a documentation technique which let me specify the order of, as well as the limits of, the API (whether human or systemic components were involved.)

    My success and income over the years was derived from the work done in 1983-84, printed in Computer Language Magazine in 1990 and released into the wild in 2007.

    Check out []
