The Art of Unit Testing

FrazzledDad writes "'We let the tests we wrote do more harm than good.' That snippet from the preface of Roy Osherove's The Art of Unit Testing with Examples in .NET (AOUT hereafter) is the wrap-up of a frank description of a failed project Osherove was part of. The goal of AOUT is to teach you great approaches to unit testing so you won't run into similar failures on your own projects." Keep reading for the rest of FrazzledDad's review.
The Art of Unit Testing with Examples in .NET
author Roy Osherove
pages 296
publisher Manning
rating 9/10
reviewer FrazzledDad
ISBN 1933988274
summary Soup-to-nuts unit testing with examples in .NET
AOUT is a well-written, concise book walking readers through many different aspects of unit testing. Osherove's book has something for all readers, regardless of their experience with unit testing. While the book's primary focus is .NET, the concepts apply to many different platforms, and Osherove also covers a few Java tools as well.

Osherove has a long history of advocating testing in the .NET space. He's blogged about it extensively, speaks at many international conferences, and leads a large number of Agile and testing classes. He's also the chief architect at TypeMock, maker of an isolation framework you may well use in your own testing efforts – and he's very up front about that involvement when discussing isolation techniques in the book. He does a very good job of not pushing his own tool, covers several others as well, and left me feeling there wasn't any bias toward his product whatsoever.

AOUT does a number of different things really, really well. First, it focuses solely on unit testing. Early on Osherove lays out the differences between unit and integration tests, but he quickly moves past that and stays with unit tests for the rest of the book. Second, Osherove avoids pushing any particular methodology (Test Driven Development, Behavior Driven Development, etc.) and sticks to the critical concepts around unit testing.

I particularly appreciated that latter point. While I'm a proponent of *DD, it was nice to read through the book without having to filter out any particular dogma. I think that mindset makes this book much more approachable and useful to a broader audience – dive into unit testing and learn the fundamentals before moving on to the next step.

I also enjoyed that Osherove carries one example project through the entire book. He takes readers on a journey as he builds a log analyzer and uses that application to drive discussion of specific testing techniques. Other examples appear in the book, but they're all specific to certain situations; the bulk of his discussion remains on the one project, which helps keep readers focused on the concepts Osherove is laying out.

The book's first two chapters are the obligatory introduction to unit testing frameworks and concepts. Osherove quickly moves through discussions of "good" unit tests, offers up a few paragraphs on TDD, and lays out a few points about unit test frameworks in general. After that he's straight into his "Core Techniques" section, where he discusses stubs, mocks, and isolation frameworks. The third part, "The Test Code," covers hierarchies and the pillars of good testing. The book finishes with "Design and Process," which hits on getting testing solidly integrated into your organization and has a great section on dealing with testing legacy systems. There are also a couple of handy appendices covering design issues and tooling.

Osherove uses his "Core Techniques" section to clearly lay out the differences between stubs and mocks, plus he covers using isolation frameworks such as Rhino.Mocks or TypeMock to assist with implementing these concepts. I enjoyed reading this section because too many folks confuse the concepts of stubbing and mocking. They're not interchangeable, and Osherove does a great job emphasizing where you should use stubs and mocks to deal with dependencies and interactions, respectively.
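
To make that distinction concrete, here's a minimal hand-rolled sketch of the two concepts; the names are illustrative rather than listings from the book. The stub merely satisfies a dependency so the code under test can run, while the mock records an interaction for the test to assert on.

```csharp
// The dependency the class under test needs.
public interface IMailSender
{
    void Send(string message);
}

// Stub: stands in for the dependency so the test can run.
// The test never asserts against the stub itself.
public class StubMailSender : IMailSender
{
    public void Send(string message) { /* intentionally does nothing */ }
}

// Mock: records the interaction so the test can verify it happened.
public class MockMailSender : IMailSender
{
    public string LastMessage;
    public void Send(string message) { LastMessage = message; }
}
```

A test built around the stub asserts on the state or return value of the class under test; a test built around the mock finishes by asserting on the recorded interaction, e.g. that LastMessage contains the expected text.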

The walkthrough of splitting out a dependency and using a stub is a perfect example of why this book's so valuable: Osherove clearly steps through pulling the dependency out to an interface, then shows you different methods of using a stub for testing via injection by constructors, properties, or method parameters. He's also very clear about the drawbacks of each approach, something I find critical in any design-related discussion – let me know what things might cause me grief later on!
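
The general shape of that walkthrough looks something like the sketch below. The names echo the book's log analyzer example, but this is my own illustration rather than a listing from it.

```csharp
// The dependency, pulled out to an interface.
public interface IExtensionManager
{
    bool IsValid(string fileName);
}

public class LogAnalyzer
{
    private IExtensionManager _manager;

    // Constructor injection: the dependency is mandatory and visible up front.
    public LogAnalyzer(IExtensionManager manager)
    {
        _manager = manager;
    }

    // Property injection: the dependency can be swapped after construction.
    public IExtensionManager Manager
    {
        get { return _manager; }
        set { _manager = value; }
    }

    public bool IsValidLogFileName(string fileName)
    {
        return _manager.IsValid(fileName);
    }

    // Method-parameter injection: the caller supplies the dependency per call.
    public bool IsValidLogFileName(string fileName, IExtensionManager manager)
    {
        return manager.IsValid(fileName);
    }
}

// A test replaces the real, file-system-backed manager with a trivial stub.
public class AlwaysValidStub : IExtensionManager
{
    public bool IsValid(string fileName) { return true; }
}
```

Each injection point carries its own trade-offs: constructors make the dependency impossible to forget, properties signal that it's optional, and method parameters confine it to a single call.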

While the discussion on mocking, stubbing, and isolation was informative and well-written, I got the most out of chapters 6 ("Test hierarchies and organization") and 7 ("The pillars of good tests"). The hierarchy discussion in particular caused me to re-think how I've been organizing an evolving suite of Selenium-based UI tests. I was already making use of DRY and refactoring common functionality out into factory and helper methods; however, Osherove's discussion led me to re-evaluate the overall structure, resulting in some careful use of base classes and inheritance. His concrete examples of building out a usable test API for your environment also changed how I was handling namespaces and general naming.
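
As an illustration of the kind of restructuring that chapter prompts, here is a minimal sketch of a test-class hierarchy, using NUnit-style attributes and the hypothetical types from the earlier sketch:

```csharp
using NUnit.Framework;

// Shared fixture logic lives once in an abstract base class...
public abstract class LogAnalyzerTestsBase
{
    protected LogAnalyzer Analyzer;

    [SetUp]
    public void BaseSetUp()
    {
        // NUnit runs base-class SetUp methods before derived ones.
        Analyzer = new LogAnalyzer(new AlwaysValidStub());
    }
}

// ...and each derived fixture inherits it instead of duplicating it.
[TestFixture]
public class LogAnalyzerValidationTests : LogAnalyzerTestsBase
{
    [Test]
    public void IsValidLogFileName_AnyName_ReturnsTrueWithStub()
    {
        Assert.IsTrue(Analyzer.IsValidLogFileName("whatever.slf"));
    }
}
```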

If you're in an organization that's new to testing, or if you're trying to get testing around legacy software, the last two chapters of the book are must-read sections. Changing cultures inside organizations is never easy, and Osherove shows a number of different tools you can use when trying to drive the adoption of testing in your organization. My own experience has shown you'll need a combination of many of these, including finding champions, getting management buy-in, and, most importantly, learning how to deal with the folks who become roadblocks.

The Art of Unit Testing does a lot of things really well. I didn't feel the book did anything poorly, and I happily include it in my list of top software engineering/craftsmanship books I've read. All software developers, regardless of their experience with unit testing, stand to learn something from it.

You can purchase The Art of Unit Testing with Examples in .NET from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.


  • by syousef ( 465911 ) on Wednesday February 10, 2010 @03:53PM (#31089888) Journal

    The only part that is an "art" is working out how to successfully isolate the component you're trying to test. For simple components at lower layers (typically data CRUD) it's not so hard. Once you find you're having to jump through hoops to set up your stubs, it gets harder to "fake" them successfully, and the process becomes more error-prone and time-consuming. It can also be difficult if there's security in the way: the very checks you've put in to prevent security violations now have to be worked around or bypassed for your unit tests. There's also a danger of becoming too confident in your code because it passes the test when run against stub data. You may find there's a bug specific to the interfaces you've stubbed (for example, a bug in a vendor's database driver, or a bug in your data access framework that doesn't show up against your stub).

    All of those distracting side issues and complications aside, we are dealing with fundamental engineering principles. Build a component, test a component. Nothing could be simpler, in principle. So it's disappointing when developers get so caught up in the side issues that they resist unit testing. There does come a point where working around obstacles makes unit testing hard, and you have to weigh benefit against cost and ask yourself how realistic the test is. But you don't go into a project assuming every component is too hard to unit test. That's just lazy and self-defeating. It comes down to the simple fact that many programmers aren't very good at breaking down a problem. In industries where their work was more transparent, they wouldn't last long. In software development, where your code is abstract and the fruit of your work takes a long time to get to production, bad developers remain.

  • by prockcore ( 543967 ) on Wednesday February 10, 2010 @04:23PM (#31090192)

    but that testable code is fundamentally better because it needs to be loosely coupled.

    I disagree. It builds a false sense of security, and artificially increases complexity. You end up making your units smaller and smaller in order to keep each item discrete and separate.

    It's like a car built out of LEGO: sure, you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces, they're with how you put them together... and you aren't testing that if you're only doing unit testing.

  • Re:Error coding... (Score:3, Insightful)

    by jgrahn ( 181062 ) on Wednesday February 10, 2010 @05:14PM (#31090792)

    Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand? I've done well following a few basic rules:

    1. Know exactly what the system call does before you use it.
    2. Check the return value of every one.
    3. Check the permissions when you access a resource.
    4. Blocking calls are a necessary evil. Putting them in the main loop is not.
    5. Always check a pointer before you use it.
    ...

    *Detecting* the problem isn't hard. What's hard is *handling* it -- and there was nothing about that on your list. Hint: calling abort(2) is not always acceptable.

  • by Zoxed ( 676559 ) on Wednesday February 10, 2010 @05:16PM (#31090840) Homepage

    Rule #1 of all testing: the purpose of testing is not to prove that the code works; the purpose of testing is to *try to break* the program.
    (A good tester is Evil: extremes of values, try to get it to divide by 0 etc.)
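
    In unit-test form, that evil-tester mindset looks something like this sketch (hypothetical Calculator class, NUnit-style):

```csharp
using System;
using NUnit.Framework;

public static class Calculator
{
    public static int Divide(int a, int b) { return a / b; }
}

[TestFixture]
public class CalculatorEvilTests
{
    [Test]
    public void Divide_ByZero_Throws()
    {
        Assert.Throws<DivideByZeroException>(() => Calculator.Divide(1, 0));
    }

    [Test]
    public void Divide_MinValueByMinusOne_Throws()
    {
        // int.MinValue / -1 doesn't fit in an int; C# throws here
        // even in an unchecked context.
        Assert.Throws<OverflowException>(() => Calculator.Divide(int.MinValue, -1));
    }
}
```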

  • by Lunix Nutcase ( 1092239 ) on Wednesday February 10, 2010 @05:19PM (#31090884)

    And loosely coupled code is fundamentally better *why*? "Because it can be easily unit tested" is the only argument I can swallow ...

    Because if the modules of your system have low to no coupling between themselves, you can more easily make changes to individual modules. In a highly coupled system, changes to one part can force you to subsequently change numerous other pieces of the system as a consequence. This is eliminated or greatly reduced if your modules have little to no dependency on the others. Even if you do no unit testing, having a highly modular and loosely coupled system just makes subsequent maintenance work so much easier.

  • by TheCycoONE ( 913189 ) on Wednesday February 10, 2010 @05:28PM (#31091002)

    I was at Dev Days in Toronto a few months ago, and one of the speakers brought up a very good point relating to different software engineering methodologies. He said that despite all the literature written on them, and the huge amount of money involved, there have been very few good studies on the effectiveness of various techniques. He went on to challenge the effectiveness of unit testing and 'agile development.' The only methodology he had found studies to demonstrate significant effectiveness for was peer code review.

    This brings me to my question. Does this book say anything concrete with citations to back it up, or is it all the opinion of one person?

  • by shutdown -p now ( 807394 ) on Wednesday February 10, 2010 @05:40PM (#31091182) Journal

    The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

    Which is a faulty assumption. Coming from this perspective, you want to unit test everything, and so you need to make everything loosely coupled. But the latter is not free, and sometimes the cost can be hefty - where a simple coding pattern would do before (say, a static factory method), you now get the mess with interface for every single class in your program, abstract factories everywhere (or IoC/DI with its maze of XML configs).
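
    For a concrete picture of that trade-off, compare the two shapes below (the names are invented for illustration):

```csharp
// Before: one simple, discoverable line of creation logic.
public class Parser
{
    public static Parser Create() { return new Parser(); }
}

// After "making it testable": an interface per class, a factory
// abstraction, and wiring that now lives somewhere else (often in
// an IoC container's configuration).
public interface IParser { /* parsing members elided */ }

public class XmlParser : IParser { }

public interface IParserFactory
{
    IParser CreateParser();
}

public class XmlParserFactory : IParserFactory
{
    public IParser CreateParser() { return new XmlParser(); }
}
```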

    Ultimately, you write larger amounts of code that is harder to follow and harder to maintain, for 1) the real benefit of being able to unit test it, and 2) the illusory benefit of being able to extend it more easily. That last benefit is illusory because, in most cases, you'll never actually use it, and in most cases where you do, the cost of maintaining the loosely coupled code up to that point is actually much more than the price you'd have paid for refactoring it to suit your new needs if you had left it simple (and more coupled) originally.

    Also, it does promote some patterns that are actively harmful. For example, in C#, methods are not virtual by default, and it's a conscious design decision [artima.com] to avoid the versioning problem with brittle base classes [msdn.com]. But "testable code" must have all methods virtual in order for them to be mocked! So you either have to carefully consider the brittle base class issue for every single method you write, or just say "screw them all" and forget about it (the Java approach). The latter is what most people choose, and, naturally, it doesn't exactly increase product quality.
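
    The constraint being described comes from proxy-based isolation frameworks, which fake a class by generating a subclass at runtime, so only members a subclass can override are replaceable. A sketch:

```csharp
public class MailSender
{
    // Non-virtual (the C# default): a runtime-generated subclass cannot
    // override this, so a proxy-based framework cannot fake it.
    public bool Send(string to) { return true; }

    // Virtual: overridable, hence mockable -- but now any subclass,
    // including ones written years later, can silently replace its
    // behavior, which is exactly the brittle-base-class exposure.
    public virtual bool SendVirtual(string to) { return true; }
}
```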

    Of course, this all hinges on the definition of "testable code". The problem with that is that it's essentially defined by the limitations of current mainstream unit testing frameworks, particularly their mocking capabilities. "Oh, you need interfaces everywhere because we can't mock sealed classes or non-virtual members". And then a convenient explanation is concocted that says that this style is actually "testable code", and it's an inherently good one, regardless of any testing.

    Happily, TypeMock is about the only sane .NET mocking framework out there - it lets you mock anything. Sealed classes, static members, constructors, non-virtual methods... you name it, it's there. And that is as it should be. It lets you design your API thinking about issues that are actually relevant to that design - carefully considering versioning problems, not forgetting ease of use and conciseness, and providing the degree of decoupling that is relevant to the specific task at hand - with no regard to any limitations the testing framework sets.

    It's no surprise that some people from the TDD community are hostile towards TypeMock [wordpress.com] because it's "too powerful", and doesn't force the programmer to conform to their vision of "testable code". But it's rather ironic, anyway, given how TDD itself is by and large an offshoot of Agile, which had always promoted principles such as "do what works" and "make things no more complicated than necessary".

  • Re:Error coding... (Score:3, Insightful)

    by shutdown -p now ( 807394 ) on Wednesday February 10, 2010 @05:43PM (#31091270) Journal

    Check the permissions when you access a resource.

    Careful, you can easily have a race condition there. Say, you're trying to open a file. You check for permissions before doing so, and find out that everything is fine. Meanwhile, another process in the system does `chmod a-r` on the file - and your following open() call fails, even though the security check just succeeded.
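
    The same time-of-check/time-of-use race exists in .NET, and the usual fix is to drop the pre-check and handle failure at the point of use, as in this sketch:

```csharp
using System.IO;

public static class FileReader
{
    // Racy: the file can be deleted or chmod'ed between the check and the read.
    public static string ReadRacy(string path)
    {
        if (!File.Exists(path))
            return null;
        return File.ReadAllText(path); // may still throw
    }

    // Safer: just attempt the operation and handle the failure modes.
    public static string ReadSafely(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (FileNotFoundException) { return null; }
        catch (UnauthorizedAccessException) { return null; }
    }
}
```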

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Wednesday February 10, 2010 @08:19PM (#31093160) Homepage Journal

    Smaller pieces are easier to test, easier to maintain, and easier to document, and they greatly reduce the chance of introducing a new bug when changes need to be made.

    Unit testing helps enforce small code pieces.

    "f lots of overly-general and vague code."

    If that's true, then you have dealt with some extremely poor programmers. I suggest working with software engineers instead of programmers.

    Re-use of common pieces is a good thing, and loosely coupled code makes that easier to do as well.

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Wednesday February 10, 2010 @08:24PM (#31093212) Homepage Journal

    That just means you are horrible at your job and that you think no one else will ever work on it.

    "I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing."
    THAT is a separate issue, and one you should ALSO do.

    I suspect you have no clue why you should be designing and using unit tests.

  • by wrook ( 134116 ) on Wednesday February 10, 2010 @08:43PM (#31093416) Homepage

    This is a really good post. I wish I could moderate you up. Like some people, I've become less enamoured with the word "test" for unit tests. It implies that I am trying to find out if the functionality works. This is obviously part of my effort, but actually it has become less so for me over time. For me, unit tests are used for telling me when something has changed in the system that needs my attention. I liken it to a spider's web. I'm not trying to find all the corner cases or prove that it works in every case. I want bugs to have a high probability of hitting my web and informing me. When writing new code I also want to be informed when I make an assumption about existing code that is different from the original author. I think about my assumptions and try to write unit tests that verify the assumptions. This often fills out most of my requirements for a "spider's web" since when people start messing with code and break my assumptions, my tests will also break.

    Finally, your point about documentation is extremely good. A large number of people, even if they are used to writing unit tests, don't understand unit testing as documentation. I've gone to the extreme of thinking about my tests as being literate programming written in the programming language rather than English. To this extent, I've embraced BDD and write stories with tests. For each story that I'm developing, I'll create unit tests that explain how each part of the interface is used. I then refactor my stories mercilessly over time to maintain a consistent narrative. However, I often feel like I want a "web" (as in TeX's literate programming tool) tool that will generate my narrative, but will still allow me to view the code as units (which is useful for debugging).
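
    As a small example of a test reading as documentation, the test name below states a business assumption as a sentence (all names invented for illustration):

```csharp
using NUnit.Framework;

public class Order
{
    private readonly decimal _total;
    private readonly decimal _creditLimit;

    public Order(decimal total, decimal creditLimit)
    {
        _total = total;
        _creditLimit = creditLimit;
    }

    public bool Approve() { return _total <= _creditLimit; }
}

[TestFixture]
public class OrderStories
{
    // The name documents the rule; a reader can learn the behavior
    // without opening the production code.
    [Test]
    public void Order_WhenTotalExceedsCreditLimit_IsRejected()
    {
        var order = new Order(total: 1000m, creditLimit: 500m);

        Assert.IsFalse(order.Approve());
    }
}
```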

  • by shutdown -p now ( 807394 ) on Thursday February 11, 2010 @02:04AM (#31096312) Journal

    A good chunk of your post assumes that the ideas of interface-based decoupling, IoC, etc. are all unnatural.

    No, it doesn't. It assumes that they're not always natural, and that it's not always worth it.

    Sometimes it is right and proper for two classes to be tightly coupled. Sometimes, we want to decouple them, but that decoupling doesn't necessarily have to take the form of interface per class and IoC.

    By the way, I would argue that IoC is very unnatural in many things. Its use should be an exception rather than a rule. Among other things, it tends to replace proper object-oriented design with service-centric one.

    I don't see this at all. Maintainability is the point of loose coupling.

    It's at best a side effect (when it's there). The primary point of loose coupling is to be able to independently substitute parts - that is, extensibility, and testability to the extent that testing frameworks use that (rather than backdoors).

    Case in point... I have been working on a data entry system for a few years now that, through previous design and my own old habits, has become very tightly coupled. Unit testing probably won't ever happen. I once needed to add a field in section 2. I did and released an update. A few days later, we noticed that data had been half-entered in hundreds of records. It took days to track down the issue... it turns out that I didn't find all the places that my field needed to be updated, and because of consistency errors, anytime a button was pressed in section 5, any future attempts to save the record were lost.

    What you've described is a problem with code duplication, not tight coupling.

    Also, the problem would have been solved by unit tests (which do not require decoupling).

    Why do you have a "maze" of IoC configs?

    By that I mean that it's often entirely non-obvious where things come from, just looking at one particular piece of code. It's actually a problem with OOP at large, to some extent - it's what you inevitably get with decoupling - but IoC takes this to the extreme, where it actually becomes very noticeable.

    Let me try to give an analogy. A monolithic design is a single "brick" where everything is interconnected. A modular one is when you have several bricks, each doing its own thing. If those bricks are made such that you can only put them together, and cannot replace any brick, the design is tightly coupled. If you can freely replace any brick with a similar one (no matter what it's made of - so long as it's made to the spec), it's loosely coupled.

    The problem is that we, as programmers, don't see the system as a whole - we see individual bricks, and have to mentally reconstruct the whole thing. When there are too many of them (because they're too small), and they're so generic and interchangeable, it's not entirely obvious where any particular one fits without looking at many others.

    It's not an insurmountable problem, and one can certainly train oneself to handle it. The problem, as with any "purist" approach, be it OO, FP, or anything else, is that at some point the return on investment turns negative - you spend a lot of time learning to put tiny bricks together, and then actually putting them together, while the problem could be solved by a less experienced programmer using larger and cruder bricks, for cheaper, and pretty much just as well from a pragmatic point of view. The only thing left for your design is that it's more "elegant", but that's not a business goal in and of itself.

    It's important to maintain that balance. Slip too much to one side, and your design becomes an unmaintainable, unreadable mess of tightly coupled spaghetti code. Slip too much to another one, and it's an unmaintainable, unreadable elegant mess of tiny classes with single-liner methods, wired together by IoC, where all bits together produce the desired result, but no-one really knows how. I've seen both. Both are very painful to maintain, debug, and extend (though that said, I usually still prefer the latter - at least it's more amenable to refactoring).
