Update 18/10/2012: as of version 1.42, released on 15th Oct 2012, NCrunch requires a licence to use. Although the tool is expanding in terms of functionality and there is no doubt about the developer's commitment to the project, you may find it difficult to justify 159 USD for the benefit of automated test execution.
In February 2012 I was given the opportunity to step onto a project that was nearing its first release. Given it was the first bit of greenfield development I'd done for a while (read: yonks), I was very keen; it also gave me a chance to try out NCrunch properly.
NCrunch is a Visual Studio add-in that sits in the background running your unit tests, providing visual feedback in a ye-olde dockable panel.
After checking out the project's code base, I was fairly sceptical to begin with. Having had a great deal of experience with Murphy's Law, I expected NCrunch to implode catastrophically, that I'd see red, uninstall it and ultimately never touch it again, ever.
Not the case. I was pleasantly surprised to see it run through the majority of the tests quickly, and some of them even passed! I learned that the failing tests fell over because of external dependencies. The short story: if your tests have external dependencies, e.g. files read or written, you have to include the paths to them in NCrunch's configuration.
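To make the external-dependency point concrete: NCrunch builds and runs tests from its own workspace copy of your solution, so a file a test reads relative to the project directory won't be there unless you tell NCrunch to copy it across. Below is an illustrative per-project configuration fragment; the `.ncrunchproject` file layout and the `AdditionalFilesToInclude` setting name are taken from later NCrunch versions, so treat the exact names as assumptions (the same setting is also reachable through NCrunch's configuration window rather than by hand-editing XML).

```xml
<!-- MyProject.Tests.ncrunchproject — illustrative fragment; element names are assumptions -->
<ProjectConfiguration>
  <Settings>
    <!-- Copy these test data files into NCrunch's workspace so that
         file-reading tests still find them at their relative paths -->
    <AdditionalFilesToInclude>
      <Value>TestData\**.*</Value>
    </AdditionalFilesToInclude>
  </Settings>
</ProjectConfiguration>
```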
Every time you start Visual Studio, NCrunch has to crunch through all your tests, which you might think would be a bit of a pain (you can judge that for yourself). I didn't find it a big deal: I usually start a session working out where I got to last time, which gives NCrunch enough time to get on with it, and because it's a background operation you don't notice it other than the visual feedback.
The thing I really like is the test-coverage visualisation. Every line of code is given a little circle (or rectangle or line; it's configurable) coloured to indicate success, failure and coverage: green if the line is covered by a passing test, red if it's covered by a failing test, and black if it's not covered by any test. There are other colours, and combinations thereof, that mean other things which I haven't fully worked out yet, though experience so far suggests they identify long-running lines of code. This feature by itself is fantastic, but one improvement would be some sort of solution-wide report of code-coverage metrics so you can drill into different areas of the solution (unless I'm just missing it).
I think my workflow has improved as a result of using NCrunch. Letting it run the tests for me means I don't have to manually rerun them every time I make a change; I can stick to the code and stay in the zone for longer.
Debugging – set a breakpoint, step through, work out what's broken, fix it, rerun – you still have to do all of this, but it's a hell of a lot quicker. If a test fails and you can't fathom why, just kick off the debugger by right-clicking the offending test in the NCrunch Tests panel and choosing the debug option.
I guess one scenario where you might not want to run NCrunch is if your tests aren't really unit tests, or at least fast-executing integration tests. I know that crunching the tests in some of the systems I've worked on in the past would have killed my machine, largely because they are top-to-bottom integration tests that take 45+ minutes to run in full. As a counter to this, it is possible to tell NCrunch to ignore certain tests.
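As a sketch of how ignoring a slow test can look (the XML shape is borrowed from later NCrunch versions and the test name is hypothetical; the same effect is available interactively by right-clicking a test in the Tests window and choosing to ignore it):

```xml
<!-- MyProject.Tests.ncrunchproject — illustrative fragment; names are hypothetical -->
<ProjectConfiguration>
  <Settings>
    <!-- Stop NCrunch continuously running this long top-to-bottom integration test -->
    <IgnoredTests>
      <NamedTestSelector>
        <TestName>MyCompany.IntegrationTests.FullSystemRunThrough</TestName>
      </NamedTestSelector>
    </IgnoredTests>
  </Settings>
</ProjectConfiguration>
```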
Also, as NCrunch is a Visual Studio extension, it isn’t a replacement for your normal command-line test run that is usually executed as part of a pre-commit confidence build or release build, for example, on your Continuous Integration server. I see this really as an in-the-zone development-time IDE tool.
Knowing NCrunch is whirring away in the background has given me confidence in the changes I make a lot earlier, and a lot more interactively, than I've ever achieved with manual test runners. If I make a change, within seconds and with no manual intervention I know whether I've broken anything (assuming my tests are good enough, of course). I won't be going back to running my tests manually if I can help it. That said, I'm not sure it would fit into everyone's workflow; I don't consider myself a control freak, but I understand it takes a little leap of faith to let go of running tests manually and trust your tools. I suspect those that do don't look back.