- Salesforce Development Posts by Matt Lacey

Three Reasons Why You Should Write Proper Tests

Posted 2013-04-12

As anybody who has worked with me will know, I've not always been a fan of the enforced test methods that need to be written when working with Salesforce. I've written as much on this blog, but over time opinions can change, and even though mine have, I don't believe in shuffling previous thoughts aside; I chose long ago to leave old blog posts here because honesty and transparency are key to many, if not all, of life's encounters.

Why Test? Because Good

The subtitle above was one of the notes I made for this post, but I've let it stand simply because a few minutes after tweeting about finding a blog topic for today, the ineffably brilliant FAKEGRIMLOCK happened to be retweeted by someone I very much admire for their work, Adam Seligman.



Because awesome. That pretty much sums up my thoughts on the subject these days. Embrace test methods and you'll wonder why you ever complained about them.

Why Good? Because

I previously lamented writing code to test other code, and although some aspects of it still feel particularly chore-like, I have learned to appreciate the benefits of doing so. It's taken a few years of learning the hard way, but I got there in the end and if this helps you get there faster then I'll consider the mission accomplished.

1. Knowing Your Code Works

Every programmer who's ever written code knows that their code works. They'll tell everybody so, and even when everybody turns around and points to the contrary, the response will more often than not be: "How does that not work? It worked for me".

If you're truly honest with yourself you'll be able to recall at least one occasion when you've made a small, insignificant and, above all else, completely safe change to some code... only to have it blow up in your face the moment somebody else tries to use it. Chances are that having made that change, and knowing it was safe, you didn't even test it. You just responded to that email, or closed that case, with a comment along the lines of "It was just a minor discrepancy, all sorted". I've done it, and I know many others who have. I've watched junior developers make such a change and look at me incredulously when I've asked if they have tested it, only to see shock on their faces moments later when I force them to do so. I force them to do so because I've made that mistake, and I still do.

Test methods can help. If you write proper, valid tests that not only reach the minimum coverage but actually assert results, you can make a change and click a button; you'll soon find out whether or not the change you made was truly safe.
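To make the distinction concrete, here's a sketch of what I mean by a test that asserts results; the class and values are hypothetical, but the shape is what matters:

    @isTest
    private class DiscountCalculatorTest {
        @isTest
        static void testTenPercentDiscount() {
            // Hypothetical class and values, purely for illustration.
            Decimal result = DiscountCalculator.apply(100.00, 10);

            // The assertion is what makes this a real test rather than
            // a coverage generator: if the logic changes, the test fails.
            System.assertEquals(90.00, result,
                'A 10% discount on 100.00 should yield 90.00');
        }
    }

Run something like that after every "completely safe" change and the button click really will tell you something.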

2. Rubber Duck Problem Solving

True, rubber ducking really means describing a problem in such a way that you stumble across the solution without actually getting feedback from anywhere, but writing test coverage can be considered a similar exercise. Last week I was writing test coverage for a Visualforce controller and for the life of me couldn't work out why I wasn't getting the expected result. I was mimicking the process I used on the page itself, yet I kept encountering an unexpected error. It turned out that when I was using the UI, I was taking an extra step: more specifically, I was setting a custom field with the name Active__c to true, but doing so outside of the page. I bet you can't guess what the problem was. I wasn't setting the Active__c field to true on my test record.

I wasn't doing so because the field in question wasn't on the Visualforce page, and it was only after being perplexed by the test method failure that I realised that my page was not built as it should be; I had discovered a fundamental flaw in my user interface by writing code that didn't even use the interface.
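For the record, the fix in the test itself was trivial; something along these lines, with the sObject and controller names changed for illustration:

    @isTest
    static void testControllerAction() {
        // Hypothetical sObject and controller names; Active__c is the
        // culprit from the story above, and must be set on the test record.
        Account record = new Account(Name = 'Test Account', Active__c = true);
        insert record;

        MyPageController controller = new MyPageController(
            new ApexPages.StandardController(record));
        controller.doAction();
        // ...assertions on the expected result would follow.
    }

One missing field assignment in the setup was the whole mystery, and spotting it exposed the missing field on the page as well.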

Writing code to test a particular component will help you determine whether that component's implementation is correct. You're explaining your problem to the hypothetical yellow duck.

3. Test Code Can Be More Comprehensive Than Testing

You could argue that this is the same as point number one, but I beg to differ. There I was discussing a lack of testing altogether; here I'm referring to testing in a different manner to how you would otherwise test by hand.

Often when writing test methods for a controller you will find yourself calling various methods in a random order; you may, for instance, be calling all of the setter and getter methods just to get coverage up to the required level. The thing is, you're trying to cover everything, every single line of code. Chances are that when you test the UI you will make the classic programmer mistake of testing something by using it exactly how it is intended to be used, and it's only when you're trying to teach an autonomous machine to do the same that you will make a mistake and do something wrong. By something wrong, I mean using the system in a way it was not designed to be used.

I'll never forget a particular project whereby users were somehow creating duplicate records in the system and it took the best part of a year to figure out the cause: they were pressing a button twice. We never pressed that button twice because we knew we didn't need to; regardless of whether the application appeared to be working, we simply knew that it was.
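A test that deliberately mimics that double press is easy to write, and would have caught the bug in minutes rather than months; the controller and object names here are hypothetical:

    @isTest
    static void testDoubleSubmitDoesNotDuplicate() {
        // Hypothetical controller; simulate a user pressing the button twice.
        OrderController controller = new OrderController();
        controller.submit();
        controller.submit();

        // The 'unrealistic' second call is exactly what real users did.
        System.assertEquals(1, [SELECT COUNT() FROM Order__c],
            'Submitting twice should not create duplicate records');
    }

The second call looks redundant to anyone who knows how the page is meant to be used, which is precisely why it belongs in a test.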

When you write test code you often perform actions in a non-realistic manner, trying to cover every branch in a conditional or every type of input in a string; but just because the test isn't realistic, that doesn't mean it's not something that might happen in the real world.

Test code often invokes methods and actions in a different order to the 'correct' order, but in doing so it can highlight mistaken assumptions about what is correct.

One Way to Make Life Easier

Quick Disclaimer: This blog post has circulated farther than usual, and so is being seen by people who don't work on the Salesforce platform. For those who don't, the following hint may well seem like a hack, an abomination or plain laziness; it is specific to the platform, which requires 75% coverage and often involves try/catch blocks because the end user can make system changes that cause things to fail beyond the developer's control. 90% of these catch blocks simply display a message, and rather than modify logic to test whether a message works (something I know won't fail) I prefer to leave these uncovered but minimise their impact on my test coverage score.

One gripe I still have with test coverage is that it's just that, coverage. Of course there is no way to verify logical results unless the tests are written specifically to do so (and to that end, I encourage you to write your tests with this in mind), and coverage is not a good indicator of functionality. I tend to use try/catch blocks a lot, even in places where I think an exception will never happen, and that can be detrimental when trying to achieve the required code coverage.

I do, however, have a quick tip for those in a similar boat. Coverage on the platform is calculated per line, which means that if you swap this:

    try {
        // some code
    } catch (Exception e) {
        // a single line of code that sends a user message
    }

for this:

    try {
        // some code
    } catch (Exception e) { /* the same code inline */ }

you'll start to see those coverage numbers rising, simply because the former variant takes three times as many lines for the exception you'll never cover as the latter does. I don't encourage cheating, but this is a viable pain-reduction tactic for when you've been a little over-cautious.
