Unit Testing

The Basics of Unit Testing

Definition 1.0: A unit test is a piece of code (usually a method) that calls another piece of code and checks the correctness of some assumptions afterward. If an assumption turns out to be wrong, the unit test has failed. Here the unit is a method or function.

A unit of work can be described as the collection of actions that take place between invoking a public API and observing a single noticeable end result. This end result can be observed without looking inside the private implementation of the system, through a public API only.

The end result can be one of the following:

  • a return value (only for public APIs that do not return void)
  • a noticeable change to the state or behavior of the system, visible only through public APIs (for example, a state machine that changed its state)
  • a callout to a third-party system (the callout itself is the noticeable end result)

If the idea is to carry out a unit of work and observe something noticeable, why should the unit of work be small? Checking one bigger piece of work takes less time than checking every small piece individually, but that shortcut only makes sense if you are 100% sure that all the small pieces work without any issue.

Definition 1.1: A unit test is a piece of code that invokes a unit of work and checks one specific end result of that unit of work. If the assumption about the end result turns out to be wrong, the unit test has failed. A unit test's scope can span as little as a method or as much as multiple classes.

We do test our code, but manually, by using the product itself, in the development environment of course. That is not a good unit test.

Properties of a Good Unit Test
  • It should be automated and repeatable (it should not require you to change some variable by hand every time before you run it)
  • It should be easy to implement
  • It should still be relevant tomorrow (a unit test should stay valid over time, even as system parameters change)
  • Anyone should be able to run it at the push of a button
  • It shouldn't take too much time to run (you are not exercising production paths that are slow or asynchronous)
  • It should be consistent in its results (the result should not change until you change the test itself)
  • It should have full control of the unit under test (the unit under test should live entirely inside your test environment; nothing outside the test should be able to change it)
  • It should be fully isolated from other tests
  • When it fails, it should be easy to find the problem.
Any test method that does not follow the properties above is an integration test.
Integration Tests
Properties of Integration Tests
  • Not fast
  • Not consistent
  • Uses real system time
  • Uses real file system
  • Uses real database

Let's take the example of a car. When a car breaks down, it's the whole car that doesn't work; the various small parts may each work independently, yet fail when brought together. This is similar to testing our application only through the UI.

Definition: Integration testing is testing a unit of work without having full control over all of it and using one or more of its real dependencies like time, network, database, threads, random number generators etc.

A unit test isolates the unit of work from its dependencies so that its results are consistent and so that the test can easily control and simulate any aspect of the unit's behavior.

What makes unit tests good

Final Definition: A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit test framework. It can be written easily and runs quickly. It's trustworthy, readable, maintainable. It's consistent in its results as long as production code hasn't changed.

A simple unit test example
Create a console application and add a reference to your code in it. Write test classes/test methods in the console application, and call those test methods from the console's Main method.

This approach applies only when you are not using any unit testing framework.
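A minimal sketch of such a console-based harness (the namespace and the stub of the LogAnalyzer class, which appears later in these notes, are included only so the example stands alone):

```csharp
using System;

namespace LogAn.ConsoleTests
{
    // Minimal stand-in for the production class introduced later
    // in these notes, included here only so the sketch compiles.
    public class LogAnalyzer
    {
        public bool IsValidLogFileName(string fileName)
        {
            return fileName.EndsWith(".SLF",
                StringComparison.CurrentCultureIgnoreCase);
        }
    }

    class Program
    {
        static void Main()
        {
            TestIsValidLogFileName_BadExtension_ReturnsFalse();
        }

        // A hand-rolled "test method": it arranges, acts, and then
        // reports pass/fail itself instead of relying on a framework.
        static void TestIsValidLogFileName_BadExtension_ReturnsFalse()
        {
            var analyzer = new LogAnalyzer();
            bool result = analyzer.IsValidLogFileName("file.foo");
            Console.WriteLine(result == false
                ? "PASS: bad extension rejected"
                : "FAIL: expected false");
        }
    }
}
```

Everything a framework normally provides (test discovery, reporting, rerunning a single test) has to be built by hand here, which is exactly the gap the frameworks discussed later fill.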

Test-driven Development

Many people think that the best time to write tests is after the production code is written, but a growing group prefers writing tests before the production code exists. What will you test if there is no code?

But what TDD actually is :

Steps of TDD

  • Write a failing test to prove code or functionality is missing from the end product
  • Make the test pass by writing production code that meets the expectations of your test
  • Refactor your code
  • Write another test

Refactoring means changing a piece of code without changing its functionality. The code still does the same thing, but it becomes easier to maintain, read, debug, and change.
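As a tiny, hypothetical illustration (not LogAn code), both methods below do exactly the same thing; only readability changes:

```csharp
public static class AgeRules
{
    // Before refactoring: the nested if/else obscures a simple rule.
    public static bool IsAdult(int age)
    {
        if (age >= 18)
        {
            return true;
        }
        else
        {
            return false;
        }
    }

    // After refactoring: identical behavior, clearer intent.
    public static bool IsAdultRefactored(int age)
    {
        return age >= 18;
    }
}
```

With tests in place, you can make this kind of change confidently, because the tests prove the behavior didn't move.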

Technically, one of the biggest benefits of TDD nobody tells you about is that by seeing a test fail, and then seeing it pass without changing the test, you’re basically testing the test itself. If you expect it to fail and it passes, you might have a bug in your test or you’re testing the wrong thing. If the test failed and now you expect it to pass, and it still fails, your test could have a bug, or it’s expecting the wrong thing to happen.

The 3 core skills of successful TDD

  • Knowing how to write good tests(this book teaches this)
  • Writing them test-first
  • Designing them well


A First Unit Test

Why NUnit, Why not MSTest
  • Because NUnit has more features; these features make tests more readable and maintainable.
  • Available via Nuget.
Frameworks for Unit Testing
What unit testing framework offer?
  • Ease of implementation. Without a framework we might have needed a console app, a UI, or a web form just to host our tests.
  • Repeatability. Without a framework, unit tests tended to be thrown away and rewritten over time.
  • Coverage. Without a framework, not every part of the logic got covered.

In short, we have been missing a framework for writing, running, and reviewing unit tests and their results.

The LogAn project (log and notification)
The Scenario:

Your company has many internal products it uses to monitor its applications at customer sites. All these products write log files and place them in a special directory. The log files are written in a proprietary format that your company has come up with that can't be parsed by any existing third-party tools. You're tasked with building a product, LogAn, that can analyze these log files and find special cases and events in them. When it finds these cases and events, it should alert the appropriate parties.

In this course, we will learn to write tests that verify LogAn's

  • parsing
  • event-recognition
  • notification abilities

Loading up the Solution
I will be using Visual Studio 2013

The full solution can be found here https://github.com/royosherove/aout2

Or let's begin by creating our own solution, using the downloaded solution as a reference for the project setup.

  • Open Visual Studio
  • ctrl+shift+n to create a new project
  • Choose a Class Library project
  • Project name: "LogAn", Solution name: "ArtOfUnitTesting"
The Solution
Let's begin with our first class:


public class LogAnalyzer
{
    public bool IsValidLogFileName(string fileName)
    {
        if(fileName.EndsWith(".SLF"))
        {
            return false;
        }
        return true;
    }
}

This method has a bug: it returns false instead of true when the fileName ends with ".SLF". This is deliberate, to show what a test looks like when it fails.

The first test is to make sure the method returns true if the filename is valid. Here are the first steps for writing an automated test for the IsValidLogFileName method.

  • Add a new Class Library project to your solution which will contain your tests. Name it "LogAn.UnitTests"
  • Add a new class to that project that will hold your test methods. Name it "LogAnalyzerTests"
  • Add a new method, name it IsValidLogFileName_BadExtension_ReturnsFalse

Here are the three parts of the test method name:

  • UnitOfWorkName: The name of the method or group of methods or classes you're testing
  • Scenario: The conditions under which the unit is tested such as "bad login" or "invalid user" or "good password". You could describe the parameters being sent to the public method or the initial state of the system when the unit of work is invoked such as “system out of memory” or “no users exist” or “user already exists.”
  • ExpectedBehavior: What you expect the tested method to do under the specified conditions. This could be one of three possibilities: return a value as a result (a real value, or an exception), change the state of the system as a result (like adding a new user to the system, so the system behaves differently on the next login), or call a third-party system as a result (like an external web service).

In practice, keep your production code and test code in separate projects.

Now, as a last step for this section, add a reference to the LogAn project in your test project.

Installing NUnit
  • ctrl+q for quick launch
  • Type "Package Manager Console", choose and click enter
  • Choose Package source: "nuget.org", Default project: "LogAn.UnitTests"
  • Type install-package nunit and press Enter. This downloads the NUnit files and adds a reference to NUnit's dll in your project
  • Type install-package nunit.runners. This downloads the NUnit UI runner.

Using the NUnit attributes in your code

NUnit uses an attribute scheme to recognize and load tests

NUnit provides an assembly that contains these special attributes.

NUnit and the NUnit runner identify tests by these attributes:

  • [TestFixture]
  • [Test]

This is how your code would look:


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {
        [Test]
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {

        }
    }
}

Tip: NUnit requires test methods to be public, return void, and accept no parameters. Parameterized tests, which you'll see shortly, are the exception to the no-parameters rule.

Writing your first Test
A unit test comprises three main actions:
  • Arrange (do whatever you need before calling the method under test; for example, create parameter variables and a variable that will hold the expected end result of "Act")
  • Act (call the method under test and store the end result somewhere)
  • Assert (compare the end result of "Act" with what is expected and decide whether the test passed)

Here's the code:


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {
        [Test]
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName("filewithbadextension.foo");

            //Assert
            Assert.False(result);
        }
    }
}

The Assert class
  • The Assert class has static methods
  • It is the bridge between your code and the NUnit framework
  • If the arguments passed to Assert turn out to be different from what you expect, NUnit marks the test as failed and alerts you
  • You can optionally pass a message for Assert to include in the alert
  • Some big ones:
    • Assert.True()
    • Assert.False(): syntactic sugar for asserting a false value
    • Assert.AreEqual(expectedObject, actualObject, message): tests whether the values of expectedObject and actualObject are the same
    • Assert.AreSame(expectedObject, actualObject, message): tests whether both arguments reference the same object
  • Try not to provide the message; in most cases your test method name should already describe exactly what happened if the test failed
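The value-versus-reference distinction between AreEqual and AreSame is easy to see in a sketch (the values below are arbitrary):

```csharp
using NUnit.Framework;

[TestFixture]
public class AssertExamples
{
    [Test]
    public void AreEqual_ComparesValues_AreSame_ComparesReferences()
    {
        // Two string objects with the same characters:
        string expected = "abc";
        string actual = string.Concat("ab", "c");

        // Passes: the values are equal.
        Assert.AreEqual(expected, actual);

        object a = new object();
        object b = new object();

        // Passes: both arguments are the very same instance.
        Assert.AreSame(a, a);

        // Assert.AreSame(a, b) would fail here: two distinct
        // instances, even though their types are equal.
    }
}
```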
Running your first test with NUnit

Find "nunit.exe" and run it

The test failed. Let's fix the code and make the test pass.



namespace LogAn
{
    public class LogAnalyzer
    {
        public bool IsValidLogFileName(string fileName)
        {
            if (!fileName.EndsWith(".SLF"))
            {
                return false;
            }
            return true;
        }
    }
}

Adding some Positive Tests

Notice this is not TDD. Since we are writing tests after the code, we need to come up with many scenarios in which our method may fail. Bad extensions are flagged, but what about the different varieties of good ones? Will they all pass? The code below gives an idea:


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {
        [Test]
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName("filewithbadextension.foo");

            //Assert
            Assert.False(result);
        }

        [Test]
        public void IsValidLogFileName_GoodExtensionLowerCase_ReturnsTrue()
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName("filewithgoodextension.slf");

            //Assert
            Assert.True(result);
        }

        [Test]
        public void IsValidLogFileName_GoodExtensionUpperCase_ReturnsTrue()
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName("filewithgoodextension.SLF");

            //Assert
            Assert.True(result);
        }
    }
}

Before we even run our tests in the NUnit UI, notice that it already picked up the new tests when we built the solution.

Notice that our test fails for IsValidLogFileName_GoodExtensionLowerCase_ReturnsTrue. This shows that you must not only test bad file extensions, but also the different varieties of a correct one. Let's fix the code.



namespace LogAn
{
    public class LogAnalyzer
    {
        public bool IsValidLogFileName(string fileName)
        {
            if (!fileName.EndsWith(".SLF", System.StringComparison.CurrentCultureIgnoreCase))
            {
                return false;
            }
            return true;
        }
    }
}

From red to green: Passing the Tests
  • If all pass, green. If one fail, red
  • Tests will also fail when an exception occurs
  • Speaking of exceptions, you’ll also see later in this chapter a form of test that expects an exception to be thrown from some code, as a specific result or behavior. Those tests will fail if an exception is not thrown.
Test Code Styling
Notice that the tests I am writing look different from usual code:
  • Test names are allowed to be long
  • Arrange/Act/Assert are separated out
Refactoring to parameterized tests

NUnit supports parameterized tests. To use this feature, take one of the existing test methods that looks exactly like the others.

  • Replace the [Test] attribute with the [TestCase] attribute.
  • Extract all the hardcoded values the test is using into parameters for the test method
  • Move the values you had before into the parentheses of the [TestCase(param1, param2, ...)] attribute.
  • Rename this test method to a more generic name
  • Add a [TestCase(...)] attribute on this same test method for each of the tests you want to merge into this test method, using the other test's values
  • Remove the other tests so you're left with just one test method that has multiple [TestCase] attributes


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {
        [Test]
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName("filewithbadextension.foo");

            //Assert
            Assert.False(result);
        }

        [TestCase("filewithgoodextension.SLF")]
        [TestCase("filewithgoodextension.slf")]
        public void IsValidLogFileName_ValidExtensions_ReturnsTrue(string file)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.True(result);
        }
    }
}

Let's also refactor the negative test into the same method; the final code is below. Notice the new test method name, the extra parameter, and the changed Assert statement.


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {
        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }
    }
}

Let's run this test now.

Readability: With this one test method, you got rid of multiple versions of similar tests. But notice how the name of the test has become so generic that it's hard to figure out what makes the difference between valid and invalid.

Maintainability: notice that we now have just one test to maintain, but take care that it doesn't grow into one big, hard-to-read test.

Setup and teardown
We have seen how to write and run a unit test. Now we will look at how to set up the initial state for each test and how to remove any garbage left behind by your test.

When a test ends, the state should be as if the test had never run, so all the instances of data it created have to be destroyed.

Each test should be independent

Attributes to be used for this

  • [SetUp]: placed on a method, it causes NUnit to run that method before each of the tests in your class.
  • [TearDown]: this attribute denotes a method to be executed once after each test in your class has executed


using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {

        private LogAnalyzer m_analyzer = null;

        [SetUp]
        public void Setup()
        {
            m_analyzer = new LogAnalyzer();
        }

        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }

        [TearDown]
        public void Teardown()
        {
            //the line below is included to show an anti pattern.
            //This isn't really needed. Don't do it in real life.
            m_analyzer = null;
        }
    }
}

  • Think of the setup and teardown methods as constructors and destructors for the tests in your class
  • You can have only one [SetUp] and one [TearDown] method per test class

In real life, do not use the setup method to initialize instances. Use factory methods instead.
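A sketch of the factory-method style (assuming the LogAnalyzer class from these notes is referenced); initialization stays visible inside each test but lives in one place:

```csharp
using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerFactoryStyleTests
    {
        // No [SetUp]: each test calls the factory explicitly,
        // so the reader sees exactly where the object comes from.
        [Test]
        public void IsValidLogFileName_BadExtension_ReturnsFalse()
        {
            LogAnalyzer analyzer = MakeAnalyzer();

            bool result = analyzer.IsValidLogFileName("file.foo");

            Assert.False(result);
        }

        // Factory method: the one place to change if construction
        // of the unit under test ever gets more complicated.
        private static LogAnalyzer MakeAnalyzer()
        {
            return new LogAnalyzer();
        }
    }
}
```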

We also have [TestFixtureSetUp] & [TestFixtureTearDown] for a class level setup and teardown. But this means that you might be sharing state between tests.

We almost never, ever use [TearDown], [TestFixtureSetUp], and [TestFixtureTearDown] in unit tests. If you do, you're very likely writing an integration test in which you touch the filesystem or a database and need to clean up the disk or the DB after the tests.

The only time it makes sense to use a [TearDown] method in unit tests, is when you need to "reset" the state of a static variable or singleton in memory between tests.
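For example, if production code kept shared static state (the StaticCounter class below is hypothetical, not part of LogAn), a [TearDown] reset keeps the tests independent:

```csharp
using NUnit.Framework;

// Hypothetical production class with shared static state.
public static class StaticCounter
{
    public static int Count { get; private set; }
    public static void Increment() { Count++; }
    public static void Reset() { Count = 0; }
}

[TestFixture]
public class StaticCounterTests
{
    [TearDown]
    public void Teardown()
    {
        // Without this reset, increments from one test would
        // leak into the next and break test independence.
        StaticCounter.Reset();
    }

    [Test]
    public void Increment_WhenCalledOnce_CountIsOne()
    {
        StaticCounter.Increment();

        Assert.AreEqual(1, StaticCounter.Count);
    }
}
```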

In case you want to do integration tests, do that in a separate project

Checking for expected exceptions
One common testing scenario is making sure that the correct exception is thrown from the tested method when it should be.

Let's assume that your method should throw an ArgumentException when you send in an empty filename. If your code doesn't throw the exception, your test should fail.


using System;
namespace LogAn
{
    public class LogAnalyzer
    {
        public bool IsValidLogFileName(string fileName)
        {
            if (string.IsNullOrEmpty(fileName))
            {
                throw new ArgumentException("filename has to be provided");
            }

            if (!fileName.EndsWith(".SLF", System.StringComparison.CurrentCultureIgnoreCase))
            {
                return false;
            }
            return true;
        }
    }
}

Let's first see how to do this in a way we shouldn't use.


using NUnit.Framework;
using System;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {

        private LogAnalyzer m_analyzer = null;

        [SetUp]
        public void Setup()
        {
            m_analyzer = MakeAnalyzer();
        }

        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }

        [Test]
        [ExpectedException(typeof(ArgumentException), ExpectedMessage = "filename has to be provided")]
        public void IsValidFileName_EmptyFileName_ThrowsException()
        {
            m_analyzer.IsValidLogFileName(string.Empty);
        }

        //Factory
        private LogAnalyzer MakeAnalyzer()
        {
            return new LogAnalyzer();
        }
    }
}

See the code again

  • The expected value is provided in the attribute
  • There is no explicit Assert call (the [ExpectedException] attribute contains the assert within it)
  • The end result is not used

How it works: the attribute tells the test runner to wrap the execution of the whole method in a big try-catch block and fail the test if nothing was caught. The problem is that you won't know which line threw the exception, and your test will pass even if some other line threw the same exception type you expected; the logic you intended to test never actually got tested. So try not to use it.

Instead use Assert.Catch<T>(delegate)


using NUnit.Framework;
using System;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {

        private LogAnalyzer m_analyzer = null;


        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }

        [Test]
        public void IsValidFileName_EmptyFileName_ThrowsException()
        {
            m_analyzer = MakeAnalyzer();

            var expected = Assert.Catch<Exception>(()=>m_analyzer.IsValidLogFileName(String.Empty));

            StringAssert.Contains("filename has to be provided", expected.Message);
        }

        //Factory
        private LogAnalyzer MakeAnalyzer()
        {
            return new LogAnalyzer();
        }
    }
}

There are a lot of changes here:

  • [ExpectedException] is gone.
  • Assert.Catch<T> is called with a lambda that invokes the method under test
  • Assert.Catch returns the instance of the exception object that was thrown inside the lambda. This allows you to later assert on that exception object
  • StringAssert makes testing with strings more readable
  • StringAssert.Contains checks only that the message contains the string you are looking for; Contains (rather than exact equality) is used because the full message may include extra characters such as line breaks

Another way of asserting on exceptions is NUnit's fluent syntax; documentation is available at nunit.org.
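A sketch of that fluent style using NUnit's Throws constraint (assuming the same LogAnalyzer as above; check the NUnit documentation for the constraints your version supports):

```csharp
using System;
using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerFluentTests
    {
        [Test]
        public void IsValidFileName_EmptyFileName_ThrowsException()
        {
            var analyzer = new LogAnalyzer();

            // The delegate is expected to throw an ArgumentException
            // whose message contains the given text.
            Assert.That(() => analyzer.IsValidLogFileName(string.Empty),
                Throws.TypeOf<ArgumentException>()
                      .With.Message.Contains("filename has to be provided"));
        }
    }
}
```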

Ignoring Tests
Use the [Ignore] attribute on tests that are broken because of a problem in the test, not in the code. This should be rare!


using NUnit.Framework;
using System;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {

        private LogAnalyzer m_analyzer = null;

        [Ignore]
        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }

        [Test]
        public void IsValidFileName_EmptyFileName_ThrowsException()
        {
            m_analyzer = MakeAnalyzer();

            var expected = Assert.Catch<Exception>(()=>m_analyzer.IsValidLogFileName(String.Empty));

            StringAssert.Contains("filename has to be provided", expected.Message);
        }

        //Factory
        private LogAnalyzer MakeAnalyzer()
        {
            return new LogAnalyzer();
        }
    }
}

Setting Test Categories
The [Category] attribute lets you group tests (such as "Fast Tests" below) so a test runner can run or exclude an entire category at once.


using NUnit.Framework;
using System;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerTests
    {

        private LogAnalyzer m_analyzer = null;

        [Category("Fast Tests")]
        [TestCase("filewithgoodextension.SLF", true)]
        [TestCase("filewithgoodextension.slf", true)]
        [TestCase("filewithbadextension.foo", false)]
        public void IsValidLogFileName_VariousExtensions_ChecksThem(string file, bool expected)
        {
            //Arrange
            LogAnalyzer analyzer = new LogAnalyzer();

            //Act
            bool result = analyzer.IsValidLogFileName(file);

            //Assert
            Assert.AreEqual(expected, result);
        }

        [Test]
        public void IsValidFileName_EmptyFileName_ThrowsException()
        {
            m_analyzer = MakeAnalyzer();

            var expected = Assert.Catch<Exception>(()=>m_analyzer.IsValidLogFileName(String.Empty));

            StringAssert.Contains("filename has to be provided", expected.Message);
        }

        //Factory
        private LogAnalyzer MakeAnalyzer()
        {
            return new LogAnalyzer();
        }
    }
}

Testing results that are system state changes instead of return values

This is the second type of end result that we expect

Definition: State-based testing(also called state verification) determines whether the exercised method worked correctly by examining the changed behavior of the system under test and its dependencies after the method is called.

If the system acts exactly the same as it did before, then you didn't really change its state, or there's a bug.


using System;
namespace LogAn
{
    public class LogAnalyzer
    {
        public bool WasLastFileNameValid { get; set; }

        public bool IsValidLogFileName(string fileName)
        {
            WasLastFileNameValid = false; // Changes the state of the system
            if (string.IsNullOrEmpty(fileName))
            {
                throw new ArgumentException("filename has to be provided");
            }

            if (!fileName.EndsWith(".SLF", System.StringComparison.CurrentCultureIgnoreCase))
            {
                return false;
            }

            WasLastFileNameValid = true; //Changes the state of the system
            return true;
        }
    }
}

WasLastFileNameValid keeps the state of the system. You can't simply test this functionality by writing a test that gets a return value from a method.

First, you have to identify the unit of work you're testing. Is it in the new property called WasLastFileNameValid? Partly. It's also in the IsValidLogFileName method, so your test should start with the name of that method because that's the unit of work you invoke publicly to change the state of the system. The following code shows a simple test to see if the outcome is remembered.


[Test]
public void IsValidFileName_WhenCalled_ChangesWasLastFileNameValid()
{
    LogAnalyzer la = MakeAnalyzer();

    la.IsValidLogFileName("badname.foo");

    Assert.False(la.WasLastFileNameValid);
}

Below is a better test that tests both the positive and the negative


[TestCase("badfile.foo", false)]
[TestCase("goodfile.slf", true)]
public void IsValidFileName_WhenCalled_ChangesWasLastFileNameValid(string file, bool expected)
{
    LogAnalyzer la = MakeAnalyzer();

    la.IsValidLogFileName(file);

    Assert.AreEqual(expected, la.WasLastFileNameValid);
}

Some naming conventions

  • ByDefault can be used when there's an expected return value with no prior action.
  • WhenCalled or Always can be used for the second or third kind of end result (a state change or a third-party call) when the change or call happens with no prior configuration; examples: Sum_WhenCalled_CallsTheLogger, Sum_Always_CallsTheLogger
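A ByDefault-style test against the LogAn code above might look like this (assuming WasLastFileNameValid is false before any call, which is the default for a bool auto-property):

```csharp
using NUnit.Framework;

namespace LogAn.UnitTests
{
    [TestFixture]
    public class LogAnalyzerDefaultTests
    {
        [Test]
        public void WasLastFileNameValid_ByDefault_ReturnsFalse()
        {
            // No prior action ("ByDefault"): assert straight
            // on the freshly constructed object's state.
            var analyzer = new LogAnalyzer();

            Assert.False(analyzer.WasLastFileNameValid);
        }
    }
}
```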

So far, so good. But what happens when the method you’re testing depends on an external resource, such as the filesystem, a database, a web service, or anything else that’s hard for you to control? And how do you test the third type of result for a unit of work - a call to a third party? That’s when you start creating test stubs, fake objects, and mock objects, which are discussed in the next few sections.