Parallel test execution within the context of MSTestv2, NUnit3 and VSTest.Console


Keeping automated tests within a reasonable performance range is no trivial task and I'm always looking for ways to reduce the total duration of automated tests. Recently, I did some reading on the capabilities of a few different tools with regards to parallel execution of tests. These tools included MSTestv2 and NUnit3, both run with vstest.console.

While building a test case to compare the two, I created two projects: one using MSTestv2 (v1.3.2) [DataTestMethod]s designed to take advantage of DynamicData, and another using NUnit3 (v3.10.1). Below is my MSTestv2 test.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Collections.Generic;
using System.Threading;

[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)] // 0 means use as many workers as possible

namespace mstest_parallel
{
    [TestClass]
    public class UnitTest1
    {
        private static IEnumerable<object[]> MyTestData()
        {
            for (int i = 0; i < 10; i++)
            {
                yield return new object[] { i * 1000 };
            }
        }

        [DataTestMethod]
        [DynamicData(nameof(MyTestData), DynamicDataSourceType.Method)]
        public void TestMethod1(int testVal)
        {
            Thread.Sleep(testVal);
        }
    }
}

Now, MSTestv2 has the ability to parallelize tests to some degree. Here are the parallelization options it offers at a high level:

  • Class level - each thread executes a [TestClass] worth of tests. Within the [TestClass], the test methods execute serially
  • Method level - each thread executes a [TestMethod]
  • Custom - users can provide a plugin implementing the required execution semantics (Not yet supported).

These capabilities can be referred to as fine-grained parallelization features, and the number of worker threads can be configured via the assembly attribute [assembly: Parallelize(Workers = n, Scope = ExecutionScope.ClassLevel)] or a .runsettings file. More details here
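
For reference, here is a minimal sketch of the .runsettings form, assuming the documented MSTest Parallelize element (the worker count is an arbitrary example):

<RunSettings>
  <MSTest>
    <Parallelize>
      <Workers>4</Workers>
      <Scope>MethodLevel</Scope>
    </Parallelize>
  </MSTest>
</RunSettings>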

Keep in mind these framework-level options are different from vstest.console's parallel options, described next.

Vstest.console's parallel options focus on what its authors refer to as coarse-grained parallelization, meaning multiple test containers can be run at once. So if you run vstest.console from the command line and specify several test containers, e.g., mytestlib1.dll mytestlib2.dll, vstest.console will run the test containers simultaneously, up to the maximum possible on the machine or the maximum configured.
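
A sketch of such an invocation (the assembly names are placeholders; /Parallel is vstest.console's switch for enabling parallel execution across containers):

vstest.console.exe mytestlib1.dll mytestlib2.dll /Parallel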

Why do we care? I want to squeeze as much performance out of the chosen test framework and runner as possible, ideally without having to write any threading code myself. Unfortunately, MSTestv2 (and v1) lacks a very specific and important parallelization option: the ability to run tests in parallel which are configured with [DynamicData] and/or [DataRow]. Evidence for this not being supported exists here, here and here. For our situation, this means our test method that utilizes [DynamicData] to generate unique cases will have all of the test cases per [TestMethod] run serially rather than in parallel, making the majority of our parallelization efforts moot.

With parallelization turned on at the method level for the assembly our proof of concept MSTestv2 project showed the following results when using vstest.console to execute:

Total tests: 11. Passed: 11. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 47.4547 Seconds

Not that great. The majority of these tests are being run one by one.

NOTE: vstest apparently reports the 'base' test method as a test case when running MSTestv2 tests which is why the test count is 11 instead of 10. My understanding is that this does not actually run, but contains the overall results for all the tests associated with this [TestMethod].

Let's check out NUnit3. This framework offers exactly what we're after: running parameterized tests that have their cases generated at runtime in parallel.

The fundamental difference between the frameworks here is that MSTestv2 considers test cases via [DynamicData] and [DataRow] to all be under a single [TestMethod], whereas NUnit3 considers each generated test case to be its own test.

Here's our test method adjusted to be used with NUnit3:

using NUnit.Framework;
using System.Collections.Generic;
using System.Threading;

namespace nunit_parallel
{
    [TestFixture]
    [Parallelizable(ParallelScope.Children)] // parameterized tests are considered child tests
    public class UnitTest1
    {
        private static IEnumerable<object[]> MyTestData()
        {
            for (int i = 0; i < 10; i++)
            {
                yield return new object[] { i * 1000 };
            }
        }

        [Test]
        [TestCaseSource(nameof(MyTestData))]
        public void TestMethod1(int testVal)
        {
            Thread.Sleep(testVal);
        }
    }
}

The attribute specified at the top of the class, [Parallelizable(ParallelScope.Children)], tells NUnit to parallelize every test method in the class, and it considers each of our generated test cases a unique test method. Additionally, I am letting NUnit decide my [LevelOfParallelism]. By default, NUnit3 uses the processor count or 2, whichever is greater.
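
If you would rather pin the worker count yourself, NUnit also exposes an assembly-level attribute for it; a minimal sketch (the value 8 is an arbitrary example):

using NUnit.Framework;

// Cap NUnit's worker thread count instead of letting it decide.
[assembly: LevelOfParallelism(8)]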

Now, running the same tests using vstest.console and the same number of test cases per [Test]:

Total tests: 10. Passed: 10. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 17.5729 Seconds

Predictably, the time to run the suite of tests is drastically shorter.

One important part of this setup is how these tests are run. Vstest.console handles running tests from each test framework for us as long as we specify the necessary test adapter. For both NUnit3 and MSTestv2, the adapters are available as NuGet packages. Adding these NuGet packages to your test project will cause the necessary libraries to be copied out upon build, and vstest.console will find them for you so long as they are in the same directory as the test containers you specify.
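
If the adapter assemblies end up somewhere else, vstest.console can also be pointed at them explicitly; a sketch (the path is a placeholder):

vstest.console.exe mytestlib1.dll /TestAdapterPath:path\to\adapters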

I hope this helps explain some of the options regarding parallelizing tests, especially parameterized ones. I learned a ton while studying this stuff and I am sure this will continue to be the case while writing these types of test suites. If you believe anything I have written above needs to be corrected, please reach out to me.

Useful sites and apps I've come across while studying web frontend development and web server configuration


Web development is hard. There are seemingly infinite ways to make things work, many of them outdated and no longer useful to the developer because of the incredible speed with which the web development ecosystem pushes forward.

This speed has its pros and cons, and one of the drawbacks I've found while spending the last seven months learning web development is that sometimes finding up-to-date resources for learning is difficult.

What follows is a smattering of resources I've found incredibly useful while beginning to study web development - both front- and back-end.

I've stayed away from learning frameworks, focusing instead on fundamentals. For me personally, just learning a framework like Angular or React without a solid understanding of the basics means long, frustrating debugging sessions while attempting to interpret what was messed up in the framework. Oftentimes, this issue is exacerbated by the fact that the framework assumes I know the basics of whatever the context is and glosses over certain details when reporting error information.

Please note: I am in no way affiliated with any of the following resources.

Frontend basics

Interneting Is Hard
This site offers a beautifully made tutorial about using HTML and CSS to build modern websites. It starts from the perspective of someone who knows nothing about HTML or CSS. If you're looking for a rock solid place to start this is it.

MDN Web Docs
This is most likely the best reference for general web development knowledge and has replaced W3Schools in my digital library.

CSS Diner
Practice CSS selectors while selecting fruits and veggies! CSS selectors are very important to building clean, maintainable CSS.

Browser feature compatibility

Can I Use
I quickly learned not all browsers are created equal. Can I Use shows just how different they are, feature by feature.

CSS Grid

Grid By Example
While I spent a large amount of time learning Flexbox, CSS Grid is just as cool if not a bit easier to use (in my novice opinion) from a responsive design perspective. The collection of examples on the site is very cool.

Templating engines

I do not consider these to be frameworks akin to Angular or React, but they helped with some of the pain points of writing plain HTML, such as writing list items. I do not have a strong opinion on any of them yet, having only spent some time learning and using a couple in small projects.

The idea behind these types of tools is helping componentize blocks of HTML by 'templating' out logical groups (like a 'card' of data) and allowing for data to be inserted into specific areas of the template at runtime. This way, a single logical grouping of HTML can be used over and over again while the actual markup only needs to exist once (while avoiding the overhead of larger frameworks that also do this).

It's also worth noting that Express has support for some template engines, although that is outside the scope of this post.

Browser usage stats

Straightforward, no-nonsense data about browser usage, common screen resolutions and device usage. If you're looking to build a web app for a more specific audience this site could help you decide what to focus on first.

Design

I am not very artistic. The resources here mostly helped me learn some ways web design can be done. One crucial concept I learned is that there is nothing wrong with having someone dedicated to designing your site for you, as the web developer, to then build. This might mean acquiring a pre-built theme and then modifying it as needed (like I did with this blog's theme) or working with a full-time web designer. I found that I had intertwined web development and web design as a single profession more than they need to be. Web design is a full-time job and people study it just as hard as programmers study programming. If you want something that looks professional and don't want to attempt to learn both good design and web programming yourself, these artists will do the design right, freeing your mind to focus on building the vision. That being said, understanding some of their processes and tooling will help you communicate with them, which can be a major benefit.
Figma
7 Rules for Creating Gorgeous UI - Part 1
7 Rules for Creating Gorgeous UI - Part 2

Provisioning

It's one (very important) thing to know how to build the frontend of a website, but hosting it was rather foreign to me as well. There is a smorgasbord of tools available to do this, many of them offering more or fewer options based on how much of the server you want to manage yourself.

Here are some options:

Azure
Coming from a .NET background I thought I'd be more comfortable learning to host sites on Azure. I've hosted a few very small APIs on Azure in the past, but never any actual web sites.

To be honest, the amount of options in Azure was overwhelming to me as a novice. More power to you if you want to go this route or the Amazon Web Services route, but I found the complexity more than I wanted to take on at this stage of learning.

Heroku
At no point in time did I need to remote into a machine and run terminal commands to configure a site while using Heroku. Their web UI is more than enough to get a basic site hosted and they offer some really nice tooling around configuring Continuous Integration among some other great scaling options, and there is a free tier.

Digital Ocean
In contrast to Heroku, Digital Ocean's configuration requirements are much more technical; however, that is not a bad thing. In fact, coming at setting up a web server with almost no knowledge of how to do it led me to use Digital Ocean for several sites I've done so far, specifically because I wanted to learn more about how this is done.

Aside from 'empty' VPSs (Virtual Private Servers - they call them Droplets), they have One-click apps, which are VPSs with a bunch of software pre-installed and configured on them. These are amazing, especially for a beginner like me. This reduces the overhead for getting a server up and running drastically, allowing me to focus on the specifics of my server configuration (like setting up sites in Nginx). I've used the NodeJS and Ghost One-click apps so far and might check out the GitLab one in the future.

In addition to fantastic infrastructure provisioning, DigitalOcean has some of the best tutorials and documentation on web server configuration I've come across.

DNSMap
One thing that burned me several times was that making DNS changes is not instant. There were a few times I was repointing a domain to different name servers while adjusting the site config (in Nginx). This was confusing because browsers would show the site as misconfigured (especially if SSL certs were involved) even though, in reality, DNS propagation of my changes had not yet completed. There are a bunch of variations on this tool, but it helps to know when these types of changes have 'completed'. These tools aren't perfect, but I haven't found another way to be notified when propagation has completed.
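
If you'd rather check propagation by hand, querying a specific resolver directly is one rough way to do it; a sketch using dig (the domain and resolver are placeholders):

dig +short example.com @8.8.8.8

When the new records show up in the answers from the resolvers you care about, propagation is effectively done for those resolvers.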

Performance

Google's PageSpeed Insights
You may have already heard of PageSpeed Insights since it is now factored into Google search rankings for mobile. I found this tool useful after a first pass on a site, as it helps identify issues whose fixes I found easy to learn and implement. It also separates mobile and desktop scores, which can be very helpful if you're trying to tailor your site to a specific audience. Additionally, if you're looking for something more intensive, take a look at Lighthouse.

Nginx

In my brief time learning about how to configure web servers Nginx has been fairly straightforward to use. There are docs and articles galore about how to do very specific things.
Nginx Amplify
Once I got some sites up and running I looked around for services to monitor them. I didn't want to write my own tooling or parse log files, and landed on Nginx Amplify. You can connect a few Nginx instances to Amplify for free and the monitoring tools are great. Included in the free 'tier' is the ability to send Slack messages and emails when metrics go over thresholds you configure. Thanks to this tool I was notified of a bot making thousands of requests per second to one of my sites just a few minutes after the bot started. A quick update to the firewall stopped the issue. Not only does this show monitoring metrics, but it will also use static analysis on your nginx.conf files to find common issues and show info about SSL configs and security advisories for the running version of Nginx. Again, tools like this are super useful to newbies like me, as it helped give an awareness of the system I might not have been able to grasp otherwise.

NginxConfig.io - This tool helps kickstart writing nginx config files and server blocks

Web server configuration tools

Laboratory
While learning about secure server configurations I came across this Firefox addon. It will build your Content Security Policy while recording you using your site. This, of course, assumes your site is stable and serving expected data. It should also be noted this is an experimental addon.

SSL Labs
Again, being a newbie I wanted to find tools that could tell me what I was messing up while learning. This site focuses on grading your SSL implementation in detail and is a good jumping-off point for seeing what makes up a secure SSL implementation.

Image optimization

TinyPng
Cloudinary
Responsive Image Breakpoints Generator
Of all the frontend concepts I'm learning, I find this the most challenging. I didn't discover Cloudinary until after sinking a bunch of time into learning how to shape or serve variations of my images depending on the client parameters. In fact, using newer features like sizes and srcset requires knowing how all the parameters that configure them work, which is a decent hill to climb by itself.

However, once I started to understand how sizes and srcset work in tandem with media queries I loved Cloudinary all the more. The work it does on the fly for the developer is incredible.

TinyPng is listed because it's important to understand how responsive image sizing (by itself) is different from compression-based image optimization. TinyPng (which supports more than PNGs) focuses on optimization of an existing image rather than only resizing it. While Cloudinary offers this service as well, it's probably overkill for smaller sites. TinyPng will optimize your images for free up to a certain limit. I found their .NET API super easy to use to send up a folder of images for processing.
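
As a rough illustration of that workflow, here is a minimal sketch using TinyPng's published .NET client (the TinifyAPI NuGet package; the API key and folder names are placeholders):

using System.IO;
using System.Threading.Tasks;
using TinifyAPI;

class CompressFolder {
    static async Task Main () {
        Tinify.Key = "YOUR_API_KEY"; // placeholder - use your own TinyPng key

        foreach (var path in Directory.EnumerateFiles ("images", "*.png")) {
            // Upload each image, let TinyPng compress it, then save the result.
            var source = Tinify.FromFile (path);
            await source.ToFile (Path.Combine ("optimized", Path.GetFileName (path)));
        }
    }
}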

That's my set of sites and tools I've come across so far while learning web development. I hope some of them are useful to you especially if you're on a similar excursion!

Visualizing FitNesse results with d3.js


Ever heard of FitNesse? If not, imagine what the proud father of SpecFlow would look like and how it might be used.

If you imagined HTML tables filled with wiki-style syntax to perform assertions on the data placed in them you're on the right track.

A small example from the FitNesse documentation:

fitnesse-example-1

And the backing markup that creates the table:

wikimarkup-1

This seems relatively simple for straightforward tests. While maybe not an ultra-modern way of writing tests, enabling testers who aren't necessarily able to write code with the ability to write automated tests is a good thing.

These tests can be executed in the browser and are backed by code in a similar fashion to SpecFlow. Users don't need to install anything at all in order to get started; they just go to a URL.

Recently, I came across a scenario where a small suite of these tables was being used to verify data coming from an external service. These tests were necessary, as I need to make sure data arrives in the expected format and with the expected values.

The problem was some of these FitNesse tables had over 90 columns. This reduces the readability of the tests to almost zero, since the only way someone reading the results can find a specific failure is manually searching the results or using CTRL+F on the results page. Any web page with a 90+ column table isn't navigable in this format.

Be that as it may, these types of tests have been around for a while and simply porting them to something other than FitNesse wasn't really an option. This poses a problem, as these types of tests hold a key role in ensuring the provider services our apps integrate with are working properly. The search was on for a better way to visualize these results.

D3.js

Enter d3.js. I encourage you to check out the link, but essentially d3.js specializes in rendering complex graphs of data in HTML (often as SVGs). After doing some research on non-tabular ways to present tabular data, I chose a sunburst diagram.

Visualizing these large FitNesse results tables as a zoomable sunburst diagram allows readers to view all tables at once without needing to scroll. The sunburst diagram also offers a high-level view of the results ("Did any tests fail?") as well as the ability to zoom in to a specific table for more in-depth analysis. Placing additional details for results at specific layers on the diagram eases readers into the mass amount of info presented with these results.

Ideally, the highest level uses a unique color for each table and red/green colors for each database column to show fail/pass, respectively. Hovering over a 'table' section reveals database table-specific info, and hovering over a specific section for a single test shows fail/pass details plus the database table and column under test.

Here's an example of the generated diagram:

sunburst-2

First, the FitNesse test results page is scraped for these database column test results and the proper JSON structure is built. My technique was nothing magical - just some Regex and selectors to get at the necessary HTML elements with the test result data I needed.
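
As a rough sketch of that scraping step (not my actual code; HtmlAgilityPack and the 'pass'/'fail' cell classes are assumptions), pulling the result cells in C# might look like:

using System.IO;
using HtmlAgilityPack;

var doc = new HtmlDocument ();
doc.LoadHtml (File.ReadAllText ("FitNesseResults.html"));

// Grab every table cell marked as a passing or failing assertion,
// assuming FitNesse tags result cells with 'pass'/'fail' CSS classes.
var resultCells = doc.DocumentNode.SelectNodes (
    "//td[contains(@class,'pass') or contains(@class,'fail')]");

According to the d3.js sunburst examples, the necessary JSON for a sunburst diagram would look like this: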

{
    "name": "name",
    "children": [
        {
            "name": "name",
            "children": [
                {
                    "name": "name",
                    "children": [
                        ...and so on
                    ]
                }
            ]
        }
    ]
}

Each 'layer' inside the children property represents another layer in the sunburst diagram. The first layer is the root and contains overarching information about the system under test. The second layer of children nodes each contain a database table with information about that table. The third layer of children contains database column information specific to the parent table and test result information. Referring to the image shown above, one can see how this JSON structure is rendered.

(It's worth mentioning that JSON is not the only format d3.js accepts; I just found it to be the easiest for this experiment.)

The last step is generating the sunburst diagram using the scraped and parsed FitNesse results data. This ended up being very similar to the zoomable sunburst example. Here's a simplified example of what that may look like:

svg.selectAll("path") /get all the path elements in the SVG.  At thsi point it's empty so there will only be one.  /github.com/d3/d3-selection
    .data(nodes) /nodes is the parsed FitNesse results JSON data.  Give the data to d3 and begin the process of generating the SVG.
    .enter().append("path") /iterates nodes, creating a new path element for each.  /github.com/d3/d3-selection#selection_enter
    .on("click", click) /wire events for each create path element that contains a node.  Click will cause the diagram to zoom in on the clicked node
    .on("mouseover", mouseover) /show info about the cell in question
    .on("mouseleave", mouseleave) /clear shown info
    .attr("d", arc) /determine the measurements of the generated path for this node
    .style("fill", function(d){
        if(d.data.passed != null){ /set the color of the arc based on whether or not the node is a test and passed/fail or the node is another non-test piece of data like a db table
            if(d.data.passed = true){
                return testPassedColor;
            } else{
                return testFailedColor;
            }
        }
        if(d.data.name === 'root'){
            return rootColor;
        }
        return colors((d.children ? d : d.parent).data.name); /ensure unique colors are used for tables
    })

example

The dashboard pieces shown in the gif, like the breadcrumb trail and combobox, are designed to further ease the process of finding the info readers need. As a result, readers can delve directly into the results instead of having to scroll through them. The comboboxes used are from select2.js and include autocomplete functionality, so users can start typing the name of a database table or column they would like to see and filter results as needed.

Results

All of this is aimed towards making test results easier to digest for consumers. Ultimately, readability is one of the most important features of test suites. If users have a hard time analyzing test results they will start to ignore them as it becomes too tedious to work through them over time.

Expansion

FitNesse uses a fairly generic structure for test tables, so reusing the code that creates JSON from these results on other test results would theoretically be straightforward.

By way of a disclaimer, I am in no way a FitNesse expert. If a more efficient alternative exists that would allow us to obtain the raw data from the FitNesse wiki test results tables as JSON, then that should be used to eliminate a step in the process. I did come across a URL parameter that can be used to return the test results as XML, but it appeared to return the test results inside a 'table' of sorts rather than just the raw data.

It's also worth noting that while this proof of concept code is geared towards creating a sunburst diagram, any type of visualization is possible. If another type of diagram is desired, I highly recommend taking a look at the d3.js gallery to get ideas.

New features in MSTest v2


It's been a long time since MSTest was in discussions of modern testing frameworks. However, here we are in 2017 and MSTest is getting active updates once again, due in part to it being open-sourced. Let's take a look at what some of these new and exciting features are.

Please note, this post will cover features that are currently in pre-release for the MSTest v2 NuGet package.

NOTE: Remember to use the pre-release flag with the NuGet package you will be using:
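
For example, from the Visual Studio Package Manager Console (assuming the standard MSTest v2 package IDs, MSTest.TestFramework and MSTest.TestAdapter):

Install-Package MSTest.TestFramework -Pre
Install-Package MSTest.TestAdapter -Pre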

Visual Studio 2015+ is also recommended; your mileage may vary with 2013.

This beta release is NOT the first public iteration of MSTest v2, but it contains some of the features discussed below. Having stronger support around parameterized test cases is very important and those features are in the 1.2 beta; version 1.1.13 was the first v2 release.

Release notes for each version can be found here.

On a post about the future of MSTest v2 and mstest.exe, user (and presumed employee of Microsoft) 'Abhitej_MSFT' clarifies that:

MSTest V2 tests are not supported with “mstest.exe”. In the TFS build template the Test Runner should be “Visual Studio Test Runner”. I hope your definition does not require the legacy testsettings. Do let us know if you hit any issues.


I have yet to come across an MSTest v2 feature that NUnit has not already had for quite some time. However, the features discussed here are still very useful if you decide to use MSTest over another test framework.

For those who want to see it all, the repository for MSTest can be found on GitHub.

Features

DynamicData #141

Parameterized testing has long been available in MSTest via the DataRowAttribute, which looks like this:

[TestMethod]
[DataRow (1, 2, 3)]
[DataRow (4, 5, 6)]
public void MyParameterizedTest (int a, int b, int c) {
    // perform assertions on a, b and c
}

The resulting two tests each use one of the 'rows' of data. However, what if we want to reuse these same values across tests? As it would be inefficient to duplicate these values on every test where they are used, v2 offers the DynamicDataAttribute:

private static IEnumerable<object[]> ReusableTestData =>
    new List<object[]> {
        new object[] { 1, 2, 3 },
        new object[] { 4, 5, 6 }
    };

[TestMethod]
[DynamicData (nameof (ReusableTestData))]
public void MyParameterizedTest (int a, int b, int c) {
    // perform the same assertions as before.
}

Now this test data can be used on any number of tests without having to duplicate the actual data, making the process cleaner. Those of you who are familiar with NUnit may know this as the TestCaseSourceAttribute.

Methods can also be used as test data sources. To do so, simply use the overload of the DynamicDataAttribute constructor that takes a DynamicDataSourceType.Method enum value. By default, the framework assumes the name of the dynamic data member passed in is a Property.
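
A minimal sketch of the method-based form (MyTestDataMethod is a hypothetical static method returning IEnumerable<object[]>):

[TestMethod]
[DynamicData (nameof (MyTestDataMethod), DynamicDataSourceType.Method)]
public void MyParameterizedTest (int a, int b, int c) {
    // Same assertions as the property-based version above.
}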


Custom Test Data for Parameterized Tests

Taking DynamicData one step further is useful if the parameters used in your tests are a bit more complex. By creating your own test data source attribute, you can load up this data for your tests to consume while keeping the tests themselves nice and clean. Here is the end result:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
[CarTestData]
public void ATestUsingACar (string model, int year) {
    var carUnderTest = new Car { Model = model, Year = year };
    // perform assertions on the car object.
}

// Configuration for the 'CarTestData' attribute:
class CarTestDataAttribute : Attribute, ITestDataSource {
    public IEnumerable<object[]> GetData (MethodInfo methodInfo) {
        return new List<object[]> {
            new object[] { "Ford", 1990 },
            new object[] { "Nissan", 2017 }
        };
    }

    public string GetDisplayName (MethodInfo methodInfo, object[] data) {
        if (data != null) {
            return string.Format (CultureInfo.CurrentCulture, "{0} ({1})", methodInfo.Name, string.Join (",", data));
        }
        return null;
    }
}

public class Car {
    public string Model { get; set; }
    public int Year { get; set; }
}

Creating a new Attribute that implements the MSTest v2 ITestDataSource interface allows us to set up our test objects outside of the test, provide a variable amount of them for parameterized tests, and reuse them in other tests without duplication.

This can be pretty powerful in some scenarios; however, I am not entirely in love with ITestDataSource.GetData() returning an IEnumerable<object[]> instead of an IEnumerable<object>, IEnumerable<T>, or some other shape. This makes it somewhat cumbersome to use this mechanism to return non-primitives.
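
To illustrate, even a single complex object must be boxed into an object[] row before MSTest will hand it to the test; a sketch reusing the Car type from above:

public IEnumerable<object[]> GetData (MethodInfo methodInfo) {
    // Each Car still has to be wrapped in an object[] of length one.
    yield return new object[] { new Car { Model = "Ford", Year = 1990 } };
}

[TestMethod]
[CarTestData]
public void ATestUsingACar (Car car) {
    // Perform assertions directly on the car object.
}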

ITestDataSource.GetDisplayName determines what is displayed when the test is executed via the target runner (VSTest if running in Visual Studio), allowing us to determine which cases of a parameterized test failed.


Assert.That

With the introduction of a focused extension point for assertion logic, MSTest v2 now offers That, a static property on Assert which returns the Assert instance and provides an easy jumping-off point for assertion extension methods.

Here's an example:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class MyAssertExtensions {
    public static void CountIsGreaterThan<T> (this Assert assert, IEnumerable<T> objs, int num) {
        int actualCount = objs.Count ();
        if (actualCount > num) {
            return;
        }
        throw new AssertFailedException ($"Expected {nameof(objs)} count to be greater than {num}, but was {actualCount}.");
    }
}

This means tests are much more readable in their assertions:

[TestClass]
public class MyTests {
    [TestMethod]
    public void ATest () {
        var aList = new List<int> { 1, 2, 3, 4 };
        Assert.That.CountIsGreaterThan (aList, 0);
    }
}

Assert.That can be used to replace and/or supplement calls to Assert.AreEqual() and Assert.IsTrue(). By themselves, these are such broad calls that they can often lead to confusion when debugging or reading tests. In my experience this is especially true with Assert.IsTrue(), due to its default failure output of Assert.IsTrue failed. Assert.AreEqual() attempts to mitigate confusion by reporting the actual and expected values.

NOTE: If you are not going the route of extension methods, a quick way to help solve this problem is to pass the name of the property/method under test in the failure message. An example:

public void MyTest () {
    var aList = new List<int> { 1, 2, 3 };

    // nameof() is a C#6 feature, but is not required to make this work.
    Assert.IsTrue (aList.Count > 3, $"{nameof(aList.Count)}");

    // failure prints: Assert.IsTrue failed. Count
}

And an example of enhancing Assert.IsTrue with this type of approach:

using System;
using System.Linq.Expressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class MyAssertExtensions {
    public static void IsTrue<T> (this Assert assert, T instance, Expression<Func<T, bool>> assertionExpression) {
        if (assertionExpression.Compile ().Invoke (instance)) { return; }
        throw new AssertFailedException ($"Assertion failed for expression {assertionExpression}");
    }
}

// Example usage:
public void IsTrue_Extended_Test () {
    var aList = new List<int> { 1, 2, 3, 4 };
    Assert.That.IsTrue (aList, list => list.Count > 4);

    // failure prints: Assertion failed for expression 'list => (list.Count > 4)'.
}

Taking this a bit further we can create a fluent syntax that lends itself to more extensibility in the future:

[TestMethod]
public void IsGreaterThan_Redux () {
    var aList = new List<int> { 1, 2, 3, 4 };
    Assert.That.For (aList).IsTrue (list => list.Count > 4);

    // failure output: Assertion failed for expression 'list => (list.Count > 4)'.
}

public static class MyAssertionExtensions {
    public static For<T> For<T> (this Assert assert, T instance) {
        return new For<T> (instance);
    }
}

public class For<T> {
    private readonly T _instanceUnderTest;

    public For (T instanceUnderTest) {
        _instanceUnderTest = instanceUnderTest;
    }

    public void IsTrue (Expression<Func<T, bool>> assertionExpression) {
        if (assertionExpression.Compile ().Invoke (_instanceUnderTest)) { return; }
        throw new AssertFailedException ($"Assertion failed for expression '{assertionExpression}'.");
    }
}

Clearly, creating extension methods with Assert.That heavily mitigates the readability problem. For an additional point of reference, see NUnit's Assert.That constraint syntax.

While this is not a comprehensive overview of the new features in the pipeline for MSTest v2, I have covered those that strike me as being particularly useful.

For more information on MSTest v2, check out the repository and release notes mentioned above.

MSTest v2 is definitely making MSTest a more relevant testing framework. The concepts covered here are the aspects I was most interested in as a current user of NUnit. While I myself have not yet made the switch from NUnit to MSTest v2, these new features have made considering the possibility much more favorable. Hopefully these features, combined with more coming down the pipeline, will continue to improve developers' view of MSTest compared to other testing frameworks.

Takeaways from The Art of Unit Testing


I recently finished Roy Osherove's The Art of Unit Testing: with Examples in C# (2nd edition). Going into the book, I had been writing unit tests and doing automated testing in C# for a few years. I was eager to see if I had developed bad habits, as well as to fill gaps in my understanding of the fundamentals. Osherove assures his readers that the book contains content geared towards beginners as well as devs with unit testing experience. He quickly gets into the meat of unit testing, and I learned some valuable techniques I've already begun using and wanted to share.

Part 3 of the book (titled 'The Test Code') was what I was most interested in and where I believed I had the most gaps in terms of practical knowledge. In this part he covers how to organize your tests, along with the commonalities of good unit tests and unit test hierarchies. This section is broken up into two good-sized chapters covering not only writing and organizing tests, but how to properly run them so their usefulness is maximized.

I specifically liked the section on using inheritance with test classes to minimize test duplication across similar classes. This was something I had tried with NUnit before but ended up abandoning. In the past I had used generics with TestFixtures, with parameters in the TestFixture attribute containing the types I wanted to use as class-level constraints so I could reuse the tests. Here's what that looked like:

[TestFixture (typeof (IMove), typeof (DefaultMoveService))]
[TestFixture (typeof (IMovement), typeof (DefaultMovementService))]
[TestFixture (typeof (ICharacter), typeof (DefaultCharacterService))]
[TestFixture (typeof (ICharacterAttributeRow), typeof (DefaultCharacterAttributeService))]
public class GeneralServiceTests<TModel, TSut>
    where TModel : IModel
    where TSut : ICrudService<TModel> {
    // Tests...

    // Method to create instances of the types used in the class test cases
    private static ICrudService<TModel> CreateCrudServiceSut (IRepository<TModel> repository) {
        return (ICrudService<TModel>) Activator.CreateInstance (typeof (TSut), repository);
    }
}

This was really clunky, and I found the Resharper Unit Test Explorer in Visual Studio would often have trouble showing the tests; I'd be faffing around with it for far too long just trying to get it to show them all. Aside from that, I found it painful to make changes to these tests since a large portion of what is being tested lies outside of the class and outside of the actual test methods. In order to not break tests I had to make sure all these types had the same constructor requirements and method signatures. Definitely not ideal.

Osherove shows examples using an abstract base test class and abstract factory methods (for the necessary test fakes) with inheritance to create a much more natural hierarchy than what I initially had. Instead of using NUnit's parameterized TestFixture attributes, I could make the base test class abstract, then make my factory methods abstract and have them return the generic type specified on the class. This helped with my above example, where I had no good way of modifying CreateCrudServiceSut() for the types used in the tests without causing an unknown number of tests to break at runtime. With this approach I could be much more straightforward. Also, since each type would get its own test class, the Resharper Unit Test Explorer extension would stop complaining about multiple instances of the same test class; each would now show up as its own test class.

In the book, Osherove calls this the 'Abstract Driver Test Class' pattern or the Abstract 'fill in the blanks' Test Driver Class pattern. More info on it can be found here (Unfortunately, it looks like the images on the page can't be retrieved anymore, but the descriptions are still present).

Now I could refactor these tests and break them out into their own classes where I could implement more robust factory methods. Keep in mind, the actual tests I want to reuse are still located in the base test class; I simply override the factory methods in the derived test classes. This way I do not have to rewrite the same tests, and later I can add tests specific to the derived classes without mucking up the base class.

Now my example test classes look like this (I'm just showing a single derived test class):

[TestFixture]
public class CharacterServiceTests : GeneralServiceTests<ICharacter> {
    protected override ICrudService<ICharacter> CreateCrudServiceSut (IRepository<ICharacter> repository) {
        return new DefaultCharacterService (repository);
    }

    // Tests inherited from GeneralServiceTests class
}

public abstract class GeneralServiceTests<T>
    where T : IModel {
    // Tests...

    // Method to create instances of the types used in the class test cases
    protected abstract ICrudService<T> CreateCrudServiceSut (IRepository<T> repository);
}

This is just one of the many concepts taught in Osherove's book that really helped me. I highly recommend checking it out if you're looking to improve on your unit test designs and fundamentals.
