My Issues with Parallel Test Execution in .Net

In a previous post I described how to get Selenium Grid up and running. The main reason for doing that is to speed up test execution. Well, that is my main reason; you may want easy testing across browsers, operating systems, or devices. At any rate, to get the speed we have to run the tests in parallel. Getting the Grid up was the easy part; running the tests in parallel is the hard part. So how do we run tests in parallel? Warning: this may be more of a rant than a solution.

If we were using MBUnit, it would be simple. Parallel execution is built into the DNA of MBUnit, but the MBUnit project is on hiatus right now and its direction seems to be up in the air. We currently use NUnit as our test framework at work, but it doesn't support parallel execution. The next version of NUnit is supposed to support it, but it has been promised for a long time now. I use MSTest a lot, but I have never run it in parallel. I heard it is possible, but I also heard there are issues with it, and since Visual Studio 2013 now uses VSTest, which doesn't have parallel support, I am not sure what this means for MSTest. I guess I'm rambling as I am rehashing another post, "NUnit, OK Maybe". The point is parallel test execution is a problem that should have been solved by now across the test frameworks in the .NET community. Hopefully, there will be changes this year.

Anyway, back to the topic at hand, I decided to look into PNUnit. PNUnit brings parallel execution to NUnit, but the documentation claims it is a solution for running the same test suite in parallel across multiple instances, while I need to run individual tests across Nodes to increase the speed of test execution, so I am not sure PNUnit is a viable solution. Another problem with PNUnit is that you have to configure the tests you want to run in parallel. That is not a maintainable solution at first glance, as having to update configuration files every time I add a test would get old real quick.

So, even though MBUnit may or may not be dead, I will look into it more. Like I said, at work we currently use NUnit, so we would have to convert a lot of tests, but it may be possible to automate much of the conversion. The main differences in code between the two are the dependencies (usings), the class and method attributes, and the assertions. So, we could probably convert 90% of the code with a plain find and replace or a quick code hack that does something similar. We would also have to do some work to get reporting to produce the same report on the build server. With that in mind I will look more into MBUnit; even if it is dead, the source is available on GitHub.
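For what it's worth, here is a minimal sketch of the kind of find-and-replace hack I have in mind. The replacement map is an assumption; the real differences between the two frameworks would drive it, and you would obviously want everything under source control before letting anything rewrite files in place.

using System.Collections.Generic;
using System.IO;

class TestFrameworkConverter
{
    // Assumed mapping; fill in the real attribute/assertion differences you find.
    static readonly Dictionary<string, string> Replacements = new Dictionary<string, string>
    {
        { "using NUnit.Framework;", "using MbUnit.Framework;" }
    };

    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";

        // Walk every C# file under the test project and apply the replacements in place.
        foreach (string file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
        {
            string source = File.ReadAllText(file);

            foreach (KeyValuePair<string, string> pair in Replacements)
            {
                source = source.Replace(pair.Key, pair.Value);
            }

            File.WriteAllText(file, source);
        }
    }
}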

While I am checking out MBUnit I will also have a look at the TPL and the async keyword in .NET 4.5. My test framework already has concepts built in that would allow me to create my own test runner; I just have to learn the new tools in .NET for parallel and asynchronous coding. This would be a MAJOR undertaking and not one I want to do right now, but I will if it's what I have to do to speed these browser tests up.
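To give a feel for where I would start, here is a toy sketch using the TPL. It is nowhere near a real test runner, and the test delegates are made up for the example.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelRunnerSketch
{
    static void Main()
    {
        // Pretend each Action is a test; a real runner would discover these via reflection.
        var tests = new List<Action>
        {
            () => Console.WriteLine("Login test on thread " + Environment.CurrentManagedThreadId),
            () => Console.WriteLine("Search test on thread " + Environment.CurrentManagedThreadId),
            () => Console.WriteLine("Checkout test on thread " + Environment.CurrentManagedThreadId)
        };

        // Parallel.ForEach blocks until every test delegate has finished.
        Parallel.ForEach(tests, test => test());
    }
}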

Another issue I have now is that I share a single driver across all tests in a fixture. I create a driver when the fixture is created and cache it so the individual tests can use it without having to recreate it. This is great when the tests run one after the other. Now I need to change it so that each test has its own driver and they won't step on each other's toes. This isn't that hard, as I have central setup methods for my fixtures and tests, so the change only has to occur in a couple of spots (I try to keep it SOLID; I can't stress this enough). So, I will move the driver creation from the fixture setup to the test setup and I am ready for parallel browser automation, or am I?
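In NUnit terms the change looks roughly like this. FirefoxDriver is just a stand-in for however your framework actually creates drivers (mine goes through a central creation point).

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class MyPageTests
{
    private IWebDriver webDriver;

    [SetUp]
    public void TestSetUp()
    {
        // This used to live in the fixture setup; now every test gets its own driver.
        this.webDriver = new FirefoxDriver();
    }

    [TearDown]
    public void TestTearDown()
    {
        this.webDriver.Quit();
    }

    [Test]
    public void HomePageShouldLoad()
    {
        this.webDriver.Navigate().GoToUrl("http://localhost/");
    }
}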

Having to fix the driver setup issue made me look closer at the things I am sharing across tests. Getting parallel tests going is going to take a little more than flipping a couple of switches. Did I mention this is the hard part? The lesson learned from the shared driver is to ensure your tests are idempotent during parallel execution. Hell, tests should be idempotent period, regardless of parallel execution. If you share anything across tests, it cannot affect the outcome of other tests regardless of where or how they run. Whether they are in separate assemblies, in the same test fixture, or executing on different Grid Nodes, a test cannot create side effects that change the results of other tests. If they do produce side effects, you will begin to distrust your tests, as they will start failing for reasons that have nothing to do with code changes. When you run tests in parallel it becomes a little tricky to locate where you violate the idempotence rule.

In my case I have to do an in-depth review of shared data, functionality, and state. So far my biggest issue is a static class that I use to store various values during test execution. I call it TestSession and it's basically a cache that allows me to save time by creating certain objects once; it also gives me a way to pass state from one test step to the next. TestSession keeps things separated by prefixing the session keys so that tests don't share data, but there are fixture session keys that are shared among tests. Also, at the end of fixture execution we clear the entire TestSession object. So, if the state cached in a fixture session key is changed by one test, it may have an effect on another test. I am pretty sure that I don't do this, but there is nothing that prevents me from doing it, so I have to look through the code. Also, if the session is cleared before all of the tests have completed, there may be problems. So, I have to rethink TestSession completely. I may create an immutable session for the fixture and a mutable session for the tests.
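Something along these lines is what I am picturing. The names and shapes are made up, and a real version would also need to think about thread safety (a ConcurrentDictionary in the test session, for example).

using System.Collections.Generic;
using System.Collections.ObjectModel;

// Built once at fixture setup; tests can read it but never change it.
public class FixtureSession
{
    private readonly IReadOnlyDictionary<string, object> values;

    public FixtureSession(IDictionary<string, object> values)
    {
        this.values = new ReadOnlyDictionary<string, object>(new Dictionary<string, object>(values));
    }

    public object Get(string key)
    {
        return this.values[key];
    }
}

// One instance per test, so parallel tests cannot step on each other's state.
public class TestSession
{
    private readonly Dictionary<string, object> values = new Dictionary<string, object>();

    public void Set(string key, object value)
    {
        this.values[key] = value;
    }

    public object Get(string key)
    {
        return this.values[key];
    }
}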

Well, that's it for now. I am starting to bore myself :). This is more of a post about my issues with parallel test execution and less of a solution post. I guess this is what you would call filler just to keep my blogging going. If you made it this far and have some ideas on how to solve parallel test execution in .NET, please leave me a comment.

Setup Selenium Grid for .NET

Purpose

I need to run my browser automation tests in parallel as they are painfully slow, and I need to be able to provide developers with fast feedback on the state of their changes. Really, what is the point of spending time and effort creating tests that no one wants to run because they take hours to finish? In my opinion it was vital to get this going in order to get a return on the time and effort invested in our browser-based test development.

We are a .Net shop and for our situation the best option for parallel browser testing is Selenium Grid, a Java platform. Quick overview: Selenium Grid uses a Hub to distribute tests to Nodes, which actually run the tests in the selected browser. Below I will explain how I got the Grid up and running.

Setting Up

Setup is pretty easy. All you have to do is download the Selenium Server standalone JAR file from http://selenium-release.storage.googleapis.com/index.html. You will have to copy this file to all the machines that you will use as Hubs and Nodes. One quick note: you have to have the JRE (Java Runtime Environment) set up on each machine you want to use as a Hub or Node.

The Hub

If you have the JRE setup and on your system environment path, you can simply start the server with this command from the location of your Selenium Server standalone JAR file:

java -jar selenium-server-standalone-2.39.0.jar -port 4444 -role hub -nodeTimeout 600

I am using the 2.39.0 version of the server (make sure you use the correct name for your JAR file). This command starts the server on port 4444 with the Hub role and a node timeout of 600 seconds. You can tuck this away in a bat file for easy access.

You can verify the Hub is listening by viewing it in a browser. If you are on the Hub machine, you can just go to localhost, like so: http://localhost:4444/grid/console. This will present you with a page where you can view the configuration of the server.

To stop the Hub you just send it a command:

http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer

Adding Nodes

Having a Hub and no Node is just plain stupid. So, we need to add a Node to the Grid so that the Hub can pass tests to it. The Hub will determine which Node to send a test to based on the properties requested by the test, so we have to inform the Hub what properties the Node supports. There are a couple of ways to do this, but I am going to use the command line to configure the Node. The way of the Ninja would be to use a JSON file, but I gave up on that after I couldn't get it to recognize the JSON following about 30 minutes of research and trying. Actually, you can configure the Hub with JSON too, so you may want to look into this option; it makes your bat files cleaner, and as you add more Nodes it may cut down on duplication in your bat files too.

NOTE: If you are going to use IE or Chrome, you should add the path to the drivers to your system PATH or you may see an error like this:
The path to the driver executable must be set by the webdriver.ie.driver system property; for more information, see http://code.google.com/p/selenium/wiki/InternetExplorerDriver. The latest version can be downloaded from http://selenium-release.storage.googleapis.com/index.html

To start the Node, run this command from the directory containing your JAR file (and don't forget to update the file name/version appropriately).

java -jar selenium-server-standalone-2.39.0.jar -role webdriver -browser "browserName=internet explorer,version=8,maxInstances=1,platform=WINDOWS" -hubHost localhost -port 5555

This starts the Node and registers it with the Hub. You can also add this to a bat file. Again, you can verify it started by opening the Grid console, where you will see the Node's browsers in the Browser tab: http://localhost:4444/grid/console

Running Tests

To run a test you use the RemoteWebDriver and pass it the DesiredCapabilities for the browser, which tells the Hub what kind of Node can satisfy the test.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

DesiredCapabilities capabilities = DesiredCapabilities.InternetExplorer();
capabilities.SetCapability(CapabilityType.Platform, "WINDOWS");
capabilities.SetCapability(CapabilityType.BrowserName, "internet explorer");
capabilities.SetCapability(CapabilityType.Version, "8");
IWebDriver webDriver = new RemoteWebDriver(new Uri("http://localhost:4444/wd/hub"), capabilities);

Note: the code above is not best practice because it is not SOLID. If your test code isn't SOLID, you need to change it to use a factory, IoC, or whatever you need to do to remove the dependence on concrete drivers and their supporting DesiredCapabilities. I will leave that as an exercise for you until I publish my framework code :).

This may be a big change to your tests, as you would have to replace every place you use a concrete WebDriver with the RemoteWebDriver. If you code against IWebDriver instead of a concrete driver and have a single point for the creation of WebDrivers (see the note in the code above), it isn't that bad, as you are just adding another way to create an IWebDriver.
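As a rough illustration (not my actual framework code), that single creation point might look something like the sketch below. The hub URL and the useGrid flag are assumptions you would pull from configuration.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Remote;

public static class WebDriverFactory
{
    public static IWebDriver Create(bool useGrid, Uri hubUri)
    {
        if (useGrid)
        {
            // Run on the Grid: the Hub picks a Node that matches these capabilities.
            DesiredCapabilities capabilities = DesiredCapabilities.Firefox();
            return new RemoteWebDriver(hubUri, capabilities);
        }

        // Run locally against a concrete driver.
        return new FirefoxDriver();
    }
}

Tests only ever ask the factory for an IWebDriver, so switching between local and Grid execution becomes a configuration change instead of a code change.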

Conclusion

Well, that's it. The biggest hurdle, IMHO, is just writing your code in a manner that makes it easy to switch WebDriver instances. Now I have to figure out how to run tests in parallel, oh boy!

A .Net Developer’s Adventure in Minecraft Modding

Why Mod Minecraft

My kids love Minecraft. Actually, love is too weak a word. They are always asking me to get this mod and that skin, and they are just sooo passionate about this game. They watch YouTube videos of mod developers showing off their new mods, and it occurred to me that I could probably make a mod. I mean, I am a software engineer. It's just Java. Mind you, I haven't touched Java in many years, but it's object-oriented programming; how different can it be from C#? So, I told my kids what I wanted to do and that I would need their help, and they were very excited. This gives me a chance to discover something they are into and I get to introduce them to programming… win-win. And the adventure begins.

Java Development Environment

The first order of business is to get a Java Development environment up and running. I decided on Eclipse as my IDE as it comes highly recommended and has an awesome community. Below is what I installed.

Java JDK (Java Development Kit) – I downloaded JDK 7 – http://www.oracle.com/technetwork/java/javase/downloads/index.html

Java JRE (Java Runtime Environment) – this actually came with the JDK and I already had it installed.

Eclipse – I downloaded the standard – http://www.eclipse.org/downloads/

While at it I decided to round out my Dev Rig with JUnit for testing and a private GitHub account for source control (I will open it up to the public when I have something that won't crash and burn). You will also see later in this post that I added the Gradle build tool and some Java functional programming goodness with Scala. If I'm going to do this, I'm going all out.

Minecraft Loader

The Minecraft loader is the program that launches the Minecraft game. You purchase the loader on minecraft.net.

Minecraft Forge

Minecraft Forge is a Minecraft Mod Loader and API. As I understand it right now, it is a wrapper around the official Minecraft game that allows you to load mods (modifications). Forge provides a simplified API for working with the Minecraft source code to make mods. The Forge Mod Loader allows you to load mods made with the Forge API.

I decided on Minecraft Forge as it seemed to have a good community. I originally wanted to use something called MCP; from what I understand, Forge actually uses MCP for decompilation of the Minecraft source, although I also read that Forge is working on its own decompiler. I didn't use MCP because I got frustrated; I am a noob and couldn't figure some things out in the install process. Plus, the kids tell me that Forge has the cool mods.

Anyway, you download the latest version of Forge for your version of Minecraft; I was told by a community member that version 1.6.4 had the most mods at the time I wrote this. Minecraft.exe currently defaults the game version to 1.7.4 while Forge is at 1.7.2, and this gave me the biggest headache trying to get the two to work together, until I figured out you can edit your Minecraft profile in the loader and select the version you want to play (this would have fixed my MCP issue too). So let's get Minecraft 1.6.4 working with Forge 1.6.4. Here is how I got the Forge loader working:

  1. Launch Minecraft
  2. Update your profile to use the 1.6.4 version and save it
  3. Click the Play button (this sets up the 1.6.4 files that are needed for Forge)
  4. Then download the recommended Forge 1.6.4 installer from http://files.minecraftforge.net/
  5. Run the installer
  6. Press the Windows key plus R to open the Run dialog, type %appdata%, and click OK (you can also type this in Windows Explorer)
  7. Open the .minecraft folder; this holds all of the Minecraft assets
  8. Open the versions folder
  9. Open the Forge folder for the Forge version you downloaded and copy the two files
  10. Go back to the versions folder, create a new folder, and name it whatever you want (remember this name, you will need it)
  11. Open the folder and paste the files you copied
  12. Rename both files to the same name you used for the folder
  13. Open the file with the .json extension, find the id field, change it to the name you used for the folder, and save
  14. Launch Minecraft
  15. Update profile to use the version that matches the name of the new version folder and save it
  16. Click the Play button
  17. You will be running the new Forge mod loader you just installed, and you can add new Forge-based mods to it by copying the mod zip files to the mods folder in the .minecraft folder (make sure your mods match the version you are running or it will crash and burn)

Mod Development Setup

Now we will continue with our Development Environment setup to get it ready for mod development.

  1. Download the recommended Forge 1.6.4 src from http://files.minecraftforge.net/ (you can get whatever version of Minecraft you want to develop for)
  2. Extract the zip to a folder anywhere you want
  3. Then open the folder and run the install.cmd
  4. After the install completes, open Eclipse
  5. Select a workspace by browsing to the mcp/eclipse folder in your Forge install and click OK

This will import the Minecraft source code into the IDE and you can get to work modding your Minecraft world. The install took a while, so you can take a break while it runs.

Other Goodies

Scala

During the Forge install I noticed in the command window that it works with MCP and does a lot of decompiling, updating, and recompiling of the Minecraft source code. I also noticed a message: "scalac" is not found on the PATH. Scala files will not be recompiled. I am not sure if I will need to use Scala, but I have always wanted to use Scala, so this is as good a time as any to get it set up in my Java environment. I downloaded the Scala installer from http://www.scala-lang.org/ and allowed the setup to update my system path variables. I didn't feel like rerunning the Forge installer, so hopefully I won't need the recompiled Scala files for basic learning.

Gradle

In some versions of Forge you have to use the Gradle build tool to set up your source code and Eclipse for development. Even though it isn't necessary in the version I am using, I still set up Gradle because it seems very cool, well, geek cool. I really need to look into Gradle more for my .Net environment, as they have some very interesting concepts for build environments. Anyway, you can install Gradle from http://www.gradle.org/downloads. Just unzip it to some location on your machine, then copy the path to the Gradle bin folder and add it to your system path environment variable. That's it, you have a Java build tool.

Minecraft Server

I want to host our mods on our own Minecraft server, but I ran into the version issue again. I have the 1.7.4 server, but my mods will be 1.6.4. Well, with a little URL hacking you can download the 1.6.4 version of the server from the Minecraft download server. This is the same server JAR URL as the latest release; I just changed the release from 1.7.4 to 1.6.4: https://s3.amazonaws.com/Minecraft.Download/versions/1.6.4/minecraft_server.1.6.4.jar

Conclusion

That's it for now. I will try to post some of my modding experiences later. I am in a very strange land, but I think it will be great doing something both constructive and fun with my kids.

Validating Tab Order with WebDriver

I had a spec that defined the tab order on a form. Starting with the default form field, the user should be able to press the Tab key to move the cursor to the next form field, and tabbing through the fields should follow a specific order. I couldn't find much on Google or Bing to help automate this with WebDriver; maybe I'm losing my search skills.

Below is code to implement this with WebDriver. In production I use a SpecFlow Table instead of an array to hold the expected tab order, and I have a custom wrapper around WebDriver, so much of this code is hidden from test code. Below is the untested gist of my production implementation. Since all of my elements have IDs, and yours should too, we simply validate that the active element has the ID of the current element in the array iteration.

  • If the element doesn’t have an ID, fail the test.
  • If the element ID doesn’t match the expected ID, fail the test.
  • If the ID matches, tab to the next element and loop.
public void TestTabOrder()
{
    //Code to open the page elided.
    ....

    //This is the expected tab order. The strings are element IDs so the test assumes all of your elements have IDs.
    string[] orderedElementIds = new string[] { "FirstControl", "SecondControl", "NextControl" };

    foreach (var item in orderedElementIds)
    {
        string elementId = item;

        //Get the current active element, the element with focus.
        IWebElement activeElement = webDriver.SwitchTo().ActiveElement();

        //Get the id of the active element.
        string id = activeElement.GetAttribute("id");

        //If the active element doesn't have an id, fail the test because all of our elements have IDs.
        if (string.IsNullOrWhiteSpace(id))
        {
            throw new AssertionException("Element does not have expected ID: " + elementId);
        }

        //If the active element's id doesn't match the current ID in our orderedElementIds array, fail the test.
        if (elementId != id)
        {
            throw new AssertionException("Element: " + elementId + " does not have focus.");
        }

        //Tab to the next element.
        activeElement.SendKeys(Keys.Tab);
    }
}

You don't have to assert anything, as the exceptions will fail the test (I am throwing NUnit's AssertionException here; MSTest's equivalent is AssertFailedException), hence no exception equals a passing test. You get a bonus assert with this test in that it also verifies that a certain element has default focus (the first element in the array).

I am sure there is a better way to do this, but it works. Hope this helps someone as it wasn’t something well publicized.

Calling Overridden Virtual Method from Base Class in C#

I had a serious brain freeze today. I forgot polymorphism 101. I couldn't remember whether a virtual method defined in a base class and called in the constructor of the base class would call the overridden virtual method in a derived class. Instead of ol' reliable (Google search), I decided to do a quick test, because doing beats searching: it kind of burns in a lesson learned for me. Hopefully, I won't forget this one for a while. Anyway, the answer is yes, it will call the override, and here is a test if you are having a brain freeze too and want to prove it (this is implemented with MSTest).

namespace BrainFreeze
{
	using System;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	[TestClass]
	public class PolymorphismTest
	{
		[TestMethod]
		public void BaseClassVirtualMethodWhenCalledFromBaseConstructorShouldCallDerivedVirtualOverride()
		{
			string expected = "ImpCallVirtual";

			Imp imp = new Imp();
			string actual = imp.Result;

			Assert.AreEqual(expected, actual);
		}

		public class Base
		{
			public Base()
			{
				CallVirtual();
			}

			public string Result { get; set; }

			public virtual void CallVirtual()
			{
				this.Result = "BaseCallVirtual";
			}
		}

		public class Imp : Base
		{
			public Imp()
				: base()
			{
			}

			public override void CallVirtual()
			{
				this.Result = "ImpCallVirtual";
			}
		}
	}
}

Typing Git Username and Password is Lame

I set up a local Git server to serve as a central repository. Every time I push changes I have to enter my username and password, and it got old real quick. I discovered that it is very easy to get around this, although what I am about to share is a little insecure, as I am storing my credentials in plain text, but there are ways to secure this.

First a little background. I am using TortoiseGit as my Git client, I am on Windows 7, and my Git server is not exposed to the public internet.

To allow my credentials to be found I first ran this command:

setx HOME %USERPROFILE%

This sets up a HOME environment variable on my system that points to my user profile (see this for more info: http://technet.microsoft.com/en-us/library/cc755104.aspx).

Then I created a text file named _netrc in the root of my user profile folder (C:\Users\{yourusername}\_netrc). In the text file I list the machine name, login, and password for each Git server I want to interact with. I assume this could also work for any server that accepts HTTP credentials.

machine mycoolserver
login mysecretlogin
password mysecretpassword
machine someotherhost.com
login mysecretlogin2
password mysecretpassword2

Machine is the root name of the server you are connecting to; in my case I have a local server without a top-level domain (no .com). Then you add your credentials. Like I said, this is saved in plain text, so you have to be careful with it and make sure you use credentials that you don't use on any other accounts (e.g. your bank account).

Thanks to StackOverflow and VonC for help on this:

http://stackoverflow.com/questions/6031214/git-how-to-use-netrc-file-on-windows-to-save-user-and-password

Setup a NuGet Server

Setting up a NuGet server is so easy that everyone should do it. Why? If you are beholden to corporate policies that restrict the applications and references your projects can use, you can still benefit from the awesomeness of NuGet by hosting corporate-approved packages. If you have a critical build process, you may not want to depend on the reliability of third-party servers. Oh, I could keep going, but I won't. The point is, with 5 easy steps (depending on how you break them down), you can have a NuGet server up and serving packages.

  1. Create an Empty Web Application (I’m using Visual Studio)
  2. Use NuGet to add a reference in the Web Application to “NuGet.Server”
  3. Add the nupkg files that you want to host to the Packages folder
  4. Deploy the Web Application
  5. Add the URL of the Web Application to your local NuGet package manager.

Thanks to docs.nuget.org and Adam James Naylor for opening my eyes to how simple this is:

http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds

http://www.adamjamesnaylor.com/2013/04/26/Setting-Up-A-Private-NuGet-Server.aspx

.NET Code Coverage with OpenCover

I made more progress in improving my Code Quality Pipeline. I added test code coverage reporting to my build script. I am using OpenCover and ReportGenerator to generate the code coverage reports. After getting these two tools from NuGet and Binging a few tips, I got this going by writing a batch script to handle the details and having NAnt run the bat in a CodeCoverage target. Here is my bat file:

REM This is to run OpenCover and ReportGenerator to get test coverage data.
REM OpenCover and ReportGenerator were added to the solution via NuGet.
REM Need to make this a real batch file or execute from NANT.
REM See reference, https://github.com/sawilde/opencover/wiki/Usage, http://blog.alner.net/archive/2013/08/15/code-coverage-via-opencover-and-reportgenerator.aspx
REM Bring dev tools into the PATH.
call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat"
mkdir .\Reports
REM Restore packages
msbuild .\.nuget\NuGet.targets /target:RestorePackages
REM Ensure build is up to date
msbuild "MyTestSolution.sln" /target:Rebuild /property:Configuration=Release;OutDir=.\Releases\Latest\NET40\
REM Run unit tests
.\packages\OpenCover.4.5.2316\OpenCover.Console.exe -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe" -targetargs:"/testcontainer:.\source\tests\MytestProjectFolder\bin\Debug\MyTestProject.dll" -filter:"+[MyTestProjectNamespace]* -[MyTestProjectNamespace.*]*" -mergebyhash -output:.\Reports\projectCoverageReport.xml

REM the filter +[MyTestProjectNamespace]* includes all tested classes, -[MyTestProjectNamespace.*]* excludes items not tested
REM Generate the report
.\packages\ReportGenerator.1.9.1.0\ReportGenerator.exe -reports:".\Reports\projectCoverageReport.xml" -targetdir:".\Reports\CodeCoverage" -reporttypes:Html,HtmlSummary -filters:-MyTestProject*
REM Open the report - this is just for local running
start .\Reports\CodeCoverage\index.htm
pause

Issues

I have integration tests that depend on files in the local file system. These were failing because OpenCover runs the tests from a different path than the one the files are copied to during the build. To overcome this I added the DeploymentItem attribute to my test classes for all of the files the tests depend on. This attribute causes the files to be copied to the test run location along with the DLLs when OpenCover does its thing.

[TestClass]
[DeploymentItem("YourFile.xml")] //Can also be applied to [TestMethod]
public class YourAwesomeTestClass
{

}

Another problem I had prevented the database connection strings from being read from the app.config. I was running MSTest with the /noisolation command line option; I removed the option and it worked. It seems like /noisolation is there to improve the performance of the test run. I don't see much difference in timing right now, and when I hit a wall with test execution time I will revisit it…no premature optimization for me. See

Virtualization Strategy for Browser Based Testing

I have been ramping up my knowledge of and strategies for browser-based testing on virtual machines (VMs) and thought I would capture some of the best practices I have so far.

  • Start a new VM at the start of each test and destroy it at the end of the test (see the sketch after this list).
  • Keep VM images small. Only have the bare minimum of software needed to run your test included in the VM image. Get rid of any default software that won’t be used.
  • Compress the VM image.
  • Image storage
    • SAN (storage area network) – expensive, but the best option for IO-intensive scenarios such as this.
    • Solid state drives – the next best option, but still expensive. You get more efficient access from one drive compared to rotating-head drives.
    • One image per drive on rotating-head drives – the least expensive option, but also the least efficient. Since IO is slow on these drives, you can spread your images across multiple drives to improve parallel VM startup.
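To make the first bullet concrete, here is a minimal sketch of what VM-per-test could look like in an NUnit fixture. IVirtualMachineProvider and VirtualMachine are hypothetical stand-ins for whatever virtualization API (Hyper-V, vSphere, etc.) you actually call.

using NUnit.Framework;

// Hypothetical abstraction over your virtualization platform.
public interface IVirtualMachineProvider
{
    VirtualMachine StartFromImage(string imageName);
    void Destroy(VirtualMachine vm);
}

public class VirtualMachine
{
    public string Address { get; set; }
}

[TestFixture]
public class BrowserTestWithVmPerTest
{
    private IVirtualMachineProvider provider; // supplied by your framework/IoC container
    private VirtualMachine vm;

    [SetUp]
    public void StartVm()
    {
        // A fresh VM for every test keeps state from leaking between runs.
        this.vm = this.provider.StartFromImage("small-browser-image");
    }

    [TearDown]
    public void DestroyVm()
    {
        this.provider.Destroy(this.vm);
    }
}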

That’s where I am so far. Still need to get experience with various implementations of each practice. Should be fun.

NUnit, OK Maybe

Don't get me wrong, there is nothing wrong with NUnit, and it may or may not be superior to MSTest. I currently use MSTest in my personal projects, and the jury is still out on whether I will use it at work. I just never found a truly compelling reason to use one over the other. MSTest comes well integrated into Visual Studio out of the box and had the least amount of pain in terms of setup and getting a test project going. With the release of VS 2012, the playing field has been leveled a bit more, as I can run NUnit tests through the Test Explorer, just like MSTest/VSTest tests. This is accomplished by adding a simple NuGet package to the test project, the NUnit Test Adapter for VS2012 and VS2013.

Anyway, another compelling reason to choose one over the other that I keep bumping into is the ability to run tests in parallel. MSTest can run tests in parallel, but the implementation doesn't sound solid based on some of the posts I have been reading. VSTest, the VS 2012+ default test engine, does not run tests in parallel. NUnit does not support parallel execution either, although the community has been waiting on the next version that is supposed to have this feature…if it is ever released.

Actually, the reason for this post is that I was doing a little reading up on PNUnit. It is supposed to run NUnit tests in parallel. I'm not sure how good the project is, but their website discusses the need to run their tests across Windows and Linux. Ah ha! There you go. If you need to run cross-platform tests, you may lean towards NUnit, and with PNUnit providing parallelization you may lean a little bit more.

I guess I am going to toy around more with the NUnit VS2012 integration to see if I can get a workflow as comfortable as the one I have with MSTest in VS2013. I will also toy around with PNUnit, as this would have an immediate impact on my decision on an automation engine at work.