Category: Quality

Architecture Validation in Visual Studio

As a part of my Quality Pipeline I want to validate my code against my architectural design. This means I don’t want invalid code integrations, like client code calling directly into data access code. With Visual Studio 2012 this is no problem. First I had to create a Modeling Project. Then I captured my architecture as a layer diagram. I won’t go over the details of how to do this, but there are resources on MSDN that cover it.

Next I added a validation setting to my model project’s .modelproj file. This instructs MSBuild to validate the architecture on each build. Since this is configured at the project level, it will validate the architecture against all of the layer diagrams included in the project.
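The snippet itself didn’t survive the move to this blog. It is a single MSBuild property in the .modelproj file’s main PropertyGroup, as documented for Visual Studio modeling projects; verify the element name against your own project file:

```xml
<PropertyGroup>
  <!-- Turn on layer validation for every build of this modeling project -->
  <ValidateArchitecture>true</ValidateArchitecture>
</PropertyGroup>
```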

For a simpler way to add the configuration setting, here is an MSDN walkthrough:

  1. In Solution Explorer, right-click the modeling project that contains the layer diagram or diagrams, and then click Properties.
  2. In the Properties window, set the modeling project’s Validate Architecture property to True.
    This includes the modeling project in the validation process.
  3. In Solution Explorer, click the layer diagram (.layerdiagram) file that you want to use for validation.
  4. In the Properties window, make sure that the diagram’s Build Action property is set to Validate.
    This includes the layer diagram in the validation process.

Adding this configuration to the project file only validates my local build. As part of my Quality Pipeline I also want to validate on Team Build (my continuous build server). There was some guidance out on the web and blogosphere, but for some reason my options did not match what they were doing. You can try the solution on MSDN, but like I said, it didn’t work for me. Instead, I had to right-click the build definition in Build Explorer and click Edit Build Definition. On the Process tab, under Advanced, I added a validation property to MSBuild Arguments.
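The argument mirrors the project-level setting; passed on the MSBuild command line it looks like this (property name assumed to match the .modelproj setting above):

```
/p:ValidateArchitecture=true
```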

Now my code is guarded against many of the issues that result from implementations that violate the designed architecture.

My Issues with Parallel Test Execution in .Net

In a previous post I described how to get Selenium Grid up and running. The main reason for doing that is to speed up test execution. Well, that is my main reason. You may want easy cross-browser, OS, or device testing, or something else. At any rate, to get the speed we have to run the tests in parallel. Getting the Grid up was the easy part; running the tests in parallel is the hard part. How do we run tests in parallel? Warning: this may be more of a rant than a solution.

If we were using MBUnit, it would be simple. Parallelism is built into the DNA of MBUnit, but the MBUnit project is on hiatus right now and its direction seems to be up in the air. We currently use NUnit as our test framework at work, but it doesn’t support parallel execution. The next version of NUnit is supposed to support it, but it has been promised for a long time now. I use MSTest a lot, but I have never run it in parallel. I have heard it is possible, but I have also heard there are issues with it, and given that Visual Studio 2013 now uses VSTest, which doesn’t have parallel support, I am not sure what this means for MSTest. I guess I’m rambling as I rehash another post, “NUnit, OK Maybe“. The point is parallel test execution is a problem that should have been solved by now across the test frameworks in the .NET community. Hopefully, there will be changes this year.

Anyway, back to the topic at hand, I decided to look into PNUnit. PNUnit brings parallel execution to NUnit, but the documentation claims that it is a solution for running the same test suite in parallel across multiple instances, and I need to run individual tests across Nodes to increase the speed of test execution, so I am not sure if PNUnit is a viable solution. Another problem with PNUnit is you have to configure the tests you want to run in parallel. This is not a maintainable solution at first glance, as having to update configuration files every time I add a test would get old real quick.

So, even though MBUnit may or may not be dead, I looked into it some more. Like I said, at work we currently use NUnit so we would have to convert a lot of tests, but it may be possible to automate a lot of the conversion. The main difference in code between the two would be the dependencies (using directives), class and method attributes, and assertions. So, we could probably convert 90% of the code with just plain find and replace or a quick code hack that does something similar. We would also have to do some work to get reporting to produce the same report on the build server. With that in mind I will look more into MBUnit; even if it is dead the source is available on GitHub.

While I am checking out MBUnit I will also have a look at the TPL and the async keyword in .NET 4.5. My test framework already has concepts built in that would allow me to actually create my own test runner. I just have to learn the new tools in .NET for parallel and asynchronous coding. This would be a MAJOR undertaking and not one I want to do right now, but I will if it’s what I have to do to speed these browser tests up.

Another issue I have now is I share a single driver across all tests in a fixture. I create a driver when the fixture is created and I cache it so the individual tests can use it without having to recreate it. This is great when the tests run one after the other. Now I need to change it so that each test has its own driver so they won’t step on each other’s toes. This isn’t that hard as I have a central setup method for my fixtures and tests, so the change would only have to occur in a couple of spots (I try to keep it SOLID; I can’t stress this enough). So, I will move the driver creation from the fixture setup to the test setup and I am ready for parallel browser automation, or am I?

Having to fix the driver setup issue made me look closer at the things that I am sharing across tests. Getting parallel tests is going to take a little more than flipping a couple of switches to turn it on. Did I mention this is the hard part? The lesson learned from the shared driver is to ensure your tests are idempotent during parallel execution. Hell, tests should be idempotent period, regardless of parallel execution. If you share anything across the tests, it cannot have an effect on the outcome of other tests regardless of where or how they run. Whether they are in separate assemblies, in the same test fixture, or they execute on different Grid Nodes, a test cannot create side effects that change the results of other tests. If they do produce side effects, you will begin to distrust your tests as they will start failing for reasons that have nothing to do with code changes. When you run tests in parallel it becomes a little tricky to locate where you violate the idempotency rule.

In my case I have to do an in-depth review of shared data, functionality, and state. So far my biggest issue is a static class that I use to store various values during test execution. I call it TestSession and it’s basically a cache that allows me to save time by creating certain objects once, and it also gives me a way to pass state from one test step to the next. TestSession keeps things separated by prefixing the session keys in a way that tests don’t share data, but there are fixture session keys that are shared amongst tests. Also, at the end of fixture execution we clear the entire TestSession object. So, if the state cached in a fixture session key is changed by one test, it may have an effect on another test. I am pretty sure that I don’t do this, but there is nothing that prevents me from doing it, so I have to look through the code. Also, if the session is cleared before all of the tests have completed, there may be problems. So, I have to rethink TestSession completely. I may create an immutable session for the fixture and a mutable session for the tests.
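One possible shape for that fixture/test split, purely a sketch and not production code (the class names and members here are my hypothetical stand-ins): fixture-scoped values get set once at fixture setup and are exposed read-only, while each test gets its own mutable dictionary that is created in test setup and thrown away in teardown, so parallel tests cannot collide.

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

//Read-only view of values created once per fixture; safe to share across parallel tests.
public class FixtureSession
{
    private readonly IReadOnlyDictionary<string, object> values;

    public FixtureSession(IDictionary<string, object> values)
    {
        this.values = new ReadOnlyDictionary<string, object>(values);
    }

    public object Get(string key)
    {
        return this.values[key];
    }
}

//Mutable and created fresh per test, so one test's writes never leak into another.
public class TestSession
{
    private readonly Dictionary<string, object> values = new Dictionary<string, object>();

    public TestSession(FixtureSession fixture)
    {
        this.Fixture = fixture;
    }

    public FixtureSession Fixture { get; private set; }

    public void Set(string key, object value)
    {
        this.values[key] = value;
    }

    public object Get(string key)
    {
        return this.values[key];
    }
}
```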

Well that’s it for now. I am starting to bore myself :). This is more of a post on my issues with parallel test execution and less of a solution post. I guess this is what you would call a filler just to keep my blogging going. If you made it this far and you have some ideas on how to solve parallel test execution in .NET, please leave me a comment.

Setup Selenium Grid for .NET


I need to run my browser automation tests in parallel as they are painfully slow and I need to be able to provide developers with fast feedback on the state of their changes. Really, what is the point of spending time and effort on creating tests that no one wants to run because they take hours to complete? In my opinion it was vital to get this going in order to get a return on the time and effort invested in our browser based test development.

We are a .Net shop and for our situation the best option for parallel browser testing is Selenium Grid, a Java platform. A quick overview: Selenium Grid uses a Hub to distribute tests to Nodes that actually run the tests in the selected browser. Below I will explain how I got the Grid up and running.

Setting Up

Setup is pretty easy. All you have to do is download the Selenium Server standalone JAR file from the Selenium download site. You will have to copy this file to all the machines that you will use as Hubs and Nodes. One quick note: you have to have the JRE (Java Runtime Environment) set up on each machine you want to use as a Hub or Node.

The Hub

If you have the JRE setup and on your system environment path, you can simply start the server with this command from the location of your Selenium Server standalone JAR file:

java -jar selenium-server-standalone-2.39.0.jar -port 4444 -role hub -nodeTimeout 600

I am using the 2.39.0 version of the server (make sure you use the correct name for your JAR file). This command starts the server on port 4444 with the Hub role and a timeout of 600 seconds. You can tuck this away in a bat file for easy access.

You can verify the hub is listening by viewing it in a browser. If you are on the hub machine, you can just go to localhost like so, http://localhost:4444/grid/console. This will present you with a page that you can view the configuration for the server.

To stop the Hub you just send it a command:
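The command I used didn’t survive the move to this blog. For Selenium Server 2.x, shutdown is an HTTP request to the Hub’s lifecycle manager; assuming the default port, it is something like:

```
http://localhost:4444/lifecycle-manager?action=shutdown
```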


Adding Nodes

Having a Hub and no Node is just plain stupid. So, we need to add a Node to the Grid so that the Hub can pass tests to it. The Hub will determine which Node to send a test to based on the properties requested by the test. So, we have to inform the Hub what properties the Node supports. There are a couple of ways to do this, but I am going to use the command line to configure the Node. The way of the Ninja would be to use a JSON file, but I gave up on it after about 30 minutes of research and trying when I couldn’t get it to recognize the JSON. Actually, you can configure the Hub with JSON too, so you may want to look into this option as it makes your bat files cleaner, and as you increase the number of Nodes it may cut down on duplication in your bat files too.
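For anyone braver than me, the Node JSON route for Grid 2 looks roughly like the sketch below (file name and values are mine, and I never got this working, so treat it as a starting point, not a verified config; you pass it with -nodeConfig nodeConfig.json):

```json
{
  "capabilities": [
    {
      "browserName": "internet explorer",
      "version": "8",
      "platform": "WINDOWS",
      "maxInstances": 1
    }
  ],
  "configuration": {
    "hubHost": "localhost",
    "hubPort": 4444,
    "port": 5555,
    "role": "webdriver"
  }
}
```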

NOTE: If you are going to use IE or Chrome you should add the path to the drivers to your system path or you may see an error like:
*The path to the driver executable must be set by the … system property; for more information, see … The latest version can be downloaded from …*

To start the node, run this command and don’t forget to run it in the path of your jar file and to update the file name/version appropriately.

java -jar selenium-server-standalone-2.39.0.jar -role webdriver -browser "browserName=internet explorer,version=8,maxinstance=1,platform=WINDOWS" -hubHost localhost -port 5555

This starts the Node and registers it on the Hub. You can also add this to a bat file. Again, you can verify it started by opening up the Grid console and you will see the Node browsers in the Browser tab, http://localhost:4444/grid/console

Running Tests

To run a test you have to use the RemoteDriver and pass it the DesiredCapabilities for the browser that tells the Hub what Node to use to satisfy the test.

DesiredCapabilities capabilities = DesiredCapabilities.InternetExplorer();
capabilities.SetCapability(CapabilityType.Platform, "WINDOWS");
capabilities.SetCapability(CapabilityType.BrowserName, "internet explorer");
capabilities.SetCapability(CapabilityType.Version, "8");
IWebDriver webDriver = new RemoteWebDriver(new Uri("http://localhost:4444/wd/hub"), capabilities);

Note: the code above is not best practice because it is not SOLID. If your test code isn't SOLID, you need to change your code to use a factory, IoC, or whatever you need to do to remove the dependence on concrete drivers and their supporting DesiredCapabilities. I will leave that as an exercise for you until I publish my framework code :).

This may be a big change to your tests as you would have to replace every place you use a concrete WebDriver with the RemoteWebDriver. If you code against IWebDriver instead of a concrete driver and have a single point for the creation of WebDrivers (see the note in the code above), it wouldn’t be that bad as you are just adding another instance of IWebDriver.
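As a rough illustration of that single creation point (the factory below and its parameters are my hypothetical names, not part of Selenium's API): tests only ask for an IWebDriver and the factory decides whether it is local or remote.

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;
using OpenQA.Selenium.Remote;

//Hypothetical factory so tests never new up a concrete driver themselves.
public static class WebDriverFactory
{
    public static IWebDriver Create(bool useGrid, Uri hubUri)
    {
        if (!useGrid)
        {
            //Local run: a concrete driver.
            return new InternetExplorerDriver();
        }

        //Grid run: the Hub picks a Node that matches these capabilities.
        DesiredCapabilities capabilities = DesiredCapabilities.InternetExplorer();
        capabilities.SetCapability(CapabilityType.Platform, "WINDOWS");
        return new RemoteWebDriver(hubUri, capabilities);
    }
}
```

With this in place, switching a whole test run onto the Grid is a configuration change rather than an edit in every test.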


Well that’s it. The biggest hurdle IMHO is just writing your code in a manner that makes it easy to switch WebDriver instances. Now I have to figure out how to run tests in parallel, oh boy!


Validating Tab Order with WebDriver

I had a spec that defined the tab order on a form. Starting with the default form field, the user should be able to press the tab key to move the cursor to the next form field. Tabbing through the fields should follow a specific order. I couldn’t find much on Google or Bing to help automate this with WebDriver; maybe I’m losing my search skills.

Below is code to implement this with WebDriver. In production I use a SpecFlow Table instead of an array to hold the expected tab order, and I have a custom wrapper around WebDriver, so much of this code is hidden from test code. Below is the untested gist of my production implementation. Since all of my elements have IDs, and yours should too, we are simply validating that the active element has the ID of the current element in the array iteration.

  • If the element doesn’t have an ID, fail the test.
  • If the element ID doesn’t match the expected ID, fail the test.
  • If the ID matches, tab to the next element and loop.
 public void TestTabOrder()
 {
     //Code to open the page elided.

     //This is the expected tab order. The strings are element IDs, so the test assumes all of your elements have IDs.
     string[] orderedElementIds = new string[] { "FirstControl", "SecondControl", "NextControl" };

     foreach (var elementId in orderedElementIds)
     {
         //Get the current active element, i.e. the element with focus.
         IWebElement activeElement = webDriver.SwitchTo().ActiveElement();

         //Get the id of the active element.
         string id = activeElement.GetAttribute("id");

         //If the active element doesn't have an id, fail the test because all of our elements have IDs.
         if (string.IsNullOrWhiteSpace(id))
         {
             throw new AssertionException("Element does not have expected ID: " + elementId);
         }

         //If the active element doesn't match the current ID in our orderedElementIds array, fail the test.
         if (elementId != id)
         {
             throw new AssertionException("Element: " + elementId + " does not have focus.");
         }

         //Tab to the next element.
         activeElement.SendKeys(Keys.Tab);
     }
 }
You don’t have to assert anything as the exceptions will fail the test (AssertionException comes from NUnit; in MSTest you would throw AssertFailedException instead), hence no exception equals a passing test. You get a bonus assert with this test in that it also verifies that a certain element has default focus (the first element in the array).

I am sure there is a better way to do this, but it works. Hope this helps someone as it wasn’t something well publicized.

.NET Code Coverage with OpenCover

I made more progress in improving my Code Quality Pipeline. I added test code coverage reporting to my build script. I am using OpenCover and ReportGenerator to generate the code coverage reports. After getting these two tools from NuGet and Binging a few tips, I got this going by writing a batch script to handle the details and having NAnt run the bat in a CodeCoverage target. Here is my bat file:

REM This is to run OpenCover and ReportGenerator to get test coverage data.
REM OpenCover and ReportGenerator were added to the solution via NuGet.
REM Need to make this a real batch file or execute from NANT.
REM See the OpenCover and ReportGenerator references.
REM Bring dev tools into the PATH.
call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\VsDevCmd.bat"
mkdir .\Reports
REM Restore packages
msbuild .\.nuget\NuGet.targets /target:RestorePackages
REM Ensure build is up to date
msbuild "MyTestSolution.sln" /target:Rebuild /property:Configuration=Release;OutDir=.\Releases\Latest\NET40\
REM Run unit tests
.\packages\OpenCover.4.5.2316\OpenCover.Console.exe -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe" -targetargs:"/testcontainer:.\source\tests\MytestProjectFolder\bin\Debug\MyTestProject.dll" -filter:"+[MyTestProjectNamespace]* -[MyTestProjectNamespace.*]*" -mergebyhash -output:.\Reports\projectCoverageReport.xml

REM the filter +[MyTestProjectNamespace]* includes all tested classes, -[MyTestProjectNamespace.*]* excludes items not tested
REM Generate the report
.\packages\ReportGenerator.\ReportGenerator.exe -reports:".\Reports\projectCoverageReport.xml" -targetdir:".\Reports\CodeCoverage" -reporttypes:Html,HtmlSummary -filters:-MyTestProject*
REM Open the report - this is just for local running
start .\Reports\CodeCoverage\index.htm
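Wiring the bat file into NAnt only takes a small exec target. A sketch (the target and file names here are mine, not from a real build file):

```xml
<!-- Hypothetical NAnt target that shells out to the coverage bat file -->
<target name="CodeCoverage" description="Run OpenCover and generate the coverage report.">
  <exec program="coverage.bat" workingdir="." />
</target>
```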


I have integration tests that depend on files in the local file system. These were failing because OpenCover runs the tests from a different path than the one the files are copied to during the build. To overcome this I added the DeploymentItem attribute to my test classes for all of the files I depend on for tests. This attribute will cause the files to be moved to the test run location with the DLLs when OpenCover does its thing.

[TestClass]
[DeploymentItem("YourFile.xml")] //Can also be applied to a [TestMethod]
public class YourAwesomeTestClass
{
    //Tests elided.
}


Another problem I had prevented the database connection strings from being read from the app.config. I was running MSTest with the /noisolation command line option. I removed the option and it worked. It seems like /noisolation is there to improve the performance of the test run. I don’t see much difference in timing right now, and when I hit a wall in the time of my test execution I will revisit it…no premature optimization for me.

Virtualization Strategy for Browser Based Testing

I have been ramping up my knowledge and strategies for browser based testing on virtual machines (VMs) and thought I would capture some of the best practices I have so far.

  • Start a new VM at start of test and destroy it at end of test.
  • Keep VM images small. Only have the bare minimum of software needed to run your test included in the VM image. Get rid of any default software that won’t be used.
  • Compress the VM image.
  • Image storage
    • SANs (storage area networks) – expensive, but the best option for IO-intensive scenarios such as this.
    • Use solid state drives – the next best option, but still expensive. You get more efficient access from one drive when compared to rotating head drives.
    • Image per drive on rotating head drives – the least expensive option, but also the least efficient. Since IO is slow on these drives, you could spread your images across multiple drives to improve parallel VM startup.

That’s where I am so far. Still need to get experience with various implementations of each practice. Should be fun.

NUnit, OK Maybe

Don’t get me wrong, there is nothing wrong with NUnit, and it may or may not be superior to MSTest. I am currently a user of MSTest in my personal projects and the jury is still out on whether I will use it at work. I just never found a truly compelling reason to use one over the other. MSTest comes well integrated with Visual Studio out of the box and has the least amount of pain in terms of setup and getting a test project going. With the release of VS 2012, the playing field has been leveled a bit more as I can run an NUnit test through the Test Explorer, just like an MSTest/VSTest test. This is accomplished by adding a simple NuGet package to the test project, NUnit Test Adapter for VS2012 and VS2013.

Anyway, another compelling reason to choose one over the other that I keep bumping into is being able to run tests in parallel. MSTest has the ability to run tests in parallel, but the implementation doesn’t sound solid judging by some of the posts I have been reading. VSTest, the VS 2012+ default test engine, does not run tests in parallel. NUnit does not support parallel execution either, although the community has been waiting on the next version that is supposed to have this feature…if it is ever released.

Actually, the reason for this post is I was doing a little reading up on PNUnit. It is supposed to run NUnit tests in parallel. Not sure how good the project is, but their website started discussing the need to run their tests across Windows and Linux. Ah ha! There you go. If you need to run cross platform tests you may lean towards NUnit, and with PNUnit providing parallelization you may lean a little bit more.

I guess I am going to toy around more with NUnit VS2012 integration to see if I can somehow get as comfortable a workflow as I do running NUnit tests in VS2013. I will also toy around with PNUnit as this would have an immediate impact on my decision for automation engine at work.

Page Object as Collection of Controls as Collection of Elements

I love the whole modular development ideal. Creating small discrete modular chunks of functionality instead of large monolithic globs of interdependent functionality helps to promote code reuse and increase maintainability of a code base. Well, for me, it was natural to extend this design pattern to the Page Object Model in functional browser based testing.

In ASP.NET Web Forms there is a style of web development that favors building the various parts of a page in User Controls. Your pages become a collection of User Controls that you can stitch together at runtime. So, your home page might have a main content control, slide show control, and secondary content control. Wrapping the home page could be a master page that has a masthead control, navigation control, and footer control. In each control would be the elements that make up a particular section of a web page. Anytime you need new functionality, you build a new user control and plug it into your application. When I am building my page objects for my tests I figured I would follow the same concept.

When I model a web page as a page object I start with the page wrapper that provides a shell for the other objects contained in the page. I will model the various user controls as control objects and I will add them to the page object that represents the page. This modularization also helps me to quickly and easily compose new pages as I don’t have to recreate common page parts.

The page wrapper just functions as a container control object and can provide communication between the controls and a central point to access page state. I try to keep the page wrapper light on functionality and focus on composition to provide the functionality that tests need.

I mentioned master pages, and I model master pages through inheritance instead of composition. If the web page I am modeling uses a master page, the page object for the page will inherit from another page object that models the master page. This is another way to cut down on duplication while increasing maintainability.
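As a sketch of that inheritance (the page names here are invented; BasePage, Browser, and ITestEnvironment are the abstractions shown later in this post): the master layout lives in one base class and concrete pages add only their own controls.

```csharp
//Hypothetical page objects modeling a master page via inheritance.
public class MasterPage : BasePage
{
    public MasterPage(string pageKey, Browser browser, ITestEnvironment environment)
        : base(pageKey, browser, environment)
    {
        //Compose the controls every page shares: masthead, navigation, footer.
    }
}

public class HomePage : MasterPage
{
    public HomePage(Browser browser, ITestEnvironment environment)
        : base("Home", browser, environment)
    {
        this.PageVirtualUrl = "/";
        //Compose home-only controls: main content, slide show, secondary content.
    }
}
```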

This pattern is probably something common in the testing community, so I need to do more research on it. It is a work in progress for me as I am still not solid on how to implement the composition. Should the child objects know about their containing page object? How should I manage the WebDriver across controls? What if a User Control is added to a page multiple times, how should I model it? I am trying different solutions to these problems and to more common and edge cases that get me stuck in the mud from time to time. Hopefully, I can provide some tried and true strategies for this extension of the page object pattern as I exercise it over the next year.

For now here is sort of where I am. I start with an interface to define the page object contract and a base page to define the core functionality all pages should have. From these abstractions I build up an actual page model as I described earlier by composing control objects that in turn compose element objects.

Below is some early code for my page abstractions. I won’t go into the specifics of this code, but you can get the gist of where I am headed. One thing to note is that I have abstracted the concept of Browser and Test Environment. This gives me flexibility in the usage of various Browser Automation Frameworks and the ability to easily configure tests for various test environments. Actually, I also have a base control object to model User Controls and an object that models page elements (think WebDriver element, but abstract so I can wrap any Browser Automation Framework). OK, last note is PageKey is used in my reporting module. As test results are collected I also store the page key with the results so that I have traceability and can be more expressive with analysis of the result data.

//This is not production code
public interface IPage
{
    string PageKey { get; }
    string PageUrl { get; }
    string PageVirtualUrl { get; }
    ITestEnvironment TestEnvironment { get; }
    string Title { get; }

    bool HasTitle();
    bool HasUrl();
    bool IsOpen();
    void Open();
}

public class BasePage : IPage
{
    public BasePage(string pageKey, Browser browser, ITestEnvironment environment)
    {
        this.PageKey = pageKey;
        this.Browser = browser;
        this.TestEnvironment = environment;
    }

    private BasePage()
    {
    }

    public Browser Browser { get; protected set; }

    public string PageKey { get; protected set; }

    public string PageUrl
    {
        get { return this.GetPageUrl(); }
    }

    public string PageVirtualUrl { get; protected set; }

    public ITestEnvironment TestEnvironment { get; protected set; }

    public string Title { get; protected set; }

    public virtual bool HasTitle()
    {
        return this.Title == this.Browser.Title;
    }

    public virtual bool HasUrl()
    {
        return !string.IsNullOrEmpty(this.PageUrl) && this.Browser.HasUrl(this.PageUrl);
    }

    public virtual bool IsOpen()
    {
        return this.HasUrl() && this.HasTitle();
    }

    public virtual void Open()
    {
        //Navigation is left to derived pages.
    }

    private string GetPageUrl()
    {
        if (this.TestEnvironment == null)
        {
            return string.Empty;
        }

        string baseUrl = this.TestEnvironment.BaseUrl;
        string virtualUrl = this.PageVirtualUrl;

        if (string.IsNullOrEmpty(baseUrl))
        {
            return string.Empty;
        }

        if (!baseUrl.EndsWith("/"))
        {
            baseUrl += "/";
        }

        if (virtualUrl.StartsWith("/"))
        {
            virtualUrl = virtualUrl.Substring(1);
        }

        return string.Format("{0}{1}", baseUrl, virtualUrl);
    }
}

Generate Test Service from WSDL

I had the genius idea to create a test service implementation to test our client services. I would use the test service as a fake stand-in for the real services as the real services are managed by third parties. Imagine wanting to test your client calls to the Twitter API without having to get burned by the firewall trying to make a call to an external service.

As usual, it ends up that I am not a genius and my idea is not unique. There isn’t a lot of info on how to do it in the .Net stack, but I did find some discussions on a few Bings. I ended up using wsdl.exe to extract a service interface from the WSDL and implementing the WSDL with simple ASMX files. I won’t go into the details, but this is basically what I did:

    1. Get a copy of the WSDL and XSD from the actual service.
    2. Tell wsdl.exe where you stored these files and what you want to do, which is generate a service interface
      wsdl.exe yourFile.wsdl yourfile.xsd /l:CS /serverInterface
    3.  Then implement the interface as an ASMX (you could do WCF, but I was in a hurry)
    4. Lastly, point your client to your fake service
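The implementation step ends up being a small class; here is a hypothetical sketch (the interface and message type names are invented stand-ins for whatever wsdl.exe generates from your WSDL):

```csharp
using System.Web.Services;

//Hypothetical fake service implementing the wsdl.exe-generated interface.
[WebService(Namespace = "http://example.com/fake")]
public class FakeService : WebService, IServiceBinding
{
    //Capture the last request so tests can inspect what the client sent.
    public static SendMessageRequest LastRequest { get; private set; }

    public SendMessageResponse SendMessage(SendMessageRequest request)
    {
        LastRequest = request;

        //Return whatever canned data the test expects.
        return new SendMessageResponse { Status = "OK" };
    }
}
```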

In your implementation, you can return whatever you are expecting in your tests. You can also capture the HTTP messages. Actually, the main reason I wanted to do this was to figure out if my SOAP message was properly formatted and I didn’t want to go through all of the trace listener diagnostics configuration or Fiddler port capture crap. There may be easier ways to do this and even best practices around testing service clients, but this was simple, fast, and easy, IMHO, and it actually opened up more testing opportunities for me as it’s all code and automatable (is that a word?).

So You Want To Automate Web Based Testing

I am on my 3rd enterprise scale automated test project and figured I should begin distilling some of the lessons and best practices I have picked up along the way. I am by no means an expert in automated testing, but with my new position as Automation Engineer I plan on becoming one. In addition to my day job, I am in the process of building an automated functional testing framework, Test Pipe. It’s part of my push to improve my Code Quality Pipeline in my personal projects. So, with all of the focus on automated testing I have right now, it’s a good time to start formalizing the stuff in my head in a way that I can share it with the development community. Don’t expect much from this first post and I am sorry if it gets incoherent. I am writing from the hip and may not have time for a heavy edit, but I have to post it because I said I would post on the 1st. As I get a better understanding of how to express my experience, hopefully I will have much better posts coming up.

Test Environment

First I want to talk a little about the test environment. You should be able to build and deploy the application you want to test. This is more for Developers in Test or testers that actually write code. This is not necessary in every situation, but if you are responsible for proving that an application works as required, then you should be able to get the latest version of the app, build it, deploy it, configure it for testing, and test it. I am of the opinion that you should be able to do this manually before you actually get into the nuts and bolts of automating it. It is important to have a firm grasp on the DevOps Pipeline for your application if you are looking to validate the quality of the application’s development.

This isn’t feasible for every application, but having a local test environment is definitely worth the effort. Being able to checkout code, build and deploy it locally gives you visibility into the application that is hard to achieve on a server. Also, having a local environment affords you a more flexible work space to experiment and break stuff without affecting others on your team.

You should also have a server based test environment apart from the development and business QA environment. This could be physical or virtualized servers, but you should have a build server and the application and web servers necessary to deploy your application. Once you have your Quality Pipeline automated, you don’t want developers or QA messing around on the server invalidating your tests.

Test Configuration

Configuring the application for test is a big subject and you should give lots of thought into how best to manage the configuration. To configure the application for test you may want to create the application database, seed it with test data, update the configuration to change from using an actual SMTP server to a fake server that allows you to test email without needing the network…and more. Then there is configuring the type of test. You may want to run smoke tests every time developers check in code, a full functional test nightly, and a full regression before certifying the release as production ready. These tests can mean different things to different people, but the point is you will probably need to configure for multiple types of tests. On top of that, with web based and mobile testing you have to configure for different browsers and operating systems. Like I said configuration is a big subject that deserves a lot of thought and a strategy in order to have an effective Quality Pipeline.

Test Code

I won’t talk about the details of test code here, but a little about test code organization. Most of my test experience is in testing web applications and services on a Microsoft stack, but a well-structured test code base is necessary for any development stack. I generally start with four basic projects in my test solutions:

  • Core – this provides the core functionality and base abstractions for the test framework.
  • Pages – this provides an interactive model of the pages and controls under test.
  • Data – this provides functionality to manage test data and data access.
  • Specs – this provides the definition and implementation of tests and utilizes the other three projects to do its job.

In reality, there are more projects to a robust testing framework, but this is what I consider my base framework, and I allow the other projects to be born out of necessity. I may have a project to provide base and extended functionality for my browser automation framework to provide browser access to the Pages project. I may have a project to capture concepts for test design to provide better organization and workflow to the Specs project. I will have even more projects as I gain an understanding of how to test the application, and my architecture will evolve as I find concepts that I foresee reusing or replacing. A typical start to a new project may look something like the layout below (this is somewhat similar to the TestPipe project I have underway to provide a ready-made solution for a base test framework).

  • Core
    • IPage – contract that all pages must implement.
    • BasePage – base functionality that all pages inherit from.
    • TestHelper – various helpers for things like workflow, logging, and access to wrappers around testing tools like SpecFlow.
    • Cache – an abstraction and wrapper around a caching framework.
    • In TestPipe, I also abstract the browser and page elements (or controls) so I am not dependent on a specific browser automation framework.
  • Pages
    • Section – I generally organize this project similar to the website sitemap: a section is just a grouping of pages, and there is a folder for each section. It may make more sense for you to organize this by functional grouping, but following the sitemap has worked for me so far.
      • Page – page is a model of a page or control and there is one for each page or control that needs to be tested.
  • Specs
    • Features – I have a secret love affair with SpecFlow and this folder holds all of the SpecFlow Feature files for the features I am testing.
      • Under the Features folder I may or may not have Section folders similar to the Pages project, but as I stated this could be functional groupings if it makes more sense to you.
    • Steps
      • Organizing steps is still a work in progress for me, and I am still trying to find an optimal way to do it. Steps are implementations of feature tests, but they are sometimes abstract enough to be used across multiple features, so I may have Section folders as well as a Common or Global folder for shared steps, or other units of organization.
  • Data
    • DTO – these are just plain old objects that mirror tables in the database (à la Active Record).
    • TestData – this provides translation from DTO to what’s needed in the test. The actual data needed in tests can be defined and configured in an XML file, spreadsheet, or database, and the code in this folder manages seeding the test data in the database and retrieving the data for the test scenarios based on the needs defined in the test data configuration.
    • DataAccess – this is usually a micro-ORM such as PetaPoco or Massive; my current flavor is NPoco. This is just used to move data between the database and the DTOs. We don’t need a full-blown ORM for our basic CRUD needs, but we don’t want to write a bunch of boilerplate code either, so micro-ORMs provide a good balance.
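To make the Core and Pages pieces a little more concrete, here is a minimal sketch of what the IPage contract, BasePage, and a concrete page might look like. The member names and the LoginPage example are illustrative, not the actual TestPipe API:

```csharp
using System;

// Contract that all pages must implement (names are illustrative).
public interface IPage
{
    string Url { get; }
    bool IsDisplayed();
}

// Base functionality that all pages inherit from.
public abstract class BasePage : IPage
{
    protected BasePage(string baseUrl, string path)
    {
        // Join the site root and the page path without doubling slashes.
        Url = baseUrl.TrimEnd('/') + "/" + path.TrimStart('/');
    }

    public string Url { get; private set; }

    // Each concrete page decides what "displayed" means for it.
    public abstract bool IsDisplayed();
}

// A concrete page in the Pages project would look something like this.
public class LoginPage : BasePage
{
    public LoginPage(string baseUrl) : base(baseUrl, "account/login") { }

    public override bool IsDisplayed()
    {
        // A real implementation would ask the browser abstraction;
        // hard-coded here to keep the sketch self-contained.
        return true;
    }
}
```

The point of the abstraction is that Specs code only ever talks to page objects, never to the browser automation framework directly.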

Automated Test Framework

I use a unit test framework as my Automated Test Framework. The unit test framework is used as the test runner for the test scenario steps and uses a browser automation framework like Selenium or WatiN (a .Net version of Watir) to drive the browser. The browser automation is hidden behind the Page objects, and we will discuss this later. The main takeaway is that unit test frameworks are not just for unit tests, which is why I call them automated test frameworks.

I use both NUnit and MSTest as my unit test frameworks. To be honest, I haven’t run into a situation where MSTest was a terrible choice, but I have heard many arguments from the purists out there on why I should use something else if I want to achieve test enlightenment. I use NUnit because I use it at work and there is a strong community for it, but 9 times out of 10 I will use MSTest as my framework if there is no precedent for using another one.

Another benefit of MSTest is that it doesn’t need any setup in Visual Studio, as it comes out of the box with Visual Studio. The integration with Visual Studio and the MS stack is great when you want something quick to build a test framework around. It just works, and it’s one less piece of the puzzle I have to think about.

If you have a requirement that points more in the direction of another unit test framework, as long as it can run your browser automation framework, use it. Like I said, I also use NUnit, and actually, it’s not that difficult to set up. To set up NUnit I use NuGet. I generally install the NuGet package along with the downloadable MSI. The NuGet package only includes the framework, so I use the MSI to get the GUI set up. With Visual Studio 2013 you also get some integration, in that you can run and view results in the VS IDE, so you really don’t need the GUI if you run 2013 or you install an integrated runner in lower versions of VS.
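For illustration, a minimal NUnit fixture looks like the sketch below. The class name and the hard-coded title are placeholders, not part of a real suite; in a UI test the value would come from a page object:

```csharp
using NUnit.Framework;

[TestFixture]
public class SmokeTests
{
    [SetUp]
    public void SetUp()
    {
        // In a UI test this is where you would start the browser
        // and navigate to the page under test.
    }

    [Test]
    public void HomePage_Title_IsCorrect()
    {
        // Placeholder value; a real test would read the title
        // from a page object such as new HomePage(baseUrl).Title.
        var title = "Home";
        Assert.That(title, Is.EqualTo("Home"));
    }
}
```

The same shape works in MSTest with [TestClass]/[TestMethod] attributes, which is part of why switching between the two is painless.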

In the end, it doesn’t matter what you use for your test runner, but you should become proficient at one of them.

Browser Automation

I use Selenium WebDriver as my browser automation framework, but you could use anything that works for your environment and situation. WatiN is a very good framework, and there are probably more. WebDriver is just killing it in the browser automation space right now. In the end, you just need a framework that will allow you to drive a browser from your Automated Test Framework. You could roll your own crude driver that makes HTTP requests if you wanted to, but why would you? Actually, there is a reason, but let’s not go there.

Again, I am a .Net Developer, and I use NuGet to install Selenium WebDriver. Actually, there are two packages that I install, Selenium WebDriver and Selenium WebDriver Support Classes. The Support Classes provide helper classes for HTML Select elements, waiting for conditions, and Page Object creation.
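As a sketch of what the Support Classes buy you, here is WebDriverWait and SelectElement in action. The URL and element id are made up, and this assumes ChromeDriver is on the path:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class SelectExample
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            // Hypothetical page under test.
            driver.Navigate().GoToUrl("http://localhost/signup");

            // WebDriverWait polls until the condition passes or times out.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement dropdown = wait.Until(d => d.FindElement(By.Id("country")));

            // SelectElement wraps an HTML <select> with a friendlier API
            // than sending raw clicks and keys.
            new SelectElement(dropdown).SelectByText("United States");
        }
        finally
        {
            driver.Quit();
        }
    }
}
```

In the framework, code like this lives inside the Pages project, so Specs code never touches WebDriver types directly.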

Test Design

As you build tests you will discover that patterns emerge, and some people go as far as building DSLs (domain-specific languages) around the patterns to help them define tests in a more structured and simplified manner. I use SpecFlow to provide a more structured means of defining tests. It is an implementation of Cucumber and brings Gherkin and BDD to .Net development. I use it to generate the unit test code and stubs of the actual steps that the automated test framework code calls to run tests.
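A SpecFlow feature file is plain Gherkin. This login scenario is just an example I made up, not from a real suite:

```gherkin
Feature: Login
  In order to use the application
  As a registered user
  I want to sign in with my credentials

  Scenario: Valid user signs in
    Given I am on the login page
    When I sign in as "testuser" with password "secret"
    Then I should see the dashboard
```

SpecFlow turns each scenario into a unit test in your chosen framework (NUnit, MSTest, etc.) and binds each line to a step method.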

In the step stubs that SpecFlow generates for the test automation framework I call page objects that call the browser automation framework to drive the browser as a user would for the various test scenarios. Inside of the steps you will undoubtedly find patterns that you can abstract and you may even go as far as creating a DSL, but you should definitely formalize how you will design your tests.
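The generated step stubs are plain C# methods bound to the Gherkin text by attributes. A sketch, where the page object calls in the comments are hypothetical names:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    [Given(@"I am on the login page")]
    public void GivenIAmOnTheLoginPage()
    {
        // Call into the Pages project, e.g. new LoginPage(baseUrl).Open();
    }

    [When(@"I sign in as ""(.*)"" with password ""(.*)""")]
    public void WhenISignInAs(string userName, string password)
    {
        // The page object drives the browser automation framework here,
        // e.g. loginPage.SignIn(userName, password);
    }

    [Then(@"I should see the dashboard")]
    public void ThenIShouldSeeTheDashboard()
    {
        // Assert against a dashboard page object,
        // e.g. Assert.That(dashboardPage.IsDisplayed());
    }
}
```

The regex capture groups in the When binding are what let one step serve many scenarios with different data.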

More .Net-specific info: I use NuGet to install SpecFlow also. Are you seeing a pattern here? If you are a .Net developer and you aren’t using NuGet, you are missing something special. In addition to NuGet you will need to download the SpecFlow Visual Studio integration so that you get some of the other goodies like item templates, context menus, step generation, etc.

Build Server

I use both Team Foundation Server Build and CruiseControl. Again, the choice of platform doesn’t matter; this is just another abstraction in your Quality Pipeline that provides a service. Here the focus is on being able to check out code from the source code repository and build it, all automatically without your intervention. The build server is the director of the entire process. After it builds the code, it can deploy the application, kick off the tests, and collect and report test results.

I use NAnt with CruiseControl to provide automation of the deployment and configuration. TFS can do the same. Both have the ability to manage the running of tests, run static code analysis, and report on the quality of the build.
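As a rough sketch, a NAnt target that runs the compiled Specs assembly through the NUnit console runner might look like this. The paths, target names, and tool location are all invented for illustration:

```xml
<!-- Hypothetical NAnt build fragment: run the Specs assembly
     with the NUnit console runner after the build target. -->
<target name="test" depends="build">
  <exec program="tools\nunit\nunit-console.exe">
    <arg value="Specs\bin\Release\Specs.dll" />
    <arg value="/xml=TestResults.xml" />
  </exec>
</target>
```

CruiseControl then picks up the XML results file and folds it into the build report, which is where the "report on the quality of the build" part comes from.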


Well, that’s the lowdown on the parts of my general test framework. Hopefully, I can keep it going and give some insight into the nitty-gritty details of each piece.