
Video Recording C# WebDriver Tests in TestPipe

The title is a little misleading because you can use the technique below to do a screen capture of anything happening on the screen, not just WebDriver tests. Yet, TestPipe does use C# WebDriver out of the box, so those are the tests we will be recording.

So, we want to add video recording of tests to TestPipe. At first I thought this would be very difficult, but after finding the Microsoft Expression Encoder SDK it became a lot easier. I was even able to find other people who have used this SDK, which made the decision to move forward a little easier.

First, I read the Working with Screen Capture section of the Overview of the Expression Encoder SDK. From this I learned that I needed to create an instance of ScreenCaptureJob. The question is, where do I create it?

In TestPipe we have a ScenarioSession class that holds the state of various things while a test scenario runs, and it makes sense to expose this new functionality there because we want to be able to control video recording within the context of individual test scenarios. Do we add a new property on the session, or should it be a new property on the IBrowser interface? We already have a TakeScreenshot method on IBrowser. Yet, I don’t think it is a good fit on the browser interface because there is a bit of setup that needs to take place for ScreenCaptureJob that is out of scope for a browser, and I don’t want to muddy up the API more than it already is.

When we set up a scenario we want to allow setup of the ScreenCaptureJob based on configuration for a feature and/or a scenario. We define features and scenarios in a text file, currently using Gherkin, and we store data used in feature and scenario tests in a JSON file. So, we have to configure video recording in the Gherkin, the JSON, or both.

Do we keep all recordings or only failing recordings? What if we want to keep only failing, but from time to time we need non-failing recordings for some reason? Do we overwrite old recordings or store in unique folders or filenames?

To trigger the recording we could use tags. If an @Video tag is present on the scenario or feature, record the scenario(s) and only keep the recording if the scenario fails. If the @Debug tag is present on the Feature or the Scenario, keep the recordings even if they don’t fail.
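
In code, those rules might look something like the sketch below. The helper class and method names are mine, not necessarily how TestPipe will end up implementing it:

using System;
using System.Linq;

public static class RecordingTags
{
    public static bool ShouldRecord(string[] tags)
    {
        // @Video on the feature or scenario turns recording on.
        return tags != null
            && tags.Contains("Video", StringComparer.InvariantCultureIgnoreCase);
    }

    public static bool ShouldKeep(string[] tags, bool scenarioFailed)
    {
        // Failures are always kept; @Debug keeps passing recordings too.
        return scenarioFailed
            || (tags != null && tags.Contains("Debug", StringComparer.InvariantCultureIgnoreCase));
    }
}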

We can create a unique folder for the recordings so that we can store videos of multiple runs of the same scenario. We may want to think about how we clean these up, but we may have enough file clean up in other processes. We will just have to watch hard drive space in production use.
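
Something as simple as a timestamped folder per run would do. In this sketch the root path and naming scheme are placeholders, not TestPipe’s actual layout:

using System;
using System.IO;

// One folder per scenario run so recordings never overwrite each other.
string recordingsRoot = @"C:\TestPipe\Recordings";
string scenarioKey = "MyFeature.MyScenario";
string runFolder = Path.Combine(recordingsRoot, scenarioKey, DateTime.UtcNow.ToString("yyyyMMdd_HHmmssfff"));
Directory.CreateDirectory(runFolder);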

So, we have a strategy to automatically configure recording. Now, we have to implement it in a way that also allows manual configuration just in case we want to hard wire video recording in a test.

So, I found our seam to make the changes for video recording. In our RunnerBase class we have methods to setup and teardown a scenario. It is there that we will make the change to configure, start, stop, and delete video recordings.
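
In outline, the seam looks roughly like this. The method and helper names are illustrative (see RunnerBase on GitHub for the real code), and the RecordingTags helpers are the sketch from earlier:

// Illustrative shape of the RunnerBase seam, not the actual TestPipe code.
public virtual void SetupScenario(string[] tags)
{
    // ...existing scenario setup...
    if (RecordingTags.ShouldRecord(tags))
    {
        this.StartVideoRecording(); // hypothetical helper: configure and start the capture
    }
}

public virtual void TeardownScenario(string[] tags, bool scenarioFailed)
{
    this.StopVideoRecording(); // hypothetical helper: stop the capture

    if (!RecordingTags.ShouldKeep(tags, scenarioFailed))
    {
        this.DeleteVideoRecording(); // passing run without @Debug: discard the file
    }
}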

Now to implement. First I download the encoder from http://www.microsoft.com/en-us/download/details.aspx?id=27870. It will have to be installed on every server that runs tests, so I create a PowerShell script to install it. It would be nice to also do a Chocolatey package, but that is overkill for me because I am not using Chocolatey on my servers right now. You can create your own automated installer by extracting the setup file from the download and then creating a PowerShell script to run

setup.exe -q

to quietly install. I believe you can use the -x parameter to uninstall, but I haven’t tested this yet (assuming msiexec command line options are used: https://msdn.microsoft.com/en-us/library/aa367988(v=vs.85).aspx).

With the encoder installed we have access to the DLLs that we need to work with. In Visual Studio I add references to Microsoft.Expression.Encoder, Microsoft.Expression.Encoder.Api2, Microsoft.Expression.Encoder.Types, and Microsoft.Expression.Encoder.Utilities. I’m not sure if I need all of these, but they were added by the installer so I will keep them for now.

From here I can add a using

using Microsoft.Expression.Encoder.ScreenCapture;

and implement recording based on the sample code, updating it to fit TestPipe standards.
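
Distilled down, the capture part of the sample boils down to something like this. The output path is an example, and the full-screen CaptureRectangle setting is optional:

using System.Windows.Forms; // Screen.PrimaryScreen
using Microsoft.Expression.Encoder.ScreenCapture;

// Minimal capture sketch based on the SDK sample code.
ScreenCaptureJob screenCaptureJob = new ScreenCaptureJob();
screenCaptureJob.OutputScreenCaptureFileName = @"C:\TestPipe\Recordings\scenario.xesc";
screenCaptureJob.CaptureRectangle = Screen.PrimaryScreen.Bounds;

screenCaptureJob.Start();
// ...drive the browser through the scenario...
screenCaptureJob.Stop();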

One caveat is that the encoder outputs a proprietary Microsoft video format, XESC. I thought about collecting all the videos that are kept at the end of a test run and running some kind of parallel task to convert them to a more portable format. In the end, I just left it alone. This is a new feature, only my team will be looking at the videos, and everyone has Windows Media Player, which can play the format.

I won’t write more on implementation details because I am boring myself, but if you want to check it out you can view it on GitHub (RunnerBase is where we use the recorder, and you should be able to figure out the rest). One interesting twist is that we implemented Expression Encoder behind an interface so that it isn’t a requirement for using TestPipe. If we didn’t do this, you wouldn’t be able to build or use TestPipe without first installing the encoder dependency.

So, TestPipe comes out of the box with a dummy implementation of the interface that won’t actually do the recordings. If you want to capture actual recordings you can use the TestPipe.MSVideoRecorder plug-in, or implement the IVideoRecorder interface on another screen capture program to enable video recording of tests. Right now TestPipe.MSVideoRecorder is included in the TestPipe solution, but it is not set to build automatically. When we make changes we set it to build and manually move the binary to the folder we have configured to hold the video recorder plug-ins. Eventually, we will move it to a separate repository and create a NuGet package, but I’m tired.
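
The plug-in contract is small. The members below are my sketch of the shape, not TestPipe’s exact definition, but they give the idea:

// Sketch of the plug-in contract; see the TestPipe source for the real definition.
public interface IVideoRecorder
{
    void Start(string outputPath);
    void Stop();
}

// The out-of-the-box dummy implementation records nothing.
public class DummyVideoRecorder : IVideoRecorder
{
    public void Start(string outputPath) { }
    public void Stop() { }
}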


Overview of the Expression Encoder SDK – https://msdn.microsoft.com/en-us/library/Gg602440(v=Expression.40).aspx

Road to screen recording in webdriver with C# – http://roadtoautomation.blogspot.com/2013/07/road-to-screen-recording-in-webdriver.html

Record video of your Selenium Tests – https://blog.testingbot.com/2011/12/19/record-video-of-your-selenium-tests

Selenium WebDriver Custom Table Element

One thing I like to do is create wrappers that simplify usage of complex APIs. I also like wrappers that allow me to add functionality to an API. And when I can use a wrapper to get rid of duplicate code, I’m downright ecstatic.

Well today I’d like to highlight a little wrapper magic that helps simplify testing HTML tables. The approach isn’t new, but it was introduced in TestPipe by a junior developer with no prior test automation experience (very smart guy).

First, some explanation before code, as it may be a little hard to understand. TestPipe actually provides a wrapper around Selenium WebDriver. This includes wrappers around the Driver, SearchContext, and Elements. TestPipe has a BaseControl that is basically a wrapper around WebDriver’s IWebElement. This not only allows us to bend WebDriver to our will, the biggest benefit, but it also gives us the ability to swap browser drivers without changing our test code. So, we could opt to use WatiN or some custom System.Net helpers and not have to do much, if anything, in terms of adjusting our test code. (Side note: this isn’t a common use case, almost as rare as wrapping the database just so you can change the database later… it doesn’t happen often, if ever, but it’s nice that it’s there just in case.)


Like I said, we have a BaseControl. To provide some nice custom functionality for working with tables we will extend BaseControl.

public class GridControl : BaseControl
{
    public GridControl(IBrowser browser)
        : base(browser, null, null)
    {
    }

    //...Grid methods
}

Breaking this down: we create a new class, GridControl, that inherits from BaseControl. In the constructor we inject an IBrowser, which is the interface implemented by our WebDriver wrapper. So, outside of TestPipe, this would be like injecting IWebDriver. BaseControl handles state for the driver and exposes it to our GridControl class through a public property. This same concept can be used to wrap text boxes, drop-down lists, radio buttons… you get the idea.


I won’t show all of the code, but it’s on GitHub for your viewing pleasure. Instead, I will break down a method that provides some functionality to help find cells in tables: GetCell, a simple way to get any cell in a table by row and column.

public IControl GetCell(int row, int column)
{
    // Build an XPath to the tr at the row index and the td at the column
    // index (the exact query TestPipe builds is elided here).
    StringBuilder xpathCell = new StringBuilder();
    xpathCell.AppendFormat("./tbody/tr[{0}]/td[{1}]", row, column);

    Select selector = new Select(FindByEnum.XPath, xpathCell.ToString());
    IControl control = this.Find(selector);
    return control;
}

This isn’t the actual TestPipe code, but it’s enough to get the gist of how this is handled in TestPipe. We are building up an XPath query to find the tr at the specified row index and the td at the specified column index. Then we pass the XPath to a Select instance that is used in the Find method to return an IControl. IControl is the interface for the wrapper around WebDriver’s IWebElement. The Find method uses our wrapper around WebDriver’s ISearchContext to find the element… who’s on first?

One problem with this approach is the string “/tbody/tr”. What if your table doesn’t have a tbody? Exactly: it won’t find the cell. This bug was found by another junior developer who recently joined the project, another very smart guy. Anyway, this is a problem we are solving as we speak, and it looks like we will default to “/tr” and allow injection of a different structure, like “/tbody/tr”, as a method argument. Alternatively, we could search for “/tr” and then, if nothing is found, search for “/tbody/tr”. We may do both: search for the two structures and allow injection. These solutions are pretty messy, but it’s better than having to write this code every time you want to find a table cell. The point is we are encapsulating this messy code so we don’t have to look at it, and getting a cell is as simple as passing the row and column we want to get from the table.
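
For example, the injectable structure could look like this hypothetical overload (not the final TestPipe code):

// Default to "/tr", but let callers inject "/tbody/tr" when the table has a tbody.
public IControl GetCell(int row, int column, string rowPath = "/tr")
{
    string xpath = string.Format(".{0}[{1}]/td[{2}]", rowPath, row, column);
    Select selector = new Select(FindByEnum.XPath, xpath);
    return this.Find(selector);
}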

Usage in Tests

In our page object class we could use this by assigning a GridControl to a property that represents a table on our page. (If you aren’t using page objects in your automated browser tests, get with the times.)

//In our Page Object class we have this property
public GridControl MyGrid
{
    get
    {
        return new GridControl(this.Browser);
    }
}

Then it is pretty easy to use it in tests.

//Here we use myPage, an instance of our Page Object,
//to access MyGrid to get the cell we want to test
string cell = myPage.MyGrid.GetCell(2,5).Text;
Assert.AreEqual("hello", cell);

Boom! A ton of evil duplicate code eradicated and some sweet functionality added on top of WebDriver.


The point here is that you should look for ways to wrap complex APIs to simplify their use, cut down on duplication, and extend the functionality the API provides. This is especially true in test automation, where duplication causes maintenance nightmares.

TestPipe Test Automation Framework Release Party

Actually, you missed the party I had with myself when I unchecked private, clicked save on GitHub, and officially released TestPipe. You didn’t miss your chance to check out TestPipe, a little Open Source project that has the goal of making automated browser based testing more maintainable for .NET’ters. The project source code is hosted on GitHub and the binaries are hosted on NuGet:



If you would like to become a TestPipe Plumber and contribute, I’ll invite you to the next party :).


Results of my personal make a logo in 10 minutes challenge.

Agile Browser Based Testing with SpecFlow

The title may be a little misleading, as you may think that I am going to give some sort of Scrumish methodology for browser based testing with SpecFlow. Actually, this is more about how I implemented a feature to make browser based testing more realistic in a CI build.

Browser based testing is slow, real slow. So, if you want to integrate this type of testing into your CI build process, you need a way to make the tests run faster, or it may add considerable time to the feedback loop given to developers. My current solution is to only run tests for the current sprint. To do this I use a mixture of SpecFlow and my own home-grown test framework, TestPipe, to identify the tests I should run and ignore.

The solution I’m using at the moment centers on SpecFlow Tags. Actually, I have blogged about this before in my SpecFlow Tagging post. In this post I want to show a little more code that demonstrates how I accomplish it.

Common Scenario Setup

The first step is to use a common Scenario setup method. I add the setup as a static method to a class accessible by all Step classes.

public class CommonStep
{
    public static void SetupScenario()
    {
        try
        {
            // The TestPipe runner processes the scenario tags and throws
            // when the scenario should be ignored (call shape approximated).
            Runner.SetupScenario();
        }
        catch (CharlesBryant.TestPipe.Exceptions.IgnoreException ex)
        {
            // NUnit: mark the test ignored and report why.
            Assert.Ignore(ex.Message);
        }
    }
}

TestPipe Runner Scenario Setup

The method calls the TestPipe Runner method SetupScenario, which handles the tag processing. If SetupScenario determines that the scenario should be ignored, it throws the exception that is caught above. We handle the exception by asserting that the test is ignored, using the test framework’s ignore method (in this case NUnit’s). We also pass the ignore method the exception message, as there are a few reasons why a test may be ignored and we want the reason included in our reporting.

SetupScenario includes this bit of code

if (IgnoreScenario(tags))
{
    throw new IgnoreException();
}

Configuring Tests to Run

This is similar to what I blogged about in the SpecFlow Tagging post, but I added a custom exception. Below are the interesting methods in the call stack walked by the SetupScenario method.

public static bool IgnoreScenario(string[] tags)
{
    if (tags == null)
    {
        return false;
    }

    string runTags = GetAppConfigValue("test.scenarios");
    runTags = runTags.Trim().ToLower();

    return Ignore(tags, runTags);
}

This method gets the tags that we want to run from configuration. For each sprint there is a code branch, and the app.config for tests in that branch contains the tags for the tests we want to run for the sprint on the CI build. There is also a regression branch, run weekly, that runs all the tests. All of the feature files are kept together, so being able to tag specific scenarios in a feature file to run gives the ability to run specific tests for a sprint while keeping all of the features together.
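
For example, the app.config in a sprint branch might carry an entry like this (the tag value is made up):

<appSettings>
  <add key="test.scenarios" value="sprint12" />
</appSettings>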

Test Selection

Here is the selection logic.

public static bool Ignore(string[] tags, string runTags)
{
    if (string.IsNullOrWhiteSpace(runTags))
        return false;

    // If runTags has a value, the scenario's tags must match or it is ignored.
    if (tags == null)
        throw new IgnoreException("Ignored tags is null.");

    if (tags.Contains("ignore", StringComparer.InvariantCultureIgnoreCase))
        throw new IgnoreException("Ignored");

    if (tags.Contains("manual", StringComparer.InvariantCultureIgnoreCase))
        throw new IgnoreException("Manual");

    if (runTags == "all" || runTags == "all,all")
        return false;

    if (tags.Contains(runTags, StringComparer.InvariantCultureIgnoreCase))
        return false;

    return true;
}

This is the meat of the solution, where most of the logic lives. As you can see, the exceptions contain messages for when a test is explicitly ignored with the Ignore or Manual tag. The Manual tag identifies features that are defined but can’t be automated. This way we still have a formal definition that can guide our manual testing.

The variable runTags holds the value retrieved from configuration. If the config defines “all” or “all,all”, we run all the tests that aren’t explicitly ignored. The “all,all” is a special case for ignoring tests at the Feature level, but this post is about Scenario-level ignoring.

The final check compares the tags to the runTags config. If the tags include the runTags value, we run the test. Any tests that don’t match are ignored. For scenarios this only works with one runTag. Maybe we name the tag after the sprint or a sprint ticket; whatever it is, it has to be unique for the sprint. I like the idea of tagging with a ticket number, as it gives traceability to tickets in the project management system.
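
To make that concrete, here is a made-up ticket tag run through the selection logic:

// A scenario tagged @JIRA-123 (hypothetical ticket number) checked against config:
string[] tags = { "JIRA-123" };
bool runIt = !Ignore(tags, "jira-123");  // true: the tag matches, case-insensitively
bool skipIt = Ignore(tags, "sprint12");  // true: no match, so the scenario is ignored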

Improvements and Changes

I have contemplated using a feature file organization similar to SpecLog’s. They advocate a separate folder to hold feature files for the current sprint. Then, I believe (though I’m not sure), they tag the current sprint feature files so they can be identified and run in isolation. The problem with this is that the current sprint features have to be merged with the existing features after the sprint is complete.

Another question I have asked myself is whether I want to allow some kind of test selection through command line parameters. I am not really sure yet. I will put that thought on hold for now, until a need for command line configuration makes itself evident.

Lastly, another improvement would be to allow specifying multiple runTags. We would have to iterate the run tags and compare, or come up with a performant way of doing it. Performance matters here because this check runs for every test, and a large project could have thousands of tests, each already carrying the inherent cost of running in a browser.
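
One cheap way to do it, as a sketch: split the configured tags into a set once so the per-test check is just a few hash lookups. IgnoreAny is a hypothetical name, and this covers only the matching step (the Ignore, Manual, and “all” checks would stay as they are):

using System;
using System.Collections.Generic;
using System.Linq;

public static bool IgnoreAny(string[] tags, string runTags)
{
    if (tags == null)
        return true; // nothing to match against

    // Split "sprint12,jira-123" style config into a case-insensitive set.
    HashSet<string> run = new HashSet<string>(
        runTags.Split(',').Select(t => t.Trim()),
        StringComparer.InvariantCultureIgnoreCase);

    // Run the scenario if any of its tags is configured to run.
    return !tags.Any(run.Contains);
}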


Well that’s it. Sprint testing can include browser based testing and still run significantly faster than running every test in a test suite.

C# MEF BrowserFactory for Browser Based Testing

In my browser based test framework I use MEF to help abstract the concept of a browser. The reason I do this is so I am not tied to a specific browser driver framework (e.g. WebDriver, WatiN, System.Net.WebClient). This allows me to change drivers without having to touch my test code. Here’s how I do it.

Browser Interface

First I created an interface that represents a browser. I used a mixture of interfaces from WebDriver and WatiN.

namespace CharlesBryant.TestPipe.Interfaces
{
    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using CharlesBryant.TestPipe.Browser;
    using CharlesBryant.TestPipe.Enums;

    public interface IBrowser
    {
        IBrowserSearchContext BrowserSearchContext { get; }
        BrowserTypeEnum BrowserType { get; }
        string CurrentWindowHandle { get; }
        string PageSource { get; }
        string Title { get; }
        string Url { get; }
        ReadOnlyCollection<string> WindowHandles { get; }

        IElement ActiveElement();
        void Close();
        void DeleteAllCookies();
        void DeleteCookieNamed(string name);
        Dictionary<string, string> GetAllCookies();
        bool HasUrl(string pageUrl);
        void LoadBrowser(BrowserTypeEnum browser, BrowserConfiguration configuration = null);
        void Open(string url, uint timeoutInSeconds = 0);
        void Quit();
        void Refresh();
        void SendBrowserKeys(string keys);
        void TakeScreenshot(string screenshotPath);
        void AddCookie(string key, string value, string path = "/", string domain = null, DateTime? expiry = null);
    }
}

Pretty basic stuff, although BrowserSearchContext took some thought to get working. Basically, this abstraction provides the facility to search for elements. A lot of the concepts here are borrowed from WebDriver and WatiN and are just a way to wrap their functionality and use it without being directly dependent on them. To use this, you change your tests from directly using a browser driver to using this abstraction. At the start of your tests you use the BrowserFactory to get the specific implementation of this interface that you want to test with.
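
Usage at the start of a test looks something like this (the BrowserTypeEnum.Chrome member is assumed for illustration):

// Swap drivers by changing the enum value; the test code stays the same.
IBrowser browser = BrowserFactory.Create(BrowserTypeEnum.Chrome);
browser.Open("http://example.com", 30);
string title = browser.Title;
browser.Quit();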

Browser Factory

Then I created a BrowserFactory that uses MEF to load browsers that implement the browser interface. When I need a browser I call Create on the BrowserFactory to get the browser driver I want to test with. To make this happen I have to create wrappers around the browser drivers I want available. One caveat about MEF is that it needs to be able to find your extensions, so you have to tell it where to find them. To make the browsers available to the factory I added the MEF class attribute [Export(typeof(IBrowser))] to my browser implementations. Then I added a post-build event to the browser implementation projects to copy their DLL to a central folder:

copy $(TargetPath) $(SolutionDir)\Plugins\Browsers\$(TargetFileName)
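
A wrapper project then ends up looking roughly like this; the class name is illustrative and the interface members are elided:

using System.ComponentModel.Composition;
using CharlesBryant.TestPipe.Interfaces;

// The Export attribute is what MEF discovers when it scans the plugin folder.
[Export(typeof(IBrowser))]
public class WebDriverBrowser : IBrowser
{
    // ...IBrowser members implemented against Selenium WebDriver...
}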

Then I added an appConfig key, with a value that points to this directory, to the config of my clients that use the BrowserFactory. Now I can reference this config value to tell MEF where to load browsers from. Below is roughly how I use the factory with MEF.

namespace CharlesBryant.TestPipe.Browser
{
    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Configuration;
    using System.IO;
    using System.Reflection;
    using CharlesBryant.TestPipe.Enums;
    using CharlesBryant.TestPipe.Interfaces;

    public class BrowserFactory
    {
        [Import(typeof(IBrowser))]
        private IBrowser browser;

        public static IBrowser Create(BrowserTypeEnum browserType)
        {
            BrowserFactory factory = new BrowserFactory();
            return factory.Compose(browserType);
        }

        private IBrowser Compose(BrowserTypeEnum browserType)
        {
            // browserType selects among the plug-ins in the full implementation (elided here).
            this.browser = null;

            try
            {
                AggregateCatalog aggregateCatalogue = new AggregateCatalog();
                aggregateCatalogue.Catalogs.Add(
                    new DirectoryCatalog(ConfigurationManager.AppSettings["browser.plugins"]));

                CompositionContainer container = new CompositionContainer(aggregateCatalogue);
                container.ComposeParts(this); // satisfy the [Import] above from the plugin catalog
            }
            catch (FileNotFoundException)
            {
                // plugin directory or a dependency was not found; browser stays null
            }
            catch (CompositionException)
            {
                // no matching export (or too many); browser stays null
            }

            return this.browser;
        }
    }
}

namespace CharlesBryant.TestPipe.Enums
{
    public enum BrowserTypeEnum
    {
        // browser types elided here (e.g., Chrome, Firefox, IE)
    }
}

Well, that’s the gist of it. I have untethered my tests from browser driver frameworks. This is not fully tested across a broad range of scenarios, so there may be issues, but so far it’s doing OK for me.

The examples above are not production code, use at your own risk.